WorldWideScience

Sample records for failure time model

  1. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with a normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is simple to implement in software. A numerical experiment investigates the fitting ability of the SRGMs with normal distributions on 16 failure time data sets collected from real software projects.
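
A sketch of one EM iteration for an NHPP-based SRGM with normal fault-detection times, assuming a mean value function m(t) = ω·Φ((t−μ)/σ) and treating faults not yet detected by the observation horizon T as missing data; this is illustrative, not the authors' published algorithm.

```python
# EM sketch for an NHPP SRGM with normal detection times (assumptions as above).
import numpy as np
from scipy.stats import norm

def em_normal_srgm(times, T, iters=200):
    """times: observed failure times in [0, T]. Returns (omega, mu, sigma)."""
    t = np.asarray(times, dtype=float)
    n = len(t)
    omega, mu, sigma = float(n), t.mean(), t.std() + 1e-9
    for _ in range(iters):
        # E-step: moments of detection times conditioned on falling beyond T.
        a = (T - mu) / sigma
        tail = max(norm.sf(a), 1e-300)       # P(detection time > T)
        lam = norm.pdf(a) / tail             # inverse Mills ratio
        nu = omega * tail                    # expected number of undetected faults
        m1 = mu + sigma * lam                # E[X | X > T]
        m2 = sigma**2 * (1 + a * lam) + 2 * mu * sigma * lam + mu**2  # E[X^2 | X > T]
        # M-step: refit the Poisson mean and normal moments to the completed data.
        omega = n + nu
        mu = (t.sum() + nu * m1) / omega
        sigma = np.sqrt(max((np.sum(t**2) + nu * m2) / omega - mu**2, 1e-12))
    return omega, mu, sigma

# Example on synthetic data truncated at T = 30:
rng = np.random.default_rng(0)
obs = np.sort(rng.normal(20, 6, 100))
obs = obs[(obs > 0) & (obs < 30)]
print(em_normal_srgm(obs, T=30.0))
```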

  2. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    Science.gov (United States)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine whether the system is currently stable and, if not, how long it will be before control is lost. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and the time series outputs. As different inputs may cause failures at different times, we have to model variable-length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
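
A loose two-stage illustration of the classify-then-predict idea (not the paper's Treed Gaussian Process model): an off-the-shelf GP classifier separates stable from unstable inputs, and a GP regressor trained only on unstable cases predicts time-to-failure. Data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))              # controller parameter settings
failed = X[:, 0] + X[:, 1] ** 2 > 0.8              # synthetic stability boundary
ttf = 10.0 / (1.5 + X[:, 0])                       # synthetic time-to-failure

clf = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X, failed)
reg = GaussianProcessRegressor(kernel=1.0 * RBF(1.0), normalize_y=True)
reg.fit(X[failed], ttf[failed])                    # stage 2 model: failures only

x_new = rng.uniform(-1, 1, size=(1, 4))
if clf.predict_proba(x_new)[0, 1] > 0.5:           # stage 1: success/failure class
    print("predicted time-to-failure:", reg.predict(x_new)[0])
else:
    print("flight predicted stable")
```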

  3. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

    Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: materials/device degradation; degradation kinetics; time-to-failure modeling; statistical tools; failure-rate modeling; accelerated testing; ramp-to-failure testing; important failure mechanisms for integrated circuits; important failure mechanisms for mechanical components; conversion of dynamic stresses into static equivalents; small design changes producing major reliability improvements; screening methods; heat generation and dissipation; and sampling plans and confidence intervals. This textbook includes numerous example problems with solutions. Exercise problems, along with the answers, are included at the end of each chapter. Relia...

  4. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modified Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been evaluated on real-time control and flight dynamics application data sets. Numerical results show that both the goodness of fit and the next-step predictability of the proposed approach are more accurate in predicting software cumulative failure time than existing approaches.
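
A toy sketch of the architecture search described above: a small genetic algorithm over (number of delayed inputs, hidden neurons) wrapped around an MLP, with scikit-learn's L2 penalty standing in for Bayesian regularization. All settings and the synthetic series are illustrative, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged(series, n_lags):
    # Rows: (t[j], ..., t[j+k-1]) -> target t[j+k]
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

def fitness(genome, series):
    n_lags, hidden = genome
    X, y = lagged(series, n_lags)
    model = MLPRegressor(hidden_layer_sizes=(hidden,), alpha=1e-2,  # L2 stand-in for
                         max_iter=2000, random_state=0)             # Bayesian regularization
    model.fit(X[:-10], y[:-10])
    return -np.mean((model.predict(X[-10:]) - y[-10:]) ** 2)        # next-step error

rng = np.random.default_rng(1)
series = np.cumsum(rng.exponential(5.0, size=80))   # synthetic cumulative failure times
pop = [(rng.integers(2, 8), rng.integers(2, 16)) for _ in range(8)]
for _ in range(5):                                  # a few GA generations
    parents = sorted(pop, key=lambda g: fitness(g, series), reverse=True)[:4]
    children = [(a[0], b[1]) for a, b in zip(parents, parents[::-1])]       # crossover
    mutants = [(max(2, p[0] + rng.integers(-1, 2)),
                max(2, p[1] + rng.integers(-1, 2))) for p in parents]       # mutation
    pop = parents + children + mutants
best = max(pop, key=lambda g: fitness(g, series))
print("best (delayed inputs, hidden neurons):", best)
```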

  5. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation-maximization algorithm is proposed to construct a suitable continuous-time Markov chain that models the failure time data as the first passage time to the absorbing state. The system is assumed to be described by methods such as supplementary variables and the device of stages. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and n failure time observations. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
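
For reference, the likelihood ingredient that the complexity analysis refers to can be sketched as follows: with an upper triangular transient generator S and initial distribution α, the phase-type failure density at time t is α·exp(St)·s⁰, where s⁰ = −S·1 collects the absorption rates. The particular matrix below is illustrative.

```python
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0, 0.0])            # start in the first transient state
S = np.array([[-1.0, 0.7, 0.2],              # upper triangular transient generator
              [ 0.0, -0.9, 0.5],
              [ 0.0,  0.0, -0.4]])
s0 = -S @ np.ones(3)                         # exit rates into the absorbing state

def loglik(times):
    # Phase-type density: f(t) = alpha @ expm(S t) @ s0
    return sum(np.log(alpha @ expm(S * t) @ s0) for t in times)

print(loglik([0.5, 1.2, 3.1]))               # log-likelihood of three failure times
```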

  6. A delay time model with imperfect and failure-inducing inspections

    International Nuclear Information System (INIS)

    Flage, Roger

    2014-01-01

    This paper presents an inspection-based maintenance optimisation model in which the inspections are imperfect and potentially failure-inducing. The model is based on the basic delay-time model, in which a system has three states: perfectly functioning, defective and failed. The system deteriorates through these states, and to reveal defective systems, inspections are performed periodically using a procedure by which the system fails with a fixed state-dependent probability; otherwise, an inspection identifies a functioning system as defective (false positive) with a fixed probability and a defective system as functioning (false negative) with a fixed probability. The system is correctively replaced upon failure, or preventively replaced either at the Nth inspection time or when an inspection reveals the system as defective, whichever occurs first. Replacement durations are assumed to be negligible, and costs are associated with inspections, replacements and failures. The problem is to determine the optimal inspection interval T and preventive age replacement limit N that jointly minimise the long-run expected cost per unit of time. The system may also be thought of as a passive two-state system subject to random demands; the three states of the model are then functioning, undetected failed and detected failed, and to ensure the renewal property of replacement cycles the demand process generating the 'delay time' is then restricted to a Poisson process. The inspiration for the presented model has been passive safety-critical valves as used in (offshore) oil and gas production and transportation systems. In light of this, the passive system interpretation is highlighted, as well as the possibility that inspection-induced failures are associated with accidents. Two numerical examples are included, and some potential extensions of the model are indicated.
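
A Monte Carlo sketch of the policy described above, under assumed parameter values (Poisson defect arrivals, exponential delay times, illustrative probabilities and costs); the long-run cost rate is estimated via the renewal-reward theorem, not the paper's analytical solution.

```python
import numpy as np

rng = np.random.default_rng(2)

def cycle(T, N, lam=0.1, mu=0.5, p_fail_good=0.01, p_fail_def=0.05,
          fp=0.05, fn=0.2, c_i=1.0, c_p=10.0, c_f=100.0):
    """One replacement cycle; returns (cost, length)."""
    arrival = rng.exponential(1 / lam)            # defect arrival time
    failure = arrival + rng.exponential(1 / mu)   # arrival + delay time
    cost = 0.0
    for k in range(1, N + 1):
        t = k * T
        if failure < t:                           # failed between inspections
            return cost + c_f, failure
        cost += c_i
        defective = arrival < t
        if rng.random() < (p_fail_def if defective else p_fail_good):
            return cost + c_f, t                  # inspection-induced failure
        positive = (defective and rng.random() > fn) or \
                   (not defective and rng.random() < fp)
        if positive or k == N:                    # preventive replacement
            return cost + c_p, t
    return cost, N * T                            # not reached for N >= 1

def cost_rate(T, N, n=20000):
    costs, lengths = map(np.array, zip(*(cycle(T, N) for _ in range(n))))
    return costs.sum() / lengths.sum()            # renewal-reward estimate

print(min((cost_rate(T, N), T, N) for T in (1, 2, 4) for N in (2, 4, 8)))
```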

  7. Time to failure of hierarchical load-transfer models of fracture

    DEFF Research Database (Denmark)

    Vázquez-Prada, M; Gómez, J B; Moreno, Y

    1999-01-01

    The time to failure, T, of dynamical models of fracture for a hierarchical load-transfer geometry is studied. Using a probabilistic strategy and juxtaposing hierarchical structures of height n, we devise an exact method to compute T for structures of height n+1. Bounding T for large n, we are able to deduce that the time to failure tends to a nonzero value as n tends to infinity. This numerical conclusion is deduced for both power-law and exponential breakdown rules.

  8. Real-time sensor failure detection by dynamic modelling of a PWR plant

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.

    1992-06-01

    Signal validation and sensor failure detection is an important problem in real-time nuclear power plant (NPP) surveillance. Although conventional sensor redundancy is, in a way, a solution, identification of the faulty sensor is necessary before further preventive actions can be taken. A comprehensive solution is to model the system so that any sensor reading is verified against its model-based estimated counterpart in real time. Such a realization is accomplished by estimating the states of the dynamic system using a Kalman filter modelling technique. The method is investigated using real-time data from the steam generator of the Borssele nuclear power plant, and it has proved satisfactory for real-time sensor failure detection as well as model validation. (author). 5 refs.; 6 figs.; 1 tab
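
A minimal sketch of the underlying idea, model-based signal validation via the Kalman filter innovation: readings whose normalized innovation persistently exceeds a gate are declared failed. The scalar random-walk model, thresholds and injected bias are illustrative, not the plant model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
q, r = 1e-4, 1e-2                  # process / measurement noise variances
x_hat, p = 0.0, 1.0                # state estimate and its variance
consecutive, true_x = 0, 0.0

for k in range(500):
    true_x += rng.normal(0, np.sqrt(q))            # slowly drifting process
    z = true_x + rng.normal(0, np.sqrt(r))         # sensor reading
    if k > 300:
        z += 0.5                                   # injected sensor bias fault
    p += q                                         # predict step
    innov = z - x_hat                              # innovation
    s = p + r                                      # innovation variance
    if innov**2 / s > 9.0:                         # ~3-sigma validation gate
        consecutive += 1
        if consecutive >= 5:                       # persistent anomaly
            print("sensor failure declared at step", k)
            break
    else:
        consecutive = 0
        gain = p / s
        x_hat += gain * innov                      # update with validated reading
        p *= 1 - gain
```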

  9. A multi-component and multi-failure mode inspection model based on the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure for all defects. This is an approximation, but it has been proven to be valid when the number of components is large. However, for a system with just a few key components and subject to few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than for individual components, we then formulate the inspection model for the case where the time to the next inspection from the point of a component failure renewal is random. This adds some complication to the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison to the analytical results. A numerical example is presented to demonstrate the model.

  10. Development of a subway operation incident delay model using accelerated failure time approaches.

    Science.gov (United States)

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models, including log-logistic, lognormal and Weibull models with fixed and random parameters, are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, a Weibull model with gamma heterogeneity is considered for comparison of model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating the subway incident delay. The results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption and crashes involving a casualty. Vehicle failure has the least impact on the increase in subway operation incident delay. Based on these results, several possible measures, such as the use of short-distance wireless communication technology (e.g., Wi-Fi and ZigBee), are suggested to shorten the delay caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time.
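
For orientation, a fixed-parameter log-logistic AFT model of this kind can be fitted with the lifelines package as below; the random-parameter extension used in the paper is outside lifelines' scope, and the column names and data here are hypothetical.

```python
import pandas as pd
from lifelines import LogLogisticAFTFitter

# Hypothetical incident records: delay in minutes, event indicator, two causes.
df = pd.DataFrame({
    "delay_min":      [12, 45, 30, 150, 60, 25, 90, 40, 75, 20, 110, 35],
    "observed":       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],   # no censoring here
    "power_failure":  [0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "signal_failure": [0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1],
})

aft = LogLogisticAFTFitter()
aft.fit(df, duration_col="delay_min", event_col="observed")
aft.print_summary()   # coefficients act multiplicatively on the delay time scale
```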

  11. rpsftm: An R package for rank preserving structural failure time models

    OpenAIRE

    Allison, A.; White, I. R.; Bond, S.

    2017-01-01

    Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due ...

  12. The failure of earthquake failure models

    Science.gov (United States)

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.

  13. Failure analysis of real-time systems

    International Nuclear Information System (INIS)

    Jalashgar, A.; Stoelen, K.

    1998-01-01

    This paper highlights essential aspects of real-time software systems that are strongly related to failures and their course of propagation. The significant influence of means-oriented and goal-oriented system views on the description, understanding and analysis of those aspects is elaborated. The importance of performing failure analysis prior to reliability analysis of real-time systems is also addressed. Problems of software reliability growth models that take the properties of such systems into account are discussed. Finally, the paper presents a preliminary study of a goal-oriented approach to model the static and dynamic characteristics of real-time systems, so that the corresponding analysis can be based on a more descriptive and informative picture of failures, their effects and the possibility of their occurrence. (author)

  14. Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre

    Science.gov (United States)

    Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip

    2013-04-01

    Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate rock physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, including data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in ...

  15. Models and analysis for multivariate failure time data

    Science.gov (United States)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of the dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the ...
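
The two-stage "partially parametric" procedure can be sketched for a Clayton copula with uncensored data, where the Kaplan-Meier margins reduce to rank-based pseudo-observations; censoring and the other model families studied are omitted, and the frailty simulation is only one way to generate dependent pairs.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_log_density(u, v, theta):
    # log c(u, v; theta) for the Clayton copula, theta > 0
    return (np.log(theta + 1) - (theta + 1) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(u ** -theta + v ** -theta - 1))

rng = np.random.default_rng(4)
# Simulate correlated failure-time pairs via a shared gamma frailty.
z = rng.gamma(2.0, 1.0, size=500)
t1, t2 = rng.exponential(1 / z), rng.exponential(1 / z)

# Stage 1: empirical survival margins as pseudo-observations in (0, 1).
u = 1 - (np.argsort(np.argsort(t1)) + 1) / (len(t1) + 1)
v = 1 - (np.argsort(np.argsort(t2)) + 1) / (len(t2) + 1)

# Stage 2: maximize the copula pseudo-likelihood over theta.
res = minimize_scalar(lambda th: -clayton_log_density(u, v, th).sum(),
                      bounds=(0.01, 20), method="bounded")
print("estimated Clayton theta:", res.x)  # gamma(2) frailty corresponds to theta = 1/2
```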

  16. A bivariate model for analyzing recurrent multi-type automobile failures

    Science.gov (United States)

    Sunethra, A. A.; Sooriyarachchi, M. R.

    2017-09-01

    The failure mechanism of an automobile can be defined as a system of multi-type recurrent failures, where failures can occur due to various failure modes and are repetitive, such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and the type of failure serve as response variables. However, these two response variables are highly correlated with each other, since the timing of failures has an association with the mode of failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, such data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model consisting of time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models, for joining the responses and for representing the correlated data, respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions that were suitable for the two responses of time to failure and type of failure were the Weibull distribution and the multinomial distribution, respectively. The proposed bivariate model was implemented in the SAS procedure PROC NLMIXED by programming the appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and it was identified that better performance is secured by ...

  17. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    International Nuclear Information System (INIS)

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper considers only single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards discriminating between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
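
One of the trend tests such a framework reviews is the Laplace test; a minimal sketch for a system observed over (0, T] follows, with illustrative failure times. Under a homogeneous Poisson process (no trend) the statistic is approximately standard normal; large positive values suggest deterioration, large negative values reliability growth.

```python
import numpy as np
from scipy.stats import norm

def laplace_trend_test(failure_times, T):
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    u = (t.mean() - T / 2) / (T * np.sqrt(1 / (12 * n)))
    return u, 2 * norm.sf(abs(u))      # statistic and two-sided p-value

times = [55, 166, 205, 341, 488, 567, 731, 1308, 2050, 2453]  # illustrative data
u, p = laplace_trend_test(times, T=2500)
print(f"Laplace U = {u:.2f}, p = {p:.3f}")
```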

  18. Semiparametric Bayesian analysis of accelerated failure time models with cluster structures.

    Science.gov (United States)

    Li, Zhaonan; Xu, Xinyi; Shen, Junshan

    2017-11-10

    In this paper, we develop a Bayesian semiparametric accelerated failure time model for survival data with cluster structures. Our model allows distributional heterogeneity across clusters and accommodates their relationships through a density ratio approach. Moreover, a nonparametric mixture of Dirichlet processes prior is placed on the baseline distribution to yield full distributional flexibility. We illustrate through simulations that our model can greatly improve estimation accuracy by effectively pooling information from multiple clusters, while taking into account the heterogeneity in their random error distributions. We also demonstrate the implementation of our method using an analysis of the Mayo Clinic trial in primary biliary cirrhosis.

  19. rpsftm: An R Package for Rank Preserving Structural Failure Time Models.

    Science.gov (United States)

    Allison, Annabel; White, Ian R; Bond, Simon

    2017-12-04

    Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due to Robins and Tsiatis (1991) and was developed further by White et al. (1997, 1999). The method is randomisation based and uses only the randomised treatment group, observed event times, and treatment history to estimate a causal treatment effect. The treatment effect, ψ, is estimated by balancing counterfactual event times (those that would be observed if no treatment were received) between treatment groups. G-estimation is used to find the value of ψ such that a test statistic Z(ψ) = 0; this is usually the test statistic used in the intention-to-treat analysis, for example the log-rank test statistic. We present an R package, rpsftm, that implements the method.
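
A stripped-down sketch of the g-estimation idea (not the rpsftm package itself, and ignoring censoring and recensoring): counterfactual untreated times U(ψ) = T_off + e^ψ·T_on are compared between randomised arms over a grid of ψ, and the estimate is the ψ that brings the log-rank statistic closest to zero. The simulated switching pattern is illustrative.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(5)
n, true_psi = 400, -0.5
arm = rng.integers(0, 2, n)                     # 1 = randomised to treatment
U = rng.exponential(10.0, n)                    # latent untreated lifetimes
switch = (arm == 0) & (rng.random(n) < 0.3) & (U > 5.0)  # controls switching at t=5

T_obs = np.where(arm == 1, np.exp(-true_psi) * U, U)     # observed event times
T_obs = np.where(switch, 5.0 + np.exp(-true_psi) * (U - 5.0), T_obs)
T_on = np.where(arm == 1, T_obs, np.where(switch, T_obs - 5.0, 0.0))
T_off = T_obs - T_on

def z_stat(psi):
    u = T_off + np.exp(psi) * T_on              # counterfactual untreated times
    return logrank_test(u[arm == 1], u[arm == 0]).test_statistic

grid = np.linspace(-1.5, 0.5, 81)
print("estimated psi:", grid[np.argmin([z_stat(p) for p in grid])])  # near -0.5
```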

  20. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper considers only single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards discriminating between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.

  21. Real Time Fire Reconnaissance Satellite Monitoring System Failure Model

    Science.gov (United States)

    Nino Prieto, Omar Ariosto; Colmenares Guillen, Luis Enrique

    2013-09-01

    In this paper the Real Time Fire Reconnaissance Satellite Monitoring System is presented. This architecture is a legacy of the Detection System for Real-Time Physical Variables, which is undergoing a patent process in Mexico. The design methodology is Structured Analysis for Real Time (SA-RT) [8], and the software is specified in the LACATRE (Langage d'aide à la Conception d'Application multitâche Temps Réel) [9,10] real-time formal language. The system failure model is analyzed, and the proposal is based on AltaRica, a formal language for the design of critical systems and risk assessment. This formal architecture uses satellites as input sensors and was adapted from the original model, which is a design pattern for detecting physical variations in real time. The original design monitors events such as natural disasters and supports health-related applications, such as sickness monitoring and prevention, as in the Real Time Diabetes Monitoring System, among others. Some related work has been presented at the Mexican Space Agency (AEM) Creation and Consultation Forums (2010-2011), and at the international congress of the International Mexican Aerospace Science and Technology Society (SOMECYTA) held in San Luis Potosí, México (2012). This architecture will allow real-time fire satellite monitoring, which will reduce the damage and danger caused by fires that consume the forests and tropical forests of Mexico. This new proposal permits a new system that impacts disaster prevention, by combining national and international technologies and cooperation for the benefit of humankind.

  22. Semiparametric regression analysis of failure time data with dependent interval censoring.

    Science.gov (United States)

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. Commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and the visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted, and a data set on bladder cancer is analyzed for illustrative purposes.

  23. The Influence of Temperature on Time-Dependent Deformation and Failure in Granite: A Mesoscale Modeling Approach

    Science.gov (United States)

    Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick

    2017-09-01

    An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. We anticipate that the ...

  24. Resolving epidemic network failures through differentiated repair times

    DEFF Research Database (Denmark)

    Fagertun, Anna Manolova; Ruepp, Sarah Renée; Manzano, Marc

    2015-01-01

    In this study, the authors investigate epidemic failure spreading in large-scale transport networks under a generalised multi-protocol label switching control plane. By evaluating the effect of the epidemic failure spreading on the network, they design several strategies for cost-effective network performance improvement via differentiated repair times. First, they identify the most vulnerable and the most strategic nodes in the network. Then, via extensive event-driven simulations, they show that strategic placement of resources for improved failure recovery has better performance than randomly assigning lower repair times among the network nodes. They believe that the event-driven simulation model can be highly beneficial for network providers, since it could be used during the network planning process for facilitating cost-effective network survivability design.

  25. Development of container failure models

    International Nuclear Information System (INIS)

    Garisto, N.C.

    1990-01-01

    In order to produce a complete performance assessment for a Canadian waste vault, some prediction of container failure times is required. Data are limited; however, the effects of various possible failure scenarios on the rest of the vault model can be tested. For titanium and copper, the two materials considered in the Canadian program, data are available on the frequency of failures due to manufacturing defects; there is also an estimate of the expected size of such defects. It can be shown that the consequences of such small defects, in terms of the dose to humans, are acceptable. It is not clear, from a modelling point of view, whether titanium or copper is preferable.

  26. GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2013-01-01

    Vol. 49, No. 1 (2013), pp. 40-59. ISSN 0023-5954. R&D Projects: GA MŠk(CZ) 1M06047. Grant - others: GA MŠk(CZ) SVV 261315/2011. Keywords: accelerated failure time model; survival analysis; goodness-of-fit. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.563, year: 2013. http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  27. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints, using a physics-of-failure model. The accumulated damage is estimated based on a physics-of-failure model, the rainflow counting algorithm and Miner's rule. A threshold model is used for degradation modeling and failure criteria determination. The time-dependent accumulated damage is assumed to be linearly proportional to the time-dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of the Weibull ...
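
A sketch linking this kind of damage accumulation to a Weibull life estimate, assuming an illustrative S-N curve N(S) = A·S^(-m), Miner's rule as the failure criterion, and random per-cycle stress amplitudes in place of rainflow-counted cycles. All material constants are made up for the example.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(6)
A, m = 1e7, 3.0                          # assumed S-N curve constants

def life_cycles():
    damage, cycles = 0.0, 0
    while damage < 1.0:                  # Miner's rule failure criterion
        S = rng.lognormal(mean=3.0, sigma=0.2)   # random stress amplitude
        damage += 1.0 / (A * S ** -m)    # damage contribution of one cycle
        cycles += 1
    return cycles

lives = np.array([life_cycles() for _ in range(300)])
shape, loc, scale = weibull_min.fit(lives, floc=0)
print(f"Weibull shape {shape:.2f}, characteristic life {scale:.0f} cycles")
```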

  28. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions, using a structured continuous-time Markov chain (CTMC). The versatility of phase-type distributions enhances the flexibility and practicality of the models, allowing reliability studies to advance beyond previous work. This study applies these new models to a redundancy allocation problem (RAP). The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximation error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distributions are used for components. • A reliability model for nonrepairable systems is developed using a Markov chain. • The system is composed of heterogeneous components. • The model provides the exact standby-system reliability, not an approximation. • A redundancy allocation problem is used to show the usefulness of the model.
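
As a minimal CTMC illustration of the standby systems discussed (not the paper's model), the exact reliability of a two-unit cold standby system with an imperfect switch follows from the matrix exponential of the generator; the rates and switch success probability below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

lam, p = 0.01, 0.9                       # unit failure rate, switch success prob.
# States: 0 = primary up, 1 = standby up after a successful switch, 2 = failed.
Q = np.array([[-lam, p * lam, (1 - p) * lam],
              [0.0,  -lam,    lam],
              [0.0,   0.0,    0.0]])     # generator of the absorbing CTMC

def reliability(t):
    return expm(Q * t)[0, :2].sum()      # P(not yet absorbed | start in state 0)

print(reliability(100.0))                # exact standby-system reliability at t = 100
```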

  29. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank preserving structural failure time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a rank preserving structural failure time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data.

  30. Universal failure model for multi-unit systems with shared functionality

    International Nuclear Information System (INIS)

    Volovoi, Vitali

    2013-01-01

    A Universal Failure Model (UFM) is proposed for complex systems that rely on a large number of entities to perform a common function. Economy of scale or other considerations may dictate the need to pool resources for a common purpose, but the resulting strong coupling precludes grouping those components into modules. Existing system-level failure models rely on modularity to reduce modeling complexity, so the UFM fills an important gap in constructing efficient system-level models. Conceptually, the UFM resembles cellular automata (CA) infused with realistic failure mechanisms. Each component's behavior is determined by the balance between its strength (capacity) and its load (demand) share. If the load exceeds the component's capacity, the component fails and its load share is distributed among its neighbors (possibly with a time delay and load losses). The strength of components can degrade with time if the load exceeds an elastic threshold. The global load (demand) carried by the system can vary over time, with the peak values providing shocks to the system (e.g., wind loads on civil structures, electricity demand, stressful activities for human bodies, or drought in an ecosystem). Unlike the models traditionally studied with CA, the focus of the presented model is on system reliability, and specifically on the study of time-to-failure distributions rather than steady-state patterns and average time-to-failure characteristics. In this context, the relationships between the types of failure distributions and the parameters of the failure model are discussed.
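
A toy version of the load-share mechanism described above, with no claim of reproducing the UFM: components on a ring carry load shares, a failed component hands its load to the nearest surviving neighbours, and global demand slowly grows; the recorded quantity is the time step at which the last component fails. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
strength = rng.weibull(3.0, n) * 2.0        # random component capacities
load = np.full(n, 0.5)                      # initial per-component demand
alive = np.ones(n, dtype=bool)

def redistribute(i):
    for step in (1, -1):                    # pass half the load each way
        j = (i + step) % n
        while not alive[j]:                 # skip already-failed neighbours
            j = (j + step) % n
        load[j] += load[i] / 2
    load[i] = 0.0

t = 0
while alive.any():
    t += 1
    load[alive] *= 1.01                     # slowly growing global demand (shock-free)
    for i in np.where(alive & (load > strength))[0]:
        alive[i] = False                    # overload failure
        if alive.any():
            redistribute(i)                 # local load transfer to neighbours
print("time to system failure:", t)
```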

  31. Characterization and modeling of SET/RESET cycling induced read-disturb failure time degradation in a resistive switching memory

    Science.gov (United States)

    Su, Po-Cheng; Hsu, Chun-Chi; Du, Sin-I.; Wang, Tahui

    2017-12-01

    Read-operation-induced disturbance of the SET state in a tungsten oxide resistive switching memory is investigated. We observe that the reduction of oxygen vacancy density during read-disturb follows a power-law dependence on cumulative read-disturb time. Our study shows that the SET-state read-disturb immunity progressively degrades by orders of magnitude as the SET/RESET cycle number increases. To explore the cause of the read-disturb degradation, we perform a constant voltage stress to emulate high-field stress effects in SET/RESET cycling. We find that the read-disturb failure time degradation is attributed to high-field stress-generated oxide traps. Since the stress-generated traps may substitute for some of the oxygen vacancies in forming conductive percolation paths in a switching dielectric, a stressed cell has a reduced oxygen vacancy density in the SET state, which in turn results in a shorter read-disturb failure time. We develop an analytical read-disturb degradation model including both cycling-induced oxide trap creation and read-disturb-induced oxygen vacancy reduction. Our model can well reproduce the measured read-disturb failure time degradation in a cycled cell without using fitting parameters.

  32. Accelerated failure time regression for backward recurrence times and current durations

    DEFF Research Database (Denmark)

    Keiding, N; Fine, J P; Hansen, O H

    2011-01-01

    Backward recurrence times in stationary renewal processes and current durations in dynamic populations observed at a cross-section may yield estimates of underlying interarrival times or survival distributions under suitable stationarity assumptions. Regression models have been proposed for these situations, but accelerated failure time models have the particularly attractive feature that they are preserved when going from the backward recurrence times to the underlying survival distribution of interest. This simple fact has recently been noticed in a sociological context and is here illustrated by a study of current duration of time to pregnancy ...

  33. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    Science.gov (United States)

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

    Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing the SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on the outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained by combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  34. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    Science.gov (United States)

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  35. Mediation Analysis with Survival Outcomes: Accelerated Failure Time Versus Proportional Hazards Models

    Directory of Open Access Journals (Sweden)

    Lois A Gelfand

    2016-03-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  36. On rate-state and Coulomb failure models

    Science.gov (United States)

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified ...

  37. Uncertainties in container failure time predictions

    International Nuclear Information System (INIS)

    Williford, R.E.

    1990-01-01

    Stochastic variations in the local chemical environment of a geologic waste repository can cause corresponding variations in container corrosion rates and failure times, and thus in radionuclide release rates. This paper addresses how well future variations in repository chemistries must be known in order to predict container failure times that are bounded by a finite time period within the repository lifetime. Preliminary results indicate that a 5000-year scatter in predicted container failure times requires that repository chemistries be known to within ±10% over the repository lifetime. These are small uncertainties compared to current estimates. 9 refs., 3 figs.

  38. On a Stochastic Failure Model under Random Shocks

    Science.gov (United States)

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moment of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but after a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
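
The delayed-shock mechanism is easy to simulate: shocks arrive from a nonhomogeneous Poisson process (generated here by thinning) and each carries an independent random activation delay, so the failure time is the minimum of shock time plus delay. The intensity and delay law below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(8)

def nhpp_times(rate_fn, rate_max, horizon):
    t, out = 0.0, []
    while True:
        t += rng.exponential(1 / rate_max)
        if t > horizon:
            return np.array(out)
        if rng.random() < rate_fn(t) / rate_max:   # thinning acceptance step
            out.append(t)

def failure_time(horizon=100.0):
    shocks = nhpp_times(lambda t: 0.1 + 0.01 * t, rate_max=1.1, horizon=horizon)
    if len(shocks) == 0:
        return np.inf                              # no shock within the horizon
    delays = rng.exponential(5.0, size=len(shocks))    # random activation delays
    return (shocks + delays).min()

samples = np.array([failure_time() for _ in range(5000)])
print("median time to failure:", np.median(samples[np.isfinite(samples)]))
```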

  39. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains while considering uncertainties. In this study, the Bayesian model averaging (BMA) method is used to identify the influential pipe-dependent and time-dependent covariates while considering model uncertainties, whereas a Bayesian Weibull proportional hazard model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is applied to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better than that of the Cox proportional hazard model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies for water mains. • Consider the uncertainties in failure prediction. • Improve the prediction capability of water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure.

  20. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields, including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both the regression parameters and the baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not require estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well in practical situations.
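
    To make the data type concrete, here is a minimal sketch that generates interval-censored failure times under a proportional hazards model with a Weibull baseline and a periodic inspection schedule. The coefficients and the yearly inspection grid are hypothetical, and this is only the data-generation step, not the estimation procedures proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 8
        beta = np.array([0.7, -0.4])          # hypothetical regression coefficients
        X = rng.normal(size=(n, 2))           # two covariates per subject

        # Proportional hazards with Weibull baseline: S(t|x) = exp(-(t/2)^1.5 * exp(x'beta)).
        # Inverting S at a Uniform(0,1) draw yields the true failure time.
        u = rng.uniform(size=n)
        T = 2.0 * (-np.log(u) / np.exp(X @ beta)) ** (1.0 / 1.5)

        # Subjects are only inspected on a yearly grid, so each failure time is
        # observed only as the interval between consecutive inspections.
        for t in T:
            left = np.floor(t)
            print(f"true T = {t:6.2f}   observed interval = ({left:.0f}, {left + 1:.0f}]")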

  1. Damage-Based Time-Dependent Modeling of Paraglacial to Postglacial Progressive Failure of Large Rock Slopes

    Science.gov (United States)

    Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.

    2018-01-01

    Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after the Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to the present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of the Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during the Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.

  2. Ductile failure modeling

    DEFF Research Database (Denmark)

    Benzerga, Ahmed Amine; Leblond, Jean Baptiste; Needleman, Alan

    2016-01-01

    Ductile fracture of structural metals occurs mainly by the nucleation, growth and coalescence of voids. Here an overview of continuum models for this type of failure is given. The most widely used current framework is described and its limitations discussed. Much work has focused on extending void growth models to account for non-spherical initial void shapes and for shape changes during growth. This includes cases of very low stress triaxiality, where the voids can close up to micro-cracks during the failure process. The void growth models have also been extended to consider the effect of plastic anisotropy, or the influence of nonlocal effects that bring a material size scale into the models. Often the voids are not present in the material from the beginning, and realistic nucleation models are important. The final failure process by coalescence of neighboring voids is an issue that has been given...

  3. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Science.gov (United States)

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
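
    The core computation behind such kernel machine modeling can be sketched as kernel ridge regression of log failure times on a small set of markers with a Gaussian kernel. The sketch below ignores censoring, which the paper's estimation and testing procedures handle, and all data and tuning parameters are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy data: log survival time depends non-linearly on a "pathway" of 3 markers.
        n = 200
        Z = rng.normal(size=(n, 3))
        log_t = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2] + 0.3 * rng.normal(size=n)

        def gaussian_kernel(A, B, rho=1.0):
            """K(a, b) = exp(-||a - b||^2 / rho), evaluated pairwise."""
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-d2 / rho)

        # Kernel ridge regression: fitted values are K (K + lambda I)^{-1} y.
        K = gaussian_kernel(Z, Z)
        alpha = np.linalg.solve(K + 0.5 * np.eye(n), log_t)
        fitted = K @ alpha
        r2 = 1 - ((log_t - fitted) ** 2).sum() / ((log_t - log_t.mean()) ** 2).sum()
        print(f"in-sample R^2 of the kernel machine fit: {r2:.2f}")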

  4. Reliability modelling for wear out failure period of a single unit system

    OpenAIRE

    Arekar, Kirti; Ailawadi, Satish; Jain, Rinku

    2012-01-01

    The present paper deals with two time-shifted density models for the wear-out failure period of a single-unit system. The study considered the time-shifted Gamma and Normal distributions. Wear-out failures occur as a result of deterioration processes or mechanical wear, and their probability of occurrence increases with time. The failure rate as a function of time decreases in the early failure period and increases in the wear-out period. Failure rates for time-shifted distributions and expression for m...
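
    As a numerical illustration of a time-shifted model, the sketch below computes the hazard (failure rate) of a Gamma distribution shifted by a wear-out onset time, using h(t) = f(t)/S(t); the shift, shape and scale values are invented for the example.

        import numpy as np
        from scipy import stats

        # Time-shifted Gamma: no failures can occur before the onset time t0.
        t0, shape, scale = 5.0, 3.0, 2.0                 # hypothetical parameters
        dist = stats.gamma(shape, loc=t0, scale=scale)   # loc implements the time shift

        t = np.linspace(5.5, 25.0, 5)
        hazard = dist.pdf(t) / dist.sf(t)                # h(t) = f(t) / S(t)
        for ti, hi in zip(t, hazard):
            print(f"t = {ti:5.2f}   h(t) = {hi:.4f}")    # increasing: wear-out behavior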

  5. Ductile shear failure or plug failure of spot welds modelled by modified Gurson model

    DEFF Research Database (Denmark)

    Nielsen, Kim Lau; Tvergaard, Viggo

    2010-01-01

    For resistance spot welded shear-lap specimens, interfacial failure under ductile shearing or ductile plug failure is analyzed numerically, using a shear-modified Gurson model. The interfacial shear failure occurs under very low stress triaxiality, where the original Gurson model would predict...

  6. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...
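
    A minimal life-table-style computation of the discrete hazard, the central quantity in the methods covered here, is sketched below on invented grouped data: the hazard in interval j is the number of failures divided by the number at risk, and survival is the running product of one minus the hazards.

        import numpy as np

        # Invented failure data grouped into yearly intervals.
        deaths   = np.array([5, 8, 12, 9, 6])      # d_j: failures in interval j
        censored = np.array([2, 1, 3, 2, 10])      # w_j: withdrawals in interval j

        # Number entering each interval (no late entries assumed).
        at_risk = (deaths + censored)[::-1].cumsum()[::-1]

        hazard = deaths / at_risk                  # discrete hazard lambda_j = d_j / n_j
        survival = np.cumprod(1 - hazard)          # S_j = prod_{k <= j} (1 - lambda_k)
        for j, (h, s) in enumerate(zip(hazard, survival), start=1):
            print(f"interval {j}: hazard = {h:.3f}   survival = {s:.3f}")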

  7. Semiparametric accelerated failure time cure rate mixture models with competing risks.

    Science.gov (United States)

    Choi, Sangbum; Zhu, Liang; Huang, Xuelin

    2018-01-15

    Modern medical treatments have substantially improved survival rates for many chronic diseases and have generated considerable interest in developing cure fraction models for survival data with a non-ignorable cured proportion. Statistical analysis of such data may be further complicated by competing risks that involve multiple types of endpoints. Regression analysis of competing risks is typically undertaken via a proportional hazards model adapted on cause-specific hazard or subdistribution hazard. In this article, we propose an alternative approach that treats competing events as distinct outcomes in a mixture. We consider semiparametric accelerated failure time models for the cause-conditional survival function that are combined through a multinomial logistic model within the cure-mixture modeling framework. The cure-mixture approach to competing risks provides a means to determine the overall effect of a treatment and insights into how this treatment modifies the components of the mixture in the presence of a cure fraction. The regression and nonparametric parameters are estimated by a nonparametric kernel-based maximum likelihood estimation method. Variance estimation is achieved through resampling methods for the kernel-smoothed likelihood function. Simulation studies show that the procedures work well in practical settings. Application to a sarcoma study demonstrates the use of the proposed method for competing risk data with a cure fraction. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Predicting kidney graft failure using time-dependent renal function covariates

    NARCIS (Netherlands)

    de Bruijne, Mattheus H. J.; Sijpkens, Yvo W. J.; Paul, Leendert C.; Westendorp, Rudi G. J.; van Houwelingen, Hans C.; Zwinderman, Aeilko H.

    2003-01-01

    Chronic rejection and recurrent disease are the major causes of late graft failure in renal transplantation. To assess outcome, most researchers use Cox proportional hazard analysis with time-fixed covariates. We developed a model adding time-dependent renal function covariates to improve the

  9. Modelling the failure risk for water supply networks with interval-censored data

    International Nuclear Information System (INIS)

    García-Mora, B.; Debón, A.; Santamaría, C.; Carrión, A.

    2015-01-01

    In reliability, some failures are not observed at the exact moment of their occurrence. In that case it can be more convenient to approximate them by a time interval. In this study, we have used a generalized non-linear model developed for interval-censored data to treat the lifetime of a pipe from its time of installation until its failure. The aim of this analysis was to identify those network characteristics that may affect the risk of failure, and an exhaustive validation of the analysis is performed. The results indicated that certain characteristics of the network increased the risk of pipe failure: greater pipe length and pressure, a small diameter, some materials used in the manufacture of pipes, and traffic on the street where the pipes are located. Once the model has been correctly fitted to our data, we also provide simple tables that allow companies to easily calculate a pipe's probability of failure in the future. - Highlights: • We model the first failure time in a water supply company from Spain. • We fit arbitrarily interval-censored data with a generalized non-linear model. • The results are validated. We provide simple tables to easily calculate probabilities of no failure at different times.

  10. A real-time expert system for nuclear power plant failure diagnosis and operational guide

    International Nuclear Information System (INIS)

    Naito, N.; Sakuma, A.; Shigeno, K.; Mori, N.

    1987-01-01

    A real-time expert system (DIAREX) has been developed to diagnose plant failure and to offer a corrective operational guide for boiling water reactor (BWR) power plants. The failure diagnosis model used in DIAREX was systematically developed, based mainly on deep knowledge, to cover heuristics. Complex paradigms for knowledge representation were adopted, i.e., the process representation language and the failure propagation tree. The system is composed of a knowledge base, knowledge base editor, preprocessor, diagnosis processor, and display processor. The DIAREX simulation test has been carried out for many transient scenarios, including multiple failures, using a real-time full-scope simulator modeled after the 1100-MW(electric) BWR power plant. Test results showed that DIAREX was capable of diagnosing a plant failure quickly and of providing a corrective operational guide with a response time fast enough to offer valuable information to plant operators

  11. A Markov Model for Common-Cause Failures

    DEFF Research Database (Denmark)

    Platz, Ole

    1984-01-01

    A continuous time four-state Markov chain is shown to cover several of the models that have been used for describing dependencies between failures of components in redundant systems. Among these are the models derived by Marshall and Olkin and by Freund and models for one-out-of-three and two...

  12. Hydraulic mechanism and time-dependent characteristics of loose gully deposits failure induced by rainfall

    Directory of Open Access Journals (Sweden)

    Yong Wu

    2015-12-01

    Failure of loose gully deposits under the effect of rainfall contributes to the potential risk of debris flow. In the past decades, research on the hydraulic mechanisms and time-dependent characteristics of loose deposit failure has been frequently reported; however, practical measures for mitigating debris flow are still lacking. In this context, a time-dependent model was established to determine the changes in the water table of loose deposits using hydraulic and topographic theories. In addition, the variation in water table with elapsed time was analyzed. Formulas for calculating the hydrodynamic and hydrostatic pressures on each strip and block unit of the deposit were proposed, and the slope stability and failure risk of the loose deposits were assessed based on the time-dependent hydraulic characteristics of the established model. Finally, the failure mechanism of deposits based on infinite slope theory was illustrated, with an example, by calculating the sliding force, anti-sliding force and residual sliding force applied to each slice. The results indicate that failure of gully deposits under the effect of rainfall is the result of continuously increasing hydraulic pressure and water table. The time-dependent characteristics of loose deposit failure are determined by the hydraulic properties, the drainage area of interest, and the rainfall pattern, duration and intensity.

  13. Total time on test processes and applications to failure data analysis

    International Nuclear Information System (INIS)

    Barlow, R.E.; Campo, R.

    1975-01-01

    This paper describes a new method for analyzing data. The method applies to non-negative observations such as times to failure of devices and survival times of biological organisms and involves a plot of the data. These plots are useful in choosing a probabilistic model to represent the failure behavior of the data. They also furnish information about the failure rate function and aid in its estimation. An important feature of these data plots is that incomplete data can be analyzed. The underlying random variables are, however, assumed to be independent and identically distributed. The plots have a theoretical basis, and converge to a transform of the underlying probability distribution as the sample size increases
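
    A small sketch of the scaled total time on test (TTT) transform described here: for ordered failure times, the statistic at the i-th failure is the accumulated test time divided by the total, and the shape of the resulting plot against i/n (concave above the diagonal for an increasing failure rate, convex below it for a decreasing one) guides model choice. The Weibull sample below is synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.sort(rng.weibull(2.0, size=10) * 100)   # ordered failure times (IFR case)
        n = t.size

        # Total time on test at the i-th order statistic:
        # TTT_i = sum_{j <= i} t_(j) + (n - i) * t_(i), scaled by the total test time.
        i = np.arange(1, n + 1)
        ttt = np.cumsum(t) + (n - i) * t
        phi = ttt / t.sum()

        for ui, pi in zip(i / n, phi):
            print(f"i/n = {ui:.1f}   scaled TTT = {pi:.3f}")   # lies above the diagonal here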

  14. Bounds for the time to failure of hierarchical systems of fracture

    DEFF Research Database (Denmark)

    Gómez, J.B.; Vázquez-Prada, M.; Moreno, Y.

    1999-01-01

    For years limited Monte Carlo simulations have led to the suspicion that the time to failure of hierarchically organized load-transfer models of fracture is nonzero for sets of infinite size. This fact could have profound significance in engineering practice and also in geophysics. Here, we develop an exact algebraic iterative method to compute the successive time intervals for individual breaking in systems of height n in terms of the information calculated in the previous height n - 1. As a byproduct of this method, rigorous lower and higher bounds for the time to failure of very large systems...

  15. An analytical model for interactive failures

    International Nuclear Information System (INIS)

    Sun Yong; Ma Lin; Mathew, Joseph; Zhang Sheng

    2006-01-01

    In some systems, failures of certain components can interact with each other and accelerate the failure rates of these components. Such failures are defined as interactive failures. Interactive failure is a prevalent cause of failure in complex systems, particularly in mechanical systems. The failure risk of an asset will be underestimated if the interactive effect is ignored, so interactive failures need to be considered when failure risk is assessed. However, the literature is silent on previous research work in this field. This paper introduces the concepts of interactive failure, develops an analytical model to analyse this type of failure quantitatively, and verifies the model using case studies and experiments.

  16. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small-probability event. Such a probability level is difficult to assess efficiently. Second, the structural mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to become increasingly time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered as a way to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145-54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a few mechanical model computations. The efficiency of the method is first proved on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
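
    The Importance Sampling half of AK-IS can be sketched in a few lines: sample around the FORM design point and reweight by the density ratio. In AK-IS a Kriging metamodel stands in for the expensive limit state; here a cheap analytic limit state is called directly, and the linear g and its design point are textbook-style placeholders, not the paper's case studies.

        import numpy as np

        rng = np.random.default_rng(4)

        def g(u):
            """Limit state in standard normal space; failure when g(u) <= 0 (toy example)."""
            return 4.5 - u[:, 0] - u[:, 1]

        # FORM design point of this linear limit state: failure point closest to the origin.
        ustar = np.array([2.25, 2.25])
        beta = np.linalg.norm(ustar)                  # reliability index, about 3.18

        # Importance Sampling: standard normal density recentred at the design point.
        N = 20000
        u = rng.normal(size=(N, 2)) + ustar
        logw = -0.5 * (u ** 2).sum(axis=1) + 0.5 * ((u - ustar) ** 2).sum(axis=1)
        pf = np.mean((g(u) <= 0) * np.exp(logw))
        print(f"beta = {beta:.2f}   IS estimate of Pf = {pf:.2e}")   # exact: Phi(-beta) ~ 7.4e-4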

  17. A modified GO-FLOW methodology with common cause failure based on Discrete Time Bayesian Network

    International Nuclear Information System (INIS)

    Fan, Dongming; Wang, Zili; Liu, Linlin; Ren, Yi

    2016-01-01

    Highlights: • Identification of particular causes of failure for common cause failure analysis. • Comparison of two formalisms (GO-FLOW and Discrete Time Bayesian Network) and establishment of the correlation between them. • Mapping of the GO-FLOW model into a Bayesian network model. • Calculation of GO-FLOW models with common cause failures based on DTBN. - Abstract: The GO-FLOW methodology is a success-oriented system reliability modelling technique for multi-phase missions involving complex time-dependent, multi-state and common cause failure (CCF) features. However, the analysis algorithm cannot easily handle multiple shared signals and CCFs. In addition, the simulative algorithm is time-consuming when many multi-state components exist in the model, and the multiple time points of phased mission problems increase the difficulty of the analysis method. In this paper, the Discrete Time Bayesian Network (DTBN) and the GO-FLOW methodology are integrated by unified mapping rules. Based on these rules, the GO-FLOW operators can be mapped into a DTBN; subsequently, a complete GO-FLOW model with complex characteristics (e.g. phased mission, multi-state, and CCF) can be converted to the isomorphic DTBN and easily analyzed by utilizing the DTBN. With mature algorithms and tools, the multi-phase mission reliability parameter can be efficiently obtained via the proposed approach without explicitly treating the shared signals and the various complex logic operations. Meanwhile, CCFs can also be handled in the computing process.

  18. A Prognostic Model for Estimating the Time to Virologic Failure in HIV-1 Infected Patients Undergoing a New Combination Antiretroviral Therapy Regimen

    Directory of Open Access Journals (Sweden)

    Micheli Valeria

    2011-06-01

    Background: HIV-1 genotypic susceptibility scores (GSSs) were proven to be significant prognostic factors of fixed time-point virologic outcomes after combination antiretroviral therapy (cART) switch/initiation. However, their relative hazard for the time to virologic failure has not been thoroughly investigated, and an expert system that is able to predict how long a new cART regimen will remain effective has never been designed. Methods: We analyzed patients of the Italian ARCA cohort starting a new cART from 1999 onwards either after virologic failure or as treatment-naïve. The time to virologic failure was the endpoint, from the 90th day after treatment start, defined as the first HIV-1 RNA > 400 copies/ml, censoring at the last available HIV-1 RNA before treatment discontinuation. We assessed the relative hazard/importance of GSSs according to distinct interpretation systems (Rega, ANRS and HIVdb) and other covariates by means of Cox regression and random survival forests (RSF). Prediction models were validated via the bootstrap and the c-index measure. Results: The dataset included 2337 regimens from 2182 patients, of whom 733 were previously treatment-naïve. We observed 1067 virologic failures over 2820 person-years. Multivariable analysis revealed that low GSSs of cART were independently associated with the hazard of virologic failure, along with several other covariates. Evaluation of predictive performance yielded a modest ability of the Cox regression to predict the virologic endpoint (c-index ≈ 0.70), while RSF showed a better performance (c-index ≈ 0.73). Conclusions: GSSs of cART and several other covariates were investigated using linear and non-linear survival analysis. RSF models are a promising approach for the development of a reliable system that predicts time to virologic failure better than Cox regression. Such models might represent a significant improvement over the current methods for monitoring and optimization of cART.

  19. [Hazard function and life table: an introduction to the failure time analysis].

    Science.gov (United States)

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.

  20. Modeling Epidemic Network Failures

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Fagertun, Anna Manolova

    2013-01-01

    This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur, resulting in an epidemic. We model the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate the SID model's behavior and impact on the network performance, as well as the severity of the infection spreading. The simulations are carried out in OPNET Modeler. The model provides an important input to epidemic connection recovery mechanisms, and can, due to its flexibility and versatility, be used to evaluate multiple epidemic scenarios in various network types.
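
    A compartmental Susceptible-Infected-Disabled model of the kind named here can be sketched as a small ODE system. The infection and disabling rates below are invented, and the paper's discrete-event OPNET implementation may differ in detail.

        import numpy as np
        from scipy.integrate import solve_ivp

        beta, delta = 0.4, 0.1        # hypothetical infection and disabling rates

        def sid(t, y):
            """S -> I by contact with infected nodes; I -> D permanently disabled."""
            s, i, d = y
            return [-beta * s * i, beta * s * i - delta * i, delta * i]

        sol = solve_ivp(sid, (0.0, 60.0), [0.99, 0.01, 0.0],
                        t_eval=np.linspace(0.0, 60.0, 7))
        for t, s, i, d in zip(sol.t, *sol.y):
            print(f"t = {t:4.0f}   S = {s:.2f}   I = {i:.2f}   D = {d:.2f}")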

  1. Experimental models of hepatotoxicity related to acute liver failure

    Energy Technology Data Exchange (ETDEWEB)

    Maes, Michaël [Department of In Vitro Toxicology and Dermato-Cosmetology, Vrije Universiteit Brussel, Brussels (Belgium); Vinken, Mathieu, E-mail: mvinken@vub.ac.be [Department of In Vitro Toxicology and Dermato-Cosmetology, Vrije Universiteit Brussel, Brussels (Belgium); Jaeschke, Hartmut [Department of Pharmacology, Toxicology and Therapeutics, University of Kansas Medical Center, Kansas City (United States)

    2016-01-01

    Acute liver failure can be the consequence of various etiologies, with most cases arising from drug-induced hepatotoxicity in Western countries. Despite advances in this field, the management of acute liver failure continues to be one of the most challenging problems in clinical medicine. The availability of adequate experimental models is of crucial importance to provide a better understanding of this condition and to allow identification of novel drug targets, testing the efficacy of new therapeutic interventions and acting as models for assessing mechanisms of toxicity. Experimental models of hepatotoxicity related to acute liver failure rely on surgical procedures, chemical exposure or viral infection. Each of these models has a number of strengths and weaknesses. This paper specifically reviews commonly used chemical in vivo and in vitro models of hepatotoxicity associated with acute liver failure. - Highlights: • The murine APAP model is very close to what is observed in patients. • The Gal/ET model is useful to study TNFα-mediated apoptotic signaling mechanisms. • Fas receptor activation is an effective model of apoptosis and secondary necrosis. • The ConA model is a relevant model of auto-immune hepatitis and viral hepatitis. • Multiple time point evaluation needed in experimental models of acute liver injury.

  2. Prediction of the time-dependent failure rate for normally operating components taking into account the operational history

    International Nuclear Information System (INIS)

    Vrbanic, I.; Simic, Z.; Sljivac, D.

    2008-01-01

    The prediction of the time-dependent failure rate is studied taking into account the operational history of a component. Such predictions are used, for example, in system modeling for probabilistic safety analysis, in order to evaluate the impact of equipment aging and maintenance strategies on the risk measures considered. We have selected a time-dependent model for the failure rate which is based on the Weibull distribution and the principle of proportional age reduction by equipment overhauls. Estimation of the parameters that determine the failure rate is considered, including the definition of the operational history model and the likelihood function for the Bayesian analysis of the parameters of normally operating repairable components. The operational history is provided as a time axis with defined times of overhauls and failures. A demonstration example presents the predicted future behavior for seven different operational histories. (orig.)
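
    The failure rate model named here can be illustrated with a short sketch: a Weibull hazard evaluated at an effective age that each overhaul reduces by a fixed fraction (proportional age reduction). The shape, scale, overhaul effectiveness and overhaul times below are invented for the example.

        import numpy as np

        shape, scale, eps = 2.5, 8.0, 0.6   # Weibull shape/scale, overhaul effectiveness
        overhauls = [5.0, 10.0]             # overhaul times in the operational history

        def effective_age(t):
            """Each overhaul removes a fraction eps of the accumulated effective age."""
            age, last = 0.0, 0.0
            for tau in overhauls:
                if t <= tau:
                    break
                age = (1 - eps) * (age + (tau - last))
                last = tau
            return age + (t - last)

        def failure_rate(t):
            """Weibull hazard evaluated at the overhaul-reduced effective age."""
            v = effective_age(t)
            return (shape / scale) * (v / scale) ** (shape - 1)

        for t in [4.0, 6.0, 9.0, 11.0, 14.0]:
            print(f"t = {t:4.1f}   effective age = {effective_age(t):5.2f}"
                  f"   h(t) = {failure_rate(t):.4f}")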

  3. Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods

    Science.gov (United States)

    Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed

    2018-04-01

    This study evaluated failure probabilities of jack-up units in the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. The surface elevation for each sea state was represented by the Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated through a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. The stochastic response was then analyzed, using Matlab codes developed on a personal computer, to determine the failure probability for excessive deck displacement in the framework of time-dependent reliability analysis. Results from the study indicate that the failure probability increases with the severity of the sea state, which represents a longer return period. Although these results agree with those of a study of a similar jack-up model using a time-independent method at higher values of maximum allowable deck displacement, they differ at lower values of the criterion, where that study reported that the failure probability decreases as the severity of the sea state increases.

  4. Probability of Loss of Assured Safety in Systems with Multiple Time-Dependent Failure Modes: Incorporation of Delayed Link Failure in the Presence of Aleatory Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sallaberry, Cedric Jean-Marie. [Engineering Mechanics Corp. of Columbus, OH (United States)

    2018-02-01

    Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.

  5. Generic Sensor Failure Modeling for Cooperative Systems

    Science.gov (United States)

    Jäger, Georg; Zug, Sebastian

    2018-01-01

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such systems' safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques. PMID:29558435

  6. A quasi-independence model to estimate failure rates

    International Nuclear Information System (INIS)

    Colombo, A.G.

    1988-01-01

    The use of a quasi-independence model to estimate failure rates is investigated. Gate valves of nuclear plants are considered, and two qualitative covariates are taken into account: plant location and reactor system. Independence between the two covariates and an exponential failure model are assumed. The failure rate of the components of a given system and plant is assumed to be constant, but it may vary from one system to another and from one plant to another. This leads to the analysis of a contingency table. A particular feature of the model is the different operating times of the components in the various cells, which can also be equal to zero. The concept of independence of the covariates is then replaced by that of quasi-independence. The latter definition, however, is used in a broader sense than usual. Suitable statistical tests are discussed and a numerical example illustrates the use of the method. (author)
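
    A minimal sketch of this kind of model, under stated assumptions: failure counts in each plant-by-system cell are treated as Poisson with rate lambda_ij = r_i * c_j and the cell's operating time as exposure, and the row and column effects are obtained by alternating maximum likelihood updates. Cells with zero operating time simply drop out of the fit, which is the quasi-independence feature. The counts and exposures are invented.

        import numpy as np

        # Failure counts n[i, j] and operating times T[i, j] (hours) for plant i, system j.
        n = np.array([[3.0, 1.0, 0.0],
                      [2.0, 4.0, 1.0]])
        T = np.array([[1.0e4, 0.5e4, 0.0],       # zero exposure: cell drops out of the fit
                      [2.0e4, 1.5e4, 0.4e4]])

        r = np.ones(n.shape[0])                  # plant effects
        c = np.ones(n.shape[1])                  # system effects
        for _ in range(200):                     # alternating MLE updates for lambda_ij = r_i c_j
            r = n.sum(axis=1) / (T * c).sum(axis=1)
            c = n.sum(axis=0) / (T.T * r).sum(axis=1)

        lam = np.outer(r, c)
        print("estimated failure rates (per hour):")
        print(np.where(T > 0, lam, np.nan))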

  7. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    Science.gov (United States)

    Gromek, Katherine Emily

    A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real time simulation of system failure processes, so that the system level reliability modeling would constitute inferences from checking the status of component level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. Contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of the PoF-based system reliability modeling, new approaches to the learning and the autonomy properties of the intelligent agents, and modeling interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  8. The Use of Conditional Probability Integral Transformation Method for Testing Accelerated Failure Time Models

    Directory of Open Access Journals (Sweden)

    Abdalla Ahmed Abdel-Ghaly

    2016-06-01

    This paper suggests the use of the conditional probability integral transformation (CPIT) method as a goodness of fit (GOF) technique in the field of accelerated life testing (ALT), specifically for validating the underlying distributional assumption in the accelerated failure time (AFT) model. The method is based on transforming the data into independent and identically distributed (i.i.d.) Uniform(0, 1) random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributional assumptions in the AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that this method performs well in the case of the exponential and lognormal distributions. Finally, a real-life example is provided to illustrate the application of the proposed procedure.
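
    The probability integral transform step can be sketched as follows: failure times are mapped through the fitted CDF and the resulting sample is tested for uniformity. This simplified version plugs the MLE in directly and uses a Kolmogorov-Smirnov test as a stand-in for the modified Watson statistic, so it only approximates the conditional (CPIT) construction.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)

        # Failure times from a (simulated) life test; exponentiality is to be validated.
        t = rng.exponential(scale=100.0, size=50)

        # Probability integral transform with the fitted exponential CDF (MLE scale = mean).
        u = 1.0 - np.exp(-t / t.mean())

        # Uniformity test of the transformed sample (KS here, not the Watson statistic).
        stat, pvalue = stats.kstest(u, "uniform")
        print(f"KS statistic = {stat:.3f}   p-value = {pvalue:.3f}")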

  9. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

    The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of failure probability density...

  10. Failure probabilistic model of CNC lathes

    International Nuclear Information System (INIS)

    Wang Yiqiang; Jia Yazhou; Yu Junyi; Zheng Yuhua; Yi Shangfeng

    1999-01-01

    A field failure analysis of computerized numerical control (CNC) lathes is described. Field failure data were collected over a period of two years on approximately 80 CNC lathes. A coding system for the failure data was devised and a failure analysis data bank of CNC lathes was established. The failure position and subsystem, failure mode and cause were analyzed to identify the weak subsystems of a CNC lathe. Also, a failure probabilistic model of CNC lathes was developed using fuzzy multicriteria comprehensive evaluation.

  11. Prediction of dynamic expected time to system failure

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Lee, Chong Chul [Korea Nuclear Fuel Co., Ltd., Taejon (Korea, Republic of)

    1998-12-31

    The mean time to failure (MTTF), expressing the mean value of the system life, is a measure of system effectiveness. To estimate the remaining life of a component and/or system, the dynamic mean time to failure concept is suggested. It is a time-dependent property that depends on the status of the components. The Kalman filter is used to estimate the reliability of components using on-line information (directly measured sensor output or device-specific diagnostics in the intelligent sensor) in the form of a numerical value (state factor). This factor considers the persistency of the fault condition and the confidence level in the measurement. If there is a complex system with many components, the calculated reliabilities of the components are combined, which results in the dynamic MTTF of the system. Illustrative examples are discussed. The results show that the dynamic MTTF can well express the component and system failure behaviour whether or not any kind of failure has occurred. 9 refs., 6 figs. (Author)


  13. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Background: Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied for survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high-dimensional (m > 10,000) data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results: The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions: Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high-dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs superbly with limited computational studies.

  14. Analysis of terminated TOP accidents in the FTR using the Los Alamos failure model

    International Nuclear Information System (INIS)

    Mast, P.K.; Scott, J.H.

    1978-01-01

    A new fuel pin failure model (the Los Alamos Failure Model), based on a linear life fraction rule failure criterion, has been developed and is reported herein. Excellent agreement between calculated and observed failure time and location has been obtained for a number of TOP TREAT tests. Because of the nature of the failure criterion used, the code has also been used to investigate the extent of cladding damage incurred in terminated as well as unterminated TOP transients in the FTR

  15. Brittle Creep Failure, Critical Behavior, and Time-to-Failure Prediction of Concrete under Uniaxial Compression

    Directory of Open Access Journals (Sweden)

    Yingchong Wang

    2015-01-01

    Understanding the time-dependent brittle deformation behavior of concrete as a main building material is fundamental for lifetime prediction and engineering design. Herein, we present experimental measurements of brittle creep failure and critical behavior, and the dependence of the time-to-failure on the secondary creep rate, for concrete under sustained uniaxial compression. A complete evolution process of creep failure is achieved. Three typical creep stages are observed: the primary (decelerating), secondary (steady-state), and tertiary (accelerating) creep stages. The time-to-failure shows sample-specificity although all samples exhibit a similar creep process. All specimens exhibit a critical power-law behavior with an exponent of -0.51 ± 0.06, approximately equal to the theoretical value of -1/2. All samples have a long-term secondary stage characterized by a constant strain rate that dominates the lifetime of a sample. The average creep rate, expressed as the total creep strain over the lifetime (tf - t0) of each specimen, shows a power-law dependence on the secondary creep rate with an exponent of -1. This could provide a clue to the prediction of the time-to-failure of concrete, based on monitoring of the creep behavior at the steady stage.
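
    The reported exponent of -1 describes a Monkman-Grant-style relation in which lifetime scales inversely with the secondary creep rate; the sketch below recovers such an exponent by log-log regression on synthetic specimens, not on the paper's data.

        import numpy as np

        rng = np.random.default_rng(8)

        # Synthetic specimens: lifetime proportional to 1 / secondary creep rate,
        # with lognormal scatter; the exponent is recovered by log-log regression.
        rate = 10 ** rng.uniform(-8, -5, size=30)          # secondary creep rates (1/s)
        lifetime = 0.02 / rate * 10 ** rng.normal(0, 0.1, size=30)

        slope, intercept = np.polyfit(np.log10(rate), np.log10(lifetime), 1)
        print(f"estimated power-law exponent = {slope:.2f}   (theory: -1)")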

  16. Cladding failure probability modeling for risk evaluations of fast reactors

    International Nuclear Information System (INIS)

    Mueller, C.J.; Kramer, J.M.

    1987-01-01

    This paper develops the methodology to incorporate cladding failure data and associated modeling into risk evaluations of liquid metal-cooled fast reactors (LMRs). Current US innovative designs for metal-fueled pool-type LMRs take advantage of inherent reactivity feedback mechanisms to limit reactor temperature increases in response to classic anticipated-transient-without-scram (ATWS) initiators. Final shutdown without reliance on engineered safety features can then be accomplished if sufficient time is available for operator intervention to terminate fission power production and/or provide auxiliary cooling prior to significant core disruption. Coherent cladding failure under the sustained elevated temperatures of ATWS events serves as one indicator of core disruption. In this paper we combine uncertainties in cladding failure data with uncertainties in calculations of ATWS cladding temperature conditions to calculate probabilities of cladding failure as a function of the time for accident recovery


  18. Review of constitutive models and failure criteria for concrete

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jeong Moon; Choun, Young Sun [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

    The general behavior, constitutive models, and failure criteria of concrete are reviewed. Current constitutive models for concrete cannot capture all aspects of its mechanical behavior. Among the various constitutive models, damage models are recommended for properly describing the structural behavior of concrete containment buildings, because failure modes and post-failure behavior are important in containment buildings. A constitutive model that can describe the behavior of concrete in tension is required because containment buildings reach a failure state under ultimate internal pressure. Therefore, a thorough study of the behavior and models of concrete and reinforced concrete under tensile stress states has to be performed. There are two types of failure criteria for containment buildings: structural failure criteria and leakage failure criteria. For reinforced or prestressed concrete containment buildings, concrete cracking does not imply structural failure of the containment building, because the reinforcement or post-tensioning system is able to resist tensile stress up to the yield stress. Therefore leakage failure criteria will be reached before structural failure criteria, and a strain-based failure criterion for concrete has to be established. 120 refs., 59 figs., 1 tab. (Author)

  19. Degradation failure model of self-healing metallized film pulse capacitor

    International Nuclear Information System (INIS)

    Sun Quan; Zhong Zheng; Zhou Jinglun; Zhao Jianyin; Wei Xiaofeng; Guo Liangfu; Zhou Pizhang; Li Yizheng; Chen Dehuai

    2004-01-01

    High energy density self-healing metallized film pulse capacitors are applied in all kinds of laser facilities for their power conditioning systems, whose reliability and expense are directly affected by the reliability level of the capacitors. Based on related research in the literature, this paper analyses the degradation mechanism of the capacitor and presents a new degradation failure model: the Gauss-Poisson model. The Gauss-Poisson model divides capacitor degradation into natural degradation and sudden (burst) degradation. Compared with the traditional Weibull failure model, the new model is more precise in evaluating the lifetime of the capacitor, and the life tests for this model are simpler to design and lower in time and expense. The Gauss-Poisson model should prove a useful and widely applicable degradation failure model. (author)
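
    One way to read the Gauss-Poisson idea is as a degradation path with a Gaussian drift term (natural degradation) plus Poisson-counted sudden losses (burst degradation from self-healing events), with failure when the accumulated loss crosses a threshold. The sketch below follows that reading; all parameters are invented and the paper's exact formulation may differ.

        import numpy as np

        rng = np.random.default_rng(9)

        def lifetime(threshold=0.10, dt=1.0, drift=1e-4, sigma=2e-5,
                     shot_rate=5e-3, shot_loss=4e-3, t_max=2000.0):
            """Capacitance loss = Gaussian drift + Poisson-counted self-healing losses."""
            loss, t = 0.0, 0.0
            while t < t_max:
                loss += drift * dt + sigma * np.sqrt(dt) * rng.normal()
                loss += shot_loss * rng.poisson(shot_rate * dt)
                t += dt
                if loss >= threshold:
                    return t
            return np.inf

        lives = np.array([lifetime() for _ in range(500)])
        print(f"mean life = {lives[np.isfinite(lives)].mean():.0f} shots, "
              f"P(survive t_max) = {np.isinf(lives).mean():.2f}")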

  20. A physical probabilistic model to predict failure rates in buried PVC pipelines

    International Nuclear Information System (INIS)

    Davis, P.; Burn, S.; Moglia, M.; Gould, S.

    2007-01-01

    For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from large volumes of existing historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exists and confident forecasts of future pipe failures cannot be made from historical data alone. To solve this problem, this paper presents a physical probabilistic model, which has been developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, inherent defect size is treated as a stochastic variable, and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects, subjected to typical operating conditions. As with inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate to model uncertainty in predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length/per year) as a function of pipe age. To validate the model, predicted failure rates are compared to aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. In the absence of actual operating pressure data in the UKWIR database, typical...
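
    The probabilistic structure described here can be sketched in a few lines of Monte Carlo: sample inherent defect sizes from a 2-parameter Weibull distribution, map each defect to a lifetime through a fracture-mechanics-style relation, and read off the failure rate as a function of pipe age. The defect-to-lifetime relation and all constants below are schematic placeholders, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(6)
        N = 100000

        # Inherent defect depth a0 (mm); the 2-parameter Weibull follows the fractography.
        a0 = 0.5 * rng.weibull(1.8, size=N)
        a0 = np.clip(a0, 1e-6, None)             # guard against zero draws

        # Schematic slow-crack-growth life: larger initial defects fail sooner.
        # T = C * a0^(-m); C and m are illustrative placeholders.
        C, m = 80.0, 1.5
        life = C * a0 ** (-m)

        # Empirical failure rate (per pipe, per year) as a function of age.
        ages = np.arange(0, 101, 20)
        for lo, hi in zip(ages[:-1], ages[1:]):
            at_risk = (life > lo).sum()
            failures = ((life > lo) & (life <= hi)).sum()
            rate = failures / at_risk / (hi - lo)
            print(f"age {lo:3d}-{hi:3d} yr: failure rate = {rate:.2e} per pipe-year")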

  1. Comparing risk of failure models in water supply networks using ROC curves

    International Nuclear Information System (INIS)

    Debon, A.; Carrion, A.; Cabrera, E.; Solano, H.

    2010-01-01

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.
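
    The comparison criterion used here can be illustrated with a tiny sketch: each model assigns a failure score to every pipe, and the area under the ROC curve, computed below through the rank-based Mann-Whitney identity, measures how well the score separates pipes that broke from pipes that did not. The scores are synthetic, mimicking an informative model and an uninformative baseline.

        import numpy as np

        rng = np.random.default_rng(7)

        def auc(scores, labels):
            """Area under the ROC curve via the Mann-Whitney U identity."""
            pos, neg = scores[labels == 1], scores[labels == 0]
            greater = (pos[:, None] > neg[None, :]).mean()
            ties = (pos[:, None] == neg[None, :]).mean()
            return greater + 0.5 * ties

        # Synthetic pipe data: label 1 means the pipe failed in the observation window.
        y = rng.binomial(1, 0.2, size=1000)
        model_score = np.where(y == 1, rng.normal(0.7, 0.5, 1000),
                               rng.normal(0.0, 0.5, 1000))
        naive_score = rng.normal(size=1000)       # uninformative baseline

        print(f"informative model AUC = {auc(model_score, y):.3f}")   # well above 0.5
        print(f"baseline model AUC    = {auc(naive_score, y):.3f}")   # close to 0.5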


  3. Time shift in slope failure prediction between unimodal and bimodal modeling approaches

    Science.gov (United States)

    Ciervo, Fabio; Casini, Francesca; Nicolina Papa, Maria; Medina, Vicente

    2016-04-01

    Together with the need for more appropriate mathematical expressions to describe hydro-mechanical soil processes, a challenging issue is the need to consider the effects induced by terrain heterogeneities on the physical mechanisms; taking into account the implications of these heterogeneities for time-dependent hydro-mechanical variables would improve the predictive capacity of models, such as those used in early warning systems. The presence of heterogeneities in partially saturated slopes results in irregular propagation of the moisture and suction front. To mathematically represent the "dual-implication" generally induced by heterogeneities in the hydraulic behavior of the terrain, several bimodal hydraulic models have been presented in the literature to replace the conventional sigmoidal/unimodal functions; this presupposes that the scale of the macrostructure is comparable with the local (Darcy) scale, so that the Richards model can be assumed adequate to reproduce the processes mathematically. The purpose of this work is to focus on the differences in simulated flow infiltration processes and slope stability conditions that originate from the preliminary choice of hydraulic model and, in turn, from different approaches to evaluating the factor of safety (FoS). In particular, the results of two approaches are compared. The first combines the conventional expression of the FoS under saturated conditions with the widely used van Genuchten-Mualem hydraulic model. The second combines a generalized FoS equation for the infinite-slope model under variably saturated soil conditions (Lu and Godt, 2008) with the bimodal functions of Romano et al. (2011) to describe the hydraulic response. The extension of the above-mentioned approach to the bimodal context is based on an analytical method to assess the effects of the hydraulic properties on soil shear, developed by integrating a bimodal lognormal hydraulic function
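    The fragment below is a minimal sketch of the first approach's ingredients: a unimodal van Genuchten retention curve feeding one common form of the unsaturated infinite-slope FoS with a suction-stress term, in the spirit of Lu and Godt (2008). All soil parameters are illustrative, and a bimodal curve could be mimicked by a weighted sum of two such retention functions.

        # Sketch: van Genuchten effective saturation and an unsaturated
        # infinite-slope factor of safety (illustrative parameters only).
        import numpy as np

        def vg_Se(psi, alpha=0.5, n=2.0):        # psi = matric suction [kPa]
            m = 1.0 - 1.0 / n
            return (1.0 + (alpha * psi) ** n) ** (-m)

        def fos(psi, beta=np.radians(35), phi=np.radians(30),
                c=2.0, gamma=18.0, z=2.0):
            # One common form: FS = tan(phi)/tan(beta) + 2c/(gamma z sin 2beta)
            #   - sigma_s (tan(beta)+cot(beta)) tan(phi)/(gamma z),
            # with suction stress sigma_s = -Se * psi.
            sigma_s = -vg_Se(psi) * psi
            return (np.tan(phi) / np.tan(beta)
                    + 2 * c / (gamma * z * np.sin(2 * beta))
                    - sigma_s * (np.tan(beta) + 1 / np.tan(beta)) * np.tan(phi)
                    / (gamma * z))

        for psi in [50.0, 10.0, 1.0, 0.1]:       # wetting front: suction decreases
            print(f"suction {psi:6.1f} kPa -> FoS = {fos(psi):.2f}")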

  4. Clinical findings and survival time in dogs with advanced heart failure.

    Science.gov (United States)

    Beaumier, Amelie; Rush, John E; Yang, Vicky K; Freeman, Lisa M

    2018-04-10

    Dogs with advanced heart failure are a clinical challenge for veterinarians but there are no studies reporting the clinical features and outcome of this population. To describe clinical findings and outcome of dogs with advanced heart failure caused by degenerative mitral valve disease (DMVD). Fifty-four dogs with advanced heart failure because of DMVD. For study purposes, advanced heart failure was defined as recurrence of congestive heart failure signs despite receiving the initially prescribed dose of pimobendan, angiotensin-converting-enzyme inhibitor (ACEI), and furosemide >4 mg/kg/day. Data were collected for the time of diagnosis of Stage C heart failure and the time of diagnosis of advanced heart failure. Date of death was recorded. At the diagnosis of advanced heart failure, doses of pimobendan (n = 30), furosemide (n = 28), ACEI (n = 13), and spironolactone (n = 4) were increased, with ≥1 new medications added in most dogs. After the initial diagnosis of advanced heart failure, 38 (70%) dogs had additional medication adjustments (median = 2 [range, 0-27]), with the final total medication number ranging from 2-10 (median = 5). Median survival time after diagnosis of advanced heart failure was 281 days (range, 3-885 days). Dogs receiving a furosemide dose >6.70 mg/kg/day had significantly longer median survival times (402 days [range, 3-885 days] versus 129 days [range, 9-853 days]; P = .017). Dogs with advanced heart failure can have relatively long survival times. Higher furosemide dose and non-hospitalization were associated with longer survival. Copyright © 2018 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  5. Modeling the failure data of a repairable equipment with bathtub type failure intensity

    International Nuclear Information System (INIS)

    Pulcini, G.

    2001-01-01

    The paper deals with the reliability modeling of the failure process of large and complex repairable equipment whose failure intensity shows a bathtub-type non-monotonic behavior. A non-homogeneous Poisson process arising from the superposition of two power law processes is proposed, and the characteristics and mathematical details of the proposed model are illustrated. A graphical approach is also presented, which allows one to determine whether the proposed model can adequately describe a given set of failure data. A graphical method for obtaining crude but easy estimates of the model parameters is then illustrated, and more accurate estimates based on the maximum likelihood method are provided. Finally, two numerical applications are given to illustrate the proposed model and the estimation procedures
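    A small sketch of the superposition idea follows: adding a power-law intensity with shape below one (early failures) to one with shape above one (wear-out) yields a bathtub intensity, and the mean value functions simply add. The parameter values are invented for illustration, not taken from the paper.

        # Sketch: bathtub intensity from two superposed power-law processes.
        import numpy as np

        def intensity(t, th1=50.0, b1=0.4, th2=4000.0, b2=3.0):
            # lambda(t) = (b1/th1)(t/th1)^(b1-1) + (b2/th2)(t/th2)^(b2-1)
            return (b1 / th1) * (t / th1) ** (b1 - 1) + (b2 / th2) * (t / th2) ** (b2 - 1)

        def mean_failures(t, th1=50.0, b1=0.4, th2=4000.0, b2=3.0):
            # mean value function m(t) = (t/th1)^b1 + (t/th2)^b2
            return (t / th1) ** b1 + (t / th2) ** b2

        for t in [1.0, 10.0, 100.0, 1000.0, 5000.0]:
            print(f"t={t:7.0f}h  intensity={intensity(t):.4f}  E[N(t)]={mean_failures(t):.2f}")
        # b1 < 1 dominates early (decreasing intensity); b2 > 1 dominates late (wear-out).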

  6. Discrete competing risk model with application to modeling bus-motor failure data

    International Nuclear Information System (INIS)

    Jiang, R.

    2010-01-01

    Failure data are often modeled using continuous distributions. However, a discrete distribution can be appropriate for modeling interval or grouped data. When failure data come from a complex system, a simple discrete model can be inappropriate for modeling such data. This paper presents two types of discrete distributions. One is formed by exponentiating an underlying distribution, and the other is a two-fold competing risk model. The paper focuses on two special distributions: (a) exponentiated Poisson distribution and (b) competing risk model involving a geometric distribution and an exponentiated Poisson distribution. The competing risk model has a decreasing-followed-by-unimodal mass function and a bathtub-shaped failure rate. Five classical data sets on bus-motor failures can be simultaneously and appropriately fitted by a general 5-parameter competing risk model with the parameters being functions of the number of successive failures. The lifetime and aging characteristics of the fitted distribution are analyzed.
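    The two-fold construction described here is easy to reproduce numerically: combine a geometric CDF with an exponentiated Poisson CDF as competing risks, then recover the mass function and the discrete failure rate by differencing. The sketch below uses invented parameters, not the paper's fitted 5-parameter model.

        # Sketch: discrete competing-risk distribution (geometric + exponentiated Poisson).
        import numpy as np
        from scipy.stats import poisson, geom

        k = np.arange(0, 40)
        F_geom = geom.cdf(k + 1, p=0.04)          # geometric CDF (support shifted to 0,1,...)
        F_ep = poisson.cdf(k, mu=20.0) ** 3.0     # exponentiated Poisson, exponent theta=3
        F = 1.0 - (1.0 - F_geom) * (1.0 - F_ep)   # competing-risk (series) combination

        pmf = np.diff(np.concatenate([[0.0], F])) # mass function by differencing the CDF
        surv = 1.0 - np.concatenate([[0.0], F])[:-1]   # P(X >= k)
        hazard = pmf / surv                       # discrete failure rate r(k) = p(k)/P(X>=k)
        for i in [0, 5, 10, 20, 30]:
            print(f"k={i:2d}  p={pmf[i]:.4f}  r={hazard[i]:.4f}")
        # Here the hazard is roughly constant early (memoryless geometric part) and then
        # rises steeply as the exponentiated-Poisson part takes over; the paper's fitted
        # variants produce the full bathtub shape described in the abstract.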

  7. MATHEMATICAL MODEL OF WEAR CHARACTER FAILURE IN AIRCRAFT OPERATION

    OpenAIRE

    Радько, Олег Віталійович; Молдован, Володимир Дмитрович

    2016-01-01

    In this paper a mathematical model of wear-related failures during aircraft operation is developed. The distribution function, distribution density, and failure rate of the gamma distribution are calculated at low coefficients of variation and a relatively low average wear rate for the current time, which varies quite widely. The results agree well with physical concepts and can be used to build different models of aircraft. The gamma distribution is a pretty good model for...
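    The quantities named in the abstract are all one-liners with scipy; the sketch below evaluates the gamma density, distribution function and failure rate for illustrative shape and scale values (the paper's fitted parameters are not given here).

        # Sketch: gamma-distributed wear-out time; density, CDF and failure rate.
        import numpy as np
        from scipy.stats import gamma

        shape, scale = 9.0, 500.0                 # illustrative; CV = 1/sqrt(shape)
        t = np.linspace(100, 9000, 5)
        pdf = gamma.pdf(t, shape, scale=scale)    # failure density f(t)
        cdf = gamma.cdf(t, shape, scale=scale)    # failure distribution F(t)
        hazard = pdf / gamma.sf(t, shape, scale=scale)  # failure rate f(t)/(1-F(t))
        for ti, fi, Fi, hi in zip(t, pdf, cdf, hazard):
            print(f"t={ti:7.0f}h  f={fi:.2e}  F={Fi:.4f}  lambda={hi:.2e}")
        print("CV =", 1 / np.sqrt(shape))         # ~0.33, a 'low coefficient of variation'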

  8. Modeling and real time simulation of an HVDC inverter feeding a weak AC system based on commutation failure study.

    Science.gov (United States)

    Mankour, Mohamed; Khiat, Mounir; Ghomri, Leila; Chaker, Abdelkader; Bessalah, Mourad

    2018-06-01

    This paper presents modeling and study of a 12-pulse HVDC (High Voltage Direct Current) link based on real-time simulation, where the HVDC inverter is connected to a weak AC system. To study the dynamic performance of the HVDC link, two serious kinds of disturbance are applied at the HVDC converters: the first is a single-phase-to-ground AC fault and the second is a DC-link-to-ground fault. The study is based on two different modes of analysis: the first tests the performance of the DC control, and the second focuses on the effect of the protection function on system behavior. This real-time simulation considers the strength of the AC system to which the link is connected and its relation to the capacity of the DC link. The results are validated by means of the RT-lab platform using the digital real-time simulator Hypersim (OP-5600). The results show the effect of the DC control and the influence of the protection function in reducing the probability of commutation failures and in helping the inverter to recover from commutation failure even when the DC control fails to eliminate it. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  9. The comparison of proportional hazards and accelerated failure time models in analyzing the first birth interval survival data

    Science.gov (United States)

    Faruk, Alfensi

    2018-03-01

    Survival analysis is a branch of statistics focused on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular for analyzing the effects of several covariates on survival time. However, the assumption of proportional hazards in the PH model is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. On the other hand, the accelerated failure time (AFT) models do not assume proportional hazards in the survival data, as the PH model does. The AFT models can therefore be used as an alternative to the PH model if the proportional hazards assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate among the considered models. Results of the best fitted model (log-normal AFT model) showed that covariates such as the woman's educational level, husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
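    The model-selection step translates directly into code. The sketch below, assuming the lifelines package (whose AFT fitters expose an AIC_ attribute) and fully synthetic right-censored data mimicking one covariate, fits Weibull and log-normal AFT models and compares them by AIC, mirroring the paper's procedure.

        # Sketch: AFT model comparison by AIC on synthetic right-censored durations.
        import numpy as np
        import pandas as pd
        from lifelines import WeibullAFTFitter, LogNormalAFTFitter

        rng = np.random.default_rng(1)
        n = 800
        educ = rng.integers(0, 3, n)                               # e.g. education level
        T_true = np.exp(2.5 + 0.3 * educ + rng.normal(0, 0.6, n))  # log-normal event times
        C = rng.exponential(60, n)                                 # censoring times
        df = pd.DataFrame({"duration": np.minimum(T_true, C),
                           "event": (T_true <= C).astype(int),
                           "educ": educ})

        for Fitter in (WeibullAFTFitter, LogNormalAFTFitter):
            m = Fitter().fit(df, duration_col="duration", event_col="event")
            print(Fitter.__name__, "AIC =", round(m.AIC_, 1))
        # The lower-AIC model (here log-normal, matching how the data were generated)
        # would be selected, as in the paper's comparison.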

  10. Strong exploration of a cast iron pipe failure model

    International Nuclear Information System (INIS)

    Moglia, M.; Davis, P.; Burn, S.

    2008-01-01

    A physical probabilistic failure model for buried cast iron pipes is described, which is based on the fracture mechanics of the pipe failure process. Such a model is useful in the asset management of buried pipelines. The model is then applied within a Monte-Carlo simulation framework after adding stochasticity to input variables. Historical failure rates are calculated based on a database of 81,595 pipes and their recorded failures, and model parameters are chosen to provide the best fit between historical and predicted failure rates. This provides an estimated corrosion rate distribution, which agrees well with experimental results. The first model design was chosen in a deliberately simplistic fashion in order to allow further strong exploration of the model assumptions. Consequently, first runs of the initial model resulted in a poor quantitative and qualitative fit with regard to failure rates. However, by exploring natural additional assumptions, such as those relating to stochastic loads, a set of assumptions was chosen which improved the model to a stage where an acceptable fit was achieved. The model bridges the gap between the micro- and macro-level, and this is the novelty of the approach. In this model, data can be used both from the macro-level, in terms of failure rates, and from the micro-level, in terms of corrosion rates

  11. A RAT MODEL OF HEART FAILURE INDUCED BY ISOPROTERENOL AND A HIGH SALT DIET

    Science.gov (United States)

    Rat models of heart failure (HF) show varied pathology and time to disease outcome, dependent on induction method. We found that subchronic (4wk) isoproterenol (ISO) infusion in Spontaneously Hypertensive Heart Failure (SHHF) rats caused cardiac injury with minimal hypertrophy. O...

  12. Probabilistic physics-of-failure models for component reliabilities using Monte Carlo simulation and Weibull analysis: a parametric study

    International Nuclear Information System (INIS)

    Hall, P.L.; Strutt, J.E.

    2003-01-01

    In reliability engineering, component failures are generally classified in one of three ways: (1) early life failures; (2) failures having random onset times; and (3) late life or 'wear out' failures. When the time-distribution of failures of a population of components is analysed in terms of a Weibull distribution, these failure types may be associated with shape parameters β having values β < 1, β ≈ 1, and β > 1, respectively. Early life failures are frequently attributed to poor design (e.g. poor materials selection) or problems associated with manufacturing or assembly processes. We describe a methodology for the implementation of physics-of-failure models of component lifetimes in the presence of parameter and model uncertainties. This treats uncertain parameters as random variables described by some appropriate statistical distribution, which may be sampled using Monte Carlo methods. The number of simulations required depends upon the desired accuracy of the predicted lifetime. Provided that the number of sampled variables is relatively small, an accuracy of 1-2% can be obtained using typically 1000 simulations. The resulting collection of times-to-failure are then sorted into ascending order and fitted to a Weibull distribution to obtain a shape factor β and a characteristic life-time η. Examples are given of the results obtained using three different models: (1) the Eyring-Peck (EP) model for corrosion of printed circuit boards; (2) a power-law corrosion growth (PCG) model which represents the progressive deterioration of oil and gas pipelines; and (3) a random shock-loading model of mechanical failure. It is shown that for any specific model the values of the Weibull shape parameters obtained may be strongly dependent on the degree of uncertainty of the underlying input parameters. Both the EP and PCG models can yield a wide range of values of β, from β>1, characteristic of wear-out behaviour, to β<1, characteristic of early-life failure, depending on the degree of
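    The general recipe is short: sample the uncertain parameters, compute a time-to-failure per draw, and fit a Weibull to the resulting sample. The sketch below uses an invented power-law corrosion-growth law and parameter spreads, not the paper's calibrated models.

        # Sketch: Monte Carlo physics-of-failure lifetimes fitted to a Weibull.
        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(2)
        n_sim = 1000                                  # ~1-2% accuracy per the paper
        wall = 10.0                                   # wall thickness [mm] (illustrative)
        kc = rng.lognormal(np.log(0.4), 0.4, n_sim)   # uncertain growth coefficient
        m = rng.normal(0.8, 0.08, n_sim)              # uncertain growth exponent
        ttf = (wall / kc) ** (1.0 / m)                # depth(t) = kc * t^m reaches the wall

        beta, loc, eta = weibull_min.fit(ttf, floc=0) # location fixed at zero
        print(f"Weibull shape beta = {beta:.2f}, characteristic life eta = {eta:.0f}")
        # Widening the parameter spreads above drags beta down, reproducing the paper's
        # observation that input uncertainty strongly affects the fitted shape factor.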

  13. An overview of the recent advances in delay-time-based maintenance modelling

    International Nuclear Information System (INIS)

    Wang, Wenbin

    2012-01-01

    Industrial plant maintenance is an area which has enormous potential to be improved. It is also an area that has attracted significant attention from mathematical modellers because of the random nature of plant failures. This paper reviews the recent advances in delay-time-based maintenance modelling, which is one of the mathematical techniques for optimising inspection planning and related problems. The delay-time concept divides a plant failure process into two stages: from new until the point of an identifiable defect, and then from this point to failure. The first stage is called the normal working stage and the second stage is called the failure delay-time stage. If the distributions of the two stages can be quantified, the relationship between the number of failures and the inspection interval can be readily established. This can then be used for optimizing the inspection interval and other related decision variables. In this review, we pay particular attention to new methodological developments and industrial applications of the delay-time-based models over the last few decades. The use of the delay-time concept and modelling techniques in areas other than maintenance is also reviewed. Future research directions are also highlighted. - Highlights: Reviewed the recent advances in delay-time-based maintenance models and applications. Compared the delay-time-based models with other models. Focused on methodologies and applications. Pointed out future research directions.
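    The basic delay-time calculation is worth making concrete. Under the standard assumptions (defects arriving as a Poisson process at rate lam, delay times with CDF F, perfect inspection at interval T), the expected number of failures per interval is lam times the integral of F(T-u) over the interval; the rate and delay distribution below are invented for illustration.

        # Sketch: expected failures per inspection interval in the delay-time model.
        import numpy as np
        from scipy.integrate import quad
        from scipy.stats import expon

        lam = 0.1                                     # defect arrival rate [1/day]
        F = lambda h: expon.cdf(h, scale=30.0)        # delay-time CDF, mean 30 days

        def expected_failures(T):
            val, _ = quad(lambda u: F(T - u), 0.0, T)
            return lam * val

        for T in [7, 14, 30, 60, 90]:
            nf = expected_failures(T)
            print(f"T={T:3d}d  E[failures]={nf:5.2f}  E[defects found at inspection]={lam*T - nf:5.2f}")
        # Longer intervals let more defects mature into failures; trading this off
        # against inspection cost is what delay-time models optimise.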

  14. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    Science.gov (United States)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).

  15. Regression analysis of case K interval-censored failure time data in the presence of informative censoring.

    Science.gov (United States)

    Wang, Peijie; Zhao, Hui; Sun, Jianguo

    2016-12-01

    Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For the problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and a two-step procedure is presented for estimation. In addition, the asymptotic properties of the proposed estimators of the regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.

  16. A new method for explicit modelling of single failure event within different common cause failure groups

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Redundancy and diversity are the main principles of the safety systems in the nuclear industry. Implementation of safety component redundancy has been acknowledged as an effective approach for assuring high levels of system reliability. The existence of redundant components, identical in most cases, implies a probability of their simultaneous failure due to a shared cause, a common cause failure. This paper presents a new method for the explicit modelling of a single component failure event within multiple common cause failure groups simultaneously. The method is based on a modification of the frequently utilised Beta Factor parametric model. The motivation for the development of this method lies in the fact that one of the most widespread software tools for fault tree and event tree modelling as part of probabilistic safety assessment does not offer the option of simultaneously assigning a single failure event to multiple common cause failure groups. In that sense, the proposed method can be seen as an enhancement of the explicit modelling of common cause failures. A standard standby safety system is selected as a case study for the application and study of the proposed methodology. The results and insights point to improved, more transparent and more comprehensive models within probabilistic safety assessment.

  17. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Science.gov (United States)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that the future state transitions do not depend only on the present state (Markov assumption) but also on the past through time since entry in the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-day decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.
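    To make the semi-Markov idea concrete, the sketch below simulates a toy three-state heart failure model (home, hospitalised, dead) in which the exit time from each state depends on time since entry via competing Weibull sojourn times. The states, shapes, scales and horizon are all invented; the paper's vertical-modeling specification is not reproduced here.

        # Sketch: Monte Carlo simulation of a toy semi-Markov disease model.
        import numpy as np

        rng = np.random.default_rng(3)
        # competing Weibull exit times per state: destination -> (shape, scale in months)
        exits = {"home": {"hosp": (1.4, 24.0), "dead": (1.2, 60.0)},
                 "hosp": {"home": (0.9, 1.0), "dead": (1.1, 6.0)}}

        def simulate(horizon=60.0):
            t, state, months_in_hosp = 0.0, "home", 0.0
            while state != "dead" and t < horizon:
                # latent sojourn time to each destination; earliest one wins
                draws = {dst: rng.weibull(a) * b for dst, (a, b) in exits[state].items()}
                dst = min(draws, key=draws.get)
                dwell = min(draws[dst], horizon - t)
                if state == "hosp":
                    months_in_hosp += dwell
                t += dwell
                state = dst if t < horizon else state
            return t, months_in_hosp

        res = np.array([simulate() for _ in range(20000)])
        print("mean survival (months, capped at 60):", res[:, 0].mean().round(1))
        print("mean months hospitalised:", res[:, 1].mean().round(2))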

  18. Modeling dynamic effects of promotion on interpurchase times

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    textabstractIn this paper we put forward a duration model to analyze the dynamic effects of marketing-mix variables on interpurchase times. We extend the accelerated failure-time model with an autoregressive structure. An important feature of our model is that it allows for different long-run and

  19. The multi-class binomial failure rate model for the treatment of common-cause failures

    International Nuclear Information System (INIS)

    Hauptmanns, U.

    1995-01-01

    The impact of common cause failures (CCF) on PSA results for NPPs is in sharp contrast with the limited quality which can be achieved in their assessment. This is due to the dearth of observations and cannot be remedied in the short run. Therefore, the methods employed for calculating failure rates should be devised so as to make the best use of the few available observations on CCF. The Multi-Class Binomial Failure Rate (MCBFR) model achieves this by assigning observed failures to different classes according to their technical characteristics and applying the BFR formalism to each of these. The results are hence determined by a superposition of BFR-type expressions for each class, each with its own coupling factor. The model thus obtained flexibly reproduces the dependence of CCF rates on failure multiplicity suggested by the observed failure multiplicities. This is demonstrated by evaluating CCFs observed for combined impulse pilot valves in German NPPs. (orig.) [de
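    In the BFR formalism, shocks arrive at some rate and each of the m redundant components fails independently with a coupling probability p, so events of multiplicity k occur at a rate proportional to a binomial term; the multi-class version sums such terms over classes. The sketch below uses invented shock rates and coupling probabilities.

        # Sketch: multiplicity profile of a two-class BFR superposition.
        from math import comb

        m = 4  # redundancy level (illustrative)
        classes = [  # (label, shock rate per year, coupling probability) - invented
            ("design",      0.02, 0.9),   # strongly coupled class
            ("maintenance", 0.10, 0.3),   # weakly coupled class
        ]

        def rate_multiplicity(k):
            # sum over classes of mu_c * C(m,k) * p_c^k * (1-p_c)^(m-k)
            return sum(mu * comb(m, k) * p**k * (1 - p)**(m - k)
                       for _, mu, p in classes)

        for k in range(1, m + 1):
            print(f"rate of events failing exactly {k}/{m} components:"
                  f" {rate_multiplicity(k):.4f} /yr")
        # Splitting observed CCFs into classes lets the multiplicity profile follow
        # the data instead of a single coupling factor, which is the MCBFR idea.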

  20. Model-based failure detection for cylindrical shells from noisy vibration measurements.

    Science.gov (United States)

    Candy, J V; Fisher, K A; Guidry, B L; Chambers, D H

    2014-12-01

    Model-based processing is a theoretically sound methodology to address difficult objectives in complex physical problems involving multi-channel sensor measurement systems. It involves the incorporation of analytical models of both physical phenomenology (complex vibrating structures, noisy operating environment, etc.) and the measurement processes (sensor networks and including noise) into the processor to extract the desired information. In this paper, a model-based methodology is developed to accomplish the task of online failure monitoring of a vibrating cylindrical shell externally excited by controlled excitations. A model-based processor is formulated to monitor system performance and detect potential failure conditions. The objective of this paper is to develop a real-time, model-based monitoring scheme for online diagnostics in a representative structural vibrational system based on controlled experimental data.

  1. Matching the results of a theoretical model with failure rates obtained from a population of non-nuclear pressure vessels

    International Nuclear Information System (INIS)

    Harrop, L.P.

    1982-02-01

    Failure rates for non-nuclear pressure vessel populations are often regarded as showing a decrease with time. Empirical evidence can be cited which supports this view. On the other hand theoretical predictions of PWR type reactor pressure vessel failure rates have shown an increasing failure rate with time. It is shown that these two situations are not necessarily incompatible. If adjustments are made to the input data of the theoretical model to treat a non-nuclear pressure vessel population, the model can produce a failure rate which decreases with time. These adjustments are explained and the results obtained are shown. (author)

  2. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    Science.gov (United States)

    Teverovsky, Alexander

    2017-01-01

    The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. Currently, the time to inception of intrinsic wear-out failures in high-quality capacitors has become much greater than the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to the reduction of breakdown voltages and decreasing times to failure is suggested. The effect of the defect size has been analyzed using a thermal runaway model of failures. The adequacy of highly accelerated life testing (HALT) to predict reliability at normal operating conditions and the limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.

  3. A Bayesian network approach for modeling local failure in lung cancer

    International Nuclear Information System (INIS)

    Oh, Jung Hun; Craft, Jeffrey; Al Lozi, Rawan; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O; Bradley, Jeffrey D; El Naqa, Issam

    2011-01-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient method to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.
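    The core mechanics of such a network are easy to show in miniature: given conditional probability tables and an observed biomarker, the failure probability is obtained by marginalising over the unobserved variables. The two-node structure and all probabilities below are invented purely for illustration.

        # Sketch: a toy Bayesian network for local failure, evaluated by enumeration.
        P_dose_high = 0.4                 # P(high tumor dose) - invented prior
        # P(local failure | dose, marker), keyed (dose, marker) - invented CPT
        P_fail = {(1, 0): 0.10, (1, 1): 0.30, (0, 0): 0.25, (0, 1): 0.55}

        def p_fail_given_marker(marker):
            # dose is unobserved: marginalise P(fail | dose, marker) over P(dose);
            # in this toy structure the marker node is independent of dose
            return sum((P_dose_high if d else 1 - P_dose_high) * P_fail[(d, marker)]
                       for d in (0, 1))

        print("P(failure | marker high):", round(p_fail_given_marker(1), 3))
        print("P(failure | marker low): ", round(p_fail_given_marker(0), 3))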

  4. Pig models for the human heart failure syndrome

    DEFF Research Database (Denmark)

    Hunter, Ingrid; Terzic, Dijana; Zois, Nora Elisabeth

    2014-01-01

    Human heart failure remains a challenging illness despite advances in the diagnosis and treatment of heart failure patients. There is a need for further improvement of our understanding of the failing myocardium and its molecular deterioration. Porcine models provide an important research tool in this respect, as molecular changes can be examined in detail, which is simply not feasible in human patients. However, the human heart failure syndrome is based on symptoms and signs, where pig models mostly mimic the myocardial damage, but without decisive data on clinical presentation and, therefore, a heart... to elucidate the human heart failure syndrome.

  5. Centrifuge model test of rock slope failure caused by seismic excitation. Plane failure of dip slope

    International Nuclear Information System (INIS)

    Ishimaru, Makoto; Kawai, Tadashi

    2008-01-01

    Recently, it has become necessary to quantitatively assess the seismic safety of critical facilities against earthquake-induced rock slope failure from the viewpoint of seismic PSA. Under these circumstances, it is essential to evaluate more accurately the possibilities of rock slope failure and the potential failure boundary, which are triggered by earthquake ground motions. The purpose of this study is to analyze the dynamic failure characteristics of rock slopes by centrifuge model tests for verification and improvement of the analytical methods. We conducted a centrifuge model test using a dip slope model with discontinuities delimited by Teflon sheets. The centrifugal acceleration was 50G, and the acceleration amplitude of the input sine waves was increased gradually at every step. The test results were compared with safety factors from a stability analysis based on the limit equilibrium concept. The resultant conclusions are mainly as follows: (1) The slope model collapsed when it was excited by a sine wave of 400 gal, converted to real field scale; (2) The artificial discontinuities were significantly involved in the collapse, and the type of collapse was plane failure; (3) From response acceleration records observed at the slope model, tension cracks were generated near the top of the slope model during excitation, and these might be the cause of the collapse; (4) By considering the generation of tension cracks in the stability analysis, the correspondence between the analytical and experimental results improved. From the obtained results, progressive failure needs to be considered when evaluating earthquake-induced rock slope failure. (author)

  6. A model for predicting pellet-cladding interaction induced fuel rod failure, based on nonlinear fracture mechanics

    International Nuclear Information System (INIS)

    Jernkvist, L.O.

    1993-01-01

    A model for predicting pellet-cladding mechanical interaction induced fuel rod failure, suitable for implementation in finite element fuel-performance codes, is presented. Cladding failure is predicted by explicitly modelling the propagation of radial cracks under varying load conditions. Propagation is assumed to be due to either iodine induced stress corrosion cracking or ductile fracture. Nonlinear fracture mechanics concepts are utilized in modelling these two mechanisms of crack growth. The novelty of this approach is that the development of cracks, which may ultimately lead to fuel rod failure, can be treated as a dynamic and time-dependent process. The influence of cyclic loading, ramp rates and material creep on the failure mechanism can thereby be investigated. Results of numerical calculations, in which the failure model has been used to study the dependence of cladding creep rate on crack propagation velocity, are presented. (author)

  7. Failure prediction using machine learning and time series in optical network.

    Science.gov (United States)

    Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo

    2017-08-07

    In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
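    The paper's two ingredients are straightforward to prototype: double exponential smoothing to forecast a degrading health indicator, and an SVM to classify the forecast state. The sketch below, assuming scikit-learn, uses a Holt-style DES recursion and entirely invented signals, features and thresholds rather than the paper's optical-network data.

        # Sketch: DES forecast of a health indicator plus SVM risk classification.
        import numpy as np
        from sklearn.svm import SVC

        def des_forecast(x, alpha=0.5, beta=0.3, steps=5):
            # Holt-style double exponential smoothing; returns the steps-ahead forecast
            level, trend = x[0], x[1] - x[0]
            for v in x[1:]:
                prev = level
                level = alpha * v + (1 - alpha) * (level + trend)
                trend = beta * (level - prev) + (1 - beta) * trend
            return level + steps * trend

        rng = np.random.default_rng(4)
        # training set: (mean power level [dBm], drift per sample) -> failure label
        X_train = np.column_stack([rng.normal(-20, 3, 300), rng.normal(0, 0.05, 300)])
        y_train = ((X_train[:, 0] < -24) | (X_train[:, 1] < -0.05)).astype(int)
        clf = SVC(kernel="rbf").fit(X_train, y_train)

        signal = -20 + np.cumsum(rng.normal(-0.03, 0.2, 200))  # degrading optical power
        pred = des_forecast(signal)                            # forecast future level
        feat = np.array([[pred, (signal[-1] - signal[0]) / len(signal)]])
        print("forecast level:", round(pred, 2),
              "-> failure risk" if clf.predict(feat)[0] else "-> normal")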

  8. Failure Propagation Modeling and Analysis via System Interfaces

    Directory of Open Access Journals (Sweden)

    Lin Zhao

    2016-01-01

    Safety-critical systems must be shown to be acceptably safe to deploy and use in their operational environment. One of the key concerns of developing safety-critical systems is to understand how the system behaves in the presence of failures, regardless of whether that failure is triggered by the external environment or caused by internal errors. Safety assessment at the early stages of system development involves analysis of potential failures and their consequences. Increasingly, for complex systems, model-based safety assessment is becoming more widely used. In this paper we propose an approach for safety analysis based on system interface models. By extending interaction models on the system interface level with failure modes as well as relevant portions of the physical system to be controlled, automated support could be provided for much of the failure analysis. We focus on fault modeling and on how to compute minimal cut sets. Particularly, we explore state space reconstruction strategy and bounded searching technique to reduce the number of states that need to be analyzed, which remarkably improves the efficiency of cut sets searching algorithm.

  9. Timing analysis of PWR fuel pin failures

    International Nuclear Information System (INIS)

    Jones, K.R.; Wade, N.L.; Katsma, K.R.; Siefken, L.J.; Straka, M.

    1992-09-01

    Research has been conducted to develop and demonstrate a methodology for calculation of the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) four-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system availability, and main coolant pump trip on these times. The analysis was performed using the following codes: FRAPCON-2, for the calculation of steady-state fuel behavior; SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1, for the calculation of the transient thermal-hydraulic conditions in the reactor system; and FRAP-T6, for the calculation of transient fuel behavior. In addition to the calculation of fuel pin failure timing, this analysis provides a comparison of the predicted results of SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1 for large-break LOCA analysis. Using SCDAP/RELAP5/MOD3 thermal-hydraulic data, the shortest time intervals calculated between initiation of containment isolation and fuel pin failure are 10.4 seconds and 19.1 seconds for the B&W and W plants, respectively. Using data generated by TRAC-PF1/MOD1, the shortest intervals are 10.3 seconds and 29.1 seconds for the B&W and W plants, respectively. These intervals are for a double-ended, offset-shear, cold leg break, using the technical specification maximum peaking factor and applied to fuel with maximum design burnup. Using peaking factors commensurate with actual burnups would result in longer intervals for both reactor designs. This document also contains appendices A through J of this report

  10. A Zebrafish Heart Failure Model for Assessing Therapeutic Agents.

    Science.gov (United States)

    Zhu, Xiao-Yu; Wu, Si-Qi; Guo, Sheng-Ya; Yang, Hua; Xia, Bo; Li, Ping; Li, Chun-Qi

    2018-03-20

    Heart failure is a leading cause of death, and the development of effective and safe therapeutic agents for heart failure has proven challenging. In this study, taking advantage of larval zebrafish, we developed a zebrafish heart failure model for drug screening and efficacy assessment. Zebrafish at 2 dpf (days postfertilization) were treated with verapamil at a concentration of 200 μM for 30 min, which were determined as the optimum conditions for model development. Tested drugs were administered into zebrafish either by direct soaking or circulation microinjection. After treatment, zebrafish were randomly selected and subjected to either visual observation and image acquisition or video recording under a Zebralab Blood Flow System. The therapeutic effects of drugs on zebrafish heart failure were quantified by calculating the efficiency of heart dilatation, venous congestion, cardiac output, and blood flow dynamics. All 8 human heart failure therapeutic drugs (LCZ696, digoxin, irbesartan, metoprolol, qiliqiangxin capsule, enalapril, shenmai injection, and hydrochlorothiazide) showed significant preventive and therapeutic effects on zebrafish heart failure (p < 0.05). The zebrafish heart failure model developed and validated in this study could be used for in vivo heart failure studies and for rapid screening and efficacy assessment of preventive and therapeutic drugs.

  11. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, M; Abazeed, M; Woody, N; Stephans, K; Videtic, G; Xia, P; Zhuang, T [The Cleveland Clinic Foundation, Cleveland, OH (United States)

    2016-06-15

    Purpose: To explore possible correlation between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009–2013, we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed, and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R, and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with the time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.

  12. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    International Nuclear Information System (INIS)

    Andrews, M; Abazeed, M; Woody, N; Stephans, K; Videtic, G; Xia, P; Zhuang, T

    2016-01-01

    Purpose: To explore possible correlation between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009–2013, we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed, and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R, and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with the time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
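    The two predictive features, GLCM entropy and the intensity histogram's standard deviation, can be reproduced in a few lines. The sketch below, assuming a recent scikit-image (graycomatrix) and a synthetic 2D stand-in for the GTV rather than the paper's 3D Matlab pipeline, computes both.

        # Sketch: GLCM entropy and histogram SD for a synthetic tumour region.
        import numpy as np
        from skimage.feature import graycomatrix

        rng = np.random.default_rng(5)
        gtv = rng.integers(0, 64, size=(40, 40), dtype=np.uint8)  # stand-in GTV slice

        glcm = graycomatrix(gtv, distances=[1], angles=[0, np.pi / 2],
                            levels=64, symmetric=True, normed=True)
        entropies = []
        for d in range(glcm.shape[2]):            # average entropy over offsets
            for a in range(glcm.shape[3]):
                p = glcm[:, :, d, a]
                p = p[p > 0]                      # avoid log(0)
                entropies.append(-np.sum(p * np.log2(p)))

        print(f"GLCM entropy = {np.mean(entropies):.2f}")
        print(f"histogram standard deviation = {gtv.std():.2f}")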

  13. Validation of the Seattle Heart Failure Model (SHFM) in Heart Failure Population

    International Nuclear Information System (INIS)

    Hussain, S.; Kayani, A.M.; Munir, R.

    2014-01-01

    Objective: To determine the effectiveness of the Seattle Heart Failure Model (SHFM) in predicting mortality in a Pakistani systolic heart failure cohort. Study Design: Cohort study. Place and Duration of Study: The Armed Forces Institute of Cardiology - National Institute of Heart Diseases, Rawalpindi, from March 2011 to March 2012. Methodology: One hundred and eighteen patients with heart failure (HF) from the registry were followed for one year. Their 1-year mortality was calculated using the SHFM software on their enrollment into the registry. After one year, the predicted 1-year mortality was compared with the actual 1-year mortality of these patients. Results: The mean age was 41.6 ± 14.9 years (16 - 78 years). There were 73.7% males and 26.3% females. One hundred and fifteen patients were in NYHA class III or IV. Mean ejection fraction in these patients was 23 ± 9.3%. Mean brain natriuretic peptide levels were 1230 ± 1214 pg/mL. Sensitivity of the model was 89.3% with 71.1% specificity, 49% positive predictive value and 95.5% negative predictive value. The accuracy of the model was 75.4%. In ROC analysis, the AUC for the SHFM was 0.802 (p < 0.001). Conclusion: The SHFM was found to be reliable in predicting one-year mortality among Pakistani patients with heart failure. (author)

  14. VALIDATION OF SPRING OPERATED PRESSURE RELIEF VALVE TIME TO FAILURE AND THE IMPORTANCE OF STATISTICALLY SUPPORTED MAINTENANCE INTERVALS

    Energy Technology Data Exchange (ETDEWEB)

    Gross, R; Stephen Harris, S

    2009-02-18

    The Savannah River Site operates a Relief Valve Repair Shop certified by the National Board of Pressure Vessel Inspectors to NB-23, The National Board Inspection Code. Local maintenance forces perform inspection, testing, and repair of approximately 1200 spring-operated relief valves (SORV) each year as the valves are cycled in from the field. The Site now has over 7000 certified test records in the Computerized Maintenance Management System (CMMS); a summary of that data is presented in this paper. In previous papers, several statistical techniques were used to investigate failure on demand and failure rates, including a quantal response method for predicting the failure probability as a function of time in service. The non-conservative failure mode for SORV is commonly termed 'stuck shut', defined by industry as the valve opening at greater than or equal to 1.5 times the cold set pressure. The actual time to failure is typically not known, only that failure occurred some time since the last proof test (censored data). This paper attempts to validate the assumptions underlying the statistical lifetime prediction results using Monte Carlo simulation. It employs an aging model for lift pressure as a function of set pressure, valve manufacturer, and a time-related aging effect. This paper attempts to answer two questions: (1) what is the predicted failure rate over the chosen maintenance/inspection interval; and (2) do we understand aging sufficiently well to estimate risk when basing proof test intervals on proof test results?

  15. Canonical failure modes of real-time control systems: insights from cognitive theory

    Science.gov (United States)

    Wallace, Rodrick

    2016-04-01

    Newly developed necessary-conditions statistical models from cognitive theory are applied to a generalisation of the data-rate theorem for real-time control systems. Rather than degrading gracefully under stress, automatons and man/machine cockpits appear prone to characteristic sudden failure under demanding fog-of-war conditions. Critical dysfunctions span a spectrum of phase transition analogues, ranging from a ground state of 'all targets are enemies' to more standard data-rate instabilities. Insidious pathologies also appear possible, akin to the inattentional blindness consequent on overfocus on an expected pattern. Via no-free-lunch constraints, different equivalence classes of systems, having structure and function determined by 'market pressures' in a large sense, will be inherently unreliable under different but characteristic canonical stress landscapes, suggesting that deliberate induction of failure may often be relatively straightforward. Focusing on two recent military case histories, these results provide a caveat emptor against blind faith in the current path-dependent evolutionary trajectory of automation for critical real-time processes.

  16. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. Under field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, caused by cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is part of a daisy-chain structure, neighboring other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach, with the Paris model addressing the fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while degrading the solder joints under test through fatigue cracking. The test results showed that the RF impedance consistently increased as the solder joints degraded due to the growth of cracks, and particle filtering predicted the time to failure of the interconnects close to their actual times-to-failure, based on the early sensitivity of RF impedance.
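    A minimal version of the prognostic loop is sketched below: a bootstrap particle filter propagates Paris-law crack growth, da/dN = C (dK)^m with dK = dS sqrt(pi a), reweights on noisy crack sizes (standing in for impedance-derived measurements), and then extrapolates each particle to a critical size. Every number here, loads, Paris constants, measurements, is invented for illustration.

        # Sketch: bootstrap particle filter with Paris-law crack growth and RUL prediction.
        import numpy as np

        rng = np.random.default_rng(6)
        N, dS, a_crit = 2000, 80.0, 5.0            # particles, stress range [MPa], critical size [mm]
        a = np.full(N, 0.5)                         # initial crack size [mm]
        logC = rng.normal(np.log(3e-9), 0.3, N)     # uncertain Paris coefficient
        m = rng.normal(3.0, 0.1, N)                 # uncertain Paris exponent

        def grow(a, logC, m, cycles=1000.0):
            dK = dS * np.sqrt(np.pi * a * 1e-3)     # stress intensity range [MPa sqrt(m)]
            return a + np.exp(logC) * dK ** m * cycles * 1e3   # growth in m -> mm

        for z in [0.60, 0.71, 0.84, 0.98]:          # pseudo-measurements of crack size [mm]
            a = grow(a, logC, m)                    # propagate 1000 cycles
            w = np.exp(-0.5 * ((z - a) / 0.05) ** 2)            # Gaussian likelihood
            idx = rng.choice(N, size=N, p=w / w.sum())          # bootstrap resampling
            a, logC, m = a[idx], logC[idx], m[idx]

        blocks = np.zeros(N)                        # prognosis: 1000-cycle blocks to a_crit
        ap = a.copy()
        for _ in range(500):
            alive = ap < a_crit
            if not alive.any():
                break
            ap[alive] = grow(ap[alive], logC[alive], m[alive])
            blocks[alive] += 1
        print(f"median predicted RUL ~ {np.median(blocks) * 1000:.0f} cycles")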

  17. Balancing burn-in and mission times in environments with catastrophic and repairable failures

    International Nuclear Information System (INIS)

    Bebbington, Mark; Lai, C.-D.; Zitikis, Ricardas

    2009-01-01

    In a system subject to both repairable and catastrophic (i.e., nonrepairable) failures, 'mission success' can be defined as operating for a specified time without a catastrophic failure. We examine the effect of a burn-in process of duration τ on the mission time x, and also on the probability of mission success, by introducing several functions and surfaces on the (τ,x)-plane whose extrema represent suitable choices for the best burn-in time, and the best burn-in time for a desired mission time. The corresponding curvature functions and surfaces provide information about probabilities and expectations related to these burn-in and mission times. Theoretical considerations are illustrated with both parametric and, separating the failures by failure mode, nonparametric analyses of a data set, and graphical visualization of results.
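    One of the surfaces described above is easy to trace numerically: conditional on surviving burn-in of length tau, the probability of completing a mission of length x without catastrophic failure is R(tau + x)/R(tau). The sketch below uses an invented Weibull mixture with an infant-mortality subpopulation, so a nonzero burn-in is worthwhile.

        # Sketch: choosing a burn-in time to maximise mission success probability.
        import numpy as np
        from scipy.stats import weibull_min

        def R(t):  # 8% weak units (early failures) + 92% strong units (illustrative)
            return (0.08 * weibull_min.sf(t, 0.6, scale=30.0)
                    + 0.92 * weibull_min.sf(t, 2.5, scale=4000.0))

        x = 500.0                                   # desired mission time [h]
        taus = np.linspace(0.0, 300.0, 601)
        p_success = R(taus + x) / R(taus)           # P(no catastrophic failure in mission)
        best = taus[np.argmax(p_success)]
        print(f"best burn-in ~ {best:.0f} h, P(mission success) = {p_success.max():.4f}")
        print(f"(no burn-in: {p_success[0]:.4f})")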

  18. The analysis of failure data in the presence of critical and degraded failures

    International Nuclear Information System (INIS)

    Haugen, Knut; Hokstad, Per; Sandtorv, Helge

    1997-01-01

    Reported failures are often classified into severity classes, e.g., as critical or degraded. The critical failures correspond to loss of function(s) and are those of main concern. The rate of critical failures is usually estimated as the number of observed critical failures divided by the exposure time, thus ignoring the observed degraded failures. In the present paper, failure data are analyzed applying an alternative estimate for the critical failure rate, which also takes the number of observed degraded failures into account. The model includes two alternative failure mechanisms, one of the shock type, immediately leading to a critical failure, and another resulting in a gradual deterioration, leading to a degraded failure before the critical failure occurs. Failure data on safety valves from the OREDA (Offshore REliability DAta) database are analyzed using this model. The estimate for the critical failure rate is obtained and compared with the standard estimate

  19. The Influence of a High Salt Diet on a Rat Model of Isoproterenol-Induced Heart Failure

    Science.gov (United States)

    Rat models of heart failure (HF) show varied pathology and time to disease outcome, dependent on induction method. We found that subchronic (4 weeks) isoproterenol (ISO) infusion exacerbated cardiomyopathy in Spontaneously Hypertensive Heart Failure (SHHF) rats. Others have shown...

  20. Time-dependent evolution of rock slopes by a multi-modelling approach

    Science.gov (United States)

    Bozzano, F.; Della Seta, M.; Martino, S.

    2016-06-01

    This paper presents a multi-modelling approach that incorporates contributions from morpho-evolutionary modelling, detailed engineering-geological modelling and time-dependent stress-strain numerical modelling to analyse the rheological evolution of a river valley slope over approximately 10^2 kyr. The slope is located in a transient, tectonically active landscape in southwestern Tyrrhenian Calabria (Italy), where gravitational processes drive failures in rock slopes. Constraints on the valley profile development were provided by a morpho-evolutionary model based on the correlation of marine and river strath terraces. Rock mass classes were identified through geomechanical parameters that were derived from engineering-geological surveys and outputs of a multi-sensor slope monitoring system. The rock mass classes were associated to lithotechnical units to obtain a high-resolution engineering-geological model along a cross section of the valley. Time-dependent stress-strain numerical modelling reproduced the main morpho-evolutionary stages of the valley slopes. The findings demonstrate that a complex combination of eustatism, uplift and Mass Rock Creep (MRC) deformations can lead to first-time failures of rock slopes when unstable conditions are encountered up to the generation of stress-controlled shear zones. The multi-modelling approach enabled us to determine that such complex combinations may have been sufficient for the first-time failure of the S. Giovanni slope at approximately 140 ka (MIS 7), even without invoking any trigger. Conversely, further reactivations of the landslide must be related to triggers such as earthquakes, rainfall and anthropogenic activities. This failure involved a portion of the slope where a plasticity zone resulted from mass rock creep that evolved with a maximum strain rate of 40% per thousand years, after the formation of a river strath terrace. This study demonstrates that the multi-modelling approach presented herein is a useful

  1. Time-to-failure analysis of 5 nm amorphous Ru(P) as a copper diffusion barrier

    International Nuclear Information System (INIS)

    Henderson, Lucas B.; Ekerdt, John G.

    2009-01-01

    Evaluation of chemical vapor deposited amorphous ruthenium-phosphorus alloy as a copper interconnect diffusion barrier is reported. Approximately 5 nm-thick Ru(P) and TaN films in Cu/Ru(P)/SiO2/p-Si and Cu/TaN/SiO2/p-Si stacks are subjected to bias-temperature stress at electric fields from 2.0 MV/cm to 4.0 MV/cm and temperatures from 200 °C to 300 °C. Time-to-failure measurements suggest that chemical vapor deposited Ru(P) is comparable to physical vapor deposited TaN in preventing Cu diffusion. The activation energy of failure for stacks using Ru(P) as a liner is determined to be 1.83 eV in the absence of an electric field. Multiple models of dielectric failure, including the E and Schottky-type √E models, indicate that Ru(P) is acceptable for use as a diffusion barrier at conditions likely in future technology generations
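    The E-model extrapolation mentioned here follows a simple form, TTF = A exp(-gamma E) exp(Ea/kT). The sketch below uses the 1.83 eV activation energy quoted in the abstract but an invented field-acceleration factor, prefactor and stress/use conditions, so it only illustrates the arithmetic, not the paper's numbers.

        # Sketch: thermochemical E-model extrapolation from stress to use conditions.
        import numpy as np

        k_B = 8.617e-5                     # Boltzmann constant [eV/K]
        Ea, gamma_f = 1.83, 1.5            # activation energy [eV]; field factor [cm/MV] (invented)

        def ttf(E, T, A=1e-12):
            # E in MV/cm, T in kelvin; A is an arbitrary prefactor [hours]
            return A * np.exp(-gamma_f * E) * np.exp(Ea / (k_B * T))

        # acceleration factor from stress (3.0 MV/cm, 250 C) to use (1.0 MV/cm, 105 C)
        AF = ttf(1.0, 378.0) / ttf(3.0, 523.0)
        print(f"acceleration factor ~ {AF:.2e}")
        print(f"10 h survival at stress ~ {10 * AF:.2e} h at use conditions")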

  2. Accelerated failure time models for semi-competing risks data in the presence of complex censoring.

    Science.gov (United States)

    Lee, Kyu Ha; Rondeau, Virginie; Haneuse, Sebastien

    2017-12-01

    Statistical analyses that investigate risk factors for Alzheimer's disease (AD) are often subject to a number of challenges. Some of these challenges arise due to practical considerations regarding data collection, such that the observation of AD events is subject to complex censoring including left-truncation and either interval or right-censoring. Additional challenges arise due to the fact that study participants under investigation are often subject to competing forces, most notably death, that may not be independent of AD. Towards resolving the latter, researchers may choose to embed the study of AD within the "semi-competing risks" framework, for which the recent statistical literature has seen a number of advances, including ones for the so-called illness-death model. To the best of our knowledge, however, the semi-competing risks literature has not fully considered analyses in contexts with complex censoring, as in studies of AD. This is particularly the case when interest lies with the accelerated failure time (AFT) model, an alternative to the traditional multiplicative Cox model that places emphasis away from the hazard function. In this article, we outline a new Bayesian framework for estimation/inference of an AFT illness-death model for semi-competing risks data subject to complex censoring. An efficient computational algorithm that gives researchers the flexibility to adopt either a fully parametric or a semi-parametric model specification is developed and implemented. The proposed methods are motivated by and illustrated with an analysis of data from the Adult Changes in Thought study, an on-going community-based prospective study of incident AD in western Washington State. © 2017, The International Biometric Society.

  3. Modelling the failure modes in geobag revetments.

    Science.gov (United States)

    Akter, A; Crapper, M; Pender, G; Wright, G; Wong, W S

    2012-01-01

    In recent years, sand-filled geotextile bags (geobags) have been used as a means of long-term riverbank revetment stabilization. However, despite their deployment in a significant number of locations, the failure modes of such structures are not well understood. Three interactions influence geobag performance: geobag-geobag, geobag-water flow and geobag-water flow-river bank. The aim of the research reported here is to develop a detailed understanding of the failure mechanisms in a geobag revetment using a discrete element model (DEM) validated by laboratory data. The laboratory-measured velocity data were used to prepare a mapped velocity field for a coupled DEM simulation of geobag revetment failure. The validated DEM model was able to identify the critical bag location at varying water depths. Toe scour, one of the major instability factors in revetments, and its influence on the bottom-most layer of bags were also reasonably represented in this DEM model. It is envisaged that the use of a DEM model will provide more details on geobag revetment performance in riverbanks.

  4. Dynamic computed tomography (CT) in the rat kidney and application to acute renal failure models

    International Nuclear Information System (INIS)

    Ishikawa, Isao; Saito, Tadashi; Ishii, Hirofumi; Bansho, Junichi; Koyama, Yukinori; Tobita, Akira

    1995-01-01

    Renal dynamic CT scanning is suitable for determining the excretion of contrast medium in the cortex and medulla of the kidney, which is valuable for understanding the pathogenesis of disease processes in various conditions. This form of scanning would be convenient for use, if a method of application to the rat kidney were available. Therefore, we developed a method of applying renal dynamic CT to rats and evaluated the cortical and medullary curves, e.g., the corticomedullary junction time which is correlated to creatinine clearance, in various rat models of acute renal failure. The rat was placed in a 10° oblique position and a bilateral hilar slice was obtained before and 5, 10, 15, 20, 25, 30, 40, 50, 60, 80, 100, 120, 140, 160 and 180 sec after administering 0.5 ml of contrast medium using Somatom DR. The width of the slice was 4 mm and the scan time was 3 sec. The corticomedullary junction time in normal rats was 23.0±10.5 sec, the peak value of the cortical curve was 286.3±76.7 Hounsfield Unit (HU) and the peak value of the medullary curve was 390.1±66.2 HU. Corticomedullary junction time after exposure of the kidney was prolonged compared to that of the unexposed kidney. In rats with acute renal failure, the excretion pattern of contrast medium was similar in both the glycerol- and HgCl2-induced acute renal failure models. The peak values of the cortical curve were maintained three hours after a clamp was placed at the hilar region of the kidney for one hour, and the peak values of the medullary curve were maintained during the administration of 10 μg/kg/min of angiotensin II. Dynamic CT curves in the acute renal failure models examined were slightly different from those in human acute renal failure. These results suggest that rats do not provide an ideal model for human acute renal failure. However, the application of dynamic CT to the rat kidney models was valuable for estimating the pathogenesis of various human kidney diseases. (author)

  5. Failure modes and natural control time for distributed vibrating systems

    International Nuclear Information System (INIS)

    Reid, R.M.

    1994-01-01

    The eigenstructure of the Gram matrix of frequency exponentials is used to study linear vibrating systems of hyperbolic type with distributed control. Using control norm as a practical measure of controllability and the vibrating string as a prototype, it is demonstrated that hyperbolic systems have a natural control time, even when only finitely many modes are excited. For shorter control times there are identifiable control failure modes which can be steered to zero only with very high cost in control norm. Both natural control time and the associated failure modes are constructed for linear fluids, strings, and beams, making note of the essential algorithms and Mathematica code, and displaying results graphically
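
    A minimal numerical sketch of the construction described: for frequency exponentials e^{iλ_n t} on [0, T], the Gram matrix has entries G_jk = ∫₀ᵀ e^{i(λ_j−λ_k)t} dt, and its smallest eigenvalue governs the control norm needed to steer a failure mode to zero (cost ≈ 1/√λ_min). Assuming a string-like spectrum λ_n = nπ (natural control time 2), that eigenvalue collapses as T drops below the natural control time; the spectrum and mode count are illustrative, and the code uses NumPy rather than the Mathematica mentioned in the abstract.

```python
import numpy as np

def gram(freqs, T):
    """Gram matrix G_jk = integral_0^T exp(i(l_j - l_k)t) dt."""
    d = freqs[:, None] - freqs[None, :]
    # Guard the denominator at d == 0; np.where then picks T on the diagonal.
    return np.where(d == 0, T, (np.exp(1j * d * T) - 1) / (1j * d + (d == 0)))

# String-like spectrum lambda_n = n*pi, n = -N..N without 0; natural time 2.
N = 10
freqs = np.pi * np.array([n for n in range(-N, N + 1) if n != 0], dtype=float)
for T in (1.0, 1.5, 2.0, 2.5):
    lam_min = np.linalg.eigvalsh(gram(freqs, T)).min()
    print(f"T = {T:.1f}  min eigenvalue of Gram matrix = {lam_min:.3e}")
# The control norm needed to steer a failure mode to zero scales like
# 1/sqrt(lam_min), which blows up for T below the natural control time.
```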

  6. Application of accelerated failure time models for breast cancer patients' survival in Kurdistan Province of Iran.

    Science.gov (United States)

    Karimi, Asrin; Delpisheh, Ali; Sayehmiri, Kourosh

    2016-01-01

    Breast cancer is the most common cancer and the second most common cause of cancer-induced mortality in Iranian women. There has been rapid development in hazard models and survival analysis in the last decade. The aim of this study was to evaluate the prognostic factors of overall survival (OS) in breast cancer patients using accelerated failure time (AFT) models. This was a retrospective-analytic cohort study. A total of 313 women with a pathologically proven diagnosis of breast cancer who had been treated during a 7-year period (from January 2006 to March 2014) in Sanandaj City, Kurdistan Province of Iran were recruited. Performance among the AFT models was assessed using goodness-of-fit methods. Discrimination among the exponential, Weibull, generalized gamma, log-logistic, and log-normal distributions was done using the Akaike information criterion and maximum likelihood. The 5-year OS was 75% (95% CI = 74.57-75.43). The main results in terms of survival were found for the different categories of the clinical stage covariate, tumor metastasis, and relapse of cancer. Survival times in breast cancer patients without tumor metastasis and without relapse were 4-fold and 2-fold longer than in patients with metastasis and relapse, respectively. One of the most important undermining prognostic factors in breast cancer is metastasis; hence, knowledge of the mechanisms of metastasis is necessary to prevent its occurrence, to treat metastatic breast cancer, and ultimately to extend the lifetime of patients.
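
    A hedged sketch of the distribution-discrimination step on synthetic right-censored survival times (censored maximum likelihood via scipy.stats.CensoredData requires SciPy ≥ 1.11; the sample size 313 echoes the study, everything else is made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
T = stats.weibull_min(c=1.4, scale=60.0).rvs(313, random_state=rng)  # months
C = rng.uniform(12, 96, T.size)              # administrative censoring times
obs, event = np.minimum(T, C), T <= C
data = stats.CensoredData.right_censored(obs, ~event)

candidates = {
    "exponential":  (stats.expon,       {"floc": 0}),
    "Weibull":      (stats.weibull_min, {"floc": 0}),
    "log-normal":   (stats.lognorm,     {"floc": 0}),
    "log-logistic": (stats.fisk,        {"floc": 0}),
}
for name, (dist, fixed) in candidates.items():
    params = dist.fit(data, **fixed)         # censored MLE
    k = len(params) - 1                      # floc=0 is fixed, not estimated
    ll = np.sum(dist.logpdf(obs[event], *params))   # events: density terms
    ll += np.sum(dist.logsf(obs[~event], *params))  # censored: survival terms
    print(f"{name:12s} AIC = {2 * k - 2 * ll:.1f}")
```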

  7. The Statistical Analysis of Failure Time Data

    CERN Document Server

    Kalbfleisch, John D

    2011-01-01

    Contains additional discussion and examples on left truncation as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulation in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.

  8. Analysis of lower head failure with simplified models and a finite element code

    Energy Technology Data Exchange (ETDEWEB)

    Koundy, V. [CEA-IPSN-DPEA-SEAC, Service d' Etudes des Accidents, Fontenay-aux-Roses (France); Nicolas, L. [CEA-DEN-DM2S-SEMT, Service d' Etudes Mecaniques et Thermiques, Gif-sur-Yvette (France); Combescure, A. [INSA-Lyon, Lab. Mecanique des Solides, Villeurbanne (France)

    2001-07-01

    The objective of the OLHF (OECD lower head failure) experiments is to characterize the timing, mode and size of lower head failure under high temperature loading and reactor coolant system pressure due to a postulated core melt scenario. Four tests have been performed at Sandia National Laboratories (USA) as part of an OECD project. The experimental results have been used to develop and validate predictive analysis models. Within the framework of this project, several finite element calculations were performed. In parallel, two simplified semi-analytical methods were developed in order to better understand the influence of various parameters, e.g. the behaviour of the lower head material and its geometrical characteristics, on the creep phenomenon and on the timing, mode and location of failure. Three-dimensional modelling of crack opening and crack propagation has also been carried out using the finite element code Castem 2000. The aim of this paper is to present the two simplified semi-analytical approaches and to report the status of the 3D crack propagation calculations. (authors)

  9. Semi-parametric proportional intensity models robustness for right-censored recurrent failure data

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, S.T. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States); Landers, T.L. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States)]. E-mail: landers@ou.edu; Rhoads, T.R. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States)

    2005-10-01

    This paper reports the robustness of four proportional intensity (PI) models: Prentice-Williams-Peterson-gap time (PWP-GT), PWP-total time (PWP-TT), Andersen-Gill (AG), and Wei-Lin-Weissfeld (WLW), for right-censored recurrent failure event data. The results are beneficial to practitioners in anticipating the more favorable engineering application domains and selecting appropriate PI models. The PWP-GT and AG prove to be models of choice over ranges of sample sizes, shape parameters, and censoring severity. At the smaller sample size (U = 60), where there are 30 per class for a two-level covariate, the PWP-GT proves to perform well for moderate right-censoring (P_c ≤ 0.8), where 80% of the units have some censoring, and moderately decreasing, constant, and moderately increasing rates of occurrence of failures (power-law NHPP shape parameter in the range of 0.8 ≤ δ ≤ 1.8). For the large sample size (U = 180), the PWP-GT performs well for severe right-censoring (0.8 < P_c ≤ 1.0) and for rates of occurrence of failures with a power-law NHPP shape parameter in the range of 0.8 ≤ δ ≤ 2.0. The AG model proves to outperform the PWP-TT and WLW for stationary processes (HPP) across a wide range of right-censorship (0.0 ≤ P_c ≤ 1.0) and for sample sizes of 60 or more.

  10. Incorrect modeling of the failure process of minimally repaired systems under random conditions: The effect on the maintenance costs

    International Nuclear Information System (INIS)

    Pulcini, Gianpaolo

    2015-01-01

    This note investigates how incorrectly modeling the failure process of minimally repaired systems operating under random environmental conditions affects the costs of a periodic replacement maintenance policy. The motivation for this note is a recently published paper in which a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems operating under random conditions has the same functional form as the failure rate of the time to first failure. This produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the first failure time are first highlighted. Then, the correct expressions for the expected cost and the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences caused by the incorrect modeling of the failure process can be.
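
    The distinction matters because the standard periodic-replacement optimization uses the expected number of minimal repairs E[N(T)], i.e. the integrated intensity, not the first-failure rate. A minimal sketch under an ordinary power-law NHPP (no random environment; all costs and parameters hypothetical):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cost_rate(T, beta, eta, c_p, c_m):
    """Expected cost per unit time under periodic replacement at age T
    with minimal repairs in between (power-law NHPP intensity)."""
    expected_repairs = (T / eta) ** beta      # E[N(T)] = (T/eta)^beta
    return (c_p + c_m * expected_repairs) / T

beta, eta = 2.2, 1000.0     # hypothetical increasing-ROCOF parameters
c_p, c_m = 50.0, 10.0       # planned replacement vs. minimal repair cost

res = minimize_scalar(cost_rate, bounds=(1.0, 5000.0), method="bounded",
                      args=(beta, eta, c_p, c_m))
# For beta > 1 the optimum also has a closed form:
T_closed = eta * (c_p / (c_m * (beta - 1))) ** (1 / beta)
print(f"numerical T* = {res.x:.1f}, closed-form T* = {T_closed:.1f}")
```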

  11. Application of nonhomogeneous Poisson process to reliability analysis of repairable systems of a nuclear power plant with rates of occurrence of failures time-dependent

    International Nuclear Information System (INIS)

    Saldanha, Pedro L.C.; Simone, Elaine A. de; Melo, Paulo Fernando F.F. e

    1996-01-01

    Aging refers to the continuous process by which the physical characteristics of a system, structure or equipment change with time or use. Its effects are increases in the failure probabilities of the system, structure or equipment, and they are calculated using time-dependent failure rate models. The purpose of this paper is to present an application of the nonhomogeneous Poisson process as a model for studying rates of occurrence of failures when they are time-dependent. For this application, a reliability analysis of the service water pumps of a typical nuclear power plant is performed, since the pumps are effectively repaired components. (author)
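
    For the power-law NHPP (Crow-AMSAA) commonly used in such analyses, the maximum-likelihood estimates for a single repairable system observed on (0, T] have closed forms: β̂ = n / Σ ln(T/tᵢ) and λ̂ = n / T^β̂. A small sketch with hypothetical pump failure times:

```python
import numpy as np

def crow_amsaa_mle(times, T):
    """MLE for the power-law NHPP (Crow-AMSAA), time-truncated at T.

    Intensity (ROCOF): u(t) = lam * beta * t**(beta - 1);
    beta > 1 indicates ageing, beta < 1 reliability growth.
    """
    times = np.asarray(times, dtype=float)
    n = times.size
    beta = n / np.sum(np.log(T / times))
    lam = n / T ** beta
    return lam, beta

# Hypothetical pump failure history (hours) over a 10,000 h window:
t = [1200.0, 3400.0, 5100.0, 6800.0, 7900.0, 8700.0, 9300.0, 9800.0]
lam, beta = crow_amsaa_mle(t, T=10_000.0)
rocof_T = lam * beta * 10_000.0 ** (beta - 1)
print(f"beta = {beta:.2f} (>1 suggests ageing), ROCOF at T = {rocof_T:.2e}/h")
```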

  12. A probability model for the failure of pressure containing parts

    International Nuclear Information System (INIS)

    Thomas, H.M.

    1978-01-01

    The model provides a method of estimating the order of magnitude of the leakage failure probability of pressure containing parts. It is a fatigue-based model which makes use of the statistics available for both specimens and vessels. Some novel concepts are introduced, but essentially the model simply quantifies the obvious, i.e., that failure probability increases with increases in stress levels, number of cycles, volume of material and volume of weld metal. A further model based on fracture mechanics estimates the catastrophic fraction of leakage failures. (author)

  13. 2D Modeling of Flood Propagation due to the Failure of Way Ela Natural Dam

    Directory of Open Access Journals (Sweden)

    Yakti Bagus Pramono

    2018-01-01

    Dam-break-induced flood propagation modeling is needed to reduce the losses from any potential dam failure. On 25 July 2013, a flood generated by the failure of the Way Ela natural dam severely damaged houses and various public facilities. This study simulated the flooding induced by the failure of the Way Ela Natural Dam. A two-dimensional (2D) numerical model, HEC-RAS v.5, is used to simulate the overland flow. The dam failure itself is simulated using HEC-HMS v.4. The results of this study, the flood inundation, flood depth, and flood arrival time, are verified against available secondary data. This information is very important for proposing mitigation plans with respect to a possible dam break in the future.

  14. Stochastic failure modelling of unidirectional composite ply failure

    International Nuclear Information System (INIS)

    Whiteside, M.B.; Pinho, S.T.; Robinson, P.

    2012-01-01

    Stochastic failure envelopes are generated through parallelised Monte Carlo simulation of a physically based failure criterion for unidirectional carbon fibre/epoxy matrix composite plies. Two examples are presented to demonstrate the consequences for failure prediction of both statistical interaction of failure modes and uncertainty in global misalignment. Global variance-based Sobol sensitivity indices are computed to decompose the observed variance within the stochastic failure envelopes into contributions from physical input parameters. The paper highlights a selection of the potential advantages that stochastic methodologies offer over the traditional deterministic approach.
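
    As a toy stand-in for the paper's physically based criterion, the sketch below Monte Carlo samples a max-stress check with random ply strengths and a random global misalignment angle, and reads off points on a stochastic failure envelope; every number is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical ply strengths (MPa), modelled as random to reflect scatter:
X_T = rng.weibull(20.0, n) * 2400.0          # fibre tension
Y_T = rng.normal(60.0, 5.0, n)               # transverse tension
S   = rng.normal(90.0, 6.0, n)               # in-plane shear
theta = np.deg2rad(rng.normal(0.0, 1.5, n))  # global misalignment angle

def fails(sigma):
    """Max-stress check of a nominally fibre-aligned uniaxial load sigma,
    rotated into the (misaligned) ply axes."""
    s1 = sigma * np.cos(theta) ** 2
    s2 = sigma * np.sin(theta) ** 2
    t12 = sigma * np.sin(theta) * np.cos(theta)
    return (s1 > X_T) | (s2 > Y_T) | (np.abs(t12) > S)

for sigma in (1800.0, 2000.0, 2200.0):
    print(f"sigma = {sigma:.0f} MPa  P(fail) ~ {fails(sigma).mean():.4f}")
```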

  15. Numerical investigations of rib fracture failure models in different dynamic loading conditions.

    Science.gov (United States)

    Wang, Fang; Yang, Jikuang; Miller, Karol; Li, Guibing; Joldes, Grand R; Doyle, Barry; Wittek, Adam

    2016-01-01

    Rib fracture is one of the most common thoracic injuries in vehicle traffic accidents and can result in fatalities associated with seriously injured internal organs. A failure model is critical when modelling rib fracture to predict such injuries. Different rib failure models have been proposed for the prediction of thoracic injuries. However, the biofidelity of these fracture failure models under varying loading conditions, and the effects of a rib fracture failure model on the prediction of thoracic injuries, have been studied only to a limited extent. Therefore, this study aimed to investigate the effects of three rib failure models on the prediction of thoracic injuries using a previously validated finite element model of the human thorax. The performance and biofidelity of each rib failure model were first evaluated by modelling rib responses to different loading conditions in two experimental configurations: (1) three-point bending of a specimen taken from a rib and (2) anterior-posterior dynamic loading of the entire bony part of a rib. Furthermore, the simulation of rib failure behaviour in a frontal impact to an entire thorax was conducted at varying velocities, and the effects of the failure models were analysed with respect to the severity of rib cage damage. Simulation results demonstrated that the responses of the thorax model are similar to the general trends of the rib fracture responses reported in the experimental literature. However, they also indicated that the accuracy of the rib fracture prediction using a given failure model varies for different loading conditions.

  16. A quasi-static algorithm that includes effects of characteristic time scales for simulating failures in brittle materials

    KAUST Repository

    Liu, Jinxing; El Sayed, Tamer S.

    2013-01-01

    When the brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of T_elem, the characteristic releasing time of the internal forces of the broken elements, and T_lattice

  17. Calculation of fuel pin failure timing under LOCA conditions

    International Nuclear Information System (INIS)

    Jones, K.R.; Wade, N.L.; Siefken, L.J.; Straka, M.; Katsma, K.R.

    1991-10-01

    The objective of this research was to develop and demonstrate a methodology for calculation of the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) 4-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system (ECCS) availability, and main coolant pump trip on these items. The analysis was performed using a four-code approach, comprised of FRAPCON-2, SCDAP/RELAP5/MOD3, TRAC-PF1/MOD1, and FRAP-T6. In addition to the calculation of timing results, this analysis provided a comparison of the capabilities of SCDAP/RELAP5/MOD3 with TRAC-PF1/MOD1 for large-break LOCA analysis. This paper discusses the methodology employed and the code development efforts required to implement it. The shortest time intervals calculated between initiation of containment isolation and fuel pin failure were 11.4 s and 19.1 s for the B&W and W plants, respectively. The FRAP-T6 fuel pin failure times calculated using thermal-hydraulic data generated by SCDAP/RELAP5/MOD3 were more conservative than those calculated using data generated by TRAC-PF1/MOD1. 18 refs., 7 figs., 4 tabs

  18. Considerations on assessment of different time depending models adequacy

    International Nuclear Information System (INIS)

    Constantinescu, C.

    2015-01-01

    The operating period of nuclear power plants can be prolonged if it can be shown that their safety has remained at a high level; for this, it is necessary to estimate how aged systems, structures and components (SSCs) influence NPP reliability and safety. To emphasize the ageing aspects, the case study presented in this paper assesses different time-dependent models for the rate of occurrence of failures, with the goal of obtaining the best-fitting model. A sensitivity analysis of the impact of burn-in failures was performed to improve the result of the goodness-of-fit test. Based on the analysis results, a conclusion about the existence or absence of an ageing trend could be drawn. A sensitivity analysis of the reliability parameters was performed, and the results were used to observe the impact on the time-dependent rate of occurrence of failures. (authors)
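
    A standard way to test for such an ageing trend is the Laplace test for time-truncated failure data; burn-in failures can simply be dropped from the sample before testing, mirroring the sensitivity analysis mentioned above. A sketch with hypothetical failure times:

```python
import numpy as np
from scipy.stats import norm

def laplace_trend_test(times, T):
    """Laplace test for a trend in the ROCOF (time-truncated data).

    H0: homogeneous Poisson process (no trend).
    U >> 0 suggests an increasing ROCOF (ageing),
    U << 0 suggests reliability growth.
    """
    times = np.asarray(times, dtype=float)
    n = times.size
    U = (times.sum() - n * T / 2.0) / (T * np.sqrt(n / 12.0))
    return U, 2.0 * norm.sf(abs(U))        # statistic and two-sided p-value

t = [1200.0, 3400.0, 5100.0, 6800.0, 7900.0, 8700.0, 9300.0, 9800.0]
U, p = laplace_trend_test(t, T=10_000.0)
print(f"U = {U:.2f}, p = {p:.3f}")         # U > 0: ageing suspected
```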

  19. Data analysis using the Binomial Failure Rate common cause model

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1983-09-01

    This report explains how to use the Binomial Failure Rate (BFR) method to estimate common cause failure rates. The entire method is described, beginning with the conceptual model, and covering practical issues of data preparation, treatment of variation in the failure rates, Bayesian estimation of the quantities of interest, checking the model assumptions for lack of fit to the data, and the ultimate application of the answers

  20. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sallaberry, Cedric Jean-Marie. [Engineering Mechanics Corp. of Columbus, OH (United States)

    2018-02-01

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
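
    The sampling-based half of the verification strategy can be illustrated compactly: draw failure times for the weak-link and strong-link systems, form the margin (SL failure time minus WL failure time), and read off its empirical CDF. The Weibull failure-time models below are hypothetical placeholders, not the report's link models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical thermal failure-time models (minutes) for the links:
t_wl = rng.weibull(3.0, n) * 20.0      # weak link: designed to fail early
t_sl = rng.weibull(6.0, n) * 45.0      # strong link: designed to fail late

margin = t_sl - t_wl                   # failure time margin
loas = (margin <= 0.0).mean()          # SL fails first: loss of assured safety
print(f"P(LOAS) ~ {loas:.2e}")

# Percentiles of the empirical margin CDF, the sampling-based analogue of
# the quadrature construction described in the abstract:
q = np.percentile(margin, [1, 5, 50, 95, 99])
print("margin percentiles (1, 5, 50, 95, 99):", np.round(q, 1))
```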

  1. Structures for common-cause failure analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1981-01-01

    Common-cause failure methodology and terminology have been reviewed and structured to provide a systematical basis for addressing and developing models and methods for quantification. The structure is based on (1) a specific set of definitions, (2) categories based on the way faults are attributable to a common cause, and (3) classes based on the time of entry and the time of elimination of the faults. The failure events are then characterized by their likelihood or frequency and the average residence time. The structure provides a basis for selecting computational models, collecting and evaluating data and assessing the importance of various failure types, and for developing effective defences against common-cause failure. The relationships of this and several other structures are described

  2. Fission product release modelling for application of fuel-failure monitoring and detection - An overview

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, B.J., E-mail: lewibre@gmail.com [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, Kingston, Ontario, K7K 7B4 (Canada); Chan, P.K.; El-Jaby, A. [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, Kingston, Ontario, K7K 7B4 (Canada); Iglesias, F.C.; Fitchett, A. [Candesco Division of Kinectrics Inc., 26 Wellington Street East, 3rd Floor, Toronto, Ontario M5E 1S2 (Canada)

    2017-06-15

    A review of fission product release theory is presented in support of fuel-failure monitoring analysis for the characterization and location of defective fuel. This work is used to describe: (i) the development of the steady-state Visual-DETECT code for coolant activity analysis to characterize failures in the core and the amount of tramp uranium; (ii) a generalization of this model in the STAR code for prediction of the time-dependent release of iodine and noble gas fission products to the coolant during reactor start-up, steady-state, shutdown, and bundle-shifting manoeuvres; (iii) an extension of the model to account for the release of fission products that are delayed-neutron precursors for assessment of fuel-failure location; and (iv) a simplification of the steady-state model to assess the methodology proposed by WANO for a fuel reliability indicator for water-cooled reactors.

  3. Fuzzy modeling of analytical redundancy for sensor failure detection

    International Nuclear Information System (INIS)

    Tsai, T.M.; Chou, H.P.

    1991-01-01

    Failure detection and isolation (FDI) in dynamic systems may be accomplished by testing the consistency of the system via analytically redundant relations. The redundant relation is basically a mathematical model relating system inputs and dissimilar sensor outputs from which information is extracted and subsequently examined for the presence of failure signatures. Performance of the approach is often jeopardized by inherent modeling error and noise interference. To mitigate such effects, techniques such as Kalman filtering, auto-regression-moving-average (ARMA) modeling in conjunction with probability tests are often employed. These conventional techniques treat the stochastic nature of uncertainties in a deterministic manner to generate best-estimated model and sensor outputs by minimizing uncertainties. In this paper, the authors present a different approach by treating the effect of uncertainties with fuzzy numbers. Coefficients in redundant relations derived from first-principle physical models are considered as fuzzy parameters and on-line updated according to system behaviors. Failure detection is accomplished by examining the possibility that a sensor signal occurred in an estimated fuzzy domain. To facilitate failure isolation, individual FDI monitors are designed for each interested sensor

  4. Reliability analysis based on the losses from failures.

    Science.gov (United States)

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
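
    A tiny numerical illustration of the linear-combination identity described above, followed by an interval calculation for a series system under the simplifying assumption of constant hazard rates (so the expected number of failures is rate × T); all figures are invented:

```python
import numpy as np

# Mutually exclusive failure modes of one component: conditional probabilities
# that a failure is of mode k, and expected losses given each mode (k$).
p_mode = np.array([0.6, 0.3, 0.1])           # must sum to 1
loss_mode = np.array([2.0, 10.0, 80.0])
exp_loss_given_failure = p_mode @ loss_mode  # linear combination
print(f"E[loss | failure] = {exp_loss_given_failure:.1f} k$")

# Series system over an interval of length T, constant hazard rates assumed:
rates = np.array([1.0e-4, 4.0e-5])           # failures per hour, per component
losses = np.array([exp_loss_given_failure, 25.0])
T = 8760.0                                   # one year of operation
expected_losses = (rates * T) @ losses       # sum of E[N_k] * E[L_k]
print(f"expected losses over T = {expected_losses:.1f} k$")
```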

  5. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    Science.gov (United States)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models are subject to failure or crashes for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
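
    A toy reconstruction of the workflow on synthetic data (the real labels come from POP2 runs; the failure rule, sample sizes, and hyperparameters here are invented):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n, d = 2000, 18                      # ensemble members x model parameters
X = rng.uniform(0.0, 1.0, (n, d))
# Stand-in failure rule: crashes at extreme combinations of two
# mixing/viscosity-like parameters.
y = ((X[:, 0] + X[:, 1] > 1.6) | (X[:, 2] < 0.05)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10.0, probability=True))
clf.fit(X_tr, y_tr)
p_fail = clf.predict_proba(X_te)[:, 1]       # predicted failure probability
print(f"validation AUC = {roc_auc_score(y_te, p_fail):.3f}")
```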

  6. Comparison of a fuel sheath failure model with published experimental data

    International Nuclear Information System (INIS)

    Varty, R.L.; Rosinger, H.E.

    1982-01-01

    A fuel sheath failure model has been compared with the published results of experiments in which a Zircaloy-4 fuel sheath was subjected to a temperature ramp and a differential pressure until failure occurred. The model assumes that the deformation of the sheath is controlled by steady-state creep and that there is a relationship between tangential stress and temperature at the instant of failure. The sheath failure model predictions agree reasonably well with the experimental data. The burst temperature is slightly overpredicted by the model. The burst strain is overpredicted for small experimental burst strains but is underpredicted otherwise. The reasons for these trends are discussed and the extremely wide variation in burst strain reported in the literature is explained using the model

  7. A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models

    International Nuclear Information System (INIS)

    Troffaes, Matthias C.M.; Walter, Gero; Kelly, Dana

    2014-01-01

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model
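
    The posterior lower and upper expectations in this style of model have a simple closed form: with event counts n_i (N in total), prior-mean bounds [l_i, u_i] and learning parameter s, the posterior expectation of α_i ranges over [(n_i + s·l_i)/(N + s), (n_i + s·u_i)/(N + s)]. A sketch with hypothetical three-train common-cause-failure counts, trying s = 1 and s = 10 to echo the elicitation range above:

```python
import numpy as np

def idm_posterior_bounds(counts, s, lower_prior, upper_prior):
    """Posterior lower/upper expectations of alpha-factors under a set of
    Dirichlet priors with prior means in [lower_prior, upper_prior] and
    learning parameter s (pseudo-counts)."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    lo = (counts + s * np.asarray(lower_prior)) / (N + s)
    hi = (counts + s * np.asarray(upper_prior)) / (N + s)
    return lo, hi

# Hypothetical CCF data for a 3-train system: n_k = number of events in
# which exactly k trains failed.
counts = [50, 6, 2]
lower = [0.70, 0.05, 0.01]   # cautious prior bounds on alpha_1..alpha_3
upper = [0.95, 0.20, 0.10]
for s in (1.0, 10.0):
    lo, hi = idm_posterior_bounds(counts, s, lower, upper)
    print(f"s = {s:4.1f} ",
          [f"[{a:.3f}, {b:.3f}]" for a, b in zip(lo, hi)])
```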

  8. Enhancement of weld failure and tube ejection model in PENTAP program

    International Nuclear Information System (INIS)

    Jung, Jaehoon; An, Sang Mo; Ha, Kwang Soon; Kim, Hwan Yeol

    2014-01-01

    The reactor vessel pressure, the debris mass, the debris temperature, and the material composition can all affect the penetration tube failure modes; furthermore, these parameters are interrelated. There are some representative severe accident codes such as MELCOR, MAAP, and the PENTAP program. MELCOR simply decides on penetration tube failure using a failure temperature, such as 1273 K. MAAP considers all penetration failure modes and has the most advanced penetration tube failure model; however, the validation work against experimental data is very limited. The PENTAP program, which evaluates the possible penetration tube failure modes such as creep failure, weld failure, tube ejection, and long-term tube failure under given accident conditions, was developed by KAERI. The experiment on tube ejection is being performed by KAERI; the temperature distribution and the ablation rate of both the weld and the lower vessel wall can be obtained through this experiment. This paper presents the updated calculation steps for the weld failure and tube ejection modes of the PENTAP program that apply these experimental results. The PENTAP program can evaluate the possible penetration tube failure modes, but it still requires a large amount of effort to improve the prediction of failure modes. Some calculation steps are necessary for applying the experimental and numerical data in the PENTAP program. In this study, new calculation steps are added to the PENTAP program to enhance the weld failure and tube ejection models using KAERI's experimental data, namely the ablation rate and temperature distribution of the weld and lower vessel wall

  9. Factors Influencing the Predictive Power of Models for Predicting Mortality and/or Heart Failure Hospitalization in Patients With Heart Failure

    NARCIS (Netherlands)

    Ouwerkerk, Wouter; Voors, Adriaan A.; Zwinderman, Aeilko H.

    2014-01-01

    The present paper systematically reviews and compares existing prediction models in order to establish the strongest variables, models, and model characteristics for predicting outcome in patients with heart failure. To improve decision making accurately predicting mortality and heart-failure

  10. Ergodicity of forward times of the renewal process in a block-based inspection model using the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan

    2012-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practice have been reported in many papers and case studies. For a system subject to a few major failure modes, component-based delay time models have been developed under the assumptions of an age-based inspection policy. An age-based inspection assumes that an inspection is scheduled according to the age of the component, and if there is a failure renewal, the next inspection is always a fixed interval, say τ, from the time of the failure renewal. This applies in certain cases, particularly to important plant items where the time since the last renewal or inspection is key to scheduling the next inspection service. However, in most cases the inspection service is not scheduled according to the need of a particular component; rather, it is scheduled at a fixed calendar time regardless of whether the component being inspected was just renewed or not. This policy is called a block-based inspection, which has the advantage of easy planning and is particularly useful for plant items which are part of a larger system to be inspected. If a block-based inspection policy is used, the time to failure since the last inspection prior to the failure for a particular item is a random variable. This time is called the forward time in this paper. To optimise the inspection interval for block-based inspections, the usual criterion functions such as expected cost or downtime per unit time depend on the distribution of this forward time. We report in this paper the development of a theoretical proof that a limiting distribution for such a forward time exists if certain conditions are met. We also propose a recursive algorithm for determining such a limiting distribution. A numerical example is presented to demonstrate the existence of the limiting distribution.
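
    The limiting forward-time distribution can also be seen empirically. The sketch below simulates a component that is renewed at every failure while inspections stay on a fixed calendar grid kτ, and records each failure's elapsed time since the preceding inspection; the Weibull lifetime is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(5)
tau = 10.0                        # block inspection interval
horizon = 1_000_000.0

# Component renewed at every failure; inspections stay on the fixed
# calendar grid k*tau regardless of renewals (block-based policy).
t, forward_times = 0.0, []
while t < horizon:
    life = rng.weibull(2.0) * 12.0       # hypothetical lifetime draw
    t_fail = t + life
    if t_fail >= horizon:
        break
    last_inspection = np.floor(t_fail / tau) * tau
    forward_times.append(t_fail - last_inspection)
    t = t_fail                            # failure renewal

fw = np.asarray(forward_times)
print(f"{fw.size} failures, mean forward time = {fw.mean():.2f} "
      f"(tau/2 = {tau/2:.2f} would hold for a uniform limit)")
```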

  11. Predictors of incident heart failure in patients after an acute coronary syndrome: The LIPID heart failure risk-prediction model.

    Science.gov (United States)

    Driscoll, Andrea; Barnes, Elizabeth H; Blankenberg, Stefan; Colquhoun, David M; Hunt, David; Nestel, Paul J; Stewart, Ralph A; West, Malcolm J; White, Harvey D; Simes, John; Tonkin, Andrew

    2017-12-01

    Coronary heart disease is a major cause of heart failure. Availability of risk-prediction models that include both clinical parameters and biomarkers is limited. We aimed to develop such a model for prediction of incident heart failure. A multivariable risk-factor model was developed for prediction of first occurrence of heart failure death or hospitalization. A simplified risk score was derived that enabled subjects to be grouped into categories of 5-year risk varying from <5% to >20%. Among 7101 patients from the LIPID study (84% male), with median age 61 years (interquartile range 55-67 years), 558 (8%) died or were hospitalized because of heart failure. Older age, history of claudication or diabetes mellitus, body mass index >30 kg/m², LDL-cholesterol >2.5 mmol/L, heart rate >70 beats/min, white blood cell count, and the nature of the qualifying acute coronary syndrome (myocardial infarction or unstable angina) were associated with an increase in heart failure events. Coronary revascularization was associated with a lower event rate. Incident heart failure increased with higher concentrations of B-type natriuretic peptide >50 ng/L, cystatin C >0.93 nmol/L, D-dimer >273 nmol/L, high-sensitivity C-reactive protein >4.8 nmol/L, and sensitive troponin I >0.018 μg/L. Addition of biomarkers to the clinical risk model improved the model's C statistic from 0.73 to 0.77. The net reclassification improvement incorporating biomarkers into the clinical model using categories of 5-year risk was 23%. Adding a multibiomarker panel to conventional parameters markedly improved discrimination and risk classification for future heart failure events. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  12. Modeling Freedom From Progression for Standard-Risk Medulloblastoma: A Mathematical Tumor Control Model With Multiple Modes of Failure

    International Nuclear Information System (INIS)

    Brodin, N. Patrik; Vogelius, Ivan R.; Björk-Eriksson, Thomas; Munck af Rosenschöld, Per; Bentzen, Søren M.

    2013-01-01

    Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern of failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern of relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used to model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy while maintaining 54 Gy to the posterior fossa was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%) respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimates of tumor control and time-to-progression for standard-risk MB provides a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk-stratification for example based on molecular biomarkers, when the necessary data become available

  13. Failure Diameter of PBX 9502: Simulations with the SURFplus model

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-03

    SURFplus is a reactive burn model for high explosives aimed at modelling shock initiation and propagation of detonation waves. It utilizes the SURF model for the fast hot-spot reaction plus a slow reaction for the energy released by carbon clustering. A feature of the SURF model is a partial decoupling between burn rate parameters and detonation wave properties. Previously, parameters for PBX 9502 that control shock initiation had been calibrated to Pop plot data (distance-of-run to detonation as a function of shock pressure initiating the detonation). Here, burn rate parameters for the high pressure regime are adjusted to fit the failure diameter and the limiting detonation speed just above the failure diameter. Simulated results are shown for an unconfined rate stick when the 9502 diameter is slightly above and slightly below the failure diameter. Just above the failure diameter, in the rest frame of the detonation wave, the front is sonic at the PBX/air interface. As a consequence, the lead shock in the neighborhood of the interface is supported by the detonation pressure in the interior of the explosive rather than by the reaction immediately behind the front. In the interior, the sonic point occurs near the end of the fast hot-spot reaction. Consequently, the slow carbon clustering reaction cannot affect the failure diameter. Below the failure diameter, the radial extent of the detonation front decreases, starting from the PBX/air interface. That is, the failure starts at the PBX boundary and propagates inward to the axis of the rate stick.

  14. [Establishment of a D-galactosamine/lipopolysaccharide induced acute-on-chronic liver failure model in rats].

    Science.gov (United States)

    Liu, Xu-hua; Chen, Yu; Wang, Tai-ling; Lu, Jun; Zhang, Li-jie; Song, Chen-zhao; Zhang, Jing; Duan, Zhong-ping

    2007-10-01

    To establish a practical and reproducible animal model of human acute-on-chronic liver failure for further study of the pathophysiological mechanism of acute-on-chronic liver failure and for drug screening and evaluation in its treatment. Immunological hepatic fibrosis was induced by human serum albumin in Wistar rats. In rats with early-stage cirrhosis (fibrosis stage IV), D-galactosamine and lipopolysaccharide were administered. Mortality and survival time were recorded in 20 rats. Ten rats were sacrificed at 4, 8, and 12 hours. Liver function tests and plasma cytokine levels were measured after D-galactosamine/lipopolysaccharide administration and liver pathology was studied. Cell apoptosis was detected by terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling assay. Most of the rats treated with human albumin developed cirrhosis and fibrosis, and 90% of them died from acute liver failure after administration of D-galactosamine/lipopolysaccharide, with a mean survival time of (16.1±3.7) hours. Liver histopathology showed massive or submassive necrosis of the regenerated nodules, while fibrosis septa were intact. Liver function tests were compatible with massive necrosis of hepatocytes. The plasma level of TNFalpha increased significantly, paralleling the degree of hepatocyte apoptosis. Plasma IL-10 levels increased similarly to those seen in patients with acute-on-chronic liver failure. We established an animal model of acute-on-chronic liver failure by treating rats with human serum albumin and later with D-galactosamine and lipopolysaccharide. TNFalpha-mediated liver cell apoptosis plays a very important role in the pathogenesis of acute liver failure.

  15. Modeling freedom from progression for standard-risk medulloblastoma: a mathematical tumor control model with multiple modes of failure

    DEFF Research Database (Denmark)

    Brodin, Nils Patrik; Vogelius, Ivan R.; Bjørk-Eriksson, Thomas

    2013-01-01

    As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published...

  16. Multiscale modeling of ductile failure in metallic alloys

    Science.gov (United States)

    Pardoen, Thomas; Scheyvaerts, Florence; Simar, Aude; Tekoğlu, Cihan; Onck, Patrick R.

    2010-04-01

    Micromechanical models for ductile failure have been developed in the 1970s and 1980s essentially to address cracking in structural applications and complement the fracture mechanics approach. Later, this approach has become attractive for physical metallurgists interested by the prediction of failure during forming operations and as a guide for the design of more ductile and/or high-toughness microstructures. Nowadays, a realistic treatment of damage evolution in complex metallic microstructures is becoming feasible when sufficiently sophisticated constitutive laws are used within the context of a multilevel modelling strategy. The current understanding and the state of the art models for the nucleation, growth and coalescence of voids are reviewed with a focus on the underlying physics. Considerations are made about the introduction of the different length scales associated with the microstructure and damage process. Two applications of the methodology are then described to illustrate the potential of the current models. The first application concerns the competition between intergranular and transgranular ductile fracture in aluminum alloys involving soft precipitate free zones along the grain boundaries. The second application concerns the modeling of ductile failure in friction stir welded joints, a problem which also involves soft and hard zones, albeit at a larger scale.

  17. Failure analysis of parameter-induced simulation crashes in climate models

    Science.gov (United States)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-08-01

    Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to failure or crashes for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.

  18. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  19. A multiple shock model for common cause failures using discrete Markov chain

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Kang, Chang Soon

    1992-01-01

    The most widely used models in common cause analysis are (single) shock models such as the BFR and the MFR. However, a single shock model cannot treat individual common causes separately and relies on some unrealistic assumptions. A multiple shock model for common cause failures is developed using Markov chain theory. This model treats each common cause shock as a separately and sequentially occurring event, to capture the change in the failure probability distribution due to each common cause shock. The final failure probability distribution is evaluated and compared with that from the BFR model. The results show that the multiple shock model, which minimizes the assumptions in the BFR model, is more realistic and conservative than the BFR model. Further work for application is the estimation of parameters, such as the common cause shock rate and the component failure probability given a shock, p, through data analysis
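
    A minimal sketch of the sequential-shock idea: take the state to be the number of failed components, let each shock fail every surviving component independently with probability p (a binomial transition), and propagate the distribution shock by shock. The component count and p below are hypothetical.

```python
import numpy as np
from scipy.stats import binom

def shock_transition_matrix(m, p):
    """P[i, j] = probability of going from i failed to j failed components
    after one common cause shock; each of the m - i survivors fails
    independently with probability p."""
    P = np.zeros((m + 1, m + 1))
    for i in range(m + 1):
        k = np.arange(0, m - i + 1)
        P[i, i + k] = binom.pmf(k, m - i, p)
    return P

m, p = 4, 0.3                      # hypothetical: 4 trains, p = P(fail | shock)
P = shock_transition_matrix(m, p)
dist = np.zeros(m + 1)
dist[0] = 1.0                      # start with all components working
for shock in range(1, 4):
    dist = dist @ P                # one more common cause shock
    print(f"after shock {shock}: P(all {m} failed) = {dist[m]:.4f}")
```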

  20. Evaluation of containment failure and cleanup time for Pu shots on the Z machine.

    Energy Technology Data Exchange (ETDEWEB)

    Darby, John L.

    2010-02-01

    Between November 30 and December 11, 2009, an evaluation was performed of the probability of containment failure, and of the time for cleanup of contamination of the Z machine given failure, for plutonium (Pu) experiments on the Z machine at Sandia National Laboratories (SNL). Due to the unique nature of the problem, there is little quantitative information available on the likelihood of failure of containment components or on the time to clean up. Information for the evaluation was obtained from Subject Matter Experts (SMEs) at the Z machine facility. The SMEs provided the State of Knowledge (SOK) for the evaluation. There is significant epistemic (state of knowledge) uncertainty associated with the events that comprise both failure of containment and cleanup. To capture epistemic uncertainty and to allow the SMEs to reason at the fidelity of the SOK, we used the belief/plausibility measure of uncertainty for this evaluation. We quantified two variables: the probability that the Pu containment system fails given a shot on the Z machine, and the time to clean up Pu contamination in the Z machine given failure of containment. We identified dominant contributors for both the time to cleanup and the probability of containment failure. These results will be used by SNL management to decide the course of action for conducting the Pu experiments on the Z machine.

  1. Failure and reliability prediction by support vector machines regression of time series data

    International Nuclear Information System (INIS)

    Chagas Moura, Marcio das; Zio, Enrico; Lins, Isis Didier; Droguett, Enrique

    2011-01-01

    Support Vector Machines (SVMs) are kernel-based learning methods which have been successfully adopted for regression problems. However, their use in reliability applications has not been widely explored. In this paper, a comparative analysis is presented in order to evaluate the SVM effectiveness in forecasting time-to-failure and reliability of engineered components based on time series data. The performance of SVM regression on literature case studies is measured against other advanced learning methods such as the Radial Basis Function, the traditional MultiLayer Perceptron model, Box-Jenkins autoregressive-integrated-moving average and the Infinite Impulse Response Locally Recurrent Neural Networks. The comparison shows that, in the analyzed cases, SVM outperforms or is comparable to other techniques. - Highlights: → Realistic modeling of reliability demands complex mathematical formulations. → SVM is appropriate when the input/output relation is unknown or very costly to obtain. → Results indicate the potential of SVM for reliability time series prediction. → Reliability estimates support the establishment of adequate maintenance strategies.
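
    A small reconstruction of the multiple-delayed-input/single-output setup with scikit-learn's SVR on synthetic cumulative failure times (window length, kernel, and hyperparameters are illustrative, not the paper's tuned values):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical cumulative times-to-failure (hours) of a software system;
# inter-failure times stretch out as reliability grows.
rng = np.random.default_rng(9)
t = np.cumsum(rng.exponential(1.0, 80) * np.linspace(1.0, 4.0, 80))

def windows(series, lags):
    """Multiple-delayed-input / single-output pairs: predict t[i] from
    the previous `lags` cumulative failure times."""
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    return X, series[lags:]

lags, split = 5, 60
X, y = windows(t, lags)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"one-step-ahead MAE on held-out failures: {mae:.2f} h")
```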

  2. Corrosion induced failure analysis of subsea pipelines

    International Nuclear Information System (INIS)

    Yang, Yongsheng; Khan, Faisal; Thodi, Premkumar; Abbassi, Rouzbeh

    2017-01-01

    Pipeline corrosion is one of the main causes of subsea pipeline failure. It is necessary to monitor and analyze pipeline condition to effectively predict likely failure. This paper presents an approach to analyze the observed abnormal events to assess the condition of subsea pipelines. First, it focuses on establishing a systematic corrosion failure model by Bow-Tie (BT) analysis, and subsequently the BT model is mapped into a Bayesian Network (BN) model. The BN model facilitates the modelling of interdependency of identified corrosion causes, as well as the updating of failure probabilities depending on the arrival of new information. Furthermore, an Object-Oriented Bayesian Network (OOBN) has been developed to better structure the network and to provide an efficient updating algorithm. Based on this OOBN model, probability updating and probability adaptation are performed at regular intervals to estimate the failure probabilities due to corrosion and potential consequences. This results in an interval-based condition assessment of subsea pipeline subjected to corrosion. The estimated failure probabilities would help prioritize action to prevent and control failures. Practical application of the developed model is demonstrated using a case study. - Highlights: • A Bow-Tie (BT) based corrosion failure model linking causation with the potential losses. • A novel Object-Oriented Bayesian Network (OOBN) based corrosion failure risk model. • Probability of failure updating and adaptation with respect to time using OOBN model. • Application of the proposed model to develop and test strategies to minimize failure risk.

  3. Prediction of failure in tube hydroforming process using a damage model

    International Nuclear Information System (INIS)

    Majzoobi, G. H.; Saniee, F. Freshteh; Shirazi, A.

    2007-01-01

    In the tube hydroforming process (THP), two types of loading, internal pressure and axial feeding, and in particular the combination of them, are needed to feed the material into the cavities of the die and form the workpiece into the desired shape. If the variation of pressure versus axial feeding is not determined properly, the workpiece may buckle, wrinkle or burst during THP. The appropriate variation is normally determined by experiment, which is expensive and time-consuming. In this work, numerical simulation using Johnson-Cook models for predicting the elasto-plastic response and the failure of the material is employed to obtain the best combination of internal pressure and axial feeding. The numerical simulations are examined against a number of experiments conducted in the present investigation. The results show very close agreement between the numerical simulations and the experiments, suggesting that numerical simulation using the Johnson-Cook material and failure models provides a valuable tool to examine the different parameters involved in THP
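
    For reference, the Johnson-Cook failure model referred to here expresses the equivalent plastic strain at failure as ε_f = (D1 + D2·e^{D3·σ*})(1 + D4·ln ε̇*)(1 + D5·T*) and accumulates damage D = Σ Δε/ε_f until D = 1. The sketch below uses a published 4340-steel parameter set purely for illustration; it is not a calibrated tube material.

```python
import numpy as np

def jc_failure_strain(triax, strain_rate, T_hom, D, eps0=1.0):
    """Johnson-Cook equivalent plastic strain at failure.

    eps_f = (D1 + D2*exp(D3*triax)) * (1 + D4*ln(rate/eps0)) * (1 + D5*T_hom)
    triax: stress triaxiality sigma_m/sigma_eq; T_hom: homologous temperature.
    """
    D1, D2, D3, D4, D5 = D
    return ((D1 + D2 * np.exp(D3 * triax))
            * (1.0 + D4 * np.log(max(strain_rate, 1e-12) / eps0))
            * (1.0 + D5 * T_hom))

# Published 4340-steel damage constants, used here for illustration only:
D = (0.05, 3.44, -2.12, 0.002, 0.61)

# Cumulative damage over load increments, as a hydroforming solver would do:
damage = 0.0
for d_eps, triax in [(0.02, 0.40), (0.03, 0.55), (0.03, 0.62)]:
    damage += d_eps / jc_failure_strain(triax, 1.0, 0.1, D)
    print(f"triax = {triax:.2f}  cumulative damage D = {damage:.3f}")
# An element "fails" (bursting onset) once D >= 1.
```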

  4. Reaction Times to Consecutive Automation Failures: A Function of Working Memory and Sustained Attention.

    Science.gov (United States)

    Jipp, Meike

    2016-12-01

    This study explored whether working memory and sustained attention influence cognitive lock-up, which is a delay in the response to consecutive automation failures. Previous research has demonstrated that the information that automation provides about failures and the time pressure associated with a task influence cognitive lock-up. Previous research has also demonstrated considerable variability in cognitive lock-up between participants, suggesting that individual differences might influence it. The present study tested whether working memory, including flexibility in executive functioning, and sustained attention might be crucial in this regard. Eighty-five participants were asked to monitor automated aircraft functions. The experimental manipulation consisted of whether or not an initial automation failure was followed by a consecutive failure. Reaction times to the failures were recorded. Participants' working-memory and sustained-attention abilities were assessed with standardized tests. As expected, participants' reactions to consecutive failures were slower than their reactions to initial failures. In addition, working-memory and sustained-attention abilities enhanced the speed with which participants reacted to failures, more so with regard to consecutive than to initial failures. The findings highlight that operators with better working memory and sustained attention have small advantages when initial failures occur, but their advantages increase across consecutive failures. The results stress the need to consider personnel selection strategies to mitigate cognitive lock-up in general and training procedures to enhance the performance of low-ability operators. © 2016, Human Factors and Ergonomics Society.

  5. Real time failure detection in unreinforced cementitious composites with triboluminescent sensor

    International Nuclear Information System (INIS)

    Olawale, David O.; Kliewer, Kaitlyn; Okoye, Annuli; Dickens, Tarik J.; Uddin, Mohammed J.; Okoli, Okenwa I.

    2014-01-01

    The in-situ triboluminescent optical fiber (ITOF) sensor has an integrated sensing and transmission component that converts the energy from damage events like impacts and crack propagation into optical signals that are indicative of the magnitude of damage in composite structures like concrete bridges. Utilizing the triboluminescence (TL) property of ZnS:Mn, the ITOF sensor has been successfully integrated into unreinforced cementitious composite beams to create multifunctional smart structures with in-situ failure detection capabilities. The fabricated beams were tested under flexural loading, and real time failure detection was made by monitoring the TL signals generated by the integrated ITOF sensor. Tested beam samples emitted distinctive TL signals at the instant of failure. In addition, we report herein a new and promising approach to damage characterization using TL emission profiles. Analysis of TL emission profiles indicates that the ITOF sensor responds to crack propagation through the beam even when not in contact with the crack. Scanning electron microscopy analysis indicated that fracto-triboluminescence was responsible for the TL signals observed at the instant of beam failure. -- Highlights: • Developed a new approach to triboluminescence (TL)-based sensing with ZnS:Mn. • Damage-induced excitation of ZnS:Mn enabled real time damage detection in composite. • Based on sensor position, correlation exists between TL signal and failure stress. • Introduced a new approach to damage characterization with TL profile analysis

  6. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, as well as treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by means of several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
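
    The load-strength interference idea at the core of this record can be made concrete with a small Monte Carlo experiment: a component survives while its random strength exceeds every load it has met, so reliability decays as load applications accumulate. The distributions below are illustrative assumptions.

      # Time-dependent load-strength interference by Monte Carlo.
      import numpy as np

      rng = np.random.default_rng(1)
      trials, cycles = 5000, 1000
      strength = rng.normal(600.0, 40.0, trials)          # tooth strength, MPa (assumed)
      loads = rng.normal(450.0, 30.0, (trials, cycles))   # repeated load applications
      survives = strength > loads.max(axis=1)             # the weakest moment decides
      print(f"reliability after {cycles} load cycles ~ {survives.mean():.4f}")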

  7. Cardiac dysfunction in heart failure: the cardiologist's love affair with time.

    Science.gov (United States)

    Brutsaert, Dirk L

    2006-01-01

    Translating research into clinical practice has been a challenge throughout medical history. From the present review, it should be clear that this is particularly the case for heart failure. As a consequence, public awareness of this disease has been disappointingly low, despite its prognosis being worse than that of most cancers and many other chronic diseases. We explore how, over the past 150 years since Ludwig and Marey, concepts about the evaluation of cardiac performance in patients with heart failure have emerged. From this historical-physiologic perspective, we have seen how 3 increasingly reductionist approaches or schools of thought have evolved in parallel, that is, an input-output approach, a hemodynamic pump approach, and a muscular pump approach. Each one of these has provided complementary insights into the pathophysiology of heart failure and has resulted in measurements or derived indices, some of which are still in use in present-day cardiology. From the third, most reductionist muscular pump approach, we have learned that myocardial and ventricular relaxation properties as well as temporal and spatial nonuniformities have been largely overlooked in the two other, input-output and hemodynamic pump, approaches. A key message from the present review is that relaxation and nonuniformities can be fully understood only from within the time-space continuum of cardiac pumping. As cyclicity and rhythm are, in some way, the most basic aspects of cardiac function, considerations of time should dominate over any measurement of cardiac performance as a muscular pump. Any measurement that is blind to the arrow of cardiac time should therefore be interpreted with caution. We have seen how the escape from the time domain, as with the calculation of LV ejection fraction, fascinating though it may be, has undoubtedly served to hinder a rational scientific debate on the recent, so-called systolic-diastolic heart failure controversy. Lacking appreciation of early

  8. Effect of therapy switch on time to second-line antiretroviral treatment failure in HIV-infected patients.

    Science.gov (United States)

    Häggblom, Amanda; Santacatterina, Michele; Neogi, Ujjwal; Gisslen, Magnus; Hejdeman, Bo; Flamholc, Leo; Sönnerborg, Anders

    2017-01-01

    Switch from first-line antiretroviral therapy (ART) to second-line ART is common in clinical practice. However, there is limited knowledge of the extent to which different reasons for therapy switch are associated with differences in long-term consequences and sustainability of the second-line ART. Data from 869 patients with 14601 clinical visits between 1999-2014 were derived from the national cohort database. Reasons for therapy switch and viral load (VL) levels at first-line ART failure were compared with regard to outcome of second-line ART. Using the Laplace regression model we analyzed the median, 10th, 20th, 30th and 40th percentile of time to viral failure (VF). Most patients (n = 495; 57.0%) switched from first-line to second-line ART without VF. Patients switching due to detectable VL with (n = 124; 14.2%) or without drug resistance mutations (DRM) (n = 250; 28.8%) experienced VF on their second-line regimen sooner (median time, years: 3.43 (95% CI 2.90-3.96) and 3.20 (95% CI 2.65-3.75), respectively) compared with those who switched without VF (4.53 years). Furthermore, the level of VL at first-line ART failure had a significant impact on failure of second-line ART starting after 2.5 years of second-line ART. In the context of life-long therapy, a median time on second-line ART of 4.53 years for these patients is short. To prolong time on second-line ART, further studies are needed on the reasons for therapy changes. Additionally, patients with a high VL at first-line VF should be monitored more frequently during the period after the therapy switch.
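
    Laplace regression, as used above, estimates percentiles of a survival time directly. The sketch below shows the analogous computation with ordinary quantile regression in statsmodels; real survival analyses must also handle censoring, which this toy example and its synthetic data ignore.

      # Percentile regression of time to viral failure on a covariate
      # (synthetic, censoring-free data for illustration only).
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n = 500
      vl_high = rng.integers(0, 2, n)                    # 1 = high VL at first-line failure
      time_to_vf = rng.exponential(4.5 - 1.5 * vl_high)  # years, assumed effect

      df = pd.DataFrame({"time_to_vf": time_to_vf, "vl_high": vl_high})
      for q in (0.1, 0.2, 0.3, 0.4, 0.5):
          fit = smf.quantreg("time_to_vf ~ vl_high", df).fit(q=q)
          print(f"q={q:.1f}  shift from high VL: {fit.params['vl_high']:+.2f} years")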

  9. Effect of therapy switch on time to second-line antiretroviral treatment failure in HIV-infected patients.

    Directory of Open Access Journals (Sweden)

    Amanda Häggblom

    Full Text Available Switch from first-line antiretroviral therapy (ART) to second-line ART is common in clinical practice. However, there is limited knowledge of the extent to which different reasons for therapy switch are associated with differences in long-term consequences and sustainability of the second-line ART. Data from 869 patients with 14601 clinical visits between 1999-2014 were derived from the national cohort database. Reasons for therapy switch and viral load (VL) levels at first-line ART failure were compared with regard to outcome of second-line ART. Using the Laplace regression model we analyzed the median, 10th, 20th, 30th and 40th percentile of time to viral failure (VF). Most patients (n = 495; 57.0%) switched from first-line to second-line ART without VF. Patients switching due to detectable VL with (n = 124; 14.2%) or without drug resistance mutations (DRM) (n = 250; 28.8%) experienced VF on their second-line regimen sooner (median time, years: 3.43 (95% CI 2.90-3.96) and 3.20 (95% CI 2.65-3.75), respectively) compared with those who switched without VF (4.53 years). Furthermore, the level of VL at first-line ART failure had a significant impact on failure of second-line ART starting after 2.5 years of second-line ART. In the context of life-long therapy, a median time on second-line ART of 4.53 years for these patients is short. To prolong time on second-line ART, further studies are needed on the reasons for therapy changes. Additionally, patients with a high VL at first-line VF should be monitored more frequently during the period after the therapy switch.

  10. Systematic evaluation of fault trees using real-time model checker UPPAAL

    International Nuclear Information System (INIS)

    Cha, Sungdeok; Son, Hanseong; Yoo, Junbeom; Jee, Eunkyung; Seong, Poong Hyun

    2003-01-01

    Fault tree analysis, the most widely used safety analysis technique in industry, is often applied manually. Although techniques such as cutset analysis or probabilistic analysis can be applied to the fault tree to derive further insights, they are inadequate for locating flaws when failure modes in fault tree nodes are incorrectly identified or when causal relationships among failure modes are inaccurately specified. In this paper, we demonstrate that model checking is a powerful technique that can formally validate the accuracy of fault trees. We used the real-time model checker UPPAAL because the system used as the case study, the nuclear power emergency shutdown software named Wolsong SDS2, has real-time requirements. By translating functional requirements written in SCR-style tabular notation into timed automata, two types of properties were verified: (1) whether the failure mode described in a fault tree node is consistent with the system's behavioral model; and (2) whether or not a fault tree node has been accurately decomposed. A group of domain engineers with detailed technical knowledge of Wolsong SDS2 and safety analysis techniques developed the fault tree used in the case study. Nevertheless, the model checking technique detected subtle ambiguities present in the fault tree

  11. Asymptotic behavior of total times For jobs that must start over if a failure occurs

    DEFF Research Database (Denmark)

    Asmussen, Søren; Fiorini, Pierre; Lipsky, Lester

    the ready queue, or it may restart the task. The behavior of systems under the first two scenarios is well documented, but the third (RESTART) has resisted detailed analysis. In this paper we derive tight asymptotic relations between the distribution of task times without failures and the total time when including failures, for any failure distribution. In particular, we show that if the task time distribution has an unbounded support then the total time distribution H is always heavy-tailed. Asymptotic expressions are given for the tail of H in various scenarios. The key ingredients of the analysis

  12. Asymptotic behaviour of total times for jobs that must start over if a failure occurs

    DEFF Research Database (Denmark)

    Asmussen, Søren; Fiorini, Pierre; Lipsky, Lester

    2008-01-01

    the ready queue, or it may restart the task. The behavior of systems under the first two scenarios is well documented, but the third (RESTART) has resisted detailed analysis. In this paper we derive tight asymptotic relations between the distribution of task times without failures and the total time when including failures, for any failure distribution. In particular, we show that if the task-time distribution has an unbounded support, then the total-time distribution H is always heavy tailed. Asymptotic expressions are given for the tail of H in various scenarios. The key ingredients of the analysis
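
    The RESTART effect analyzed in these two records is easy to reproduce numerically: every failure discards accumulated progress, and with an unbounded task-time distribution the total-time tail becomes heavy. The simulation below is a sketch under assumed exponential task times and an assumed failure rate.

      # Monte Carlo for the RESTART policy: total time is the sum of the
      # truncated attempts plus the final uninterrupted run.
      import numpy as np

      rng = np.random.default_rng(3)

      def restart_total_time(task_time, failure_rate):
          total = 0.0
          while True:
              t_fail = rng.exponential(1.0 / failure_rate)
              if t_fail >= task_time:      # attempt finishes before a failure
                  return total + task_time
              total += t_fail              # progress lost; start over

      samples = np.sort([restart_total_time(rng.exponential(1.0), 0.5)
                         for _ in range(100_000)])
      for q in (0.5, 0.9, 0.99, 0.999):    # rapidly growing quantiles signal the heavy tail
          print(f"{q:.3f}-quantile of total time: {samples[int(q * len(samples))]:.1f}")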

  13. Study on real-time elevator brake failure predictive system

    Science.gov (United States)

    Guo, Jun; Fan, Jinwei

    2013-10-01

    This paper presents a real-time failure predictive system for the elevator brake. By inspecting the running state of the coil with a high-precision, long-range laser triangulation non-contact measurement sensor, the displacement curve of the coil is gathered without interfering with the original system. By analyzing the displacement data with the diagnostic algorithm, hidden dangers in the brake system can be discovered in time, and the corresponding accident thus avoided.

  14. A Study on Estimating the Next Failure Time of Compressor Equipment in an Offshore Plant

    Directory of Open Access Journals (Sweden)

    SangJe Cho

    2016-01-01

    Full Text Available Offshore plant equipment usually has a long life cycle. During the O&M (Operation and Maintenance) phase, accidental failure of offshore plant equipment can cause catastrophic damage, so greater effort is needed to manage critical offshore equipment. Nowadays, thanks to emerging ICTs (Information and Communication Technologies), it is possible to send health monitoring information to the administrator of an offshore plant, which has led to much interest in CBM (Condition-Based Maintenance). This study introduces three approaches for predicting the next failure time of offshore plant equipment (a gas compressor) with case studies, based on a finite-state continuous-time Markov model, a linear regression method, and their hybrid model.
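
    The continuous-time Markov ingredient mentioned above boils down to an expected-hitting-time computation on the chain's generator matrix. The three-state degradation chain and its rates below are illustrative assumptions, not the paper's fitted model.

      # Expected time until the absorbing 'failed' state of a degradation chain.
      import numpy as np

      Q = np.array([[-0.010,  0.010,  0.000],    # healthy
                    [ 0.002, -0.022,  0.020],    # degraded (repair possible)
                    [ 0.000,  0.000,  0.000]])   # failed (absorbing); rates per day

      transient = [0, 1]
      A = Q[np.ix_(transient, transient)]
      h = np.linalg.solve(A, -np.ones(len(transient)))   # solve A h = -1
      print(f"expected days to failure from 'healthy':  {h[0]:.1f}")   # 160.0
      print(f"expected days to failure from 'degraded': {h[1]:.1f}")   #  60.0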

  15. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    Science.gov (United States)

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. A cascading failure model for analyzing railway accident causation

    Science.gov (United States)

    Liu, Jin-Tao; Li, Ke-Ping

    2018-01-01

    In this paper, a new cascading failure model is proposed for quantitatively analyzing the railway accident causation. In the model, the loads of nodes are redistributed according to the strength of the causal relationships between the nodes. By analyzing the actual situation of the existing prevention measures, a critical threshold of the load parameter in the model is obtained. To verify the effectiveness of the proposed cascading model, simulation experiments of a train collision accident are performed. The results show that the cascading failure model can describe the cascading process of the railway accident more accurately than the previous models, and can quantitatively analyze the sensitivities and the influence of the causes. In conclusion, this model can assist us to reveal the latent rules of accident causation to reduce the occurrence of railway accidents.
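
    The redistribution rule described above, where a failed node passes its load to neighbours in proportion to the strength of the causal links, can be sketched in a few lines; the tiny cause network, loads, and thresholds are invented for illustration.

      # Load redistribution cascade over a weighted causation graph.
      def cascade(load, capacity, weights, start):
          failed, queue = set(), [start]
          while queue:
              node = queue.pop()
              if node in failed:
                  continue
              failed.add(node)
              nbrs = {m: w for m, w in weights.get(node, {}).items() if m not in failed}
              total = sum(nbrs.values())
              for m, w in nbrs.items():
                  load[m] += load[node] * w / total    # strength-weighted share
                  if load[m] > capacity[m]:
                      queue.append(m)                  # overload propagates
          return failed

      load = {"signal": 0.9, "driver": 0.5, "dispatch": 0.4}
      capacity = {"signal": 1.0, "driver": 1.0, "dispatch": 1.2}
      weights = {"signal": {"driver": 2.0, "dispatch": 1.0}, "driver": {"dispatch": 1.0}}
      print(cascade(load, capacity, weights, "signal"))  # nodes swept into the cascade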

  17. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K_a, and the applicable K_a calibration relationship has been determined for both fully bonded and

  18. A time-dependent event tree technique for modelling recovery operations

    International Nuclear Information System (INIS)

    Kohut, P.; Fitzpatrick, R.

    1991-01-01

    The development of a simplified time dependent event tree methodology is presented. The technique is especially applicable to describe recovery operations in nuclear reactor accident scenarios initiated by support system failures. The event tree logic is constructed using time dependent top events combined with a damage function that contains information about the final state time behavior of the reactor core. Both the failure and the success states may be utilized for the analysis. The method is illustrated by modeling the loss of service water function with special emphasis on the RCP [reactor coolant pump] seal LOCA [loss of coolant accident] scenario. 5 refs., 2 figs., 2 tabs

  19. Plan on test to failure of a prestressed concrete containment vessel model

    International Nuclear Information System (INIS)

    Takumi, K.; Nonaka, A.; Umeki, K.; Nagata, K.; Soejima, M.; Yamaura, Y.; Costello, J.F.; Riesemann, W.A. von.; Parks, M.B.; Horschel, D.S.

    1992-01-01

    A summary of the plans to test a prestressed concrete containment vessel (PCCV) model to failure is provided in this paper. The test will be conducted as a part of a joint research program between the Nuclear Power Engineering Corporation (NUPEC), the United States Nuclear Regulatory Commission (NRC), and Sandia National Laboratories (SNL). The containment model will be a scaled representation of a PCCV for a pressurized water reactor (PWR). During the test, the model will be slowly pressurized internally until failure of the containment pressure boundary occurs. The objectives of the test are to measure the failure pressure, to observe the mode of failure, and to record the containment structural response up to failure. Pre- and posttest analyses will be conducted to forecast and evaluate the test results. Based on these results, a validated method for evaluating the structural behavior of an actual PWR PCCV will be developed. The concepts to design the PCCV model are also described in the paper

  20. Routine maintenance prolongs ESP time between failures

    International Nuclear Information System (INIS)

    Hurst, T.; Lannom, R.W.; Divine, D.L.

    1992-01-01

    This paper reports that routine maintenance of electric submersible motors (ESPs) significantly lengthened the mean time between motor failures (MTBF), decreased operating costs, and extended motor run life in the Sacroc Unit of the Kelly-Snyder field in West Texas. After the oil price boom of the early 1980s, rapidly eroding profit margins from producing properties caused a much stronger focus on reducing operating costs. In Sacroc, ESP operating life and repair costs became a major target of cost reduction efforts. The routine ESP maintenance program has been in place for over 3 years

  1. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)
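
    The size of the effect reported above can be reproduced with back-of-the-envelope arithmetic: split each unit's failure probability into independent and common-mode parts and compare the result against the pure-independence estimate. The beta-factor-style split and the numbers below are illustrative, not the report's data.

      # Two-unit redundancy with a common-mode (beta-factor style) split.
      p_unit = 0.05      # assumed per-demand failure probability of one diesel
      beta = 0.10        # assumed fraction of failures that are common mode

      p_common = beta * p_unit
      p_indep = (1 - beta) * p_unit
      p_naive = p_unit ** 2                               # independence only
      p_ccf = p_common + (1 - p_common) * p_indep ** 2    # with common mode
      print(f"independent model: {p_naive:.2e}")          # 2.50e-03
      print(f"with common mode:  {p_ccf:.2e}")            # ~7.0e-03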

  2. Failure diagnosis using discrete event models

    International Nuclear Information System (INIS)

    Sampath, M.; Sengupta, R.; Lafortune, S.; Teneketzis, D.; Sinnamohideen, K.

    1994-01-01

    We propose a Discrete Event Systems (DES) approach to the failure diagnosis problem. We present a methodology for modeling physical systems in a DES framework. We discuss the notion of diagnosability and present the construction procedure of the diagnoser. Finally, we illustrate our approach using a Heating, Ventilation and Air Conditioning (HVAC) system

  3. Computational modeling for hexcan failure under core disruptive accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sawada, T.; Ninokata, H.; Shimizu, A. [Tokyo Institute of Technology (Japan)]

    1995-09-01

    This paper describes the development of computational modeling for hexcan wall failures under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH has been analyzed by using the SIMMER-II code. The SIMBATH experiments were performed at KfK in Germany. The experiments used a thermite mixture to simulate fuel. The test geometry of SIMBATH ranged from single pin to 37-pin bundles. In this study, phenomena of hexcan wall failure found in a SIMBATH test were analyzed by SIMMER-II. Although the original model of SIMMER-II did not calculate any hexcan failure, several simple modifications made it possible to reproduce the hexcan wall melt-through observed in the experiment. In this paper the modifications and their significance are discussed for further modeling improvements.

  4. Modelling of Diffuse Failure and Fluidization in Geomaterials and Geostructures

    International Nuclear Information System (INIS)

    Pastor, M.

    2013-01-01

    Failure of geostructures is caused by changes in effective stresses induced by external loads (earthquakes, for instance), changes in the pore pressures (rain), in the geometry (erosion), or in material properties (chemical attack, degradation, weathering). Landslides can be analysed as the failure of a geostructure, the slope. There exist many alternative classifications of landslides, but we will consider here a simple classification into slides and flows. In the case of slides, the failure consists of the movement of a part of the slope with deformations which concentrate in a narrow zone, the failure surface. This can be idealized as localized failure, and it is typical of overconsolidated or dense materials exhibiting softening. On the other hand, flows are made of fluidized materials, flowing in a fluid-like manner. This mechanism of failure is known as diffuse failure, and has received much less attention from researchers. Modelling of diffuse failure of slopes is complex, because difficulties appear in the mathematical, constitutive and numerical models, which have to account for a phase transition. This work deals with modelling, and we will present here some tools recently developed by the author and the group to which he belongs. (Author)

  5. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Science.gov (United States)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs whenever the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. The volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
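
    In the spirit of the algorithm described above, the sketch below relaxes any bed slope steeper than the internal friction angle through a mass-conserving exchange between neighbouring elements, here reduced to a 1-D profile with invented numbers.

      # Slope-limited, mass-conserving bank relaxation on a 1-D bed profile.
      import numpy as np

      def relax_banks(z, dx, phi_deg, n_sweeps=200):
          tan_phi = np.tan(np.radians(phi_deg))
          for _ in range(n_sweeps):
              for i in range(len(z) - 1):
                  diff = z[i] - z[i + 1]
                  excess = abs(diff) - tan_phi * dx
                  if excess > 0:                        # steeper than friction angle
                      move = 0.5 * excess * np.sign(diff)
                      z[i] -= move                      # rotate the pair about its
                      z[i + 1] += move                  # midpoint: volume conserved
          return z

      z0 = np.array([2.0, 1.9, 1.0, 0.2, 0.1])          # bank elevations, m (invented)
      z1 = relax_banks(z0.copy(), dx=0.5, phi_deg=30.0)
      print(z1, "| volume before/after:", z0.sum(), z1.sum())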

  6. Measuring Time to Biochemical Failure in the TROG 96.01 Trial: When Should the Clock Start Ticking?

    International Nuclear Information System (INIS)

    Denham, James W.; Steigler, Allison; Kumar, Mahesh; Lamb, David S.; Joseph, David; Spry, Nigel A.; Tai, Keen-Hun; Atkinson, Chris; Turner, Sandra FRANZCR; Greer, Peter B.; Gleeson, Paul S.; D'Este, Catherine

    2009-01-01

    Purpose: We sought to determine whether short-term neoadjuvant androgen deprivation (STAD) duration influences the optimal time point from which Phoenix failure (time to biochemical failure; TTBF) should be measured. Methods and Materials: In the Trans-Tasman Radiation Oncology Group 96.01 trial, men with locally advanced prostate cancer were randomized to 3 or 6 months of STAD before and during prostatic irradiation (XRT) or to XRT alone. The prognostic value of TTBF measured from the end of radiation (ERT) and from randomization were compared using Cox models. Results: Between 1996 and 2000, 802 eligible patients were randomized. In 436 men with Phoenix failure, TTBF measured from randomization was a powerful predictor of prostate cancer-specific survival and marginally more accurate than TTBF measured from ERT in Cox models. Insufficient data were available to confirm whether TTBF measured from testosterone recovery may also be a suitable option. Conclusions: TTBF measured from randomization (commencement of therapy) performed well in this trial dataset and will be a convenient option if this finding holds in other datasets that include long-term androgen deprivation data.

  7. Light water reactor lower head failure analysis

    International Nuclear Information System (INIS)

    Rempe, J.L.; Chavez, S.A.; Thinnes, G.L.

    1993-10-01

    This document presents the results from a US Nuclear Regulatory Commission-sponsored research program to investigate the mode and timing of vessel lower head failure. Major objectives of the analysis were to identify plausible failure mechanisms and to develop a method for determining which failure mode would occur first in different light water reactor designs and accident conditions. Failure mechanisms, such as tube ejection, tube rupture, global vessel failure, and localized vessel creep rupture, were studied. Newly developed models and existing models were applied to predict which failure mechanism would occur first in various severe accident scenarios. So that a broader range of conditions could be considered simultaneously, calculations relied heavily on models with closed-form or simplified numerical solution techniques. Finite element techniques were employed for analytical model verification and examining more detailed phenomena. High-temperature creep and tensile data were obtained for predicting vessel and penetration structural response

  8. Light water reactor lower head failure analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rempe, J.L.; Chavez, S.A.; Thinnes, G.L. [EG and G Idaho, Inc., Idaho Falls, ID (United States)] [and others]

    1993-10-01

    This document presents the results from a US Nuclear Regulatory Commission-sponsored research program to investigate the mode and timing of vessel lower head failure. Major objectives of the analysis were to identify plausible failure mechanisms and to develop a method for determining which failure mode would occur first in different light water reactor designs and accident conditions. Failure mechanisms, such as tube ejection, tube rupture, global vessel failure, and localized vessel creep rupture, were studied. Newly developed models and existing models were applied to predict which failure mechanism would occur first in various severe accident scenarios. So that a broader range of conditions could be considered simultaneously, calculations relied heavily on models with closed-form or simplified numerical solution techniques. Finite element techniques were employed for analytical model verification and examining more detailed phenomena. High-temperature creep and tensile data were obtained for predicting vessel and penetration structural response.

  9. High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture

    Science.gov (United States)

    2017-11-22

    Final report (as of 05-Dec-2017) under Agreement Number W911NF-13-1-0238, Columbia University: High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture. Only the report documentation page was recovered; no abstract is available.

  10. Wood-adhesive bonding failure : modeling and simulation

    Science.gov (United States)

    Zhiyong Cai

    2010-01-01

    The mechanism of wood bonding failure when exposed to wet conditions or wet/dry cycles is not fully understood, and the role of the resulting internal stresses exerted upon the wood-adhesive bondline has yet to be quantitatively determined. Unlike previous modeling, this study has developed a new two-dimensional internal-stress model on the basis of the mechanics of...

  11. Most Probable Failures in LHC Magnets and Time Constants of their Effects on the Beam.

    CERN Document Server

    Gomez Alonso, Andres

    2006-01-01

    During LHC operation, energies up to 360 MJ will be stored in each proton beam and over 10 GJ in the main electrical circuits. With such high energies, beam losses can quickly lead to serious equipment damage. The Machine Protection Systems have been designed to provide reliable protection of the LHC through detection of the failures leading to beam losses and fast dumping of the beams. In order to determine the protection strategies, it is important to know the time constants of the failure effects on the beam. In this report, we give an estimation of the time constants of quenches and powering failures in LHC magnets. The most critical failures are powering failures in certain normal-conducting circuits, leading to relevant effects on the beam in ~1 ms. The failures in superconducting magnets that lead to the fastest losses are quenches. In this case, the effects on the beam can be significant ~10 ms after the quench occurs.

  12. ARRA: Reconfiguring Power Systems to Minimize Cascading Failures - Models and Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Dobson, Ian [Iowa State University; Hiskens, Ian [Unversity of Michigan; Linderoth, Jeffrey [University of Wisconsin-Madison; Wright, Stephen [University of Wisconsin-Madison

    2013-12-16

    Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines

  13. Multi-scale modeling of ductile failure in metallic alloys

    International Nuclear Information System (INIS)

    Pardoen, Th.; Scheyvaerts, F.; Simar, A.; Tekoglu, C.; Onck, P.R.

    2010-01-01

    Micro-mechanical models for ductile failure were developed in the seventies and eighties essentially to address cracking in structural applications and to complement the fracture mechanics approach. Later, this approach became attractive to physical metallurgists interested in the prediction of failure during forming operations and as a guide for the design of more ductile and/or high-toughness microstructures. Nowadays, a realistic treatment of damage evolution in complex metallic microstructures is becoming feasible when sufficiently sophisticated constitutive laws are used within the context of a multilevel modelling strategy. The current understanding and the state-of-the-art models for the nucleation, growth and coalescence of voids are reviewed with a focus on the underlying physics. Considerations are made about the introduction of the different length scales associated with the microstructure and the damage process. Two applications of the methodology are then described to illustrate the potential of the current models. The first application concerns the competition between intergranular and transgranular ductile fracture in aluminum alloys involving soft precipitate-free zones along the grain boundaries. The second application concerns the modeling of ductile failure in friction stir welded joints, a problem which also involves soft and hard zones, albeit at a larger scale. (authors)

  14. Statistical study on applied stress dependence of failure time in stress corrosion cracking of Zircaloy-4 alloy

    International Nuclear Information System (INIS)

    Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi; Tanaka, Akiei.

    1988-01-01

    Effects of applied stress on failure time in stress corrosion cracking of Zircaloy-4 alloy were investigated by the Weibull distribution method. Test pieces in evacuated silica tubes were annealed at 1,073 K for 7.2 × 10³ s, and then quenched into ice-water. The specimens, under constant applied stresses of 40-90% of the yield stress, were immersed in a CH₃OH-1 wt% I₂ solution at room temperature. The probability distribution of failure times under an applied stress of 40% of the yield stress was described by a single Weibull distribution, which has one shape parameter. The probability distributions of failure times under applied stresses above 60% of the yield stress were described by composite and mixed Weibull distributions, which have two shape parameters for the shorter-time and longer-time regions of failure. The values of these shape parameters in this study were larger than 1, which corresponds to wear-out failure. The observation of fracture surfaces and the stress dependence of the shape parameters indicated that the shape parameters both for the failure times under 40% of the yield stress and for the longer times above 60% of the yield stress corresponded to intergranular cracking, and that for shorter failure times corresponded to transgranular cracking and dimple fracture. (author)
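
    The Weibull machinery used in this record can be illustrated with a short fit: the estimated shape parameter indicates the failure regime (below one, early failures; near one, random; above one, wear-out). The data and parameters below are synthetic assumptions.

      # Fitting a two-parameter Weibull to failure times and reading the shape.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      failure_times = rng.weibull(2.5, 50) * 120.0       # synthetic SCC failure data, s

      shape, loc, scale = stats.weibull_min.fit(failure_times, floc=0.0)
      print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f}")
      if shape > 1.0:
          print("shape > 1: wear-out-type failure regime")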

  15. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development

    Directory of Open Access Journals (Sweden)

    Shanee Honig

    2018-06-01

    Full Text Available While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human-computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP) that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human-robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or on the cognitive, psychological, and social determinants that impact the design of mitigation strategies. By providing the stages of human information processing, RF-HIP can be used as a

  16. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    International Nuclear Information System (INIS)

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview of the Multiple Sequential Failure (MSF) model developed and its use in probabilistic risk assessments (PRAs), depending on the available data, is given. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows remarkable accommodation of the dependent failure data. The model, which provides a unique method for quantification of dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for performing the human reliability aspect of PRAs

  17. Patient-Specific Tailored Intervention Improves INR Time in Therapeutic Range and INR Variability in Heart Failure Patients.

    Science.gov (United States)

    Gotsman, Israel; Ezra, Orly; Hirsh Raccah, Bruria; Admon, Dan; Lotan, Chaim; Dekeyser Ganz, Freda

    2017-08-01

    Many patients with heart failure need anticoagulants, including warfarin. Good control is particularly challenging in heart failure patients, with less time spent in the therapeutic range, thereby increasing the risk of complications. This study aimed to evaluate the effect of a patient-specific tailored intervention on anticoagulation control in patients with heart failure. Patients with heart failure taking warfarin therapy (n = 145) were randomized to either standard care or a 1-time intervention assessing potential risk factors for INR lability, in which they received patient-specific instructions. Time in therapeutic range (TTR) using Rosendaal's linear model was assessed 3 months before and after the intervention. The patient-tailored intervention significantly improved anticoagulation control. The median TTR levels before intervention were suboptimal in the interventional and control groups (53% vs 45%, P = .14). After intervention, the median TTR increased significantly in the interventional group compared with the control group (80% [interquartile range, 62%-93%] vs 44% [29%-61%], P <.0001). The intervention resulted in a significant improvement in the interventional group before versus after intervention (53% vs 80%, P <.0001) but not in the control group (45% vs 44%, P = .95). The percentage of patients with a TTR ≥60%, considered therapeutic, was substantially higher in the interventional group: 79% versus 25% (P <.0001). The INR variability (standard deviation of each patient's INR measurements) decreased significantly in the interventional group, from 0.53 to 0.32 (P <.0001) after intervention, but not in the control group. Patient-specific tailored intervention significantly improves anticoagulation therapy in patients with heart failure. Copyright © 2017 Elsevier Inc. All rights reserved.
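
    Rosendaal's linear-interpolation method named above assumes the INR moves linearly between visits and credits each interval with its in-range fraction. A minimal numeric sketch, with invented visit data and the conventional 2.0-3.0 range, follows.

      # Time in therapeutic range (TTR) by Rosendaal linear interpolation.
      def rosendaal_ttr(days, inrs, low=2.0, high=3.0, steps=1000):
          in_range = total = 0.0
          for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
              span = d1 - d0
              total += span
              inside = sum(low <= i0 + (i1 - i0) * k / steps <= high
                           for k in range(steps)) / steps
              in_range += span * inside          # interval's in-range share
          return 100.0 * in_range / total

      days = [0, 14, 28, 49, 70]                 # visit days (invented)
      inrs = [1.8, 2.4, 3.4, 2.6, 2.5]           # INR at each visit (invented)
      print(f"TTR = {rosendaal_ttr(days, inrs):.1f}%")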

  18. An Integrated Model to Predict Corporate Failure of Listed Companies in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Nisansala Wijekoon

    2015-07-01

    Full Text Available The primary objective of this study is to develop an integrated model to predict corporate failure of listed companies in Sri Lanka. Logistic regression analysis was applied to a data set of 70 matched pairs of failed and non-failed companies listed on the Colombo Stock Exchange (CSE) in Sri Lanka over the period 2002 to 2010. A total of fifteen financial ratios and eight corporate governance variables were used as predictor variables of corporate failure. Analysis of the statistical testing results indicated that the model combining corporate governance variables with financial ratios improved the prediction accuracy to 88.57 per cent one year prior to failure. Furthermore, the predictive accuracy of this model in all three years prior to failure is above 80 per cent; hence the model is robust in obtaining accurate results for up to three years prior to failure. It was further found that two financial ratios, working capital to total assets and cash flow from operating activities to total assets, and two corporate governance variables, outside director ratio and presence of a company audit committee, have the most explanatory power to predict corporate failure. Therefore, the model developed in this study can assist investors, managers, shareholders, financial institutions, auditors and regulatory agents in Sri Lanka to forecast corporate failure of listed companies.
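
    The modelling step behind these results is a logistic regression on a mix of financial ratios and governance variables; the sketch below reproduces the shape of that analysis on synthetic matched-pair data with assumed effect directions, not the study's data set.

      # Corporate failure prediction with logistic regression (synthetic data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      n = 140                                   # 70 matched pairs
      failed = np.repeat([1, 0], n // 2)
      wc_ta = rng.normal(0.05 + 0.15 * (1 - failed), 0.08)    # working capital / assets
      cfo_ta = rng.normal(0.02 + 0.10 * (1 - failed), 0.05)   # operating cash flow / assets
      out_dir = rng.normal(0.30 + 0.20 * (1 - failed), 0.10)  # outside director ratio

      X = np.column_stack([wc_ta, cfo_ta, out_dir])
      clf = LogisticRegression().fit(X, failed)
      print(dict(zip(["WC/TA", "CFO/TA", "outside dir"], clf.coef_[0].round(2))))
      print(f"in-sample accuracy: {clf.score(X, failed):.2%}")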

  19. [Predictive factors for extubation failure on two or more occasions among preterm newborns].

    Science.gov (United States)

    Tapia-Rombo, Carlos Antonio; De León-Gómez, Noé; Ballesteros-Del-Olmo, Julio César; Ruelas-Vargas, Consuelo; Cuevas-Urióstegui, María Luisa; Castillo-Pérez, José Juan

    2010-01-01

    Mechanical ventilatory support has prolonged the life of the critically ill preterm newborn (PTNB), and during that period it is often necessary to reintubate the PTNB two or more times, with subsequent damage that draws the patient into a vicious circle of further injury with each reintubation. The objective of this study was to determine the factors that predict extubation failure on two or more occasions among PTNB of 28 to 36 weeks of gestational age. Extubation failure was defined as the need for reintubation within the first 72 hours after extubation, independent of the cause; the same criterion was applied to the second and subsequent extubations. All PTNB admitted to a third-level hospital between September and December 2004 who fulfilled the inclusion criteria (from a published study where we considered the first extubation failure) were included retrospectively, together with patients of the same hospital from January to October 2006, included retrolectively. Two groups were formed: group A, the cases (extubation failure two or more times), and group B, the controls (extubation failure for the first time only). Descriptive statistics and inferential statistics (Student's t test, Mann-Whitney U or Wilcoxon rank-sum test, as appropriate; chi-square or Fisher's exact test) were used. Odds ratios (OR) and multivariate analysis were employed to study predictive factors for extubation failure. Statistical significance was considered at p < 0.05; one predictor showed an OR of 5.3, 95% CI 1.3-21.4 (p = 0.02). The bronchoscopy study revealed some anatomical alterations that explained the extubation failure on the second occasion. We conclude that it is important to plan extubation in the PTNB when there has already been a previous failure, and to avoid the well-known predictive factors for extubation failure as much as possible during extubation

  20. A Validation Study of the Rank-Preserving Structural Failure Time Model: Confidence Intervals and Unique, Multiple, and Erroneous Solutions.

    Science.gov (United States)

    Ouwens, Mario; Hauch, Ole; Franzén, Stefan

    2018-05-01

    The rank-preserving structural failure time model (RPSFTM) is used for health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival when only reference treatment would have been used) and assumes that, at randomization, the counterfactual survival distribution for the investigational and reference arms is identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. To evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. Distributions other than exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition to indicate acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared to the time direct starters stay on investigational treatment.
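
    The counterfactual construction at the heart of the RPSFTM can be sketched without censoring: each patient's observed time splits into time off and on the investigational drug, the treated segment is rescaled by exp(psi), and psi is chosen so that the counterfactual distributions of the two arms coincide. Sign conventions vary between papers; the data below are synthetic, and the balance test is a simple rank statistic rather than the log-rank test typically used.

      # Grid-search g-estimation of psi for an RPSFTM-style counterfactual.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      n, psi_true = 400, -0.5                    # exp(0.5) ~ 1.65x life extension
      arm = rng.integers(0, 2, n)                # 1 = investigational arm
      frac_on = np.where(arm == 1, 1.0, rng.uniform(0, 0.5, n))  # control switchers
      U = rng.exponential(3.0, n)                # counterfactual (untreated) survival
      t_off = U * (1 - frac_on)
      t_on = U * frac_on * np.exp(-psi_true)     # treatment stretches this segment

      def imbalance(psi):
          u = t_off + np.exp(psi) * t_on         # candidate counterfactual times
          stat = stats.mannwhitneyu(u[arm == 1], u[arm == 0]).statistic
          return abs(stat - (arm == 1).sum() * (arm == 0).sum() / 2)

      grid = np.linspace(-1.5, 0.5, 81)
      print(f"psi balancing the arms: {min(grid, key=imbalance):+.2f} (true {psi_true})")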

  1. Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

    Science.gov (United States)

    Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud

    2018-01-01

    Background: The methodology of Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement of many organizations. For prioritizing failures, the index of “risk priority number (RPN)” is used, valued especially for its ease of use and its subjective evaluations of the occurrence, severity and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a quantitative and qualitative approach in this research. In the qualitative domain, focused group discussions were used to collect data. A quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined based on (1) defining inclusion criteria as severity of incident (clinical effect, claim consequence, waste of time and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers), and then (2) quantifying the risk priority criteria using the RPN index (361 for the highest-rated failure). The improved RPN scores, reassessed by root cause analysis, showed some variations. Conclusions: We concluded that standard criteria should be developed consistent with clinical terminology and specific scientific fields. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models. PMID:29441184
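
    The conventional index these records set out to revise is simply the product of three 1-10 scores. A worked toy example, with invented failure modes and scores, shows how it ranks failures.

      # Classic FMEA risk priority number: severity x occurrence x detectability.
      failure_modes = {                                  # (S, O, D), invented scores
          "patient ID mismatch in holding area": (9, 3, 3),
          "wrong-site surgical marking": (10, 2, 4),
          "delayed transfer to operating room": (4, 6, 2),
      }
      ranked = sorted(failure_modes.items(),
                      key=lambda kv: kv[1][0] * kv[1][1] * kv[1][2], reverse=True)
      for name, (s, o, d) in ranked:
          print(f"RPN = {s * o * d:3d}  {name}")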

  2. Revised risk priority number in failure mode and effects analysis model from the perspective of healthcare system

    Directory of Open Access Journals (Sweden)

    Fatemeh Rezaei

    2018-01-01

    Full Text Available Background: The methodology of Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement of many organizations. For prioritizing failures, the index of “risk priority number (RPN)” is used, valued especially for its ease of use and its subjective evaluations of the occurrence, severity and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a quantitative and qualitative approach in this research. In the qualitative domain, focused group discussions were used to collect data. A quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined based on (1) defining inclusion criteria as severity of incident (clinical effect, claim consequence, waste of time and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers), and then (2) quantifying the risk priority criteria using the RPN index (361 for the highest-rated failure). The improved RPN scores, reassessed by root cause analysis, showed some variations. Conclusions: We concluded that standard criteria should be developed consistent with clinical terminology and specific scientific fields. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models.

  3. Dynamic failure of dry and fully saturated limestone samples based on incubation time concept

    Directory of Open Access Journals (Sweden)

    Yuri V. Petrov

    2017-02-01

    Full Text Available This paper outlines the results of an experimental study of dynamic rock failure based on the comparison of dry and saturated limestone samples obtained during dynamic compression and split tests. The tests were performed using the Kolsky method and its modifications for dynamic splitting. The mechanical data (e.g. strength, time and energy characteristics) of this material at high strain rates are obtained. It is shown that these characteristics are sensitive to the strain rate. A unified interpretation of these rate effects, based on the structural-temporal approach, is hereby presented. It is demonstrated that the temporal dependence of the dynamic compressive and split tensile strengths of dry and saturated limestone samples can be predicted by the incubation time criterion. Previously discovered possibilities to optimize (minimize) the energy input for the failure process are discussed in connection with industrial rock failure processes. It is shown that the optimal energy input value associated with the critical load, which is required to initialize failure in the rock media, strongly depends on the incubation time and the impact duration. The optimal load shapes, which minimize the momentum for a single failure impact, are demonstrated. Through this investigation, a possible approach to reduce the specific energy required for rock cutting by means of high-frequency vibrations is also discussed.

  4. Cap plasticity models and compactive and dilatant pre-failure deformation

    International Nuclear Information System (INIS)

    Fossum, Arlo F.; Fredrich, Joanne T.

    2000-01-01

    At low mean stresses, porous geomaterials fail by shear localization, and at higher mean stresses, they undergo strain-hardening behavior. Cap plasticity models attempt to model this behavior using a pressure-dependent shear yield and/or shear limit-state envelope with a hardening or hardening/softening elliptical end cap to define pore collapse. While these traditional models describe compactive yield and ultimate shear failure, difficulties arise when the behavior involves a transition from compactive to dilatant deformation that occurs before the shear failure or limit-state shear stress is reached. In this work, a continuous surface cap plasticity model is used to predict compactive and dilatant pre-failure deformation. During loading the stress point can pass freely through the critical state point separating compactive from dilatant deformation. The predicted volumetric strain goes from compactive to dilatant without the use of a non-associated flow rule. The new model is stable in that Drucker's stability postulates are satisfied. The study has applications to several geosystems of current engineering interest (oil and gas reservoirs, nuclear waste repositories, buried targets, and depleted reservoirs for possible use in subsurface sequestration of greenhouse gases)

  5. Failure Forecasting in Triaxially Stressed Sandstones

    Science.gov (United States)

    Crippen, A.; Bell, A. F.; Curtis, A.; Main, I. G.

    2017-12-01

    Precursory signals to fracturing events have been observed to follow power-law accelerations in spatial, temporal, and size distributions leading up to catastrophic failure. In previous studies this behavior was modeled using Voight's relation for a geophysical precursor in order to perform 'hindcasts' by solving for the failure onset time. However, performing this analysis in retrospect creates a bias: we know an event happened and when it happened, and we can search the data for precursors accordingly. We aim to remove this retrospective bias, thereby allowing us to make failure forecasts in real time in a rock deformation laboratory. We triaxially compressed water-saturated 100 mm sandstone cores (Pc = 25 MPa, Pp = 5 MPa, strain rate 1.0E-5 s-1) to the point of failure while monitoring strain rate, differential stress, acoustic emissions (AEs), and continuous waveform data. Here we compare the current 'hindcast' methods on synthetic data and our real laboratory data. We then apply these techniques to increasing fractions of the data sets to observe the evolution of the forecast failure time with accumulating precursory data. We discuss these results as well as our plan to mitigate false positives and minimize errors for real-time application. Real-time failure forecasting could revolutionize the field of hazard mitigation for brittle failure processes by allowing non-invasive monitoring of civil structures, volcanoes, and possibly fault zones.
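
    The 'hindcast' procedure referred to above is commonly implemented as an inverse-rate analysis: in the α = 2 case of Voight's relation, the inverse of the precursor rate decays linearly in time, and the zero-crossing of a straight-line fit estimates the failure time. A minimal sketch on synthetic data, under those assumptions:

        import numpy as np

        # Synthetic precursor (e.g. AE) rate accelerating toward failure at
        # t_f = 100 s, following rate ~ 1 / (t_f - t): Voight's alpha = 2 case.
        t = np.linspace(0.0, 90.0, 50)
        rate = 1.0 / (100.0 - t)

        # Inverse rate is then linear in t; extrapolate its fit to zero.
        slope, intercept = np.polyfit(t, 1.0 / rate, 1)
        print(f"forecast failure time: {-intercept / slope:.1f} s")  # ~100.0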

  6. Failure Analysis of Nonvolatile Residue (NVR) Analyzer Model SP-1000

    Science.gov (United States)

    Potter, Joseph C.

    2011-01-01

    National Aeronautics and Space Administration (NASA) subcontractor Wiltech contacted the NASA Electrical Lab (NE-L) and requested a failure analysis of a Solvent Purity Meter, model SP-1000, produced by the VerTis Instrument Company. The meter, used to measure the contaminant in a solvent to determine the relative contamination on spacecraft flight hardware and ground servicing equipment, had been inoperable and in storage for an unknown amount of time. NE-L was asked to troubleshoot the unit and make a determination on what may be required to make the unit operational. Through the use of general troubleshooting processes and the review of a unit in service at the time of analysis, the unit was found to be repairable but would need the replacement of multiple components.

  7. Performance deterioration modeling and optimal preventive maintenance strategy under scheduled servicing subject to mission time

    Directory of Open Access Journals (Sweden)

    Li Dawei

    2014-08-01

    Full Text Available Servicing is applied periodically in practice with the aim of restoring the system state and prolonging the lifetime. It is generally seen as an imperfect maintenance action that has a major influence on the maintenance strategy. In order to model the maintenance effect of servicing, this study analyzes the deterioration characteristics of a system under scheduled servicing, and a deterioration model is then established from the failure mechanism using a compound Poisson process. On the basis of the system damage value and the failure mechanism, a failure rate refresh factor is proposed to describe the maintenance effect of servicing. A maintenance strategy is developed which combines the benefits of scheduled servicing and preventive maintenance. An optimization model is then given to determine the optimal servicing period and preventive maintenance time, with the objective of minimizing the expected life-cycle cost per unit time under a constraint on system survival probability for the duration of the mission time. By accounting for the mission time, the strategy controls the ability to accomplish the mission at any time and thus ensures high dependability. An example of a water pump rotor subject to scheduled servicing is introduced to illustrate the failure rate refresh factor and the proposed maintenance strategy. Compared with traditional methods, the numerical results show that the failure rate refresh factor describes the maintenance effect of servicing more intuitively and objectively. They also demonstrate that this maintenance strategy can prolong the lifetime, reduce the total lifetime maintenance cost, and guarantee the dependability of the system.
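
    As a schematic of the optimization step described above, the sketch below minimizes an expected cost per unit time over candidate preventive maintenance ages under a survival constraint. The Weibull lifetime, cost constants, and age-replacement cost-rate formula are illustrative assumptions; the paper's compound Poisson damage model and failure rate refresh factor are not reproduced.

        import numpy as np

        beta, eta = 2.5, 1000.0        # assumed Weibull shape / scale (hours)
        c_pm, c_fail = 1.0, 10.0       # preventive vs. failure replacement cost
        mission, r_min = 200.0, 0.90   # mission time and required survival

        def survival(t):
            return np.exp(-(t / eta) ** beta)

        def cost_rate(T, n=10_000):
            """Expected cost per unit time for replacement at age T."""
            grid = np.linspace(0.0, T, n)
            mean_cycle = survival(grid).sum() * (T / n)   # ~ E[min(X, T)]
            exp_cost = c_pm * survival(T) + c_fail * (1.0 - survival(T))
            return exp_cost / mean_cycle

        candidates = np.linspace(50.0, 1500.0, 300)
        feasible = [T for T in candidates if survival(min(T, mission)) >= r_min]
        best = min(feasible, key=cost_rate)
        print(f"optimal PM age ~ {best:.0f} h, cost rate {cost_rate(best):.4f}")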

  8. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into ones and zeros, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
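
    As a rough stand-in for the rule-creation step, the self-contained sketch below derives IF-THEN rules from binarized records using support and confidence thresholds. The item names and thresholds are invented for illustration, and the paper's Lattice-model bookkeeping is omitted.

        from itertools import combinations

        # Binarized machine logs: each record is the set of items observed together.
        records = [
            {"overheat", "vibration", "spindle_failure"},
            {"overheat", "spindle_failure"},
            {"vibration", "belt_failure"},
            {"overheat", "vibration", "spindle_failure"},
        ]
        min_support, min_confidence = 0.5, 0.8

        def support(itemset):
            return sum(itemset <= r for r in records) / len(records)

        items = sorted({i for r in records for i in r})
        frequent = [frozenset(c) for k in (1, 2, 3)
                    for c in combinations(items, k) if support(set(c)) >= min_support]

        # Emit IF-THEN rules whose confidence clears the threshold.
        for itemset in frequent:
            for k in range(1, len(itemset)):
                for lhs in map(frozenset, combinations(itemset, k)):
                    conf = support(itemset) / support(lhs)
                    if conf >= min_confidence:
                        print(f"IF {set(lhs)} THEN {set(itemset - lhs)} "
                              f"(sup={support(itemset):.2f}, conf={conf:.2f})")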

  9. Identification of Modeling Approaches To Support Common-Cause Failure Analysis

    International Nuclear Information System (INIS)

    Korsah, Kofi; Wood, Richard Thomas

    2015-01-01

    Experience with applying current guidance and practices for common-cause failure (CCF) mitigation to digital instrumentation and control (I&C) systems has proven problematic, and the regulatory environment has been unpredictable. The impact of CCF vulnerability is to inhibit I&C modernization and, thereby, challenge the long-term sustainability of existing plants. For new plants and advanced reactor concepts, the issue of CCF vulnerability for highly integrated digital I&C systems imposes a design burden resulting in higher costs and increased complexity. The regulatory uncertainty regarding which mitigation strategies are acceptable (e.g., what diversity is needed and how much is sufficient) drives designers to adopt complicated, costly solutions devised for existing plants. The conditions that constrain the transition to digital I&C technology by the U.S. nuclear industry require crosscutting research to resolve uncertainty, demonstrate necessary characteristics, and establish an objective basis for qualification of digital technology for usage in Nuclear Power Plant (NPP) I&C applications. To fulfill this research need, Oak Ridge National Laboratory is conducting an investigation into mitigation of CCF vulnerability for nuclear-qualified applications. The outcome of this research is expected to contribute to a fundamentally sound, comprehensive technical basis for establishing the qualification of digital technology for nuclear power applications. This report documents the investigation of modeling approaches for representing failure of I&C systems. Failure models are used when there is a need to analyze how the probability of success (or failure) of a system depends on the success (or failure) of individual elements. If these failure models are extensible to represent CCF, then they can be employed to support analysis of CCF vulnerabilities and mitigation strategies. Specifically, the research findings documented in this report identify modeling approaches that
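
    One of the simplest failure models that extends to represent CCF, and thus the kind of candidate approach such a survey weighs, is the beta-factor model, in which a fraction β of each component's failure rate is assumed to fail all redundant trains at once. A minimal sketch with invented numbers; this is a generic textbook example, not a model drawn from the report:

        import math

        lam = 1e-4     # assumed total failure rate of one channel (per hour)
        beta = 0.05    # assumed fraction of failures that are common cause
        t = 1000.0     # mission time (hours)

        lam_ind = (1.0 - beta) * lam   # independent failures
        lam_ccf = beta * lam           # common-cause failures disable all trains

        # 1-out-of-2 system: fails if both channels fail independently,
        # or if a single common-cause event occurs.
        p_ind = (1.0 - math.exp(-lam_ind * t)) ** 2
        p_ccf = 1.0 - math.exp(-lam_ccf * t)
        p_sys = p_ind + (1.0 - p_ind) * p_ccf
        print(f"system failure probability over {t:.0f} h: {p_sys:.3e}")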

  10. Identification of Modeling Approaches To Support Common-Cause Failure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Korsah, Kofi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wood, Richard Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-06-01

    Experience with applying current guidance and practices for common-cause failure (CCF) mitigation to digital instrumentation and control (I&C) systems has proven problematic, and the regulatory environment has been unpredictable. The impact of CCF vulnerability is to inhibit I&C modernization and, thereby, challenge the long-term sustainability of existing plants. For new plants and advanced reactor concepts, the issue of CCF vulnerability for highly integrated digital I&C systems imposes a design burden resulting in higher costs and increased complexity. The regulatory uncertainty regarding which mitigation strategies are acceptable (e.g., what diversity is needed and how much is sufficient) drives designers to adopt complicated, costly solutions devised for existing plants. The conditions that constrain the transition to digital I&C technology by the U.S. nuclear industry require crosscutting research to resolve uncertainty, demonstrate necessary characteristics, and establish an objective basis for qualification of digital technology for usage in Nuclear Power Plant (NPP) I&C applications. To fulfill this research need, Oak Ridge National Laboratory is conducting an investigation into mitigation of CCF vulnerability for nuclear-qualified applications. The outcome of this research is expected to contribute to a fundamentally sound, comprehensive technical basis for establishing the qualification of digital technology for nuclear power applications. This report documents the investigation of modeling approaches for representing failure of I&C systems. Failure models are used when there is a need to analyze how the probability of success (or failure) of a system depends on the success (or failure) of individual elements. If these failure models are extensible to represent CCF, then they can be employed to support analysis of CCF vulnerabilities and mitigation strategies. Specifically, the research findings documented in this report identify modeling approaches that

  11. A Retrospective Study of Success, Failure, and Time Needed to Perform Awake Intubation.

    Science.gov (United States)

    Joseph, Thomas T; Gal, Jonathan S; DeMaria, Samuel; Lin, Hung-Mo; Levine, Adam I; Hyman, Jaime B

    2016-07-01

    Awake intubation is the standard of care for management of the anticipated difficult airway. The performance of awake intubation may be perceived as complex and time-consuming, potentially leading clinicians to avoid this technique of airway management. This retrospective review of awake intubations at a large academic medical center was performed to determine the average time taken to perform awake intubation, its effects on hemodynamics, and the incidence and characteristics of complications and failure. Anesthetic records from 2007 to 2014 were queried for the performance of an awake intubation. Of the 1,085 awake intubations included for analysis, 1,055 involved the use of a flexible bronchoscope. Each awake intubation case was propensity matched with two controls (1:2 ratio), with similar comorbidities and intubations performed after the induction of anesthesia (n = 2,170). The time from entry into the operating room until intubation was compared between groups. The anesthetic records of all patients undergoing awake intubation were also reviewed for failure and complications. The median time to intubation for patients intubated post induction was 16.0 min (interquartile range: 13 to 22) from entrance into the operating room. The median time to intubation for awake patients was 24.0 min (interquartile range: 19 to 31). The complication rate was 1.6% (17 of 1,085 cases). The most frequent complications observed were mucous plug, endotracheal tube cuff leak, and inadvertent extubation. The failure rate for attempted awake intubation was 1% (n = 10). Awake intubations have a high rate of success and low rate of serious complications and failure. Awake intubations can be performed safely and rapidly.

  12. A Macaca mulatta model of fulminant hepatic failure

    Institute of Scientific and Technical Information of China (English)

    Ping Zhou; Hong Bu; Jie Xia; Gang Guo; Li Li; Yu-Jun Shi; Zi-Xing Huang; Qiang Lu; Hong-Xia Li

    2012-01-01

    AIM: To establish an appropriate primate model of fulminant hepatic failure (FHF). METHODS: We have, for the first time, established a large animal model of FHF in Macaca mulatta by intraperitoneal infusion of amatoxin and endotoxin. Clinical features, biochemical indexes, histopathology and iconography were examined to dynamically investigate the progress and outcome of the animal model. RESULTS: Our results showed that the enzymes and serum bilirubin were markedly increased and the enzyme-bilirubin segregation emerged 36 h after toxin administration. Coagulation activity was significantly decreased. Gradually deteriorated parenchymal abnormality was detected by magnetic resonance imaging (MRI) and ultrasonography at 48 h. The liver biopsy showed marked hepatocyte steatosis and massive parenchymal necrosis at 36 h and 49 h, respectively. The autopsy showed typical yellow atrophy of the liver. Hepatic encephalopathy of the models was also confirmed by hepatic coma, MRI and pathological changes of cerebral edema. The lethal effects of the extrahepatic organ dysfunction were ruled out by their biochemical indices, imaging and histopathology. CONCLUSION: We have established an appropriate large primate model of FHF, which is closely similar to clinic cases, and can be used for investigation of the mechanism of FHF and for evaluation of potential medical therapies.

  13. Use of non-conjugate prior distributions in compound failure models. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.

    1981-12-01

    Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. The methods considered for estimating values of the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariances of the parameter estimators, and it is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed, together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report.
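
    For the failure-on-demand case, the first estimation method listed, matching data moments to those of the prior, has a closed form when the prior is a beta distribution. A minimal sketch under that assumption, with invented data:

        # Moment-matching a beta prior for failure-on-demand probabilities.
        p = [0.002, 0.010, 0.004, 0.007, 0.003, 0.012]   # illustrative estimates

        n = len(p)
        mean = sum(p) / n
        var = sum((x - mean) ** 2 for x in p) / (n - 1)

        # Solve E[p] = a / (a + b) and Var[p] = ab / ((a + b)^2 (a + b + 1)).
        common = mean * (1.0 - mean) / var - 1.0
        a, b = mean * common, (1.0 - mean) * common
        print(f"matched beta prior: a = {a:.2f}, b = {b:.1f}")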

  14. Identification of hidden failures in control systems: a functional modelling approach

    International Nuclear Information System (INIS)

    Jalashgar, A.; Modarres, M.

    1996-01-01

    This paper presents a model which encompasses knowledge about a process control system's functionalities in a function-oriented failure analysis task. The technique, called Hybrid MFM-GTST, mainly utilizes two different function-oriented methods (MFM and GTST) to identify all functions of the system components, and hence possible sources of hidden failures in process control systems. Hidden failures are incipient failures within the system that in the long term may lead to the loss of major functions. The features of the method are described and demonstrated using an example of a process control system

  15. A Costing Analysis for Decision Making Grid Model in Failure-Based Maintenance

    Directory of Open Access Journals (Sweden)

    Burhanuddin M. A.

    2011-01-01

    Full Text Available Background. In the current economic downturn, industries have to exercise good control over production costs to maintain their profit margins. The maintenance department, as an imperative unit in industry, should attain all maintenance data, process the information instantaneously, and subsequently transform it into a useful decision, then act on the alternative that reduces production cost. The Decision Making Grid model is used to identify strategies for maintenance decisions. However, the model has a limitation, as it considers only two factors, namely downtime and frequency of failures. In this study we consider a third factor, cost, for failure-based maintenance. The objective of this paper is to introduce formulae to estimate maintenance cost. Methods. Fishbone analysis conducted with the Ishikawa model and Decision Making Grid methods are used in this study to reveal some underlying risk factors that delay failure-based maintenance. The goal of the study is to estimate the risk factor, that is, repair cost, to fit into the Decision Making Grid model. The Decision Making Grid model considers two variables, frequency of failure and downtime, in the analysis. This paper introduces a third variable, repair cost, for the Decision Making Grid model. This approach gives better results in categorizing the machines, reducing cost, and boosting earnings for the manufacturing plant. Results. We collected data from one of the food processing factories in Malaysia. From our empirical results, Machine C, Machine D, Machine F, and Machine I must be in the Decision Making Grid model even though their frequencies of failure and downtime are less than those of Machine B and Machine N, based on the costing analysis. The case study and experimental results show that the cost analysis in the Decision Making Grid model gives more promising strategies in failure-based maintenance. Conclusions. The improvement of the Decision Making Grid model for decision analysis with costing analysis is our contribution in this paper.

  16. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, Stephen [Cleveland State Univ., Cleveland, OH (United States)

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.
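
    Stochastic failure models for brittle materials such as graphite are commonly Weibull-based, with the probability of failure evaluated from the computed stress state. A minimal two-parameter sketch along those lines; the parameters are illustrative, not nuclear-graphite data, and this is not the project's actual model:

        import math

        m, sigma_0 = 10.0, 25.0   # assumed Weibull modulus / characteristic strength (MPa)

        def failure_probability(sigma):
            """Two-parameter Weibull: P_f = 1 - exp(-(sigma / sigma_0)^m)."""
            return 1.0 - math.exp(-((sigma / sigma_0) ** m))

        for stress in (10.0, 15.0, 20.0, 24.0):
            print(f"sigma = {stress:4.1f} MPa -> P_f = {failure_probability(stress):.4f}")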

  17. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    International Nuclear Information System (INIS)

    Duffy, Stephen

    2013-01-01

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  18. Bayesian analysis of repairable systems showing a bounded failure intensity

    International Nuclear Information System (INIS)

    Guida, Maurizio; Pulcini, Gianpaolo

    2006-01-01

    The failure pattern of repairable mechanical equipment subject to deterioration phenomena sometimes shows a finite bound for the increasing failure intensity. A non-homogeneous Poisson process with bounded increasing failure intensity is therefore illustrated and its characteristics are discussed. A Bayesian procedure, based on prior information on model-free quantities, is developed in order to allow technical information on the failure process to be incorporated into the inferential procedure and to improve the inference accuracy. Posterior estimation of the model-free quantities and of other quantities of interest (such as the optimal replacement interval) is provided, and predictions are given on the waiting time to the next failure and on the number of failures in a future time interval. Finally, numerical examples are given to illustrate the proposed inferential procedure
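
    A bounded increasing intensity of the kind analyzed here can be illustrated with a simple parametric form (the paper itself works with model-free quantities, so this particular choice is only an assumption):

        \lambda(t) = \lambda_\infty\,\bigl(1 - e^{-t/\theta}\bigr), \qquad
        \Lambda(t) = \int_0^t \lambda(s)\,ds = \lambda_\infty\,\bigl[t - \theta\,\bigl(1 - e^{-t/\theta}\bigr)\bigr],

    so the intensity rises from zero toward the finite bound \lambda_\infty, and \Lambda(t) gives the expected number of failures in (0, t].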

  19. NEESROCK: A Physical and Numerical Modeling Investigation of Seismically Induced Rock-Slope Failure

    Science.gov (United States)

    Applegate, K. N.; Wartman, J.; Keefer, D. K.; Maclaughlin, M.; Adams, S.; Arnold, L.; Gibson, M.; Smith, S.

    2013-12-01

    Worldwide, seismically induced rock-slope failures have been responsible for approximately 30% of the most significant landslide catastrophes of the past century. They are among the most common, dangerous, and still today, least understood of all seismic hazards. Seismically Induced Rock-Slope Failure: Mechanisms and Prediction (NEESROCK) is a major research initiative that fully integrates physical modeling (geotechnical centrifuge) and advanced numerical simulations (discrete element modeling) to investigate the fundamental mechanisms governing the stability of rock slopes during earthquakes. The research is part of the National Science Foundation-supported Network for Earthquake Engineering Simulation Research (NEES) program. With its focus on fractures and rock materials, the project represents a significant departure from the traditional use of the geotechnical centrifuge for studying soil, and pushes the boundaries of physical modeling in new directions. In addition to advancing the fundamental understanding of the rock-slope failure process under seismic conditions, the project is developing improved rock-slope failure assessment guidelines, analysis procedures, and predictive tools. Here, we provide an overview of the project, present experimental and numerical modeling results, discuss special considerations for the use of synthetic rock materials in physical modeling, and address the suitability of discrete element modeling for simulating the dynamic rock-slope failure process.

  20. Proportional and scale change models to project failures of mechanical components with applications to space station

    Science.gov (United States)

    Taneja, Vidya S.

    1996-01-01

    In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR) and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components, with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of the thruster valves by treating them as a censored repairable system. The model for each valve takes the form of a nonhomogeneous process with an intensity function that is treated either as a proportional hazard model or as a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z); this unobserved quantity z can be used to describe heterogeneity systematically. For the various models, methods for estimating the model parameters using censored data are developed. The available field data (from previously flown flights) are from non-renewable systems; the failure rates estimated from such data will need to be modified for renewable systems such as the thruster valves.
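
    The two competing specifications can be written compactly. With z the component-specific frailty drawn from G(z) and \lambda_0 a baseline intensity, one common formulation (assumed notation, not taken verbatim from the paper) is

        \lambda(t \mid z) = z\,\lambda_0(t) \quad \text{(proportional)}, \qquad
        \lambda(t \mid z) = z\,\lambda_0(z\,t) \quad \text{(scale change)},

    where the unobserved z captures systematic heterogeneity across components.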

  1. An analytical model for the ductile failure of biaxially loaded type 316 stainless steel subjected to thermal transients

    International Nuclear Information System (INIS)

    Dimelfi, R.J.

    1987-01-01

    Failure properties are calculated for the case of biaxially loaded type 316 stainless steel tubes that are heated from 300 K to near melting at various constant rates. The procedure involves combining a steady state plastic-deformation rate law with a strain hardening equation. Integrating under the condition of plastic instability gives the time and plastic strain at which ductile failure occurs for a given load. The result is presented as an analytical expression for equivalent plastic strain as a function of equivalent stress, temperature, heating rate and material constants. At large initial load, ductile fracture is calculated to occur early, at low temperatures, after very little deformation. At very small loads deformation continues for a long time to high temperatures where creep rupture mechanisms limit ductility. In the case of intermediate loads, the plastic strain accumulated before the occurrence of unstable ductile fracture is calculated. Comparison of calculated results is made with existing experimental data from pressurized tubes heated at 5.6 K/s and 111 K/s. When the effect of grain growth on creep ductility is taken into account from recrystallization data, agreement between measured and calculated uniform ductility is excellent. The general reduction in ductility and failure time that is observed at higher heating rate is explained via the model. The model provides an analytical expression for the ductility and failure time during transients for biaxially loaded type 316 stainless steel as a function of the initial temperature and load, as well as the material creep and strain hardening parameters. (orig.)
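
    The plastic instability condition underlying such a derivation is, in its simplest uniaxial form, the Considère criterion (stated here as general background; the paper's biaxial treatment introduces additional geometry factors):

        \frac{d\sigma}{d\varepsilon} \le \sigma ,

    i.e., deformation localizes once strain hardening can no longer compensate for the loss of load-bearing area, and the strain at which equality first holds sets the uniform ductility.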

  2. ξ common cause failure model and method for defense effectiveness estimation

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1991-08-01

    Two issues are dealt with. One is the development of an event-based parametric model called the ξ-CCF model. Its parameters are expressed as fractions of the progressive multiplicities of failure events. Through these expressions, the contribution of each multiple failure can be presented more clearly, which can help in selecting defense tactics against common cause failures. The other is a method, based on operational experience and engineering judgement, for estimating the effectiveness of defense tactics. It is expressed in terms of a reduction matrix for a given tactic on a specific plant in event-by-event form. The application to a practical example shows that the model, in cooperation with the method, can straightforwardly estimate the effectiveness of defense tactics. It can be easily used by operators and its application may be extended.

  3. Modelling and Verifying Communication Failure of Hybrid Systems in HCSP

    DEFF Research Database (Denmark)

    Wang, Shuling; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    Hybrid systems are dynamic systems with interacting discrete computation and continuous physical processes. They have become ubiquitous in our daily life, e.g. automotive, aerospace and medical systems, and in particular, many of them are safety-critical. For a safety-critical hybrid system, in the presence of communication failure, the expected control from the controller will get lost and as a consequence the physical process cannot behave as expected. In this paper, we mainly consider the communication failure caused by the non-engagement of one party in communication action, i.e. the communication itself fails to occur. To address this issue, this paper proposes a formal framework by extending HCSP, a formal modeling language for hybrid systems, for modeling and verifying hybrid systems in the absence of receiving messages due to communication failure. We present two inference systems...

  4. A New Material Constitutive Model for Predicting Cladding Failure

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Joe; Dunham, Robert [ANATECH Corp., San Diego, CA (United States); Rashid, Mark [University of California Davis, Davis, CA (United States); Machiels, Albert [EPRI, Palo Alto, CA (United States)

    2009-06-15

    An important issue in fuel performance and safety evaluations is the characterization of the effects of hydrides on cladding mechanical response and failure behavior. The hydride structure formed during power operation transforms the cladding into a complex multi-material composite, with a through-thickness concentration profile that causes cladding ductility to vary by more than an order of magnitude between the ID and OD. However, the current practice of mechanical property testing treats the cladding as a homogeneous material characterized by a single stress-strain curve, regardless of its hydride morphology. Consequently, as irradiation conditions and hydride evolution change, new material property testing is required, which results in a state of continuous need for valid material property data. A recently developed constitutive model treats the cladding as a multi-material composite in which the metal and the hydride platelets are treated as separate material phases with their own elastic-plastic and fracture properties, interacting at their interfaces with appropriate constraint conditions between them to ensure strain and stress compatibility. An essential feature of the model is a multi-phase damage formulation that models the complex interaction between the hydride phases and the metal matrix and the coupled effect of radial and circumferential hydrides on the cladding stress-strain response. This gives the model the capability of directly predicting cladding failure progression during a loading event and, as such, provides a unique tool for constructing failure criteria analytically where none could be developed by conventional material testing. Implementation of the model in a fuel behavior code provides the capability to predict in-reactor operational failures due to PCI or missing pellet surfaces (MPS) without having to rely on failure criteria. An even stronger motivation for use of the model is in the transportation accident analysis of spent fuel

  5. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
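
    A minimal sketch of the surrogate idea, assuming scikit-learn's GaussianProcessRegressor, a toy series system with two limit states, and plain Monte Carlo on the surrogates; the adaptive refinement near the limit states that makes the published method efficient is omitted:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(0)

        # Two component limit states; g_i(x) <= 0 means component i fails.
        def g1(x):
            return 3.0 - x[:, 0] - x[:, 1]

        def g2(x):
            return 3.5 + x[:, 0] - x[:, 1]

        # One surrogate per limit state, fit on a small design of experiments.
        X = rng.normal(size=(60, 2))
        gps = [GaussianProcessRegressor(normalize_y=True).fit(X, g(X)) for g in (g1, g2)]

        # Monte Carlo on the cheap surrogates; series system fails if any g <= 0.
        S = rng.normal(size=(50_000, 2))
        fail = np.zeros(len(S), dtype=bool)
        for gp in gps:
            fail |= gp.predict(S) <= 0.0
        print(f"estimated system P_f ~ {fail.mean():.4f}")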

  6. Murine Models of Heart Failure With Preserved Ejection Fraction

    Directory of Open Access Journals (Sweden)

    Maria Valero-Muñoz, PhD

    2017-12-01

    Full Text Available Heart failure with preserved ejection fraction (HFpEF) is characterized by signs and symptoms of heart failure in the presence of a normal left ventricular ejection fraction. Despite accounting for up to 50% of all clinical presentations of heart failure, the mechanisms implicated in HFpEF are poorly understood, thus precluding effective therapy. The pathophysiological heterogeneity of the HFpEF phenotype also contributes to this disease and likely to the absence of evidence-based therapies. Limited access to human samples and imperfect animal models that do not completely recapitulate the human HFpEF phenotype have impeded our understanding of the mechanistic underpinnings of this disease. Aging and comorbidities such as atrial fibrillation, hypertension, diabetes and obesity, pulmonary hypertension, and renal dysfunction are highly associated with HFpEF, yet the relationships and contributions among them remain ill-defined. This review discusses some of the distinctive clinical features of HFpEF in association with these comorbidities and highlights the advantages and disadvantages of commonly used murine models for studying the HFpEF phenotype.

  7. Employment status at time of first hospitalization for heart failure is associated with a higher risk of death and rehospitalization for heart failure

    DEFF Research Database (Denmark)

    Rørth, Rasmus; Fosbøl, Emil L; Mogensen, Ulrik M

    2018-01-01

    AIMS: Employment status at time of first heart failure (HF) hospitalization may be an indicator of both self-perceived and objective health status. In this study, we examined the association between employment status and the risk of all-cause mortality and recurrent HF hospitalization in a nation...

  8. ISSUES ASSOCIATED WITH PROBABILISTIC FAILURE MODELING OF DIGITAL SYSTEMS

    International Nuclear Information System (INIS)

    CHU, T.L.; MARTINEZ-GURIDI, G.; LIHNER, J.; OVERLAND, D.

    2004-01-01

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process for instrumentation and control (I and C) systems is based on deterministic requirements, e.g., single failure criteria, and defense in depth and diversity. Probabilistic considerations can be used as supplements to the deterministic process. The National Research Council has recommended development of methods for estimating failure probabilities of digital systems, including commercial off-the-shelf (COTS) equipment, for use in probabilistic risk assessment (PRA). NRC staff has developed informal qualitative and quantitative requirements for PRA modeling of digital systems. Brookhaven National Laboratory (BNL) has performed a review of the state of the art of methods and tools that can potentially be used to model digital systems. The objectives of this paper are to summarize the review, discuss the issues associated with probabilistic modeling of digital systems, and identify potential areas of research that would enhance the state of the art toward a satisfactory modeling method that could be integrated with a typical probabilistic risk assessment

  9. Physical and theoretical modeling of rock slopes against block-flexure toppling failure

    Directory of Open Access Journals (Sweden)

    Mehdi Amini

    2015-12-01

    Full Text Available Block-flexure is the most common mode of toppling failure in natural and excavated rock slopes. In such failure, some rock blocks break due to tensile stresses and some overturn under their own weights and then all of them topple together. In this paper, first, a brief review of previous studies on toppling failures is presented. Then, the physical and mechanical properties of experimental modeling materials are summarized. Next, the physical modeling results of rock slopes with the potential of block-flexural toppling failures are explained and a new analytical solution is proposed for the stability analysis of such slopes. The results of this method are compared with the outcomes of the experiments. The comparative studies show that the proposed analytical approach is appropriate for the stability analysis of rock slopes against block-flexure toppling failure. Finally, a real case study is used for the practical verification of the suggested method.

  10. A model for quantification of temperature profiles via germination times

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Adolf, Verena Isabelle; Jacobsen, Sven-Erik

    2013-01-01

    Current methodology to quantify temperature characteristics in germination of seeds is predominantly based on analysis of the time to reach a given germination fraction, that is, the quantiles in the distribution of the germination time of a seed. In practice, interpolation between observed germination fractions at given monitoring times is used to obtain the time to reach a given germination fraction. As a consequence, the obtained value will be highly dependent on the actual monitoring scheme used in the experiment. In this paper, a link between currently used quantile models for the germination time and a specific type of accelerated failure time models is provided. As a consequence, the observed number of germinated seeds at given monitoring times may be analysed directly by a grouped time-to-event model from which characteristics of the temperature profile may be identified and estimated...

  11. Using recurrent neural network models for early detection of heart failure onset.

    Science.gov (United States)

    Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng

    2017-03-01

    We explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
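
    A minimal sketch of a GRU-based risk model of the kind described, written in PyTorch. The multi-hot encoding of visit events, the tensor shapes, and all class and parameter names are assumptions for illustration, not the authors' implementation:

        import torch
        import torch.nn as nn

        class HFRiskGRU(nn.Module):
            """Sequence of visit-level event vectors -> probability of incident HF."""
            def __init__(self, n_codes=1000, hidden=128):
                super().__init__()
                self.gru = nn.GRU(input_size=n_codes, hidden_size=hidden,
                                  batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):          # x: (batch, visits, n_codes), multi-hot
                _, h = self.gru(x)         # h: (1, batch, hidden), last hidden state
                return torch.sigmoid(self.head(h[-1])).squeeze(-1)

        model = HFRiskGRU()
        x = torch.zeros(4, 20, 1000)       # 4 patients, 20 visits in the window
        x[0, 3, 42] = 1.0                  # e.g. code 42 recorded at visit 3
        print(model(x).shape)              # torch.Size([4]): per-patient risk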

  12. Timing of pregnancy, postpartum risk of virologic failure and loss to follow-up among HIV-positive women.

    Science.gov (United States)

    Onoya, Dorina; Sineke, Tembeka; Brennan, Alana T; Long, Lawrence; Fox, Matthew P

    2017-07-17

    We assessed the association between the timing of pregnancy and the risk of postpartum virologic failure and loss from HIV care in South Africa. This is a retrospective cohort study of 6306 HIV-positive women aged 15-49 at antiretroviral therapy (ART) initiation, who started ART between January 2004 and December 2013 in Johannesburg, South Africa. The incidence of virologic failure (two consecutive viral load measurements of >1000 copies/ml) and loss to follow-up (>3 months late for a visit) during 24 months postpartum was assessed using Cox proportional hazards modelling. The rate of postpartum virologic failure was higher following an incident pregnancy on ART [adjusted hazard ratio 1.8, 95% confidence interval (CI): 1.1-2.7] than among women who initiated ART during pregnancy. This difference was sustained among women with a CD4 cell count less than 350 cells/μl at delivery (adjusted hazard ratio 1.8, 95% CI: 1.1-3.0). Predictors of postpartum virologic failure were being viremic, longer time on ART, being 25 years old or younger, low CD4 cell count and anaemia at delivery, and initiating ART on a stavudine-containing or abacavir-containing regimen. There was no difference in postpartum loss to follow-up rates between the incident pregnancy group (hazard ratio 0.9, 95% CI: 0.7-1.1) and those who initiated ART in pregnancy. The risk of virologic failure remains high among postpartum women, particularly those who conceive on ART. The results highlight the need to provide adequate support for HIV-positive women with fertility intentions after ART initiation and to strengthen monitoring and retention efforts for postpartum women to sustain the benefits of ART.
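
    A sketch of the kind of Cox proportional hazards analysis reported here, using the lifelines library; the data frame and its column names are hypothetical stand-ins, not the study's dataset:

        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical postpartum follow-up data (months to event or censoring).
        df = pd.DataFrame({
            "months_followup": [6, 24, 13, 24, 9, 18, 24, 4],
            "virologic_failure": [1, 0, 1, 0, 1, 0, 0, 1],
            "conceived_on_art": [1, 0, 1, 0, 1, 0, 1, 1],
            "cd4_lt_350": [1, 0, 1, 1, 0, 0, 1, 1],
        })

        cph = CoxPHFitter(penalizer=0.1)   # small penalty stabilizes tiny samples
        cph.fit(df, duration_col="months_followup", event_col="virologic_failure")
        cph.print_summary()                # hazard ratios (exp(coef)) with 95% CIs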

  13. Comparison of US/FRG accident condition models for HTGR fuel failure and radionuclide release

    International Nuclear Information System (INIS)

    Verfondern, K.

    1991-03-01

    The objective was to compare calculation models used in safety analyses in the US and FRG which describe fission product release behavior from TRISO coated fuel particles under core heatup accident conditions. The first step performed is a qualitative comparison of both sides' fuel failure and release models in order to identify differences and similarities in modeling assumptions and inputs. Assumptions about possible particle failure mechanisms under accident conditions (SiC degradation, pressure vessel failure) are principally the same on both sides, though they are used in different modeling approaches. The characterization of a standard (= intact) coated particle as being of a non-releasing (GA) or possibly releasing (KFA/ISF) type is one of the major qualitative differences. Similar models are used regarding radionuclide release from exposed particle kernels. In a second step, a quantitative comparison of the calculation models was made by assessing a benchmark problem predicting particle failure and radionuclide release under MHTGR conduction cooldown accident conditions. Calculations with each side's reference method have come to almost the same failure fractions after 250 hours for the core region with maximum core heatup temperature, despite the different modeling approaches of SORS and PANAMA-I. The comparison of the results of particle failure obtained with the Integrated Failure and Release Model for Standard Particles and its revision provides a 'verification' of these models in the sense that the codes (SORS and PANAMA-II and -III, respectively), which were independently developed, lead to very good agreement in the predictions. (orig./HP) [de]

  14. Modeling cascading failures with the crisis of trust in social networks

    Science.gov (United States)

    Yi, Chengqi; Bao, Yuanyuan; Jiang, Jingchi; Xue, Yibo

    2015-10-01

    In social networks, some friends often post or disseminate malicious information, such as advertising messages, informal overseas purchasing messages, illegal messages, or rumors. Too much malicious information may cause a feeling of intense annoyance. When the feeling exceeds a certain threshold, it will lead social network users to distrust these friends, which we call the crisis of trust. The crisis of trust in social networks has already become a universal concern and an urgent unsolved problem. As a result of the crisis of trust, users will cut off their relationships with some of their untrustworthy friends. Once a few of these relationships are made unavailable, it is likely that trust in other friends will decline, and a large portion of the social network will be influenced. The phenomenon in which the unavailability of a few relationships triggers the failure of successive relationships is known as cascading failure dynamics. To the best of our knowledge, no one has formally proposed cascading failure dynamics with the crisis of trust in social networks. In this paper, we address this potential issue, quantify the trust between two users based on user similarity, and model the minimum tolerance with a nonlinear equation. Furthermore, we construct the processes of cascading failure dynamics by considering the unique features of social networks. Based on real social network datasets (Sina Weibo, Facebook and Twitter), we adopt two attack strategies (the highest trust attack (HT) and the lowest trust attack (LT)) to evaluate the proposed dynamics and to further analyze the changes of the topology, connectivity, cascading time and cascade effect under the above attacks. We numerically find that the sparse and inhomogeneous network structure in our cascading model can better improve the robustness of social networks than the dense and homogeneous structure. However, the network structure that seems like ripples is more vulnerable than the other two network structures.

  15. An Enhanced Preventive Maintenance Optimization Model Based on a Three-Stage Failure Process

    Directory of Open Access Journals (Sweden)

    Ruifeng Yang

    2015-01-01

    Full Text Available Nuclear power plants are highly complex systems, and the issues related to their safety are of primary importance. Probabilistic safety assessment is regarded as the most widespread methodology for studying the safety of nuclear power plants. As maintenance is one of the most important factors affecting reliability and safety, an enhanced preventive maintenance optimization model based on a three-stage failure process is proposed. Preventive maintenance is still a dominant maintenance policy due to its ease of implementation. In order to correspond to the three-color scheme commonly used in practice, the lifetime of the system before failure is divided into three stages, namely, the normal, minor defective, and severe defective stages. When the minor defective stage is identified, two measures are considered for comparison: one halves the inspection interval only the first time the minor defective stage is identified; the other halves the subsequent inspection interval whenever the minor defective stage is identified. Maintenance is implemented immediately once the severe defective stage is identified. Minimizing the expected cost per unit time is the objective function used to optimize the inspection interval. Finally, a numerical example is presented to illustrate the effectiveness of the proposed models.
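
    A Monte Carlo sketch of the three-stage policy with interval halving after a minor-defect finding; the exponential stage durations, the costs, and the candidate intervals are illustrative assumptions rather than the paper's model:

        import random

        MEAN_NORMAL, MEAN_MINOR, MEAN_SEVERE = 120.0, 30.0, 10.0   # days (assumed)
        C_INSPECT, C_PM, C_FAIL = 1.0, 50.0, 500.0                 # costs (assumed)

        def simulate_cycle(interval):
            """One renewal cycle: returns (cost, length) under the halving rule."""
            t_minor = random.expovariate(1.0 / MEAN_NORMAL)
            t_severe = t_minor + random.expovariate(1.0 / MEAN_MINOR)
            t_fail = t_severe + random.expovariate(1.0 / MEAN_SEVERE)
            t, dt, cost = 0.0, interval, 0.0
            while True:
                t += dt
                if t >= t_fail:            # failed between inspections
                    return cost + C_FAIL, t_fail
                cost += C_INSPECT
                if t >= t_severe:          # severe stage found: maintain at once
                    return cost + C_PM, t
                if t >= t_minor:           # minor stage found: halve the interval
                    dt = interval / 2.0

        def cost_rate(interval, n=20_000):
            cycles = [simulate_cycle(interval) for _ in range(n)]
            return sum(c for c, _ in cycles) / sum(L for _, L in cycles)

        best = min((10, 20, 30, 45, 60), key=cost_rate)
        print(f"best inspection interval among candidates: {best} days")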

  16. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  17. A zipper network model of the failure mechanics of extracellular matrices.

    Science.gov (United States)

    Ritter, Michael C; Jesudason, Rajiv; Majumdar, Arnab; Stamenovic, Dimitrije; Buczek-Thomas, Jo Ann; Stone, Phillip J; Nugent, Matthew A; Suki, Béla

    2009-01-27

    Mechanical failure of soft tissues is characteristic of life-threatening diseases, including capillary stress failure, pulmonary emphysema, and vessel wall aneurysms. Failure occurs when mechanical forces are sufficiently high to rupture the enzymatically weakened extracellular matrix (ECM). Elastin, an important structural ECM protein, is known to stretch beyond 200% strain before failing. However, ECM constructs and native vessel walls composed primarily of elastin and proteoglycans (PGs) have been found to fail at much lower strains. In this study, we hypothesized that PGs significantly contribute to tissue failure. To test this, we developed a zipper network model (ZNM), in which springs representing elastin are organized into long wavy fibers in a zipper-like formation and placed within a network of springs mimicking PGs. Elastin and PG springs possessed distinct mechanical and failure properties. Simulations using the ZNM showed that the failure of PGs alone reduces the global failure strain of the ECM well below that of elastin, and hence, digestion of elastin does not influence the failure strain. Network analysis suggested that whereas PGs drive the failure process and define the failure strain, elastin determines the peak and failure stresses. Predictions of the ZNM were experimentally confirmed by measuring the failure properties of engineered elastin-rich ECM constructs before and after digestion with trypsin, which cleaves the core protein of PGs without affecting elastin. This study reveals a role for PGs in the failure properties of engineered and native ECM with implications for the design of engineered tissues.

  18. Crack phantoms: localized damage correlations and failure in network models of disordered materials

    International Nuclear Information System (INIS)

    Zaiser, M; Moretti, P; Lennartz-Sassinek, S

    2015-01-01

    We study the initiation of failure in network models of disordered materials such as random fuse and spring models, which serve as idealized representations of fracture processes in quasi-two-dimensional, disordered material systems. We consider two different geometries, namely rupture of thin sheets and delamination of thin films, and demonstrate that irrespective of geometry and implementation of the disorder (random failure thresholds versus dilution disorder) failure initiation is associated with the emergence of typical localized correlation structures in the damage patterns. These structures (‘crack phantoms’) exhibit well-defined characteristic lengths, which relate to the failure stress by scaling relations that are typical for critical crack nuclei in disorder-free materials. We discuss our findings in view of the fundamental nature of failure processes in materials with random microstructural heterogeneity. (paper)

  19. Do telemonitoring projects of heart failure fit the Chronic Care Model?

    Science.gov (United States)

    Willemse, Evi; Adriaenssens, Jef; Dilles, Tinne; Remmen, Roy

    2014-07-01

    This study describes the characteristics of extramural and transmural telemonitoring projects on chronic heart failure in Belgium and examines to what extent these telemonitoring projects coincide with the Chronic Care Model of Wagner. The Chronic Care Model describes essential components of high-quality health care. Telemonitoring can be used to optimise home care for chronic heart failure, and it provides a potential avenue for changing the current organisation of care. This qualitative study describes seven non-invasive home-care telemonitoring projects in patients with heart failure in Belgium. A qualitative design, including interviews and a literature review, was used to describe the correspondence of these home-care telemonitoring projects with the dimensions of the Chronic Care Model. The projects were situated in primary and secondary health care. Their primary goal was to reduce the number of readmissions for chronic heart failure. None of these projects succeeded in a final implementation of telemonitoring in home care after the pilot phase. Not all the projects were initiated to address all of the dimensions of the Chronic Care Model, and a central role for the patient was sparse. Limited financial resources hampered continuation after the pilot phase. Cooperation and coordination appear to be major barriers in telemonitoring but are, within primary care as well as between the levels of care, important links in follow-up. This discrepancy can be prohibitive for the deployment of good chronic care. The Chronic Care Model is recommended as a basis for future projects.

  20. Expert Performance and Time Pressure: Implications for Automation Failures in Aviation

    Science.gov (United States)

    2016-09-30

    ...settled by these two studies. To help resolve the disagreement between the previous research findings, the present work used a computerized chess... communication between the automation and the pilots should also be helpful, but it is doubtful that the system designer or the real-time automation can...

  1. Time course of recovery following resistance training leading or not to failure.

    Science.gov (United States)

    Morán-Navarro, Ricardo; Pérez, Carlos E; Mora-Rodríguez, Ricardo; de la Cruz-Sánchez, Ernesto; González-Badillo, Juan José; Sánchez-Medina, Luis; Pallarés, Jesús G

    2017-12-01

    To describe the acute and delayed time course of recovery following resistance training (RT) protocols differing in the number of repetitions (R) performed in each set (S) out of the maximum possible number (P). Ten resistance-trained men undertook three RT protocols [S × R(P)]: (1) 3 × 5(10), (2) 6 × 5(10), and (3) 3 × 10(10) in the bench press (BP) and full squat (SQ) exercises. Selected mechanical and biochemical variables were assessed at seven time points (from -12 h to +72 h post-exercise). Countermovement jump height (CMJ) and movement velocity against the load that elicited a 1 m/s mean propulsive velocity (V1) and against the 75% 1RM load in the BP and SQ were used as mechanical indicators of neuromuscular performance. Training to muscle failure in each set [3 × 10(10)], even when compared to completing the same total exercise volume [6 × 5(10)], resulted in a significantly higher acute decline of CMJ and velocity against the V1 and 75% 1RM loads in both BP and SQ. In contrast, recovery from the 3 × 5(10) and 6 × 5(10) protocols was significantly faster between 24 and 48 h post-exercise compared to 3 × 10(10). Markers of acute (ammonia, growth hormone) and delayed (creatine kinase) fatigue showed a markedly different course of recovery between protocols, suggesting that training to failure slows down recovery up to 24-48 h post-exercise. RT leading to failure considerably increases the time needed for the recovery of neuromuscular function and metabolic and hormonal homeostasis. Avoiding failure would allow athletes to be in a better neuromuscular condition to undertake a new training session or competition in a shorter period of time.

  2. Change over time in the effect of grade and ER on risk of distant failure in patients treated with breast-conserving therapy

    International Nuclear Information System (INIS)

    Gelman, Rebecca; Nixon, Asa J.; O'Neill, Anne; Harris, Jay R.

    1996-01-01

    Purpose: Most analyses of the effect of patient and tumor characteristics on long-term outcome in breast cancer use the Cox proportional hazard (prohaz) model, which assumes that hazard rates for any two subsets are proportional (i.e., hazard ratios are constant) over time. We examined whether this assumption is correct for predicting time to distant failure in breast cancer patients treated with breast-conserving therapy and speculated on the biologic implications of these findings. Materials and Methods: Between 1968 and 1986, 1081 patients treated for clinical stage I or II invasive breast cancer with a complete gross excision and ≥60 Gy to the tumor bed had pure infiltrating ductal carcinoma on central pathologic review. 37 patients (3%) were lost to follow-up after 7-181 months. Median follow-up for 694 survivors was 12 years (8-23 yrs.). Time to distant failure was defined as time to regional nodal failure or distant metastases and was not censored for local recurrence, contralateral breast cancer, or death from other causes. We evaluated the following characteristics: histologic grade (modified Bloom-Richardson; 219 grade I, 482 grade II, 380 grade III), estrogen receptor (252 ER neg, 386 ER pos, 443 ER unk), positive axillary nodes (0, 1-3, ≥4; no axillary dissection in 214), adjuvant chemotherapy (in 291 patients), T stage, lymphatic vessel invasion, mononuclear cell response, clinical size in mm, age at diagnosis, and necrosis. Results: A stepwise prohaz model found all the above characteristics except the last three to be significant. For grade III vs. grade II, in the first years of follow-up the log hazard ratio is > 0 (i.e., grade III has a larger risk), but for following years the log hazard ratio is < 0 (i.e., grade II has a larger risk; see Figure for estimated log hazard ratio and 95% CI). The tests for non-proportionality of grade II vs. grade I (p=0.08) and ER positive vs. negative (p=0.06) were suggestive, but the log hazard ratios never cross 0 (i.e., no reversal of direction of risk). Conclusions: Tumor grade clearly

  3. Computational Models of Rock Failure

    Science.gov (United States)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as per many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers / risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of

  4. Cascading failures in interdependent systems under a flow redistribution model

    Science.gov (United States)

    Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman

    2018-02-01

    Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_{A,i}, C_{A,i}}, i = 1, ..., n, and {L_{B,i}, C_{B,i}}, i = 1, ..., n, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1 - a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a fraction p_1 of lines in A and a fraction p_2 of lines in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a, b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to improved robustness for each individual network.
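
    The redistribution rule above is straightforward to prototype. The following minimal Python sketch (function names and parameter values are ours, not the paper's; the paper's analysis is analytical, not a simulation) iterates the cascade to a fixed point, sending a fraction a (or b) of each failed line's load to the coupled system and sharing the rest equally at home:

```python
# Minimal sketch of the flow-redistribution rule described above; all
# function names and parameter values are illustrative, not the paper's.
import random

def fail_line(i, loads, alive, other_loads, other_alive, frac):
    """Remove line i and redistribute its load: a fraction `frac` goes to
    the functional lines of the coupled system, the remaining (1 - frac)
    is shared equally among the functional lines of the home system."""
    alive[i] = False
    shed = loads[i]
    home = [j for j, ok in enumerate(alive) if ok]
    away = [j for j, ok in enumerate(other_alive) if ok]
    for j in away:
        other_loads[j] += frac * shed / len(away)
    for j in home:
        loads[j] += (1 - frac) * shed / len(home)

def cascade(loads_A, caps_A, loads_B, caps_B, a, b, p1, p2, seed=0):
    """Attack p1 (p2) fraction of lines in A (B); iterate to a fixed point.
    Returns the surviving line fraction in each system."""
    rng = random.Random(seed)
    n = len(loads_A)
    alive_A, alive_B = [True] * n, [True] * n
    for i in rng.sample(range(n), int(p1 * n)):
        fail_line(i, loads_A, alive_A, loads_B, alive_B, a)
    for i in rng.sample(range(n), int(p2 * n)):
        fail_line(i, loads_B, alive_B, loads_A, alive_A, b)
    changed = True
    while changed:                      # stop when no line is overloaded
        changed = False
        for loads, caps, alive, ol, oa, f in (
                (loads_A, caps_A, alive_A, loads_B, alive_B, a),
                (loads_B, caps_B, alive_B, loads_A, alive_A, b)):
            for i in range(n):
                if alive[i] and loads[i] > caps[i]:
                    fail_line(i, loads, alive, ol, oa, f)
                    changed = True
    return sum(alive_A) / n, sum(alive_B) / n

print(cascade([1.0] * 100, [1.4] * 100, [1.0] * 100, [1.4] * 100,
              a=0.3, b=0.3, p1=0.1, p2=0.0))
```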

  5. Failure rate analysis using GLIMMIX

    International Nuclear Information System (INIS)

    Moore, L.M.; Hemphill, G.M.; Martz, H.F.

    1998-01-01

    This paper illustrates use of a recently developed SAS macro, GLIMMIX, for implementing an analysis suggested by Wolfinger and O'Connell (1993) in modeling failure count data with random as well as fixed factor effects. Interest in this software tool arose from consideration of modernizing the Failure Rate Analysis Code (FRAC), developed at Los Alamos National Laboratory in the early 1980s by Martz, Beckman and McInteer (1982). FRAC is a FORTRAN program developed to analyze Poisson distributed failure count data as a log-linear model, possibly with random as well as fixed effects. These statistical modeling assumptions are a special case of generalized linear mixed models, identified as GLMM in the current statistics literature. In the nearly 15 years since FRAC was developed, there have been considerable advances in computing capability, statistical methodology and available statistical software tools, making it worthwhile to consider modernizing FRAC. In this paper, the approaches to GLMM estimation implemented in GLIMMIX and in FRAC are described, and a comparison of results for the two approaches is made with data on catastrophic time-dependent pump failures from a report by Martz and Whiteman (1984). Additionally, statistical and graphical model diagnostics are suggested and illustrated with the GLIMMIX analysis results.
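
    The core statistical model here (Poisson failure counts with a log-linear link and an operating-time offset) can be sketched in a few lines. The sketch below fits only fixed effects with statsmodels; the random effects that are GLIMMIX's main point would require a mixed-model fitter, and all data values are fabricated for illustration:

```python
# Fixed-effects-only sketch of a Poisson log-linear failure-count model
# with an exposure-time offset (the random effects that GLIMMIX adds
# would need a mixed-model fitter).  All data values are fabricated.
import numpy as np
import statsmodels.api as sm

failures = np.array([3, 1, 7, 2, 5, 0])             # failure counts
hours = np.array([1e4, 8e3, 2e4, 9e3, 1.5e4, 5e3])  # operating times
system = np.array([0, 0, 1, 1, 2, 2])               # fixed factor levels

X = sm.add_constant(np.eye(3)[system][:, 1:])       # dummy-coded design
# log E[failures] = X @ beta + log(hours); `exposure` supplies the offset
fit = sm.GLM(failures, X, family=sm.families.Poisson(),
             exposure=hours).fit()
rates = np.exp(fit.params[0] + np.r_[0.0, fit.params[1:]])
print("failure rates per hour by system:", rates)
```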

  6. Modeling the recurrent failure to thrive in less than two-year children: recurrent events survival analysis.

    Science.gov (United States)

    Saki Malehi, Amal; Hajizadeh, Ebrahim; Ahmadi, Kambiz; Kholdi, Nahid

    2014-01-01

    This study aims to evaluate recurrent failure to thrive (FTT) events over time. This longitudinal study was conducted from February 2007 to July 2009. The primary outcome was growth failure. The analysis was based on recurrent events methods and used 1283 children who had experienced FTT several times. Fifty-nine percent of the children had experienced FTT at least once and 5.3% of them had experienced it up to four times. The Prentice-Williams-Peterson (PWP) model revealed significant relationships between diarrhea (HR=1.26), respiratory infections (HR=1.25), urinary tract infections (HR=1.51), discontinuation of breast-feeding (HR=1.96), teething (HR=1.18), and initiation age of complementary feeding (HR=1.11) and the hazard rate of the first FTT event. The recurrent nature of FTT is a key issue; taking it into account increases the accuracy of the analysis of the FTT event process and can help identify different risk factors for each FTT recurrence.
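
    A PWP-type gap-time analysis is commonly run as a Cox model stratified on the event number, so that each recurrence gets its own baseline hazard. The sketch below uses the lifelines package on a fabricated toy data frame (column names and values are ours); on a sample this small the fit is only a demonstration:

```python
# Sketch of a PWP gap-time fit as a Cox model stratified on the event
# number, using the lifelines package.  The data frame is a fabricated
# toy example; column names are ours.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "gap_time": [4.0, 9.0, 5.0, 7.0, 8.0, 6.0, 3.0, 2.0],  # months
    "event":    [1, 1, 0, 1, 1, 1, 0, 1],                  # 1 = FTT observed
    "event_no": [1, 1, 1, 1, 1, 2, 2, 2],                  # recurrence number
    "diarrhea": [1, 0, 1, 0, 1, 1, 0, 0],                  # example covariate
})

cph = CoxPHFitter()
# Stratifying on the recurrence number gives each recurrence its own
# baseline hazard, as in the Prentice-Williams-Peterson formulation.
cph.fit(df, duration_col="gap_time", event_col="event", strata=["event_no"])
cph.print_summary()  # hazard ratio for diarrhea, analogous to HR=1.26 above
```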

  7. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  8. Genetic variants of age at menopause are not related to timing of ovarian failure in breast cancer survivors.

    Science.gov (United States)

    Homer, Michael V; Charo, Lindsey M; Natarajan, Loki; Haunschild, Carolyn; Chung, Karine; Mao, Jun J; DeMichele, Angela M; Su, H Irene

    2017-06-01

    To determine if interindividual genetic variation in single-nucleotide polymorphisms (SNPs) related to age at natural menopause is associated with risk of ovarian failure in breast cancer survivors. A prospective cohort of 169 premenopausal breast cancer survivors recruited at diagnosis with stages 0 to III disease were followed longitudinally for menstrual pattern via self-reported daily menstrual diaries. Participants were genotyped for 13 SNPs previously found to be associated with age at natural menopause: EXO1, TLK1, HELQ, UIMC1, PRIM1, POLG, TMEM224, BRSK1, and MCM8. A risk variable summed the total number of risk alleles in each participant. The association between individual genotypes, and also the risk variable, and time to ovarian failure (>12 months of amenorrhea) was tested using time-to-event methods. Median age at enrollment was 40.5 years (range 20.6-46.1). The majority of participants were white (69%) and underwent chemotherapy (76%). Thirty-eight participants (22%) experienced ovarian failure. None of the candidate SNPs or the summary risk variable was significantly associated with time to ovarian failure. Sensitivity analysis restricted to whites or only to participants receiving chemotherapy yielded similar findings. Older age, chemotherapy exposure, and lower body mass index were related to shorter time to ovarian failure. Thirteen previously identified genetic variants associated with time to natural menopause were not related to timing of ovarian failure in breast cancer survivors.

  9. Transitions of Care Between Acute and Chronic Heart Failure: Critical Steps in the Design of a Multidisciplinary Care Model for the Prevention of Rehospitalization.

    Science.gov (United States)

    Comín-Colet, Josep; Enjuanes, Cristina; Lupón, Josep; Cainzos-Achirica, Miguel; Badosa, Neus; Verdú, José María

    2016-10-01

    Despite advances in the treatment of heart failure, mortality, the number of readmissions, and their associated health care costs are very high. Heart failure care models inspired by the chronic care model, also known as heart failure programs or heart failure units, have shown clinical benefits in high-risk patients. However, while traditional heart failure units have focused on patients detected in the outpatient phase, the increasing pressure from hospital admissions is shifting the focus of interest toward multidisciplinary programs that concentrate on transitions of care, particularly between the acute phase and the postdischarge phase. These new integrated care models for heart failure revolve around interventions at the time of transitions of care. They are multidisciplinary and patient-centered, designed to ensure continuity of care, and have been demonstrated to reduce potentially avoidable hospital admissions. Key components of these models are early intervention during the inpatient phase, discharge planning, early postdischarge review and structured follow-up, advanced transition planning, and the involvement of physicians and nurses specialized in heart failure. It is hoped that such models will be progressively implemented across the country. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  10. Convex models and probabilistic approach of nonlinear fatigue failure

    International Nuclear Information System (INIS)

    Qiu Zhiping; Lin Qiang; Wang Xiaojun

    2008-01-01

    This paper is concerned with the nonlinear fatigue failure problem with uncertainties in structural systems. In the present study, the theory of ellipsoidal algebra, combined with ideas from interval analysis, is applied to solve the nonlinear problem with convex models. Using the inclusion-monotonicity property of ellipsoidal functions, the nonlinear fatigue failure problem with uncertainties can be solved. A numerical example of a 25-bar truss structure is given to illustrate the efficiency of the presented method in comparison with the probabilistic approach

  11. Experiments and modeling of ballistic penetration using an energy failure criterion

    Directory of Open Access Journals (Sweden)

    Dolinski M.

    2015-01-01

    Full Text Available One of the most intricate problems in terminal ballistics is the physics underlying penetration and perforation. Several penetration modes are well identified, such as petalling, plugging, spall failure and fragmentation (Sedgwick, 1968). In most cases, the final target failure will combine those modes. Some of the failure modes can be due to brittle material behavior, but penetration of ductile targets by blunt projectiles, involving plugging in particular, is caused by excessive localized plasticity, with emphasis on adiabatic shear banding (ASB). Among the theories regarding the onset of ASB, new evidence was recently brought by Rittel et al. (2006), according to whom shear bands initiate as a result of dynamic recrystallization (DRX), a local softening mechanism driven by the stored energy of cold work. As such, ASB formation results from microstructural transformations, rather than from thermal softening. In our previous work (Dolinski et al., 2010), a failure criterion based on plastic strain energy density was presented and applied to model four different classical examples of dynamic failure involving ASB formation. According to this criterion, a material point starts to fail when the total plastic strain energy density reaches a critical value. Thereafter, the strength of the element decreases gradually to zero to mimic the actual material mechanical behavior. The goal of this paper is to present a new combined experimental-numerical study of ballistic penetration and perforation, using the above-mentioned failure criterion. Careful experiments are carried out using a single combination of AISI 4340 FSP projectiles and 25 [mm] thick RHA steel plates, while the impact velocity, and hence the imparted damage, are systematically varied. We show that our failure model, which includes only one adjustable parameter in this present work, can faithfully reproduce each of the experiments without any further adjustment. Moreover, it is shown that the
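
    The criterion itself is simple to state in code. A minimal sketch (our own variable names and placeholder numbers; the paper's softening law may differ in form) accumulates plastic strain energy density and ramps the element strength down once the critical value is reached:

```python
# Sketch of an energy-density failure criterion with gradual softening;
# W_c, W_fail and the stress/strain increments are placeholder numbers.

def strength_factor(W_plastic, W_c, W_fail):
    """Knock-down factor in [0, 1] applied to the element strength."""
    if W_plastic < W_c:
        return 1.0                                   # intact
    if W_plastic >= W_fail:
        return 0.0                                   # fully failed
    return 1.0 - (W_plastic - W_c) / (W_fail - W_c)  # linear softening

# accumulate plastic strain energy density, dW = sigma * d(eps_plastic)
W = 0.0
for sigma, deps_p in [(400.0, 0.002), (420.0, 0.004), (430.0, 0.010)]:
    W += sigma * deps_p                  # MPa * strain -> MJ/m^3
    print(f"W = {W:.2f} MJ/m^3, strength factor = "
          f"{strength_factor(W, W_c=5.0, W_fail=8.0):.2f}")
```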

  12. Cladding failure by local plastic instability

    International Nuclear Information System (INIS)

    Kramer, J.M.; Deitrich, L.W.

    1977-01-01

    Cladding failure is one of the major considerations in the analysis of fast-reactor fuel pin behavior during hypothetical accident transients, since the time, location and nature of failure govern the early post-failure material motion and reactivity feedback. Out-of-pile transient burst tests of both irradiated and unirradiated fast-reactor cladding show that local plastic instability, or bulging, often precedes rupture. To investigate the details of cladding bulging, a perturbation analysis of the equations governing the large deformation of a cylindrical shell has been developed. The overall deformation history is assumed to consist of a small perturbation epsilon of the radial displacement superimposed on large axisymmetric displacements. Computations have been carried out using high-temperature properties of stainless steel in conjunction with various constitutive theories, including a generalization of the Endochronic Theory of Plasticity in which both time-independent and time-dependent plastic strains are modeled. Although the results of the calculations are all qualitatively similar, it appears that modeling of both time-independent and time-dependent plastic strains is necessary to interpret the transient burst test results. Sources for bulge formation that have been considered include initial geometric imperfections and thermal perturbations due to either eccentric fuel pellets or non-symmetric cooling. (Auth.)

  13. Physics of Failure Models for Capacitor Degradation in DC-DC Converters

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper proposes a combined energy-based model with an empirical physics of failure model for degradation analysis and prognosis of electrolytic capacitors in...

  14. On modelling of lateral buckling failure in flexible pipe tensile armour layers

    DEFF Research Database (Denmark)

    Østergaard, Niels Højen; Lyckegaard, Anders; Andreasen, Jens H.

    2012-01-01

    In the present paper, a mathematical model which is capable of representing the physics of lateral buckling failure in the tensile armour layers of flexible pipes is introduced. Flexible pipes are unbonded composite steel-polymer structures, which are known to be prone to lateral wire buckling when exposed to repeated bending cycles and longitudinal compression, which mainly occur during pipe laying in ultra-deep waters. On the basis of multiple single-wire analyses, the mechanical behaviour of both layers of tensile armour wires can be determined. Since failure in one layer destabilises the torsional equilibrium which is usually maintained between the layers, lateral wire buckling is often associated with a severe pipe twist. This behaviour is discussed and modelled. Results are compared to a pipe model in which failure is assumed not to cause twist. The buckling modes of the tensile armour...

  15. Predictive Simulation of Material Failure Using Peridynamics -- Advanced Constitutive Modeling, Verification and Validation

    Science.gov (United States)

    2016-03-31

    AFRL-AFOSR-VA-TR-2016-0309: Predictive simulation of material failure using peridynamics - advanced constitutive modeling, verification, and validation. Approved for public release. John T

  16. Effects of the combined action of axial and transversal loads on the failure time of a wooden beam under fire

    International Nuclear Information System (INIS)

    Nubissie, A.; Kingne Talla, E.; Woafo, P.

    2012-01-01

    Highlights: ► A wooden beam submitted to fire and combined axial and transversal loads is considered. ► The failure time is found to decrease as the intensity of the loads increases. ► Implications for safety are indicated. -- Abstract: This paper presents the variation of the failure time of a wooden beam (Baillonella toxisperma, also called Moabi) in fire, subjected to the combined effect of axial and transversal loads. Using the recommendation of the structural Eurocodes that failure can be assumed to occur when the deflection reaches 1/300 of the length of the beam or when the bending moment reaches the resistant moment, the partial differential equation describing the beam dynamics is solved numerically and the failure time calculated. It is found that the failure time decreases when either the axial or the transversal load increases.
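
    The deflection branch of this failure criterion can be illustrated with a simple closed-form beam formula. The sketch below assumes a simply supported beam under a uniform transverse load with an axial-load amplification factor, and an exponentially decaying bending stiffness standing in for fire damage; the decay law and every number are our assumptions, not the paper's PDE model:

```python
# Illustrative deflection-based failure-time estimate: simply supported
# beam, uniform transverse load q, axial load P amplifying deflection,
# and a bending stiffness EI(t) assumed to decay as the fire degrades
# the section.  All values and the decay law are our assumptions.
import math

L = 4.0          # span, m
q = 2.0e3        # transverse load, N/m
P = 1.0e4        # axial compressive load, N
EI0 = 1.2e6      # initial bending stiffness, N*m^2
tau = 900.0      # assumed stiffness-decay time constant, s

def deflection(t):
    EI = EI0 * math.exp(-t / tau)        # assumed fire-degradation law
    w0 = 5 * q * L**4 / (384 * EI)       # midspan uniform-load deflection
    Pe = math.pi**2 * EI / L**2          # Euler buckling load
    return w0 / (1 - P / Pe)             # axial amplification factor

t, dt = 0.0, 1.0
while deflection(t) < L / 300 and t < 10 * tau:   # Eurocode-style limit
    t += dt
print(f"estimated failure time ~ {t:.0f} s")
```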

  17. Implementation of a PETN failure model using ARIA's general chemistry framework

    Energy Technology Data Exchange (ETDEWEB)

    Hobbs, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-01

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without the need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  18. A model for the coupling of failure rates in a redundant system

    International Nuclear Information System (INIS)

    Kleppmann, W.G.; Wutschig, R.

    1986-01-01

    A model is developed which takes into account the coupling between failure rates of identical components in different redundancies of a safety system, i.e., the fact that the failure rates of identical components subjected to the same operating conditions will scatter less than the failure rates of any two components of the same type. It is shown that with increasing coupling, the expectation value and the variance of the distribution of the failure probability of the redundant system increase. A consistent way to incorporate operating experience in a Bayesian framework is developed and the results are presented. (orig.)

  19. A mid-layer model for human reliability analysis: understanding the cognitive causes of human failure events

    International Nuclear Information System (INIS)

    Shen, Song-Hua; Chang, James Y.H.; Boring, Ronald L.; Whaley, April M.; Lois, Erasmia; Langfitt Hendrickson, Stacey M.; Oxstrand, Johanna H.; Forester, John Alan; Kelly, Dana L.; Mosleh, Ali

    2010-01-01

    The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  20. A Mid-Layer Model for Human Reliability Analysis: Understanding the Cognitive Causes of Human Failure Events

    Energy Technology Data Exchange (ETDEWEB)

    Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring; James Y. H. Chang; Song-Hua Shen; Ali Mosleh; Johanna H. Oxstrand; John A. Forester; Dana L. Kelly; Erasmia L. Lois

    2010-06-01

    The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method’s middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  1. Reliability analysis of Markov history-dependent repairable systems with neglected failures

    International Nuclear Information System (INIS)

    Du, Shijia; Zeng, Zhiguo; Cui, Lirong; Kang, Rui

    2017-01-01

    Markov history-dependent repairable systems refer to Markov repairable systems in which some states are changeable and dependent on the recent evolutionary history of the system. In practice, many Markov history-dependent repairable systems are subject to neglected failures, i.e., some failures do not affect system performance if they can be repaired promptly. In this paper, we develop a model based on the theory of aggregated stochastic processes to describe the history-dependent behavior and the effect of neglected failures on Markov history-dependent repairable systems. Based on the developed model, instantaneous and steady-state availabilities are derived to characterize the reliability of the system. Four reliability-related time distributions, i.e., the distribution for the kth working period, the distribution for the kth failure period, the distribution for the real working time in an effective working period, and the distribution for the neglected failure time in an effective working period, are also derived to provide a more comprehensive description of the system's reliability. Thanks to the power of the theory of aggregated stochastic processes, closed-form expressions are obtained for all the reliability indexes and time distributions. Finally, the developed indexes and analysis methods are demonstrated by a numerical example. - Highlights: • Markov history-dependent repairable systems with neglected failures are modeled. • Aggregated stochastic processes are used to derive reliability indexes and time distributions. • Closed-form expressions are derived for the considered indexes and distributions.
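
    As a point of reference for the richer model above, the simplest Markov repairable unit (two states, constant failure rate lam and repair rate mu, no neglected failures) already has closed-form instantaneous and steady-state availabilities; a sketch with illustrative rates:

```python
# Two-state baseline: constant failure rate lam, repair rate mu,
# unit starts in the working state.  Rates below are illustrative.
import math

def availability(t, lam, mu):
    """A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu)*t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

lam, mu = 1e-3, 1e-1                     # per hour
print(availability(10.0, lam, mu))       # instantaneous A(10 h)
print(mu / (lam + mu))                   # steady-state availability
```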

  2. Pin failure modeling of the A series CABRI tests

    International Nuclear Information System (INIS)

    Young, M.F.; Portugal, J.L.

    1978-01-01

    The EXPAND pin failure model, a research tool designed to model pin failure under prompt burst conditions, has been used to predict failure conditions for several of the A series CABRI tests as part of the United States participation in the CABRI Joint Project. The Project is an international program involving France, Germany, England, Japan, and the United States and has the goal of obtaining experimental data relating to the safety of LMFBRs. The A series, designed to simulate high ramp rate TOP conditions, initially utilizes single, fresh UO2 pins of the PHENIX type in a flowing sodium loop. The pins are preheated at constant power in the CABRI reactor to establish steady-state conditions (480 W/cm at the axial peak) and then subjected to a power pulse of 14 ms to 24 ms duration.

  3. Upgrade of Common Cause Failure Modelling of NPP Krsko PSA

    International Nuclear Information System (INIS)

    Vukovic, I.; Mikulicic, V.; Vrbanic, I.

    2006-01-01

    Over the last thirty years, probabilistic safety assessments (PSA) have been increasingly applied in technical engineering practice. Various failure modes of the system of concern are mathematically and explicitly modelled by means of a fault tree structure. Statistical independence of the basic events from which the fault tree is built is not acceptable for the event category referred to as common cause failures (CCF). Based on an overview of the current international status of modelling of common cause failures in PSA, several steps were taken to establish the primary technical basis for the methodology and data used in the CCF model upgrade project for the NPP Krsko (NEK) PSA. As the primary technical basis for the methodological aspects of CCF modelling in the Krsko PSA, the following documents were considered: NUREG/CR-5485, NUREG/CR-4780, and the Westinghouse Owners Group (WOG) documents WCAP-15674 and WCAP-15167. Use of these documents is supported by the most relevant guidelines and standards in the field, such as the ASME PRA Standard and NRC Regulatory Guide 1.200. The WCAP documents are in compliance with NUREG/CR-5485 and NUREG/CR-4780. Additionally, they provide the WOG perspective on CCF modelling, which is important to consider since NEK follows WOG practice in resolving many generic and regulatory issues. It is, therefore, desirable that the NEK CCF methodology and modelling be in general accordance with recommended WOG approaches. As the primary basis for the CCF data needed to estimate CCF model parameters and their uncertainty, the main documents used were NUREG/CR-5497, NUREG/CR-6268, WCAP-15167, and WCAP-16187. Use of NUREG/CR-5497 and NUREG/CR-6268 as a source of data for CCF parameter estimation is supported by the most relevant industry and regulatory PSA guides and standards currently existing in the field, including WOG. However, the WCAP document WCAP-16187 has provided a basis for CCF parameter values specific to Westinghouse PWR plants. Many of the events from the NRC/INEEL database were re-classified in WCAP
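
    For concreteness, one widely used parametric CCF model from the cited NUREG/CR-5485 is the alpha-factor model. The sketch below quantifies the basic-event probabilities for a group of m redundant components from alpha factors and the total component failure probability, in the non-staggered-testing form; the alpha values are placeholders, not NEK data:

```python
# Alpha-factor CCF quantification (NUREG/CR-5485, non-staggered testing):
# Q_k = k / C(m-1, k-1) * (alpha_k / alpha_t) * Q_t.  The alpha values
# and Q_t below are placeholders, not plant-specific estimates.
from math import comb

def ccf_probabilities(alphas, q_total):
    """Q_k for k = 1..m: probability that a basic event fails exactly
    k of the m redundant components."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return [k / comb(m - 1, k - 1) * alphas[k - 1] / alpha_t * q_total
            for k in range(1, m + 1)]

alphas = [0.95, 0.03, 0.015, 0.005]      # alpha_1 .. alpha_4
print(ccf_probabilities(alphas, q_total=1e-3))
```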

  4. Toward a predictive model for the failure of elastomer seals.

    Science.gov (United States)

    Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash; Baker Hughes Collaboration

    Nitrile butadiene rubber (NBR) and hydrogenated NBR (HNBR) are widely used elastomers, especially as seals in the oil and gas industry. During exposure to the extreme temperatures and pressures typical of well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. Using computer simulations, we investigate this problem at two different length- and time-scales. First, starting with our model of NBR based on the OPLS all-atom force-field, we develop a chemically-inspired description of HNBR, where C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends in the mass density and glass transition temperature of HNBR as a function of cross-link density, and of NBR as a function of the fraction of acrylonitrile in the copolymer. Second, a coarse-grained approach is taken in order to study mechanical behaviour and to overcome the length- and time-scale limitations inherent in the all-atom model. The effect of nanoparticle fillers added to the elastomer matrix is investigated. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions. Baker Hughes.

  5. Prolonged warm ischemia time is associated with graft failure and mortality after kidney transplantation.

    Science.gov (United States)

    Tennankore, Karthik K; Kim, S Joseph; Alwayn, Ian P J; Kiberd, Bryce A

    2016-03-01

    Warm ischemia time is a potentially modifiable insult to transplanted kidneys, but little is known about its effect on long-term outcomes. Here we conducted a study of United States kidney transplant recipients (years 2000-2013) to determine the association between warm ischemia time (the time from organ removal from cold storage to reperfusion with warm blood) and death/graft failure. Times under 10 minutes were potentially attributable to coding error. Therefore, the 10-to-under-20-minute interval was chosen as the reference group. The primary outcome was mortality and graft failure (return to chronic dialysis or preemptive retransplantation) adjusted for recipient, donor, immunologic, and surgical factors. The study included 131,677 patients with 35,901 events. Relative to the reference group, times of 20 to under 30, 30 to under 40, 40 to under 50, 50 to under 60, and 60 or more minutes were associated with hazard ratios of 1.07 (95% confidence interval, 0.99-1.15), 1.13 (1.06-1.22), 1.17 (1.09-1.26), 1.20 (1.12-1.30), and 1.23 (1.15-1.33) for the composite event, respectively. The association between prolonged warm ischemia time and death/graft failure persisted after stratification by donor type (living vs. deceased donor) and delayed graft function status. Thus, warm ischemia time is associated with adverse long-term patient and graft survival after kidney transplantation. Identifying strategies to reduce warm ischemia time is an important consideration for future study. Copyright © 2015 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.

  6. A quasi-static algorithm that includes effects of characteristic time scales for simulating failures in brittle materials

    KAUST Repository

    Liu, Jinxing

    2013-04-24

    When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of T_elem, the characteristic releasing time of the internal forces of the broken elements, and T_lattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with T_load, the characteristic loading period. The load-unload (L-U) method is used for one extreme, T_elem << T_lattice, whereas the force-release (F-R) method is used for the other, T_elem >> T_lattice. For cases between the above two extremes, we develop a new algorithm by combining the L-U and the F-R trial displacement fields to construct the new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of T_elem to T_lattice by adjusting their contributions in the trial displacement field. Therefore, the material dependence of the snap-back instabilities is implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, the effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different γs. Such a same-specimen-based study shows how the lattice behaves along with the changing ratio of the L-U and F-R components. © 2013 The Author(s).

  7. A simple approach to modeling ductile failure.

    Energy Technology Data Exchange (ETDEWEB)

    Wellman, Gerald William

    2012-06-01

    Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.

  8. Modelling Dynamic Behaviour and Spall Failure of Aluminium Alloy AA7010

    Science.gov (United States)

    Ma'at, N.; Nor, M. K. Mohd; Ismail, A. E.; Kamarudin, K. A.; Jamian, S.; Ibrahim, M. N.; Awang, M. K.

    2017-10-01

    A finite strain constitutive model to predict the dynamic deformation behaviour of Aluminium Alloy 7010, including shockwaves and spall failure, is developed in this work. The important feature of this new hyperelastic-plastic constitutive formulation is a new Mandel stress tensor formulated using a new generalized orthotropic pressure. This tensor is combined with a shock equation of state (EOS) and the Grady spall failure model. Hill's yield criterion is adopted to characterize plastic orthotropy by means of evolving structural tensors defined in the isoclinic configuration. The model is decomposed into elastic and plastic parts. Elastic anisotropy is taken into account through the new stress tensor decomposition of a generalized orthotropic pressure. Plastic anisotropy is considered through the yield surface and an isotropic hardening defined in a unique alignment of the deviatoric plane within the stress space. To test its ability to describe shockwave propagation and spall failure, the new material model was implemented into UTHM's version of the LLNL-DYNA3D code. The capability of this new constitutive model was compared against published experimental data from plate impact tests at 234 m/s, 450 m/s and 895 m/s impact velocities. A good agreement is obtained between experiment and simulation in each test.

  9. Failure Prediction for Autonomous Driving

    OpenAIRE

    Hecker, Simon; Dai, Dengxin; Van Gool, Luc

    2018-01-01

    The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It therefore is important that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may fail more likely at places ...

  10. Statistical analysis on failure-to-open/close probability of motor-operated valve in sodium system

    International Nuclear Information System (INIS)

    Kurisaka, Kenichi

    1998-08-01

    The objective of this work is to develop basic data for examining the efficiency of preventive maintenance and actuation tests from the standpoint of failure probability. The work consists of a statistical trend analysis of valve failure probability in a failure-to-open/close mode as a function of time since installation and time since the last open/close action, based on field data of operating and failure experience. Both time-dependent and time-independent terms were considered in the failure probability. The linear aging model was modified and applied: the model contains two terms, with failure rates in proportion to time since installation and to time since the last open/close demand. Because of the sufficient statistical population, motor-operated valves (MOVs) in sodium systems were selected for analysis from the CORDS database, which contains operating and failure data of components in fast reactors and sodium test facilities. From these data, the functional parameters were statistically estimated to quantify the valve failure probability in a failure-to-open/close mode, with consideration of uncertainty. (J.P.N.)
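
    The modified linear aging model described above amounts to a per-demand failure probability with one constant term and two linearly growing terms. A sketch with placeholder coefficients (the actual CORDS-based estimates are not reproduced here):

```python
# Per-demand failure probability with a time-independent term plus terms
# growing with time since installation and time since the last demand.
# Coefficients are placeholders, not the CORDS-based estimates.

def fail_prob_per_demand(p0, a, b, t_installed, t_since_demand):
    """p0: constant term; a: aging coefficient (per hour installed);
    b: standby coefficient (per hour since last open/close demand)."""
    return p0 + a * t_installed + b * t_since_demand

# e.g. a valve installed 5 years ago, last exercised 3 months ago
print(fail_prob_per_demand(p0=1e-4, a=2e-9, b=5e-8,
                           t_installed=5 * 8760.0,
                           t_since_demand=0.25 * 8760.0))
```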

  11. Modeling Dynamic Anisotropic Behaviour and Spall Failure in Commercial Aluminium Alloys AA7010

    Science.gov (United States)

    Mohd Nor, M. K.; Ma'at, N.; Ho, C. S.

    2018-04-01

    This paper presents a finite strain constitutive model to predict complex elastoplastic deformation behaviour involving very high pressures and shockwaves in orthotropic aluminium alloys. A previously published constitutive model is used as the starting point for the development in this work. The proposed formulation, which uses a new definition of the Mandel stress tensor to define Hill's yield criterion and a new shock equation of state (EOS) based on the generalised orthotropic pressure, is further enhanced with the Grady spall failure model to closely predict shockwave propagation and spall failure in the chosen commercial aluminium alloy. This hyperelastic-plastic constitutive model is implemented as a new material model, named Material Type 92 (Mat92), in UTHM's version of the Lawrence Livermore National Laboratory (LLNL)-DYNA3D code. The implementation of the new EOS of the generalised orthotropic pressure, including the spall failure, is also discussed in this paper. The capability of the proposed constitutive model to capture the complex behaviour of the selected material is validated against a range of plate impact test data at 234, 450 and 895 m/s impact velocities.
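
    As a rough consistency check on the shock regime probed by such tests, the common linear Us-Up Hugoniot gives the shock pressure directly from the impact velocity. The sketch below uses typical literature values for aluminium alloys (not the paper's fitted EOS constants) and the symmetric-impact approximation up ≈ v/2:

```python
# Shock pressure from the linear Us-Up Hugoniot, Us = c0 + s*up, with
# P = rho0 * Us * up and up ~ v_impact / 2 for a symmetric impact.
# rho0, c0 and s are typical aluminium-alloy values, not the paper's fit.
rho0 = 2810.0          # kg/m^3
c0, s = 5200.0, 1.36   # m/s, dimensionless

for v_impact in (234.0, 450.0, 895.0):   # the test velocities above
    up = v_impact / 2.0
    Us = c0 + s * up
    P = rho0 * Us * up                   # Pa
    print(f"{v_impact:5.0f} m/s -> ~{P / 1e9:.2f} GPa shock pressure")
```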

  12. Control-Oriented Models for Real-Time Simulation of Automotive Transmission Systems

    Directory of Open Access Journals (Sweden)

    Cavina N.

    2015-01-01

    Full Text Available A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller and to systematically test its performance. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a simulation step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge involves the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model has been modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations have been compared to on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the Transmission Control Unit (TCU). Several tests have been performed on the HIL simulator to verify the TCU performance: electrical failure tests on sensors and actuators, hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers, and application tests covering all the main features of the control actions performed by the TCU. Being based on physical laws, in every condition the model simulates a plausible reaction of the system. A test automation procedure has finally been developed to permit the execution of a pattern of tests without user interaction; perfectly repeatable tests can be performed for non-regression verification, allowing the testing of new software releases in fully automatic mode.
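
    The stiffness problem described above can be shown in a toy one-state model of a hydraulic chamber, dp/dt = (B/V)(Q_in - G*p): at a realistic real-time step size, explicit Euler diverges while backward (implicit) Euler remains stable. All numbers below are illustrative, not taken from the DCT model:

```python
# Toy stiff pressure state: dp/dt = (B/V) * (Q_in - G*p).  At a typical
# real-time step the explicit update is unstable, the implicit one is not.
# All numbers are illustrative, not taken from the DCT model.
B = 1.5e9      # Pa, oil bulk modulus
V = 1.0e-4     # m^3, chamber volume
G = 2.0e-10    # m^3/(s*Pa), linearized leakage/outflow coefficient
Q_in = 1.0e-3  # m^3/s, supply flow
dt = 1.0e-3    # s, real-time step size

k = B / V
p_exp = p_imp = 0.0
for _ in range(10):
    p_exp = p_exp + dt * k * (Q_in - G * p_exp)           # explicit Euler
    p_imp = (p_imp + dt * k * Q_in) / (1.0 + dt * k * G)  # implicit Euler
print(f"explicit: {p_exp:.2e} Pa (diverging), implicit: {p_imp:.2e} Pa")
```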

  13. Novel risk stratification with time course assessment of in-hospital mortality in patients with acute heart failure.

    Directory of Open Access Journals (Sweden)

    Takeshi Yagyu

    Full Text Available Patients with acute heart failure (AHF) show various clinical courses during hospitalization. We aimed to identify time course predictors of in-hospital mortality and to establish a sequentially assessable risk model. We enrolled 1,035 consecutive AHF patients into derivation (n = 597) and validation (n = 438) cohorts. For risk assessment at admission, we utilized Get With the Guidelines-Heart Failure (GWTG-HF) risk scores. We examined significant predictors of in-hospital mortality from 11 variables obtained during hospitalization and developed a risk stratification model using multiple logistic regression analysis. Across both cohorts, 86 patients (8.3%) died during hospitalization. Using backward stepwise selection, we identified five time-course predictors: catecholamine administration, minimum platelet concentration, maximum blood urea nitrogen, total bilirubin, and C-reactive protein levels; and established a time course risk score that could sequentially assess a patient's risk status. The addition of the time course risk score improved the discriminative ability of the GWTG-HF risk score (c-statistics in derivation and validation cohorts: 0.776 to 0.888 [p = 0.002] and 0.806 to 0.902 [p<0.001], respectively). A calibration plot revealed a good relationship between observed and predicted in-hospital mortalities in both cohorts (Hosmer-Lemeshow chi-square statistics: 6.049 [p = 0.642] and 5.993 [p = 0.648], respectively). In each group of initial low-intermediate risk (GWTG-HF risk score <47) and initial high risk (GWTG-HF risk score ≥47), in-hospital mortality was about 6- to 9-fold higher in the high time course risk score group than in the low-intermediate time course risk score group (initial low-intermediate risk group: 20.3% versus 2.2% [p<0.001]; initial high risk group: 57.6% versus 8.5% [p<0.001]). A time course assessment related to in-hospital mortality during the hospitalization of AHF patients can clearly categorize a patient's on

  14. ANALYSIS OF RELIABILITY OF NONRECTORABLE REDUNDANT POWER SYSTEMS TAKING INTO ACCOUNT COMMON FAILURES

    Directory of Open Access Journals (Sweden)

    V. A. Anischenko

    2014-01-01

    Full Text Available A reliability analysis of nonrestorable redundant power systems of industrial plants and other consumers of electric energy was carried out. The main attention was paid to the influence of common-cause failures, in which all elements of the system fail due to one shared cause, and the main possible origins of such failures are noted. Two main reliability indicators of nonrestorable systems are considered: the average time of no-failure operation and the mean probability of no-failure operation. Failures were modeled by dividing the investigated system into two series-connected subsystems, one subject to independent failures and the other to common failures. With joint modeling of single and common failures, the resulting failure intensity is the sum of two mutually exclusive components: the intensity of statistically independent failures and the intensity of common failures of the elements and the system as a whole. The influence of common failures of elements on the average time of no-failure operation of the system is shown. A preference scale of systems is built according to the criterion of maximum average time of no-failure operation, depending on the share of common failures. It is noted that such common failures do not change the preference scale itself, but change the time intervals determining the moments of system failures, excluding them from the set of comparators. Two problems of conditional optimization of the choice of system redundancy are discussed, taking into account reliability and cost. The first problem is solved using the criterion of minimum system cost while ensuring a given mean probability of no-failure operation; the second, using the criterion of maximum mean probability of no-failure operation under a cost constraint.

  15. MODELS OF INSULIN RESISTANCE AND HEART FAILURE

    Science.gov (United States)

    Velez, Mauricio; Kohli, Smita; Sabbah, Hani N.

    2013-01-01

    The incidence of heart failure (HF) and diabetes mellitus is rapidly increasing and is associated with poor prognosis. In spite of the advances in therapy, HF remains a major health problem with high morbidity and mortality. When HF and diabetes coexist, clinical outcomes are significantly worse. The relationship between these two conditions has been studied in various experimental models. However, the mechanisms for this interrelationship are complex, incompletely understood, and have become a matter of considerable clinical and research interest. There are only few animal models that manifest both HF and diabetes. However, the translation of results from these models to human disease is limited and new models are needed to expand our current understanding of this clinical interaction. In this review, we discuss mechanisms of insulin signaling and insulin resistance, the clinical association between insulin resistance and HF and its proposed pathophysiologic mechanisms. Finally, we discuss available animal models of insulin resistance and HF and propose requirements for future new models. PMID:23456447

  16. Modeling of Failure Prediction Bayesian Network with Divide-and-Conquer Principle

    Directory of Open Access Journals (Sweden)

    Zhiqiang Cai

    2014-01-01

    Full Text Available For system failure prediction, automatic modeling from historical failure data sets is one of the challenges in practical engineering. In this paper, an effective algorithm is proposed to build a failure prediction Bayesian network (FPBN) model with data mining technology. First, the concept of the FPBN is introduced to describe the states of components and systems and the cause-effect relationships among them. The types of network nodes, the directions of network edges, and the conditional probability distributions (CPDs) of nodes in the FPBN are discussed in detail. According to the characteristics of the nodes and edges in the FPBN, a divide-and-conquer algorithm (FPBN-DC) is introduced to build the best FPBN network structures for the different types of nodes separately. Then the CPDs of the nodes are calculated by maximum likelihood estimation based on the built network. Finally, a simulation study of a helicopter convertor model is carried out to demonstrate the application of FPBN-DC. According to the simulation results, the FPBN-DC algorithm obtains a better fitness value with a lower number of iterations, which verifies its effectiveness and efficiency compared with the traditional algorithm.
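
    For discrete nodes, the maximum likelihood estimation step mentioned above reduces to normalized co-occurrence counts. A toy sketch (node names and data are ours):

```python
# MLE of a discrete CPD from co-occurrence counts; node names and the
# tiny data set are ours.
from collections import Counter

records = [                     # (component_state, system_state) pairs
    ("ok", "up"), ("ok", "up"), ("ok", "down"),
    ("degraded", "down"), ("degraded", "up"), ("failed", "down"),
]

pair_counts = Counter(records)
parent_counts = Counter(parent for parent, _ in records)

def cpd(child, parent):
    """MLE of P(system = child | component = parent)."""
    return pair_counts[(parent, child)] / parent_counts[parent]

print(cpd("down", "degraded"))   # 0.5
print(cpd("up", "ok"))           # 0.667
```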

  17. Failure mode analysis of a PCRV. Influence of some hypothesis

    International Nuclear Information System (INIS)

    Zimmermann, T.; Saugy, B.; Rebora, B.

    1975-01-01

    This paper is concerned with the most recent developments and results obtained using a mathematical model for the non-linear analysis of massive reinforced and prestressed concrete structures developed by the IPEN at the Swiss Federal Institute of Technology in Lausanne. The method is based on three-dimensional isoparametric finite elements. A linear solution is adapted step by step to the idealized behavior laws of the materials up to the failure of the structure. The laws proposed here for the non-linear behavior of concrete and steel have been described elsewhere, but a simple extension to time-dependent behavior is presented. A numerical algorithm for the superposition of creep deformations is also proposed, the basic creep law being assumed to satisfy a power expression. Time-dependent failure is discussed. The analysis of a PCRV of a helium-cooled fast reactor is then performed and the influence of the liner on the failure mode is analyzed. The failure analysis under increasing internal pressure is being run at the present time, and the influence of possible pressure in the cracks is being investigated. The paper aims mainly to demonstrate the accuracy of a failure analysis by three-dimensional finite elements and to compare it with a model test, in particular when complete deformation and failure tests of the materials are available. The proposed model has already been extensively tested on simple structures and has proved to be useful for the analysis of different simplifying hypotheses

  18. Time-to-Furosemide Treatment and Mortality in Patients Hospitalized With Acute Heart Failure

    NARCIS (Netherlands)

    Matsue, Yuya; Damman, Kevin; Voors, Adriaan A.; Kagiyama, Nobuyuki; Yamaguchi, Tetsuo; Kuroda, Shunsuke; Okumura, Takahiro; Kida, Keisuke; Mizuno, Atsushi; Oishi, Shogo; Inuzuka, Yasutaka; Akiyama, Eiichi; Matsukawa, Ryuichi; Kato, Kota; Suzuki, Satoshi; Naruke, Takashi; Yoshioka, Kenji; Miyoshi, Tatsuya; Baba, Yuichi; Yamamoto, Masayoshi; Murai, Koji; Mizutani, Kazuo; Yoshida, Kazuki; Kitai, Takeshi

    2017-01-01

    BACKGROUND Acute heart failure (AHF) is a life-threatening disease requiring urgent treatment, including a recommendation for immediate initiation of loop diuretics. OBJECTIVES The authors prospectively evaluated the association between time-to-diuretic treatment and clinical outcome. METHODS

  19. A recursive framework for time-dependent characteristics of tested and maintained standby units with arbitrary distributions for failures and repairs

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2015-01-01

    The time-dependent unavailability and the failure and repair intensities of periodically tested aging standby system components are solved with recursive equations under three categories of testing and repair policies. In these policies, tests or repairs or both can be minimal or perfect renewals. Arbitrary distributions are allowed for times to failure as well as for repair and renewal durations. Major preventive maintenance is done periodically or at random times, e.g. when a true demand occurs. In the third option, process renewal is done if a true demand occurs or when a certain mission time has expired since the previous maintenance, whichever occurs first. A practical feature is that even if a repair can renew the unit, it does not generally renew the alternating process. The formalism updates and extends earlier results by using a special backward-renewal equation method, by allowing scheduled tests not limited to equal intervals, and by accepting arbitrary distributions and multiple failure types and causes, including failures caused by tests, human errors and true demands. Explicit solutions are produced to integral equations associated with an age-renewal maintenance policy. - Highlights: • Time-dependent unavailability, failure count and repair count for a standby system. • Free testing schedule and distributions for times to failure, repair and maintenance. • Multiple failure modes; tests or repairs or both can be minimal or perfect renewals. • Process renewals periodically, randomly or based on the process age or an initiator. • Backward renewal equations as explicit solutions to Volterra-type integral equations
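
    A useful baseline special case of this framework is a periodically tested standby unit with a constant failure rate and perfect, instantaneous tests, whose time-dependent unavailability has a simple closed form; a sketch with illustrative numbers:

```python
# Periodically tested standby unit, constant failure rate lam, perfect
# instantaneous tests every T hours; values are illustrative.
import math

def unavailability(t, lam, T):
    """U(t) = 1 - exp(-lam * (t mod T)): probability of having failed
    unrevealed since the last test."""
    return 1.0 - math.exp(-lam * (t % T))

lam, T = 1e-5, 720.0
print(unavailability(100.0, lam, T))                   # within an interval
print(sum(unavailability(t, lam, T) for t in range(720)) / 720)  # ~ lam*T/2
```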

  20. Analysis of grouped data from field-failure reporting systems

    International Nuclear Information System (INIS)

    Coit, David W.; Dey, Kieron A.

    1999-01-01

    Observed reliability data from fielded systems are highly desirable because they implicitly account for all actual usage and environmental stresses. Many companies and large organizations have instituted automated field-failure reporting systems to organize and disseminate these data. Despite these advantages, field data must be used with caution because they often lack sufficient detail. Specifically, the precise times-to-failure are often not recorded and only cumulative failure quantities and operating times are available. When only data of this type are available, it is difficult to determine whether the component or system hazard function varies with time or is constant (i.e., exponential distribution). Analysts often use the exponential distribution to model time-to-failure because the distribution parameter can be estimated with just the merged data. However, this can be dangerous if the exponential distribution is not appropriate. An approach is presented in this paper for Type II censored data, with and without replacement, to evaluate this assumption even when individual times-to-failure are not available. A hypothesis test is presented to test the suitability of the exponential distribution for a particular data set composed of multiple merged data records. Two examples are presented to demonstrate the approach. The hypothesis test readily rejects an exponential distribution assumption when the data originate from a Weibull distribution. This is a very important result because it has generally been assumed that times-to-failure data were always required to evaluate the suitability of specific time-to-failure distributions
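
    The hypothesis test itself is not reproduced in the record, but the estimation step it cautions about is easy to state: under the exponential assumption, the rate is estimable from the merged totals alone. A minimal sketch with hypothetical numbers:

      def exponential_rate(total_failures, total_operating_time):
          # MLE of the exponential rate from merged field records; valid only if
          # the hazard really is constant, which is what the paper's test probes.
          return total_failures / total_operating_time

      lam = exponential_rate(total_failures=37, total_operating_time=5.2e5)  # hours
      print("lambda =", lam, "per hour; MTBF =", 1.0 / lam, "hours")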

  1. Filter Design for Failure Detection and Isolation in the Presence of Modeling Errors and Disturbances

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1996-01-01

    The design problem of filters for robust Failure Detection and Isolation (FDI) is addressed in this paper. The failure detection problem will be considered with respect to both modeling errors and disturbances. Both an approach based on failure detection observers as well as an approach based...

  2. Modeling of container failure and radionuclide release from a geologic nuclear waste repository

    International Nuclear Information System (INIS)

    Kim, Chang Lak; Kim, Jhin Wung; Choi, Kwang Sub; Cho, Chan Hee

    1989-02-01

    Generally, two processes are involved in leaching and dissolution: (1) chemical reactions and (2) mass transfer by diffusion. The chemical reaction controls the dissolution rates only during the early stage of exposure to groundwater. The exterior-field mass transfer may control the long-term dissolution rates from the waste solid in a geologic repository. Mass-transfer analyses rely on detailed and careful application of the governing equations that describe the mechanistic processes of transport of material between and within phases. We develop analytical models to predict the radionuclide release rate into the groundwater with five different approaches: a measurement-based model, a diffusion model, a kinetics model, a diffusion-and-kinetics model, and a modified diffusion model. We also collected experimental leaching data for a partial validation of the radionuclide release model based on mass transfer theory. Among the various types of corrosion, pitting is the most significant because of its rapid growth. The failure time of the waste container, which can also be interpreted as the containment time, is a milestone of the performance of a repository. We develop analytical models to predict the pit growth rate on the container surface with three different approaches: an experimental method, a statistical method, and a mathematical method based on diffusion theory. (Author)
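
    The five release models are not written out in the record. For orientation only, exterior-field mass-transfer analyses of this kind often start from the classical steady-state diffusive release rate from a spherical waste solid of radius a, with solubility-limited surface concentration C_s and effective diffusion coefficient D (a textbook result, not necessarily the form used in this report):

      \dot{m} = 4 \pi D a C_s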

  3. Predicting failure response of spot welded joints using recent extensions to the Gurson model

    DEFF Research Database (Denmark)

    Nielsen, Kim Lau

    2010-01-01

    The plug failure modes of resistance spot welded shear-lap and cross-tension test specimens are studied, using recent extensions to the Gurson model. A comparison of the predicted mechanical response is presented when using either: (i) the Gurson-Tvergaard-Needleman model (GTN-model), (ii)... is presented. The models are applied to predict failure of specimens containing a fully intact weld nugget as well as a partly removed weld nugget, to address the problems of shrinkage voids or larger weld defects. All analyses are carried out by full 3D finite element modelling.
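
    The record truncates the list of model variants. For reference, the GTN yield surface it names is commonly written as follows (standard literature form, with q_1, q_2, q_3 fitting parameters, f^* the effective void volume fraction, \sigma_eq the von Mises stress, \sigma_m the mean stress, and \sigma_y the matrix flow stress):

      \Phi = \left( \frac{\sigma_{\mathrm{eq}}}{\sigma_y} \right)^2
           + 2 q_1 f^* \cosh\!\left( \frac{3 q_2 \sigma_m}{2 \sigma_y} \right)
           - \left( 1 + q_3 {f^*}^2 \right) = 0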

  4. Failure prediction of low-carbon steel pressure vessel and cylindrical models

    International Nuclear Information System (INIS)

    Zhang, K.D.; Wang, W.

    1987-01-01

    The failure loads predicted by failure assessment methods (namely the net-section stress criterion; the EPRI engineering approach for elastic-plastic analysis; the CEGB failure assessment route; the modified R6 curve by Milne for strain hardening; and the failure assessment curve based on J estimation by Ainsworth) have been compared with burst test results on pressure vessel and open-ended cylinder models with external axial sharp notches, made from typical low-carbon steel St45 seamless tube, which has a transverse true stress-strain curve of straight-line-and-parabola type and a high ratio of ultimate strength to yield strength. It was concluded from the comparison that, whilst the net-section stress criterion and the CEGB route did not give conservative predictions, Milne's modified curve gave a conservative and good prediction; Ainsworth's curve gave a fairly conservative prediction; and the EPRI solutions could also conditionally give a good prediction, although the conditions are still somewhat uncertain. It is suggested that Milne's modified R6 curve be used in failure assessment of low-carbon steel pressure vessels. (author)

  5. Development of a container failure function for copper

    International Nuclear Information System (INIS)

    King, F.; Litke, C.D.

    1990-01-01

    A simple approach to the modeling of failure rates for a copper container under Canadian waste disposal conditions is presented. Both uniform corrosion and pitting must be considered. Short-term failures due to fabrication defects must be taken into account. The model allows for short-term sorption of copper by the clay buffer material, and assumes a steady-state condition for uniform corrosion. Using worst-case assumptions, a container penetration time of 3300 years can be predicted
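
    The record quotes only the headline result. A deliberately minimal sketch of the kind of worst-case penetration arithmetic such a failure function rests on, with all input values hypothetical, would be:

      def penetration_time(wall_thickness_mm, uniform_rate_mm_per_year, pitting_factor):
          # Worst case: steady-state uniform corrosion depth amplified by a pitting
          # factor; early failures from fabrication defects would be handled
          # separately as an initially failed fraction of containers.
          return wall_thickness_mm / (uniform_rate_mm_per_year * pitting_factor)

      print(penetration_time(25.0, 1.5e-3, 5.0), "years")  # hypothetical inputs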

  6. Failure Behavior and Constitutive Model of Weakly Consolidated Soft Rock

    Directory of Open Access Journals (Sweden)

    Wei-ming Wang

    2013-01-01

    Mining areas in western China are mainly located in soft rock strata with poor bearing capacity. In order to clarify the deformation failure mechanism and strength behavior of the weakly consolidated soft mudstone and coal rock hosted in the Ili No. 4 mine of the Xinjiang area, uniaxial and triaxial compression tests were carried out on rock samples gathered in the studied area. Meanwhile, a damage constitutive model which considers the initial damage was established by introducing a damage variable and a correction coefficient. A linearization process method was introduced according to the characteristics of the fitting curve and the experimental data. The results showed that samples under different moisture contents and confining pressures presented completely different failure mechanisms. The given model could accurately describe the elastic and plastic yield characteristics as well as the strain softening behavior of the collected samples at the postpeak stage. Moreover, the model could precisely reflect the relationship between the elastic modulus and confining pressure at the prepeak stage.

  7. Development of a Zircaloy creep and failure model for LOCA conditions

    International Nuclear Information System (INIS)

    Raff, S.; Meyder, R.

    1981-01-01

    The present status of the NORA model for Zircaloy-4 creep and failure in the high-temperature region (from 600 deg C up to 1200 deg C) is described. Temperature dependence, strain hardening and oxygen content are found to be the most important features of the strain-rate creep equation. The failure criterion is based on a modified strain fraction rule. The variables of this criterion are temperature, strain rate (or applied stress, respectively) and oxygen content. Concerning the application of the deformation model, deduced from uniaxial tests, to tube deformation calculations, the axial ballooning shape has to be taken into account. Its influence on the tube stress components, and therefore on strain rate, is discussed. A further improvement of the deformation model, concerning yield drop and irregular creep behaviour, aims at enlarging the range of applicability and reducing the error band of the model
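
    The modified strain fraction rule is not written out in the record. In its unmodified textbook form, damage accumulates over the strain history and failure is declared when the fractions sum to one, with a burst strain \varepsilon_f that here would depend on temperature, strain rate (or stress) and oxygen content:

      \sum_i \frac{\Delta\varepsilon_i}{\varepsilon_f\!\left(T_i, \dot{\varepsilon}_i, c_{O,i}\right)} = 1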

  8. Visibility graph analysis of heart rate time series and bio-marker of congestive heart failure

    Science.gov (United States)

    Bhaduri, Anirban; Bhaduri, Susmita; Ghosh, Dipak

    2017-09-01

    The study of RR-interval time series in congestive heart failure has been an active area of research using a variety of methods, including non-linear ones. In this article the cardiac dynamics of the heart beat are explored in the light of complex network analysis, viz. the visibility graph method. Heart beat (RR interval) time series data taken from the Physionet database [46, 47], belonging to two groups of subjects, diseased (congestive heart failure; 29 in number) and normal (54 in number), are analyzed with the technique. The overall results show that a quantitative parameter can significantly differentiate between the diseased subjects and the normal subjects as well as between different stages of the disease. Further, when the data are split into periods of around 1 hour each and analyzed separately, the same consistent differences are observed. This quantitative parameter obtained using visibility graph analysis can thereby be used as a potential bio-marker as well as a subsequent alarm-generation mechanism for predicting the onset of congestive heart failure.
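
    The graph construction is not spelled out in the record. In the standard natural visibility graph algorithm, every sample of the series is a node and two samples are linked if the straight line between them clears all intermediate samples; a direct, unoptimized sketch for an RR-interval series (the data below are hypothetical):

      import numpy as np

      def visibility_edges(y):
          """Natural visibility graph of an equally spaced time series."""
          n = len(y)
          edges = []
          for a in range(n):
              for b in range(a + 1, n):
                  # a and b "see" each other if every c between them lies strictly
                  # below the chord joining (a, y[a]) and (b, y[b])
                  if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                         for c in range(a + 1, b)):
                      edges.append((a, b))
          return edges

      rr = np.array([0.81, 0.79, 0.84, 0.80, 0.78, 0.85, 0.82])  # RR intervals, s
      edges = visibility_edges(rr)
      degree = np.bincount(np.ravel(edges), minlength=len(rr))
      print(edges, degree)

    Degree-based scalars of this graph are the kind of quantitative parameter the authors report as discriminating diseased from normal subjects.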

  9. Filter design for failure detection and isolation in the presence of modeling errors and disturbances

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    1996-01-01

    The design problem of filters for robust failure detection and isolation, (FDI) is addressed in this paper. The failure detection problem will be considered with respect to both modeling errors and disturbances. Both an approach based on failure detection observers as well as an approach based...

  10. Evaluation of nuclear power plant component failure probability and core damage probability using simplified PSA model

    International Nuclear Information System (INIS)

    Shimada, Yoshio

    2000-01-01

    It is anticipated that changes in the frequency of surveillance tests, preventive maintenance or parts replacement of safety-related components may change component failure probabilities and, in turn, the core damage probability. It is also anticipated that the change differs depending on the initiating event frequency and the component type. This study assessed the change in core damage probability using a simplified PSA model capable of calculating core damage probability in a short time period, developed by the US NRC to process accident sequence precursors, when various components' failure probabilities are changed between 0 and 1 and when Japanese or American initiating event frequency data are used. As a result of the analysis: (1) the frequency of surveillance tests, preventive maintenance or parts replacement of motor-driven pumps (high-pressure injection pumps, residual heat removal pumps, auxiliary feedwater pumps) should be changed carefully, since the change in core damage probability is large when the base failure probability increases; (2) core damage probability is insensitive to changes in surveillance test frequency for motor-operated valves and the turbine-driven auxiliary feedwater pump, since the change in core damage probability is small when their failure probabilities change by about an order of magnitude; (3) when Japanese failure probability data are applied to the emergency diesel generator, the change in core damage probability is small even if the failure probability changes by an order of magnitude from the base value, whereas with American failure probability data the increase in core damage probability is large when the failure probability increases. Therefore, when Japanese failure probability data are applied, core damage probability is insensitive to changes in surveillance test frequency, etc. (author)
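
    The NRC precursor model is not included in this record; the sensitivity exercise it describes can be imitated on a toy fault tree with hypothetical minimal cut sets (an initiating event frequency multiplied by basic-event probabilities, rare-event approximation):

      import numpy as np

      IE_FREQ = 1.0e-2   # hypothetical initiating event frequency, per year

      def core_damage_frequency(p_pump, p_valve, p_dg):
          # two hypothetical minimal cut sets: IE*pump and IE*valve*DG
          return IE_FREQ * (p_pump + p_valve * p_dg)

      for p in np.logspace(-4, 0, 5):
          print(f"p_pump={p:.0e}  CDF={core_damage_frequency(p, 1e-3, 1e-2):.2e}")

    Sweeping each basic-event probability between 0 and 1 in turn reproduces the kind of sensitivity ranking reported above.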

  11. Mode I Failure of Armor Ceramics: Experiments and Modeling

    Science.gov (United States)

    Meredith, Christopher; Leavy, Brian

    2017-06-01

    The pre-notched edge-on-impact (EOI) experiment is a technique for benchmarking the damage and fracture of ceramics subjected to projectile impact. A cylindrical projectile impacts the edge of a thin rectangular plate with a pre-notch on the opposite edge. Tension is generated at the notch tip, resulting in the initiation and propagation of a mode I crack back toward the impact edge. The crack can be quantitatively measured using an optical method called Digital Gradient Sensing, which captures the crack-tip deformation by simultaneously quantifying two orthogonal surface slopes via small deflections of light rays from a specularly reflective surface around the crack. The deflections in ceramics are small, so the high-speed camera needs a very high pixel count. This work reports results from pre-cracked EOI experiments on SiC and B4C plates. The experimental data are quantitatively compared to impact simulations using an advanced continuum damage model: the Kayenta ceramic model in Alegra will be used to compare fracture propagation speeds, bifurcations and inhomogeneous initiation of failure. This will provide insight into the driving mechanisms required for macroscale failure modeling of ceramics.

  12. Estimation of Continuous Time Models in Economics: an Overview

    OpenAIRE

    Clifford R. Wymer

    2009-01-01

    The dynamics of economic behaviour is often developed in theory as a continuous time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financia...

  13. Mechanistic considerations used in the development of the probability of failure in transient increases in power (PROFIT) pellet-zircaloy cladding (thermo-mechanical-chemical) interactions (PCI) fuel failure model

    International Nuclear Information System (INIS)

    Pankaskie, P.J.

    1980-05-01

    A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) interactions (PCI) failure model for estimating the Probability of Failure in Transient Increases in Power (PROFIT) was developed. PROFIT is based on (1) standard statistical methods applied to available PCI fuel failure data and (2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate dependent Strain Energy Absorption to Failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding

  14. Acceleration to failure in geophysical signals prior to laboratory rock failure and volcanic eruptions (Invited)

    Science.gov (United States)

    Main, I. G.; Bell, A. F.; Greenhough, J.; Heap, M. J.; Meredith, P. G.

    2010-12-01

    The nucleation processes that ultimately lead to earthquakes, volcanic eruptions, rock bursts in mines, and landslides from cliff slopes are likely to be controlled at some scale by brittle failure of the Earth’s crust. In laboratory brittle deformation experiments geophysical signals commonly exhibit an accelerating trend prior to dynamic failure. Similar signals have been observed prior to volcanic eruptions, including volcano-tectonic earthquake event and moment release rates. Despite a large amount of effort in the search, no such statistically robust systematic trend is found prior to natural earthquakes. Here we describe the results of a suite of laboratory tests on Mount Etna Basalt and other rocks to examine the nature of the non-linear scaling from laboratory to field conditions, notably using laboratory ‘creep’ tests to reduce the boundary strain rate to conditions more similar to those in the field. Seismic event rate, seismic moment release rate and rate of porosity change show a classic ‘bathtub’ graph that can be derived from a simple damage model based on separate transient and accelerating sub-critical crack growth mechanisms, resulting from separate processes of negative and positive feedback in the population dynamics. The signals exhibit clear precursors based on formal statistical model tests using maximum likelihood techniques with Poisson errors. After correcting for the finite loading time of the signal, the results show a transient creep rate that decays as a classic Omori law for earthquake aftershocks, and remarkably with an exponent near unity, as commonly observed for natural earthquake sequences. The accelerating trend follows an inverse power law when fitted in retrospect, i.e. with prior knowledge of the failure time. In contrast the strain measured on the sample boundary shows a less obvious but still accelerating signal that is often absent altogether in natural strain data prior to volcanic eruptions. To test the
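
    The two empirical laws invoked here can be stated compactly: the transient phase decays as an Omori law with exponent p near unity, while the accelerating phase follows the retrospective inverse power law of failure-forecast analysis, with t_f the failure time (notation assumed):

      n(t) \propto (t + c)^{-p}, \quad p \approx 1; \qquad \dot{\Omega}(t) \propto (t_f - t)^{-\alpha}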

  15. Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2015-01-01

    The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models for ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely from the collected maintenance data. A likelihood function of all renewal events is then derived based on their occurrence probability functions, and the model parameters are calculated with the maximum likelihood method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples (the filter element and the blowout preventer rubber core), the parameters of the distribution functions of the initial failure time and the delay time are estimated for each example, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of a reasonable functional inspection interval and will also provide theoretical models to support the integrity management of equipment and systems.
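
    The likelihood in the paper is built from the occurrence probabilities of the renewal events. A compressed sketch for the ideal-inspection case over a single inspection interval (0, T], assuming exponential initial-failure and delay-time distributions with rates a and b (the distributional choices and the data below are hypothetical, and the paper's CRM solver is replaced by a generic optimizer):

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize

      T = 30.0                                            # inspection interval, days
      failure_times = np.array([6.2, 11.5, 19.0, 27.3])   # failures seen in (0, T)
      n_defective = 7    # defects found (and renewed) at the inspection
      n_clean = 15       # units found defect-free at the inspection

      def nll(params):
          a, b = np.exp(params)     # log-parametrisation keeps the rates positive
          # density of a failure at x: defect arises at u < x, delay ends at x
          fail = lambda x: quad(lambda u: a*np.exp(-a*u) * b*np.exp(-b*(x - u)), 0, x)[0]
          # defect arises in (0, T] but its delay time outlives the inspection
          p_def = quad(lambda u: a*np.exp(-a*u) * np.exp(-b*(T - u)), 0, T)[0]
          p_clean = np.exp(-a*T)    # no defect initiated by time T
          return -(sum(np.log(fail(x)) for x in failure_times)
                   + n_defective*np.log(p_def) + n_clean*np.log(p_clean))

      res = minimize(nll, x0=np.log([0.05, 0.1]), method="Nelder-Mead")
      print("estimated rates a, b:", np.exp(res.x))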

  16. Gravity-driven groundwater flow and slope failure potential: 1. Elastic effective-stress model

    Science.gov (United States)

    Iverson, Richard M.; Reid, Mark E.

    1992-01-01

    Hilly or mountainous topography influences gravity-driven groundwater flow and the consequent distribution of effective stress in shallow subsurface environments. Effective stress, in turn, influences the potential for slope failure. To evaluate these influences, we formulate a two-dimensional, steady state, poroelastic model. The governing equations incorporate groundwater effects as body forces, and they demonstrate that spatially uniform pore pressure changes do not influence effective stresses. We implement the model using two finite element codes. As an illustrative case, we calculate the groundwater flow field, total body force field, and effective stress field in a straight, homogeneous hillslope. The total body force and effective stress fields show that groundwater flow can influence shear stresses as well as effective normal stresses. In most parts of the hillslope, groundwater flow significantly increases the Coulomb failure potential Φ, which we define as the ratio of maximum shear stress to mean effective normal stress. Groundwater flow also shifts the locus of greatest failure potential toward the slope toe. However, the effects of groundwater flow on failure potential are less pronounced than might be anticipated on the basis of a simpler, one-dimensional, limit equilibrium analysis. This is a consequence of continuity, compatibility, and boundary constraints on the two-dimensional flow and stress fields, and it points to important differences between our elastic continuum model and limit equilibrium models commonly used to assess slope stability.
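
    In the authors' notation the failure potential is simply

      \Phi = \frac{\tau_{\max}}{\bar{\sigma}'}

    with \tau_max the maximum shear stress and \bar{\sigma}' the mean effective normal stress; slope failure is favored wherever groundwater flow raises \tau_max or lowers \bar{\sigma}'.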

  17. A joint spare part and maintenance inspection optimisation model using the Delay-Time concept

    International Nuclear Information System (INIS)

    Wang Wenbin

    2011-01-01

    Spare parts and maintenance are closely related logistics activities, where maintenance generates the need for spare parts. When preventive maintenance is present, more spare parts may be needed at one time because of the planned preventive maintenance activities. This paper considers the joint optimisation of three decision variables, i.e., the ordering quantity, ordering interval and inspection interval. The model is constructed using the well-known Delay-Time concept, where the failure process is divided into a two-stage process. The objective function is the long-run expected cost per unit time in terms of the three decision variables to be optimised. Here we use a block-based inspection policy where all components are inspected at the same time regardless of their ages. This creates a situation in which the time to failure since the immediately preceding inspection is random and has to be modelled by a distribution. This time is called the forward time, and a limiting but closed form of this distribution is obtained. We develop an algorithm for the optimal solution of the decision process using a combination of analytical and enumeration approaches. The model is demonstrated by a numerical example. - Highlights: → Joint optimisation of maintenance and spare part inventory. → The use of the Delay-Time concept. → Block-based inspection. → Fixed order interval but variable order quantity.
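
    The objective function is not written out in the record; the long-run expected cost per unit time in models of this type follows the renewal-reward theorem, minimised jointly over the ordering quantity Q, the ordering interval T_o and the inspection interval T_i (notation assumed here):

      C(Q, T_o, T_i) = \frac{\mathbb{E}[\text{cost per renewal cycle}]}{\mathbb{E}[\text{length of a renewal cycle}]}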

  18. Modeling combined tension-shear failure of ductile materials

    International Nuclear Information System (INIS)

    Partom, Y

    2014-01-01

    Failure of ductile materials is usually expressed in terms of effective plastic strain. Ductile materials can fail by two different failure modes, shear failure and tensile failure. Under dynamic loading, shear failure has to do with shear localization and the formation of adiabatic shear bands. In these bands plastic strain rate is very high, dissipative heating is extensive, and shear strength is lost. Shear localization starts at a certain value of effective plastic strain, when thermal softening overcomes strain hardening. Shear failure is therefore represented in terms of effective plastic strain. On the other hand, tensile failure comes about by void growth under tension. For voids in a tension field there is a threshold state of the remote field for which voids grow spontaneously (cavitation), and the material there fails. Cavitation depends on the remote field stress components and on the flow stress. In this way failure in tension is related to shear strength and to failure in shear. Here we first evaluate the cavitation threshold for different remote field situations, using 2D numerical simulations with a hydrocode. We then use the results to compute examples of rate-dependent tension-shear failure of a ductile material.

  19. Uncertainties and quantification of common cause failure rates and probabilities for system analyses

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2005-01-01

    Simultaneous failures of multiple components due to common causes at random times are modelled by constant multiple-failure rates. A procedure is described for quantification of common cause failure (CCF) basic event probabilities for system models using plant-specific and multiple-plant failure-event data. Methodology is presented for estimating CCF-rates from event data contaminated with assessment uncertainties. Generalised impact vectors determine the moments for the rates of individual systems or plants. These moments determine the effective numbers of events and observation times to be input to a Bayesian formalism to obtain plant-specific posterior CCF-rates. The rates are used to determine plant-specific common cause event probabilities for the basic events of explicit fault tree models depending on test intervals, test schedules and repair policies. Three methods are presented to determine these probabilities such that the correct time-average system unavailability can be obtained with single fault tree quantification. Recommended numerical values are given and examples illustrate different aspects of the methodology
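
    The Bayesian step described, feeding effective event counts and observation times into an update for a constant failure rate, reduces to a conjugate gamma-Poisson calculation; a minimal sketch (the prior constants and inputs are hypothetical):

      def posterior_ccf_rate(n_eff, t_eff, a0=0.5, b0=100.0):
          """Conjugate gamma-Poisson update for a constant CCF rate.

          n_eff, t_eff: effective number of events and observation time derived
          from the generalised impact vectors; Gamma(a0, b0) prior is assumed.
          """
          a, b = a0 + n_eff, b0 + t_eff
          return a / b, a / b**2     # posterior mean and variance

      mean, var = posterior_ccf_rate(n_eff=1.4, t_eff=2.6e4)
      print(f"posterior CCF rate: {mean:.2e} per hour (variance {var:.1e})")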

  20. Effect of Remote Back-Up Protection System Failure on the Optimum Routine Test Time Interval of Power System Protection

    Directory of Open Access Journals (Sweden)

    Y Damchi

    2013-12-01

    Appropriate operation of the protection system is one of the factors needed to achieve desirable reliability in power systems, which vitally depends on routine testing of the protection system. Precise determination of the optimum routine test time interval (ORTTI) plays a vital role in predicting the maintenance costs of the protection system. In most previous studies, the ORTTI has been determined while the remote back-up protection system was considered fully reliable. This assumption is not exactly correct, since the remote back-up protection system may operate incorrectly or fail to operate, just like the primary protection system. Therefore, in order to determine the ORTTI, an extended Markov model is proposed in this paper that considers a failure probability for the remote back-up protection system. In the proposed Markov model of the protection systems, the monitoring facility is taken into account. Moreover, it is assumed that the primary and back-up protection systems are maintained simultaneously. Results show that the effect of remote back-up protection system failures on the reliability indices and optimum routine test intervals of the protection system is considerable.

  1. Dam failure analysis/calibration using NWS models on dam failure in Alton, New Hampshire

    International Nuclear Information System (INIS)

    Capone, E.J.

    1998-01-01

    The State of New Hampshire Water Resources Board, the United States Geological Survey, and private concerns have compiled data on the cause of a catastrophic failure of the Bergeron Dam in Alton, New Hampshire in March of 1996. Data collected related to the cause of the breach, the breach parameters, the soil characteristics of the failed section, and the limits of downstream flooding. Dam-break modeling software was used to calibrate and verify the simulated flood wave caused by the Bergeron Dam breach. Several scenarios were modeled, using different degrees of detail concerning the topography and channel geometry of the affected areas. A sensitivity analysis of the important output parameters was completed. The relative importance of model parameters on the results was assessed against the background of observed historical events

  2. α-Decomposition for estimating parameters in common cause failure modeling based on causal inference

    International Nuclear Information System (INIS)

    Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi

    2013-01-01

    The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (the Kelly-CCF method) is developed to learn about the sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. Firstly, a Hybrid Bayesian Network is adopted to reveal the relationship between potential causes and failures. Secondly, because the potential causes have different occurrence frequencies and abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability. Global α-factors are expressed through explanatory variables (the causes' occurrence frequencies) and parameters (the decomposed α-factors). Finally, an example is provided to illustrate the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component and system levels. It can parameterize the CCF risk significance of possible causes and can update the probability distributions of the global α-factors. In addition, it provides a reliable way to evaluate uncertainty sources and reduce the uncertainty in probabilistic risk assessment. It is recommended to build databases that include CCF parameters and the corresponding occurrence frequency of each cause for each targeted system

  3. Enhancement of Physics-of-Failure Prognostic Models with System Level Features

    National Research Council Canada - National Science Library

    Kacprzynski, Gregory

    2002-01-01

    .... The novelty in the current prognostic tool development is that predictions are made through the fusion of stochastic physics-of-failure models, relevant system or component level health monitoring...

  4. Exact combinatorial reliability analysis of dynamic systems with sequence-dependent failures

    International Nuclear Information System (INIS)

    Xing Liudong; Shrestha, Akhilesh; Dai Yuanshun

    2011-01-01

    Many real-life fault-tolerant systems are subjected to sequence-dependent failure behavior, in which the order in which the fault events occur is important to the system reliability. Such systems can be modeled by dynamic fault trees (DFT) with priority-AND (pAND) gates. Existing approaches for the reliability analysis of systems subjected to sequence-dependent failures are typically state-space-based, simulation-based or inclusion-exclusion-based methods. Those methods either suffer from the state-space explosion problem or require long computation times, especially when results with a high degree of accuracy are desired. In this paper, an analytical method based on sequential binary decision diagrams is proposed. The proposed approach can analyze the exact reliability of non-repairable dynamic systems subjected to sequence-dependent failure behavior. Also, the proposed approach is combinatorial and is applicable to systems with arbitrary component time-to-failure distributions. The application and advantages of the proposed approach are illustrated through the analysis of several examples. - Highlights: → We analyze sequence-dependent failure behavior using combinatorial models. → The method has no limitation on the type of time-to-failure distributions. → The method is analytical and based on sequential binary decision diagrams (SBDD). → The method is computationally more efficient than existing methods.
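
    The SBDD construction is not included in the record, but the sequence dependence a pAND gate encodes is easy to state: the gate fires only if its inputs fail in the prescribed order within the mission time. A Monte Carlo check of that definition, usable with any time-to-failure distributions (the exponential rates here are hypothetical):

      import numpy as np

      rng = np.random.default_rng(1)
      N, t_mission = 1_000_000, 1000.0
      t_a = rng.exponential(1 / 2e-3, N)   # component A, rate 2e-3 per hour
      t_b = rng.exponential(1 / 1e-3, N)   # component B, rate 1e-3 per hour

      # pAND(A, B): both fail within the mission, and A strictly before B
      p_pand = np.mean((t_a < t_b) & (t_b <= t_mission))
      print("pAND failure probability ~", p_pand)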

  5. Micromechanics-based damage model for failure prediction in cold forming

    Energy Technology Data Exchange (ETDEWEB)

    Lu, X.Z.; Chan, L.C., E-mail: lc.chan@polyu.edu.hk

    2017-04-06

    The purpose of this study was to develop a micromechanics-based damage (micro-damage) model that was concerned with the evolution of micro-voids for failure prediction in cold forming. Typical stainless steel SS316L was selected as the specimen material, and the nonlinear isotropic hardening rule was extended to describe the large deformation of the specimen undergoing cold forming. A micro-focus high-resolution X-ray computed tomography (CT) system was employed to trace and measure the micro-voids inside the specimen directly. Three-dimensional (3D) representative volume element (RVE) models with different sizes and spatial locations were reconstructed from the processed CT images of the specimen, and the average size and volume fraction of micro-voids (VFMV) for the specimen were determined via statistical analysis. Subsequently, the micro-damage model was compiled as a user-defined material subroutine into the finite element (FE) package ABAQUS. The stress-strain responses and damage evolutions of SS316L specimens under tensile and compressive deformations at different strain rates were predicted and further verified experimentally. It was concluded that the proposed micro-damage model is convincing for failure prediction in cold forming of the SS316L material.

  6. Modeling cascading failures in interdependent infrastructures under terrorist attacks

    International Nuclear Information System (INIS)

    Wu, Baichao; Tang, Aiping; Wu, Jie

    2016-01-01

    An attack strength degradation model has been introduced to further capture the interdependencies among infrastructures and to model cascading failures across infrastructures when terrorist attacks occur. A medium-sized energy system including an oil network and a power network is selected for exploring the vulnerabilities from independent networks to interdependent networks, considering both structural vulnerability and functional vulnerability. Two types of interdependencies among critical infrastructures are involved in this paper, physical interdependencies and geographical interdependencies, represented by tunable parameters based on the probabilities of failure of nodes in the networks. A tolerance parameter α is used to evaluate the overloads of the substations, based on power flow redistribution in power transmission systems under attack. The simulation results show that both independent and interdependent networks will collapse when only a small fraction of nodes is attacked under the attack strength degradation model, especially the interdependent networks. The methodology introduced in this paper, with both physical and geographical interdependencies involved, can be applied to further analyze the vulnerability of interdependent infrastructures, and provides insights into that vulnerability for mitigation actions in critical infrastructure protection. - Highlights: • An attack strength degradation model based on specified locations has been introduced. • Both physical and geographical interdependencies have been analyzed. • The structural vulnerability and the functional vulnerability have been considered.
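
    The coupled two-network model is not reproduced here, but the overload mechanism with tolerance parameter α can be sketched for a single network in the spirit of load-redistribution cascades, with node load proxied by betweenness centrality (the inter-network coupling and the attack strength degradation are omitted, and the topology is hypothetical):

      import networkx as nx

      def cascade(G, alpha, attacked):
          load0 = nx.betweenness_centrality(G)
          capacity = {n: (1 + alpha) * load0[n] for n in G}   # tolerance alpha
          H = G.copy()
          H.remove_nodes_from(attacked)
          while True:
              load = nx.betweenness_centrality(H)
              failed = [n for n in H if load[n] > capacity[n]]
              if not failed:
                  return H
              H.remove_nodes_from(failed)   # overloads propagate the cascade

      G = nx.barabasi_albert_graph(200, 2, seed=7)
      survivors = cascade(G, alpha=0.2, attacked=[0, 1])
      print(G.number_of_nodes() - survivors.number_of_nodes(), "nodes lost")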

  7. Modeling of Electrical Cable Failure in a Dynamic Assessment of Fire Risk

    Science.gov (United States)

    Bucknor, Matthew D.

    Fires at a nuclear power plant are a safety concern because of their potential to defeat the redundant safety features that provide a high level of assurance of the ability to safely shut down the plant. One of the added complexities of providing protection against fires is the need to determine the likelihood of electrical cable failure, which can lead to loss of the ability to control, or to spurious actuation of, equipment that is required for safe shutdown. A number of plants are now transitioning from their deterministic fire protection programs to a risk-informed, performance-based fire protection program according to the requirements of National Fire Protection Association (NFPA) 805. Within a risk-informed framework, credit can be taken for the analysis of fire progression within a fire zone that was not permissible within the deterministic framework of a 10 CFR 50.48 Appendix R safe shutdown analysis. To perform the analyses required for the transition, plants need to be able to demonstrate with some level of assurance that cables related to safe shutdown equipment will not be compromised during postulated fire scenarios. This research develops new cable failure models that have the potential to predict electrical cable failure more accurately in common cable bundle configurations. Methods to determine the thermal properties of the new models from empirical data are presented, along with comparisons between the new models and existing techniques used in the nuclear industry today. A Dynamic Event Tree (DET) methodology is also presented which allows for the proper treatment of uncertainties associated with fire brigade intervention and its effects on cable failure analysis. Finally, a shielding analysis is performed to determine the effects on the temperature response of a cable bundle that is shielded from a fire source by an intervening object such as another cable tray. The results from the analyses demonstrate that models of similar

  8. Semi-Markov models: control of restorable systems with latent failures

    CERN Document Server

    Obzherin, Yuriy E

    2015-01-01

    Featuring previously unpublished results, Semi-Markov Models: Control of Restorable Systems with Latent Failures describes valuable methodology which can be used by readers to build mathematical models of a wide class of systems for various applications. In particular, this information can be applied to build models of reliability, queuing systems, and technical control. Beginning with a brief introduction to the area, the book covers semi-Markov models for different control strategies in one-component systems, defining their stationary characteristics of reliability and efficiency, and uti

  9. Conduit Stability and Collapse in Explosive Volcanic Eruptions: Coupling Conduit Flow and Failure Models

    Science.gov (United States)

    Mullet, B.; Segall, P.

    2017-12-01

    Explosive volcanic eruptions can exhibit abrupt changes in physical behavior. In the most extreme cases, high rates of mass discharge are interspersed with dramatic drops in activity and periods of quiescence. Simple models predict exponential decay in magma chamber pressure, leading to a gradual tapering of eruptive flux. Abrupt changes in eruptive flux therefore indicate that relief of chamber pressure cannot be the only control on the evolution of such eruptions. We present a simplified physics-based model of conduit flow during an explosive volcanic eruption that attempts to predict stress-induced conduit collapse linked to co-eruptive pressure loss. The model couples a simple two-phase (gas-melt) 1-D conduit solution of the continuity and momentum equations with a Mohr-Coulomb failure condition for the conduit wall rock. First-order models of volatile exsolution (i.e. phase mass transfer) and fragmentation are incorporated. The interphase interaction force changes dramatically between flow regimes, so smoothing of this force is critical for realistic results. Reductions in the interphase force lead to significant relative phase velocities, highlighting the deficiency of homogeneous flow models. Lateral gas loss through conduit walls is incorporated using a membrane-diffusion model with depth-dependent wall rock permeability. Rapid eruptive flux results in a decrease of chamber and conduit pressure, which leads to a critical deviatoric stress condition at the conduit wall. Analogous stress distributions have been analyzed for wellbores, where much work has been directed at determining conditions that lead to wellbore failure using Mohr-Coulomb failure theory. We extend this framework to cylindrical volcanic conduits, where large deviatoric stresses can develop co-eruptively, leading to multiple distinct failure regimes depending on principal stress orientations. These failure regimes are categorized and possible implications for conduit flow are discussed, including

  10. FRESS pin failure model and its application to E-8 TREAT test

    International Nuclear Information System (INIS)

    Kalimullah.

    1979-01-01

    FRESS is a cladding rupture prediction model for an irradiated mixed-oxide LMFBR fuel pin during transient heating, based only on the internal pressurization of the cladding by the fission gas released from fuel grains during the transient. The model is applied to the analysis of the hottest PNL-10-53 pin in the 7-pin E-8 TREAT test, which simulates a $3/sec transient overpower. Although the uncertainties of the inputs to the temperature calculation done with the COBRA code have not been included, the uncertain input parameters to FRESS have been varied over their estimated uncertainties. The cladding rupture predictions are a few tens of milliseconds late compared to the most probable failure time detected in the test. However, these calculations seem to indicate that fission gas pressure is a significant mechanism for causing clad rupture in this test

  11. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, the resulting predictions can be subjective under a particular set of circumstances, and therefore the reliability is not easy to quantify. Among the reliability prediction methods are the statistical-analysis-based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system that predicts the reliability of electronic components using the statistical analysis method, which is the most readily applied. The failure rate models that were applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general-purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components
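
    The handbook models named differ in their stress factors, but all reduce a bill of materials to a summed failure rate; a parts-count-style sketch (the generic rates and quality factors below are hypothetical):

      def system_failure_rate(parts):
          # parts: (quantity, generic failure rate per 1e6 h, quality factor)
          # triples, summed as in a MIL-HDBK-217-style parts-count prediction
          return sum(n * lam_g * pi_q for n, lam_g, pi_q in parts)

      bom = [(12, 0.05, 1.0), (4, 0.80, 2.0), (1, 5.00, 1.0)]
      lam = system_failure_rate(bom)
      print(f"lambda = {lam:.2f} per 1e6 h, MTBF = {1e6 / lam:,.0f} h")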

  12. Simple bounds for counting processes with monotone rate of occurrence of failures

    International Nuclear Information System (INIS)

    Kaminskiy, Mark P.

    2007-01-01

    The article discusses some aspects of the analogy between certain classes of distributions used as models for the time to failure of nonrepairable objects, and the counting processes used as models for the failure processes of repairable objects. The notion of quantiles for counting processes with strictly increasing cumulative intensity functions is introduced. The classes of counting processes with increasing (decreasing) rate of occurrence of failures are considered. For these classes, useful nonparametric bounds for the cumulative intensity function based on one known quantile are obtained. These bounds, which can be used for repairable objects, are similar to the bounds introduced by Barlow and Marshall [Barlow RE, Marshall AW. Bounds for distributions with monotone hazard rate, I and II. Ann Math Stat 1964;35:1234-74] for IFRA (DFRA) time-to-failure distributions applicable to nonrepairable objects

  13. Assessment of compressive failure process of cortical bone materials using damage-based model.

    Science.gov (United States)

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

    The main failure factors of cortical bone are aging or osteoporosis, accidents, and high-energy trauma from physiological activities. However, the mechanism of damage evolution coupled with a yield criterion is considered one of the unclear subjects in failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests were performed on cortical bone specimens made of bovine femur, in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test is prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model is validated comparatively, using the predicted and measured structural response as load versus compressive displacement through simulation and experiment. FE results indicated that the compressive damage initiated and propagated at the central region, where the maximum equivalent plastic strain is computed, which coincided with the degradation of structural compressive stiffness followed by a large amount of strain energy dissipation. The compressive damage rate, a function of the damage parameter and the plastic strain, is examined for different rates. Results show that using a rate similar to the initial slope of the damage parameter in the experiment gives a better prediction of compressive failure.

  14. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability-based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes, such as sliding and rupture failure in the rubble mound and the subsoil, are described. Characterisation of the relevant stochastic parameters is presented, and the relevant design... variables are identified and an optimal system reliability formulation is presented. An illustrative example is given....
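
    The series-system formulation implies standard bounds on the system failure probability: independent failure modes give the product rule, and in general the system probability is bracketed by the largest single-mode probability and the sum of all mode probabilities (standard notation assumed):

      P_{f,\mathrm{sys}} = 1 - \prod_i \left(1 - P_{f,i}\right) \ \text{(independent modes)}, \qquad \max_i P_{f,i} \le P_{f,\mathrm{sys}} \le \sum_i P_{f,i}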

  15. Machine learning in heart failure: ready for prime time.

    Science.gov (United States)

    Awan, Saqib Ejaz; Sohel, Ferdous; Sanfilippo, Frank Mario; Bennamoun, Mohammed; Dwivedi, Girish

    2018-03-01

    The aim of this review is to present an up-to-date overview of the application of machine learning methods in heart failure including diagnosis, classification, readmissions and medication adherence. Recent studies have shown that the application of machine learning techniques may have the potential to improve heart failure outcomes and management, including cost savings by improving existing diagnostic and treatment support systems. Recently developed deep learning methods are expected to yield even better performance than traditional machine learning techniques in performing complex tasks by learning the intricate patterns hidden in big medical data. The review summarizes the recent developments in the application of machine and deep learning methods in heart failure management.

  16. Proof-testing strategies induced by dangerous detected failures of safety-instrumented systems

    International Nuclear Information System (INIS)

    Liu, Yiliu; Rausand, Marvin

    2016-01-01

    Some dangerous failures of safety-instrumented systems (SISs) are detected almost immediately by diagnostic self-testing as dangerous detected (DD) failures, whereas other dangerous failures can only be detected by proof-testing and are therefore called dangerous undetected (DU) failures. Some items may have a DU-failure and a DD-failure at the same time. After the repair of a DD-failure is completed, the maintenance team has two options: to perform an insert proof test for DU-failures or not. If an insert proof test is performed, it is necessary to decide whether the next scheduled proof test should be postponed or performed at the scheduled time. This paper analyzes the effects of different testing strategies on the safety performance of a single channel of a SIS. The safety performance is analyzed by Petri nets and by approximation formulas, and the results obtained by the two approaches are compared. It is shown that insert testing improves the safety performance of the channel, but the feasibility and cost of the strategy may argue against recommending insert testing. - Highlights: • Identify the tests induced by detected failures. • Model the testing strategies following DD-failures. • Propose analytical formulas for the effects of strategies. • Simulate and verify the proposed models.
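
    For the single channel analysed, the familiar low-demand approximation ties the DU-failure rate and the proof-test interval \tau to the average probability of failure on demand; an insert test after a DD repair shortens the mean undetected lifetime of a coexisting DU-failure and thus lowers this figure (a standard IEC 61508-style approximation, not the paper's exact formulas):

      \mathrm{PFD}_{\mathrm{avg}} \approx \frac{\lambda_{\mathrm{DU}} \, \tau}{2}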

  17. Cost-effectiveness analysis of timely dialysis referral after renal transplant failure in Spain

    Directory of Open Access Journals (Sweden)

    Villa Guillermo

    2012-08-01

    Background: A cost-effectiveness analysis of timely dialysis referral after renal transplant failure was undertaken from the perspective of the Public Administration. The current Spanish situation, where all the patients undergoing graft function loss are referred back to dialysis in a late manner, was compared to an ideal scenario where all the patients are timely referred. Methods: A Markov model was developed in which six health states were defined: hemodialysis, peritoneal dialysis, kidney transplantation, late referral hemodialysis, late referral peritoneal dialysis and death. The model carried out a simulation of the progression of renal disease for a hypothetical cohort of 1,000 patients aged 40, who were observed over a lifetime temporal horizon of 45 years. In-depth sensitivity analyses were performed in order to ensure the robustness of the results obtained. Results: Considering a discount rate of 3%, timely referral showed an incremental cost of 211 €, compared to late referral. This cost increase was however a consequence of the incremental survival observed. The incremental effectiveness was 0.0087 quality-adjusted life years (QALY). When comparing both scenarios, an incremental cost-effectiveness ratio of 24,390 €/QALY was obtained, meaning that timely dialysis referral might be an efficient alternative if a willingness-to-pay threshold of 45,000 €/QALY is considered. This result proved to be independent of the proportion of late referral patients observed. The acceptance probability of timely referral was 61.90%, while late referral was acceptable in 38.10% of the simulations. If we however restrict the analysis to those situations not involving any loss of effectiveness, the acceptance probability of timely referral was 70.10%, increasing twofold that of late referral (29.90%). Conclusions: Timely dialysis referral after graft function loss might be an efficient alternative in Spain, improving both

  18. Cost-effectiveness analysis of timely dialysis referral after renal transplant failure in Spain.

    Science.gov (United States)

    Villa, Guillermo; Sánchez-Álvarez, Emilio; Cuervo, Jesús; Fernández-Ortiz, Lucía; Rebollo, Pablo; Ortega, Francisco

    2012-08-16

    A cost-effectiveness analysis of timely dialysis referral after renal transplant failure was undertaken from the perspective of the Public Administration. The current Spanish situation, where all the patients undergoing graft function loss are referred back to dialysis in a late manner, was compared to an ideal scenario where all the patients are timely referred. A Markov model was developed in which six health states were defined: hemodialysis, peritoneal dialysis, kidney transplantation, late referral hemodialysis, late referral peritoneal dialysis and death. The model carried out a simulation of the progression of renal disease for a hypothetical cohort of 1,000 patients aged 40, who were observed in a lifetime temporal horizon of 45 years. In depth sensitivity analyses were performed in order to ensure the robustness of the results obtained. Considering a discount rate of 3 %, timely referral showed an incremental cost of 211 €, compared to late referral. This cost increase was however a consequence of the incremental survival observed. The incremental effectiveness was 0.0087 quality-adjusted life years (QALY). When comparing both scenarios, an incremental cost-effectiveness ratio of 24,390 €/QALY was obtained, meaning that timely dialysis referral might be an efficient alternative if a willingness-to-pay threshold of 45,000 €/QALY is considered. This result proved to be independent of the proportion of late referral patients observed. The acceptance probability of timely referral was 61.90 %, while late referral was acceptable in 38.10 % of the simulations. If we however restrict the analysis to those situations not involving any loss of effectiveness, the acceptance probability of timely referral was 70.10 %, increasing twofold that of late referral (29.90 %). Timely dialysis referral after graft function loss might be an efficient alternative in Spain, improving both patients' survival rates and health-related quality of life at an

  19. Network Performance Improvement under Epidemic Failures in Optical Transport Networks

    DEFF Research Database (Denmark)

    Fagertun, Anna Manolova; Ruepp, Sarah Renée

    2013-01-01

    In this paper we investigate epidemic failure spreading in large-scale GMPLS-controlled transport networks. By evaluating the effect of the epidemic failure spreading on the network, we design several strategies for cost-effective network performance improvement via differentiated repair times. First we identify the most vulnerable and the most strategic nodes in the network. Then, via extensive simulations, we show that strategic placement of resources for improved failure recovery has better performance than randomly assigning lower repair times among the network nodes. Our OPNET simulation model can be used during the network planning process for facilitating cost-effective network survivability design.

  20. Failure detection system risk reduction assessment

    Science.gov (United States)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
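
    On one reading of the claim, the quantified reduction is the probability that the failure mode reaches its limit multiplied by the probability that the mitigation intercepts it; a minimal sketch of that reading (the function name and values are hypothetical):

      def risk_reduction(p_fail_at_limit, p_mitigation):
          # residual risk = p_fail * (1 - p_mitigation); the reduction is the rest
          return p_fail_at_limit * p_mitigation

      print(risk_reduction(p_fail_at_limit=0.03, p_mitigation=0.9))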

  1. A phenomenological variational multiscale constitutive model for intergranular failure in nanocrystalline materials

    KAUST Repository

    Siddiq, A.

    2013-09-01

    We present a variational multiscale constitutive model that accounts for intergranular failure in nanocrystalline fcc metals due to void growth and coalescence in the grain boundary region. Following previous work by the authors, a nanocrystalline material is modeled as a two-phase material consisting of a grain interior phase and a grain boundary affected zone (GBAZ). A crystal plasticity model that accounts for the transition from partial-dislocation to full-dislocation mediated plasticity is used for the grain interior. An isotropic porous plasticity model, extended to account for failure due to void coalescence, is used for the GBAZ. The extended model contains all the deformation phases, i.e. elastic deformation, plastic deformation including deviatoric and volumetric plasticity (void growth), followed by damage initiation and evolution due to void coalescence. Parametric studies have been performed to assess the model's dependence on the different input parameters. The model is then validated against uniaxial loading experiments for different materials. Lastly we show the model's ability to predict the damage and fracture of a dog-bone shaped specimen as observed experimentally.

  2. Relationship between trajectories of serum albumin levels and technique failure according to diabetic status in peritoneal dialysis patients: A joint modeling approach

    Directory of Open Access Journals (Sweden)

    Mehri Khoshhali

    2017-06-01

    Background: In peritoneal dialysis, technique failure is an important metric to be considered. This study was performed to identify the relationship between trajectories of serum albumin levels and peritoneal dialysis technique failure in end-stage renal disease patients according to diabetic status, and to reveal predictors of serum albumin and technique failure simultaneously. Methods: This retrospective cohort study included 300 (189 non-diabetic and 111 diabetic) end-stage renal disease patients on continuous ambulatory peritoneal dialysis treated in Al-Zahra Hospital, Isfahan, Iran, from May 2005 to March 2015. Bayesian joint modeling was carried out to determine the relationship between trajectories of serum albumin levels and peritoneal dialysis technique failure in the patients according to diabetic status. Death from all causes was considered as a competing risk. Results: Using the joint modeling approach, the association between the trajectory of serum albumin and the hazard of transfer to hemodialysis was estimated as −0.720 (95% confidence interval [CI], −0.971 to −0.472) for diabetic and −0.784 (95% CI, −0.963 to −0.587) for non-diabetic patients. Our findings showed that predictors of low serum albumin over time were time on peritoneal dialysis for diabetic patients, and increasing age, time on peritoneal dialysis, history of previous hemodialysis, and lower body mass index for non-diabetic patients. Conclusion: The results of the current study showed that controlling serum albumin over time in non-diabetic and diabetic patients undergoing continuous ambulatory peritoneal dialysis treatment can decrease the risk of adverse outcomes during the peritoneal dialysis period.

  3. Declining risk of sudden death in heart failure

    DEFF Research Database (Denmark)

    Shen, Li; Jhund, Pardeep S.; Petrie, Mark C.

    2017-01-01

    BACKGROUND The risk of sudden death has changed over time among patients with symptomatic heart failure and reduced ejection fraction with the sequential introduction of medications including angiotensin-converting-enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, and mineralocorticoid-receptor antagonists. Patients with an implantable cardioverter-defibrillator at the time of trial enrollment were excluded. Weighted multivariable regression was used to examine trends in rates of sudden death over time. Adjusted hazard ratios for sudden death in each trial group were calculated with the use of Cox regression models. The cumulative incidence rates of sudden death were assessed at different time points after randomization and according to the length of time between the diagnosis of heart failure and randomization. RESULTS Sudden death was reported in 3583 patients. Such patients were older and were more often male, with an ischemic cause of heart failure.

  4. Continuum Damage Mechanics Models for the Analysis of Progressive Failure in Open-Hole Tension Laminates

    Science.gov (United States)

    Song, Kyonchan; Li, Yingyong; Rose, Cheryl A.

    2011-01-01

    The performance of a state-of-the-art continuum damage mechanics model for intralaminar damage, coupled with a cohesive zone model for delamination, is examined for failure prediction of quasi-isotropic open-hole tension laminates. Limitations of continuum representations of intra-ply damage and the effect of mesh orientation on the analysis predictions are discussed. It is shown that accurate prediction of matrix crack paths and stress redistribution after cracking requires a mesh aligned with the fiber orientation. Based on these results, an aligned mesh is proposed for analysis of the open-hole tension specimens, consisting of different meshes within the individual plies such that the element edges are aligned with the ply fiber direction. The modeling approach is assessed by comparison of analysis predictions to experimental data for specimen configurations in which failure is dominated by complex interactions between matrix cracks and delaminations. It is shown that the different failure mechanisms observed in the tests are well predicted. In addition, the modeling approach is demonstrated to predict proper trends in the effect of scaling on strength and failure mechanisms of quasi-isotropic open-hole tension laminates.

  5. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

    An analytical optimization method for preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T0 and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM actions and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for periodic PM policy performs in response to changes in the distribution of the failure data and the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM are also analytically determined. • The model is validated by several numerical examples
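
    A minimal sketch of the classical periodic-PM-with-minimal-repair cost model (not the paper's extended imperfect-PM formulation): Weibull parameters are fitted by maximum likelihood, then the expected cost rate is minimized over the PM interval. Failure times and cost figures are hypothetical.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize_scalar

# Hypothetical failure times (years) recorded under the current PM policy.
failure_times = np.array([1.3, 2.1, 2.9, 3.4, 4.2, 5.0, 5.8, 6.5])

# Two-parameter Weibull fitted by maximum likelihood (location fixed at 0).
shape, _, scale = weibull_min.fit(failure_times, floc=0)

c_repair, c_pm = 500.0, 100.0   # assumed minimal-repair and PM costs

def cost_rate(T):
    # Expected minimal repairs per PM cycle equal the cumulative hazard (T/scale)^shape.
    expected_failures = (T / scale) ** shape
    return (c_repair * expected_failures + c_pm) / T

res = minimize_scalar(cost_rate, bounds=(0.1, 20.0), method='bounded')
print(f"shape={shape:.2f}, scale={scale:.2f}, optimal PM interval T* = {res.x:.2f} years")
```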

  6. A double hit model for the distribution of time to AIDS onset

    Science.gov (United States)

    Chillale, Nagaraja Rao

    2013-09-01

    Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
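
    Under one common reading of a double-hit mechanism, onset requires both marker events, so the onset time is the maximum of the two hit times; a dependent case can be mimicked with a shared frailty term. The distributions below are assumptions for illustration, not the article's fitted models.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical marker "hit" times (years): both hits must occur before AIDS onset,
# so onset time is the maximum of the two under the independent double-hit reading.
t1 = rng.gamma(shape=2.0, scale=2.5, size=n)   # assumed distribution for marker 1
t2 = rng.gamma(shape=3.0, scale=1.8, size=n)   # assumed distribution for marker 2
onset_indep = np.maximum(t1, t2)

# Dependent case: a shared frailty scales both hit times, inducing correlation.
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)
onset_dep = np.maximum(t1 * frailty, t2 * frailty)

print(f"median onset (independent): {np.median(onset_indep):.1f} years")
print(f"median onset (dependent):   {np.median(onset_dep):.1f} years")
```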

  7. Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures

    Science.gov (United States)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2016-01-01

    The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
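
    The redescending M-estimation step can be illustrated for a single sensor channel: Tukey bisquare weights drive gross outliers to zero influence inside an iteratively reweighted location estimate. This scalar sketch omits the paper's multivariate dispersion estimation and Mahalanobis-trim concentration steps.

```python
import numpy as np

def tukey_bisquare_weights(residuals, c=4.685):
    """Redescending Tukey bisquare weights: observations with |r| >= c get zero weight."""
    scale = 1.4826 * np.median(np.abs(residuals)) + 1e-12   # robust MAD scale
    r = np.abs(residuals / scale)
    w = np.zeros_like(r)
    inside = r < c
    w[inside] = (1 - (r[inside] / c) ** 2) ** 2
    return w

# One channel of hypothetical sensor readings with a handful of gross failures.
x = np.concatenate([np.random.default_rng(0).normal(10.0, 0.5, 95),
                    [50.0, 60.0, -40.0, 70.0, 55.0]])

# Iteratively reweighted location estimate starting from the median.
mu = np.median(x)
for _ in range(20):
    w = tukey_bisquare_weights(x - mu)
    mu = np.sum(w * x) / np.sum(w)

print(f"robust location: {mu:.2f} (sample mean {x.mean():.2f} is pulled by the failures)")
```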

  8. Quantitative relationships between aging failure data and risk

    International Nuclear Information System (INIS)

    Vesely, W.E.; Vora, J.P.

    1988-01-01

    As part of the United States Nuclear Regulatory Commission's Nuclear Plant Aging Research program, a project is being carried out to quantify the risk effects of aging. The project is called the Risk Evaluation of Aging Phenomena (REAP) Project. With the REAP Project, a procedure has been developed to quantify nuclear power plant risks from aging failure data. The procedure utilizes the linear aging model and its extensions in order to relate component aging failure rates to aging mechanism parameters which are estimable from failure and maintenance data. The aging failure rates can then be used to quantify the age dependent plant risks, such as system unavailabilities, core melt frequency and public health risks. The REAP procedure is different from standard time dependent approaches in that the failure rates are phenomenologically based, allowing engineering information to be utilized. Furthermore, gross data and incomplete data can be utilized. A software package has been developed which systematically analyzes data for aging effects and interfaces with a time dependent risk analysis module to determine the risk implications of the aging effects. (author). 10 refs, 10 figs
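
    The linear aging model referenced here posits a failure rate that grows linearly with component age; integrating it gives the cumulative hazard and the age-dependent unreliability. The rates below are assumed values, not REAP data.

```python
import numpy as np

# Linear aging model: lambda(t) = lambda0 + a * t, with aging rate a estimated
# from failure and maintenance data. Both parameters are illustrative.
lambda0 = 1e-4   # baseline failure rate per hour (assumed)
a = 1e-8         # aging rate per hour^2 (assumed)

def failure_rate(t):
    return lambda0 + a * t

def unreliability(t):
    # F(t) = 1 - exp(-H(t)) with cumulative hazard H(t) = lambda0*t + a*t^2/2.
    return 1.0 - np.exp(-(lambda0 * t + 0.5 * a * t ** 2))

for years in (1, 5, 10, 20):
    t = years * 8760.0
    print(f"{years:>2} y: lambda = {failure_rate(t):.2e}/h, F(t) = {unreliability(t):.3f}")
```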

  9. Predictors of treatment failure and time to detection and switching in HIV-infected Ethiopian children receiving first line anti-retroviral therapy

    Directory of Open Access Journals (Sweden)

    Bacha Tigist

    2012-08-01

    Background The emergence of resistance to first-line antiretroviral therapy (ART) regimens leads to the need for more expensive and less tolerable second-line drugs. Hence, it is essential to identify and address factors associated with an increased probability of first-line ART regimen failure. The objective of this article is to report on the predictors of first-line ART regimen failure, the detection rate of ART regimen failure, and the delay in switching to second-line ART drugs. Methods A retrospective cohort study was conducted from 2005 to 2011. All HIV-infected children under the age of 15 who took first-line ART for at least six months at the four major hospitals of Addis Ababa, Ethiopia were included. Data were collected, entered and analyzed using Epi info/ENA version 3.5.1 and SPSS version 16. The Cox proportional-hazards model was used to assess the predictors of first-line ART failure. Results Data of 1186 children were analyzed. Five hundred seventy-seven (48.8%) were males, with a mean age of 6.22 (SD = 3.10) years. Of the 167 (14.1%) children who had treatment failure, 70 (5.9%) had only clinical failure, 79 (6.7%) had only immunologic failure, and 18 (1.5%) had both clinical and immunologic failure. Patients who had height-for-age in the third percentile or less at initiation of ART were found to have a higher probability of ART treatment failure [adjusted hazard ratio (AHR) 3.25; 95% CI, 1.00-10.58]. Patients who were less than three years old [AHR 1.85; 95% CI, 1.24-2.76], had chronic diarrhea after initiation of antiretroviral treatment [AHR 3.44; 95% CI, 1.37-8.62], underwent ART drug substitution [AHR 1.70; 95% CI, 1.05-2.73], or had a baseline CD4 count below 50 cells/mm3 [AHR 2.30; 95% CI, 1.28-4.14] were also found to be at higher risk of treatment failure. Of all the 167 first-line ART failure cases, only 24 (14.4%) were switched to second-line ART, with a mean delay of 24 (SD = 11.67) months. The remaining 143 (85.6%) cases were diagnosed...
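
    A Cox proportional-hazards fit of this kind can be reproduced in a few lines with the lifelines library; the toy dataset below merely stands in for the study's variables (column names and values are invented), and the printed exp(coef) column corresponds to the AHRs quoted above.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis dataset: one row per child, follow-up time in months,
# failure = 1 if first-line ART failure was detected. Columns are illustrative.
df = pd.DataFrame({
    "months":   [14, 30, 22,  8, 36, 18, 25, 40, 12, 28],
    "failure":  [ 1,  0,  1,  1,  0,  0,  1,  0,  1,  0],
    "age_lt3":  [ 1,  0,  1,  0,  0,  1,  1,  0,  1,  0],
    "cd4_lt50": [ 1,  0,  0,  1,  0,  0,  0,  1,  1,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="failure")
cph.print_summary()   # exp(coef) gives hazard ratios with 95% CIs, as in the abstract
```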

  10. Modelling accelerated degradation data using Wiener diffusion with a time scale transformation.

    Science.gov (United States)

    Whitmore, G A; Schenkelberg, F

    1997-01-01

    Engineering degradation tests allow industry to assess the potential life span of long-life products that do not fail readily under accelerated conditions in life tests. A general statistical model is presented here for performance degradation of an item of equipment. The degradation process in the model is taken to be a Wiener diffusion process with a time scale transformation. The model incorporates Arrhenius extrapolation for high stress testing. The lifetime of an item is defined as the time until performance deteriorates to a specified failure threshold. The model can be used to predict the lifetime of an item or the extent of degradation of an item at a specified future time. Inference methods for the model parameters, based on accelerated degradation test data, are presented. The model and inference methods are illustrated with a case application involving self-regulating heating cables. The paper also discusses a number of practical issues encountered in applications.
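
    A simulation sketch of the degradation model: a Wiener process with drift runs on a transformed time scale, and the lifetime is the first passage of a failure threshold. The power-law time transformation and all parameters are assumptions; the paper's inference machinery and its Arrhenius stress link are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Degradation W(tau) with drift mu and volatility sigma on a transformed time
# scale tau = t**gamma (one common choice of time-scale transformation).
mu, sigma, gamma = 0.8, 0.4, 0.7   # assumed process parameters
threshold = 10.0                    # failure threshold on the degradation scale

def simulate_lifetime(t_max=200.0, dt=0.05):
    t = np.arange(dt, t_max, dt)
    tau = t ** gamma
    dtau = np.diff(np.concatenate([[0.0], tau]))
    w = np.cumsum(mu * dtau + sigma * np.sqrt(dtau) * rng.standard_normal(len(t)))
    crossed = np.nonzero(w >= threshold)[0]
    return t[crossed[0]] if len(crossed) else np.inf   # first passage time

lifetimes = np.array([simulate_lifetime() for _ in range(2000)])
print(f"median simulated lifetime: {np.median(lifetimes):.1f} time units")
```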

  11. Mathematical modeling of a multi-product EMQ model with an enhanced end items issuing policy and failures in rework.

    Science.gov (United States)

    Chiu, Yuan-Shyi Peter; Sung, Peng-Cheng; Chiu, Singa Wang; Chou, Chung-Li

    2015-01-01

    This study uses mathematical modeling to examine a multi-product economic manufacturing quantity (EMQ) model with an enhanced end-items issuing policy and rework failures. We assume that a multi-product EMQ model randomly generates nonconforming items. All of the defective items are reworked, but a certain portion fails and becomes scrap. When the rework process ends and the entire lot of each product is quality assured, a cost-reducing n + 1 end-items issuing policy is used to transport finished items of each product. As a result, a closed-form optimal production cycle time is obtained. A numerical example demonstrates the practical usage of our result and confirms significant savings in stock holding and overall production costs as compared to those of a prior work (Chiu et al., J Sci Ind Res India 72:435-440, 2013) in the literature.
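
    For orientation, the classic single-product EMQ baseline (not the paper's multi-product closed form with rework scrap and the n + 1 issuing policy) can be computed directly; all demand, production-rate, and cost figures below are hypothetical.

```python
from math import sqrt

# Classic EMQ: optimal lot size Q* = sqrt(2*K*D / (h*(1 - D/P))) for demand D,
# production rate P, setup cost K, holding cost h. Figures are hypothetical.
D, P = 4000.0, 10000.0   # units/year demanded and produced
K, h = 450.0, 2.0        # setup cost per run, holding cost per unit-year

Q_star = sqrt(2 * K * D / (h * (1 - D / P)))
T_star = Q_star / D      # production cycle time in years
print(f"Q* = {Q_star:.0f} units, cycle time T* = {T_star:.2f} years")
```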

  12. Age and admission times as predictive factors for failure of admissions to discharge-stream short-stay units.

    Science.gov (United States)

    Shetty, Amith L; Shankar Raju, Savitha Banagar; Hermiz, Arsalan; Vaghasiya, Milan; Vukasovic, Matthew

    2015-02-01

    Discharge-stream emergency short-stay units (ESSU) improve ED and hospital efficiency. Age of patients and time of hospital presentation have been shown to correlate with increasing complexity of care. We aimed to determine whether an age and time cut-off could be derived to subsequently improve short-stay unit success rates. We conducted a retrospective audit of 6703 (5522 included) patients admitted to our discharge-stream short-stay unit. Patients were classified as appropriate or inappropriate admissions, and deemed successes if discharged out of the unit within 24 h and failures if they needed inpatient admission into the hospital. We calculated short-stay unit length of stay for patients in each of these groups. A 15% failure rate was deemed an acceptable key performance indicator (KPI) for our unit. There were 197 out of 4621 (4.3%, 95% CI 3.7-4.9%) patients up to the age of 70 who failed admission to the ESSU, compared with 67 out of 901 (7.4%, 95% CI 5.9-9.3%) older patients. Patients over 70 years of age have higher rates of failure after admission to discharge-stream ESSU, although in appropriately selected discharge-stream patients no age group or time-band of presentation was associated with an increased failure rate beyond the stipulated KPI. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
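
    The headline age-group comparison can be checked with a standard two-proportion test on the reported counts; the record does not state its CI method, so the Wilson intervals below are an assumption.

```python
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# Reported ESSU failure counts: 197/4621 among patients up to 70 years of age
# versus 67/901 among older patients.
count, nobs = [197, 67], [4621, 901]
stat, pval = proportions_ztest(count, nobs)
ci_young = proportion_confint(197, 4621, method='wilson')
ci_old = proportion_confint(67, 901, method='wilson')

print(f"z = {stat:.2f}, p = {pval:.4f}")
print(f"<=70: {197/4621:.1%} {ci_young}")
print(f" >70: {67/901:.1%} {ci_old}")
```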

  13. An investigation into failure of Internet firms: Towards development of a conceptual model

    Directory of Open Access Journals (Sweden)

    Jiwat Ram

    2018-02-01

    The last two decades have witnessed exponential growth in internet and social media based commerce in China, resulting in a number of foreign Internet firms launching their businesses to capitalize on the market opportunities. Surprisingly though, having been successful globally, these firms were not able to remain competitive in China, with the majority of them suffering losses from their failed ventures and ceasing their operations. Despite this ongoing problem, little or no research exists that might explain what is causing these problems. Addressing this gap in knowledge, we build literature-based insights and through our analysis we: (1) provide a structured understanding of some of the major issues causing failures, (2) identify and categorize factors/sources of failures into internally versus externally driven, and (3) grounded in theory and supplemented by literature evidence, develop hypotheses and a corresponding conceptual model explaining the relationships among these factors/sources and the failure of foreign Internet firms. The proposed model serves as a means by which information systems managers, chief information officers, and technology and business managers can understand the sources of failures and conduct an introspective exercise within the firm to plug gaps before launching a business in a foreign country. Academically, the study develops a theoretically grounded comprehensive model to advance knowledge in a scantly researched area, the challenges faced by foreign Internet firms, and to help in the development of strategies to mitigate these problems. The proposed model also adds to the current knowledge on information systems socio-technical theory and the Comparative theory of Competitive Advantage.

  14. Biased resistor network model for electromigration failure and related phenomena in metallic lines

    Science.gov (United States)

    Pennetta, C.; Alfinito, E.; Reggiani, L.; Fantini, F.; Demunari, I.; Scorzoni, A.

    2004-11-01

    Electromigration phenomena in metallic lines are studied by using a biased resistor network model. The void formation induced by the electron wind is simulated by a stochastic process of resistor breaking, while the growth of mechanical stress inside the line is described by an antagonist process of recovery of the broken resistors. The model accounts for the existence of temperature gradients due to current crowding and Joule heating. Alloying effects are also accounted for. Monte Carlo simulations allow the study within a unified theoretical framework of a variety of relevant features related to the electromigration. The predictions of the model are in excellent agreement with the experiments and in particular with the degradation towards electrical breakdown of stressed Al-Cu thin metallic lines. Detailed investigations refer to the damage pattern, the distribution of the times to failure (TTFs), the generalized Black’s law, the time evolution of the resistance, including the early-stage change due to alloying effects and the electromigration saturation appearing at low current densities or for short line lengths. The dependence of the TTFs on the length and width of the metallic line is also well reproduced. Finally, the model successfully describes the resistance noise properties under steady state conditions.

  15. Time-variant reliability assessment through equivalent stochastic process transformation

    International Nuclear Information System (INIS)

    Wang, Zequn; Chen, Wei

    2016-01-01

    Time-variant reliability measures the probability that an engineering system successfully performs intended functions over a certain period of time under various sources of uncertainty. In practice, it is computationally prohibitive to propagate uncertainty in time-variant reliability assessment based on expensive or complex numerical models. This paper presents an equivalent stochastic process transformation approach for cost-effective prediction of reliability deterioration over the life cycle of an engineering system. To reduce the high dimensionality, a time-independent reliability model is developed by translating random processes and time parameters into random parameters in order to equivalently cover all potential failures that may occur during the time interval of interest. With the time-independent reliability model, an instantaneous failure surface is attained by using a Kriging-based surrogate model to identify all potential failure events. To enhance the efficacy of failure surface identification, a maximum confidence enhancement method is utilized to update the Kriging model sequentially. Then, the time-variant reliability is approximated using Monte Carlo simulations of the Kriging model, where system failures over a time interval are predicted by the instantaneous failure surface. The results of two case studies demonstrate that the proposed approach is able to accurately predict the time evolution of system reliability while requiring much less computational effort compared with the existing analytical approach. - Highlights: • Developed a new approach for time-variant reliability analysis. • Proposed a novel stochastic process transformation procedure to reduce the dimensionality. • Employed Kriging models with a confidence-based adaptive sampling scheme to enhance computational efficiency. • The approach is effective for handling random processes in time-variant reliability analysis. • Two case studies are used to demonstrate the efficacy of the proposed approach.
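
    The surrogate-plus-sampling idea reduces to: fit a Kriging (Gaussian process) model to a modest design of experiments on the limit state, then Monte Carlo sample the cheap surrogate instead of the expensive model. The limit-state function, input distributions, and the omission of the paper's adaptive maximum-confidence-enhancement updates are all simplifications.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

# Hypothetical limit-state function: g(x) < 0 means failure; x collects the
# random parameters (including those translated from random processes/time).
def g(x):
    return 6.0 - x[:, 0] ** 2 - 1.5 * x[:, 1]

# Kriging surrogate fitted on a small design of experiments.
X_train = rng.normal(0.0, 1.0, size=(40, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, g(X_train))

# Monte Carlo on the cheap surrogate instead of the expensive model.
X_mc = rng.normal(0.0, 1.0, size=(200_000, 2))
pf = np.mean(gp.predict(X_mc) < 0.0)
print(f"estimated failure probability: {pf:.4f}")
```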

  16. Compressive failure model for fiber composites by kink band initiation from obliquely aligned, shear-dislocated fiber breaks

    Energy Technology Data Exchange (ETDEWEB)

    Bai, J.; Phoenix, S.L. [Cornell University, Ithaca, NY (United States). Dept. of Theoretical and Applied Mechanics

    2005-04-01

    Predicting compressive failure of a unidirectional fibrous composite is a longstanding and challenging problem that we study from a new perspective. Motivated by previous modelling of tensile failure as well as experimental observations on compressive failures in single carbon fibers, we develop a new micromechanical model for the compressive failure process in unidirectional, planar composites. As the compressive load is increased, random fiber failures are assumed to occur due to statistically distributed flaws, analogous to what occurs in tension. These breaks are often shear-mode failures with slanted surfaces that induce shear dislocations, especially when they occur in small groups aligned obliquely. Our model includes interactions of dislocated and neighboring intact fibers through a system of fourth-order differential equations governing transverse deformation, and also allows for local matrix plastic yielding and debonding from the fiber near and within the dislocation arrays. Using the Discrete Fourier Transform method, we find a 'building-block' analytical solution form, which naturally embodies local length scales of fiber microbuckling and instability. Based on the influence-function superposition approach, a computationally efficient scheme is developed to model the evolution of fiber and matrix stresses. Under increasing compressive strain the simulations show that matrix yielding and debonding crucially lead to large increases in bending strains in fibers next to small groups of obliquely aligned, dislocated breaks. From the paired locations of maximum fiber bending in flanking fibers, the triggering of an unstable kink band becomes realistic. The geometric features of the kink band, such as the fragment lengths and orientation angles, will depend on the fiber and matrix mechanical and geometric properties. In carbon fiber-polymer matrix systems our model predicts a much lower compressive failure stress than obtained from Rosen's classical microbuckling analysis.

  17. Real-time instrument-failure detection in the LOFT pressurizer using functional redundancy

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1982-07-01

    The functional redundancy approach to detecting instrument failures in a pressurized water reactor (PWR) pressurizer is described and evaluated. This real-time method uses a bank of Kalman filters (one for each instrument) to generate optimal estimates of the pressurizer state. By performing consistency checks between the output of each filter, failed instruments can be identified. Simulation results and actual pressurizer data are used to demonstrate the capabilities of the technique
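
    A toy version of the functional-redundancy scheme: a bank of scalar Kalman filters, one per instrument, with a consistency check against the bank median flagging the faulted channel. The random-walk state model, noise levels, and detection threshold are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

# One scalar Kalman filter per instrument, all estimating the same pressurizer
# state; an instrument is flagged when its estimate departs from the bank median.
n_sensors, n_steps = 4, 300
true_state = 100.0 + np.cumsum(rng.normal(0, 0.05, n_steps))
meas = true_state + rng.normal(0, 0.5, (n_sensors, n_steps))
meas[2, 150:] += 5.0                       # sensor 2 develops a bias fault at step 150

q, r = 0.05 ** 2, 0.5 ** 2                 # process and measurement noise variances
x = np.full(n_sensors, 100.0)              # per-filter state estimates
p = np.ones(n_sensors)                     # per-filter estimate variances
flagged = set()

for t in range(n_steps):
    p = p + q                              # predict (random-walk state model)
    k = p / (p + r)                        # Kalman gain
    x = x + k * (meas[:, t] - x)           # update each filter with its own sensor
    p = (1 - k) * p
    resid = np.abs(x - np.median(x))       # consistency check across the bank
    for s in np.nonzero(resid > 1.0)[0]:   # threshold chosen by eye for the sketch
        if s not in flagged:
            flagged.add(s)
            print(f"step {t}: instrument {s} inconsistent with the filter bank")
```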

  18. Applicability of out-of-pile fretting wear tests to in-reactor fretting wear-induced failure time prediction

    Science.gov (United States)

    Kim, Kyu-Tae

    2013-02-01

    In order to investigate whether or not grid-to-rod fretting wear-induced fuel failure will occur for newly developed spacer grid spring designs over the fuel lifetime, out-of-pile fretting wear tests with one or two fuel assemblies are to be performed. In this study, out-of-pile fretting wear tests were performed in order to compare the potential for wear-induced fuel failure in two newly developed Korean PWR spacer grid designs. Lasting 20 days, the tests simulated maximum grid-to-rod gap conditions and the worst flow-induced vibration effects that might take place over the fuel lifetime. The fuel rod perforation times calculated from the out-of-pile tests are greater than 1933 days for 2 μm oxidized fuel rods with a 100 μm grid-to-rod gap, whereas those estimated from the in-reactor fretting wear failure database may be in the range of 60 to 100 days. This large discrepancy in fuel rod perforation may occur due to irradiation-induced cladding oxide microstructure changes on the one hand and a temperature gradient-induced hydrogen content profile across the cladding metal region on the other hand, which may accelerate embrittlement in the grid-contacting cladding oxide and metal regions during reactor operation. A three-phase grid-to-rod fretting wear model is proposed to simulate in-reactor fretting wear progress into the cladding, considering the microstructure changes of the cladding oxide and the hydrogen content profile across the cladding metal region combined with the temperature gradient. The out-of-pile tests cannot be directly applied to the prediction of in-reactor fretting wear-induced cladding perforations; they can be used only for evaluating the relative wear resistance of one grid design against another.

  19. FEM simulation of TBC failure in a model system

    Energy Technology Data Exchange (ETDEWEB)

    Seiler, P; Baeker, M; Roesler, J [Institut fuer Werkstoffe (IfW), Technische Universitaet Braunschweig (Germany); Beck, T; Schweda, M, E-mail: p.seiler@tu-bs.d [Institut fuer Energieforschung/ Werkstoffstruktur und -Eigenschaften (IEF 2), Forschungszentrum Juelich (Germany)

    2010-07-01

    In order to study the behavior of the complex failure mechanisms in thermal barrier coatings on turbine blades, a simplified model system is used to reduce the number of system parameters. The artificial system consists of a bond-coat material (fast-creeping Fecralloy or slow-creeping MA956) as the substrate, with a Y2O3 partially stabilized, plasma-sprayed zirconium oxide TBC on top and a TGO between the two layers. A two-dimensional FEM simulation was developed to calculate the growth stress inside the simplified coating system. The simulation permits the study of failure mechanisms by identifying compression and tension areas which are established by the growth of the oxide layer. This provides an insight into the possible crack paths in the coating and allows conclusions to be drawn for optimizing real thermal barrier coating systems.

  20. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations

    International Nuclear Information System (INIS)

    Coolen, F.P.A.

    1997-01-01

    This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, that can also be considered as a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach basically nonparametric. New results for the model are presented, related to right-censored observations, where estimation based on this model is closely related to the product-limit estimator, which is an important statistical method to deal with reliability or survival data including right-censored observations. As for the product-limit estimator, the model considered in this paper aims at not using any information other than that provided by observed data, but our model fits into the robust Bayesian context which has the advantage that all inferences can be based on probabilities or expectations, or bounds for probabilities or expectations. The model uses a finite partition of the time-axis, and as such it is also related to life-tables
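
    The product-limit (Kaplan-Meier) estimator the record relates to is short enough to write out; the imprecise Dirichlet model would bracket such an estimate with lower and upper probabilities. The right-censored failure data below are hypothetical.

```python
import numpy as np

# Product-limit (Kaplan-Meier) estimator for right-censored failure data.
# events = 1 marks an observed failure; events = 0 marks right-censoring.
times  = np.array([2, 3, 3, 5, 7, 8, 11, 12, 14, 15])
events = np.array([1, 1, 0, 1, 0, 1,  1,  0,  1,  1])

surv = 1.0
for t in np.unique(times[events == 1]):        # distinct observed failure times
    at_risk = np.sum(times >= t)               # subjects still under observation
    deaths = np.sum((times == t) & (events == 1))
    surv *= 1.0 - deaths / at_risk             # product-limit update
    print(f"t = {t:>2}: S(t) = {surv:.3f}")
```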

  1. A Competing Risk Model of First Failure Site after Definitive Chemoradiation Therapy for Locally Advanced Non-Small Cell Lung Cancer.

    Science.gov (United States)

    Nygård, Lotte; Vogelius, Ivan R; Fischer, Barbara M; Kjær, Andreas; Langer, Seppo W; Aznar, Marianne C; Persson, Gitte F; Bentzen, Søren M

    2018-04-01

    The aim of the study was to build a model of first failure site- and lesion-specific failure probability after definitive chemoradiotherapy for inoperable NSCLC. We retrospectively analyzed 251 patients receiving definitive chemoradiotherapy for NSCLC at a single institution between 2009 and 2015. All patients were scanned by fludeoxyglucose positron emission tomography/computed tomography for radiotherapy planning. Clinical patient data and fludeoxyglucose positron emission tomography standardized uptake values from primary tumor and nodal lesions were analyzed by using multivariate cause-specific Cox regression. In patients experiencing locoregional failure, multivariable logistic regression was applied to assess the risk of each lesion being the first site of failure. The two models were used in combination to predict the probability of lesion failure accounting for competing events. Adenocarcinoma had a lower hazard ratio (HR) of locoregional failure than squamous cell carcinoma (HR = 0.45, 95% confidence interval [CI]: 0.26-0.76, p = 0.003). Distant failures were more common in the adenocarcinoma group (HR = 2.21, 95% CI: 1.41-3.48, p < 0.001). Logistic regression of the first site of failure showed that primary tumors were more likely to fail than lymph nodes (OR = 12.8, 95% CI: 5.10-32.17, p < 0.001), and a higher lesion standardized uptake value was associated with lesion failure (OR = 1.26 per unit increase, 95% CI: 1.12-1.40, p < 0.001). In conclusion, we developed a first failure site-specific competing risk model based on patient- and lesion-level characteristics. Failure patterns differed between adenocarcinoma and squamous cell carcinoma, illustrating the limitation of aggregating them into NSCLC. Failure site-specific models add complementary information to conventional prognostic models. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
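
    Competing-risks bookkeeping of this sort (the probability of a specific first-failure type in the presence of the others) can be illustrated with lifelines' Aalen-Johansen estimator on hypothetical data; this is the nonparametric counterpart of the record's regression-based model, not a reproduction of it.

```python
import numpy as np
from lifelines import AalenJohansenFitter

# Cumulative incidence of locoregional failure (event 1) with distant failure
# (event 2) and death (event 3) as competing events; 0 = censored. Data are made up.
durations = np.array([ 4,  6,  8, 10, 12, 15, 18, 20, 24, 30])
event     = np.array([ 1,  2,  1,  0,  3,  1,  2,  0,  1,  3])

ajf = AalenJohansenFitter()
ajf.fit(durations, event, event_of_interest=1)
print(ajf.cumulative_density_.tail())   # P(locoregional failure by t), competing risks respected
```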

  2. A simplified time-dependent recovery model as applied to RCP seal LOCAs

    International Nuclear Information System (INIS)

    Kohut, P.; Bozoki, G.; Fitzpatrick, R.

    1991-01-01

    In Westinghouse-designed reactors, the reactor coolant pump (RCP) seals constantly require a modest amount of cooling. This cooling function depends on the service water (SW) system. Upon the loss of the cooling function due to the unavailability of the SW system, the component cooling water system or electrical power (station blackout), the RCP seals may degrade, resulting in a loss-of-coolant accident (LOCA). Recent studies indicate that the frequency of loss-of-SW initiating events is higher than previously thought. This change significantly increases the core damage frequency contribution from RCP seal failure. The most critical/dominant element in loss-of-SW events was found to be the SW-induced RCP seal failure. For these potential accident scenarios, there are large uncertainties regarding the actual frequency of RCP seal LOCA, the resulting leakage rate, and its time-dependent behavior. The roles of various recovery options based on the time evolution of the seal LOCA have been identified and taken into account in recent NUREG-1150 probabilistic risk assessment (PRA) analyses. In this paper, a consistent time-dependent recovery model is described that takes into account the effects of various recovery actions based on explicit consideration given to a spectrum of time and flow-rate dependencies. The model represents a simplified approach but is especially useful when extensive seal leak rate and core uncovery information is unavailable.
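
    The core of a time-dependent recovery model can be sketched as a race between a shrinking time window (set by the leak rate) and a recovery-time distribution. The lognormal recovery curve and the coolant margin below are illustrative assumptions, not plant data.

```python
from scipy.stats import lognorm

# A seal-LOCA leak rate sets the time window before core uncovery, and the
# non-recovery probability is read from a recovery-time curve. Figures are invented.
recovery = lognorm(s=0.8, scale=2.0)                     # recovery time in hours

def nonrecovery_prob(leak_gpm, coolant_margin_gal=30_000.0):
    window_h = coolant_margin_gal / (leak_gpm * 60.0)    # hours until core uncovery
    return 1.0 - recovery.cdf(window_h)                  # P(recovery does not occur in time)

for leak in (50, 200, 480):
    window = 30_000.0 / (leak * 60.0)
    print(f"leak {leak:>3} gpm -> window {window:5.1f} h, "
          f"P(non-recovery) = {nonrecovery_prob(leak):.3f}")
```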

  3. Combinatorial analysis of systems with competing failures subject to failure isolation and propagation effects

    International Nuclear Information System (INIS)

    Xing Liudong; Levitin, Gregory

    2010-01-01

    This paper considers the reliability analysis of binary-state systems, subject to propagated failures with global effect, and failure isolation phenomena. Propagated failures with global effect are common-cause failures originated from a component of a system/subsystem causing the failure of the entire system/subsystem. Failure isolation occurs when the failure of one component (referred to as a trigger component) causes other components (referred to as dependent components) within the same system to become isolated from the system. On the one hand, failure isolation makes the isolated dependent components unusable; on the other hand, it prevents the propagation of failures originated from those dependent components. However, the failure isolation effect does not exist if failures originated in the dependent components already propagate globally before the trigger component fails. In other words, there exists a competition in the time domain between the failure of the trigger component that causes failure isolation and propagated failures originated from the dependent components. This paper presents a combinatorial method for the reliability analysis of systems subject to such competing propagated failures and failure isolation effect. Based on the total probability theorem, the proposed method is analytical, exact, and has no limitation on the type of time-to-failure distributions for the system components. An illustrative example is given to demonstrate the basics and advantages of the proposed method.
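
    The time-domain competition described here is, in the simplest exponential case, a race between two failure times, and the isolation-wins probability has a closed form that a quick Monte Carlo confirms. Rates are assumed; the paper's method has no such distributional restriction, which this sketch does.

```python
import numpy as np

rng = np.random.default_rng(11)

# Race in the time domain: if the trigger fails first, dependent components are
# isolated (their later propagated failures are contained); if a propagated
# failure from a dependent component occurs first, it takes the system down.
lam_trigger, lam_prop = 1e-3, 4e-4   # assumed exponential rates per hour
n = 1_000_000
t_trig = rng.exponential(1 / lam_trigger, n)
t_prop = rng.exponential(1 / lam_prop, n)

p_isolated = np.mean(t_trig < t_prop)
print(f"P(isolation wins the race) ~= {p_isolated:.3f} "
      f"(analytic: {lam_trigger / (lam_trigger + lam_prop):.3f})")
```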

  4. Failure analysis of prestressed concrete beam under impact loading

    International Nuclear Information System (INIS)

    Ishikawa, N.; Sonoda, Y.; Kobayashi, N.

    1993-01-01

    This paper presents a failure analysis of a prestressed concrete (PC) beam under impact loading. First, the failure analysis of the PC beam section is performed using the discrete section element method in order to obtain the dynamic bending moment-curvature relation. Second, the failure analysis of the PC beam is performed using the rigid panel-spring model. Finally, the numerical calculation is executed and compared with the experimental results. It is found that this approach simulates well both the local and overall failure of the PC beam as well as the impact load and displacement-time relations observed in the experiments. (author)

  5. A 'cost-effective' probabilistic model to select the dominant factors affecting the variation of the component failure rate

    International Nuclear Information System (INIS)

    Kirchsteiger, C.

    1992-11-01

    Within the framework of a Probabilistic Safety Assessment (PSA), the component failure rate λ is a key parameter in the sense that the study of its behavior gives the essential information for estimating the current values as well as the trends in the failure probabilities of interest. Since there is an infinite variety of possible underlying factors which might cause changes in λ (e.g. operating time, maintenance practices, component environment, etc.), an 'importance ranking' process of these factors is considered most desirable to prioritize research efforts. To be 'cost-effective', the modeling effort must be small, i.e. essentially involving no estimation of additional parameters other than λ. In this paper, using a multivariate data analysis technique and various statistical measures, such a 'cost-effective' screening process has been developed. Dominant factors affecting the failure rate of any components of interest can easily be identified and the appropriateness of current research plans (e.g. on the necessity of performing aging studies) can be validated. (author)

  6. Hydra-Ring: a computational framework to combine failure probabilities

    Science.gov (United States)

    Diermanse, Ferdinand; Roscoe, Kathryn; IJmker, Janneke; Mens, Marjolein; Bouwer, Laurens

    2013-04-01

    This presentation discusses the development of a new computational framework for the safety assessment of flood defence systems: Hydra-Ring. Hydra-Ring computes the failure probability of a flood defence system, which is composed of a number of elements (e.g., dike segments, dune segments or hydraulic structures), taking all relevant uncertainties explicitly into account. This is a major step forward in comparison with the current Dutch practice, in which the safety assessment is done separately per individual flood defence section. The main advantage of the new approach is that it will result in a more balanced prioritization of required mitigating measures ('more value for money'). Failure of the flood defence system occurs if any element within the system fails. Hydra-Ring thus computes and combines failure probabilities of the following elements: - Failure mechanisms: a flood defence system can fail due to different failure mechanisms. - Time periods: failure probabilities are first computed for relatively short time scales and subsequently aggregated to longer periods. Besides the safety assessment of flood defence systems, Hydra-Ring can also be used to derive fragility curves, to assess the efficiency of flood mitigating measures, and to quantify the impact of climate change and land subsidence on flood risk. Hydra-Ring is being developed in the context of the Dutch situation. However, the computational concept is generic and the model is set up in such a way that it can be applied to other areas as well. The presentation will focus on the model concept and probabilistic computation techniques.
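
    At its simplest, combining element failure probabilities for a series system brackets the answer between the full-dependence and independence cases; a framework like Hydra-Ring earns its keep by handling the dependencies in between. The element probabilities below are hypothetical.

```python
import numpy as np

# Series flood-defence system: failure of any element fails the system. Under
# independence, P_sys = 1 - prod(1 - p_i); under full dependence, P_sys = max(p_i).
# The true value, with partial dependence, lies between these bounds.
p_elements = np.array([1e-4, 5e-4, 2e-3, 1e-3])   # hypothetical per-element probabilities

p_independent = 1.0 - np.prod(1.0 - p_elements)
p_full_dependence = p_elements.max()
print(f"system failure probability: {p_full_dependence:.2e} (full dependence) "
      f"to {p_independent:.2e} (independence)")
```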

  7. Cardioprotective Effect of Resveratrol in a Postinfarction Heart Failure Model

    Directory of Open Access Journals (Sweden)

    Adam Riba

    2017-01-01

    Despite great advances in therapies observed during the last decades, heart failure (HF) has remained a major health problem in western countries. In order to further improve symptoms and survival in patients with heart failure, novel therapeutic strategies are needed. In some animal models of HF, resveratrol (RES) was able to prevent cardiac hypertrophy, contractile dysfunction, and remodeling. Several molecular mechanisms are thought to be involved in its protective effects, such as inhibition of prohypertrophic signaling molecules, improvement of myocardial Ca2+ handling, regulation of autophagy, and the reduction of oxidative stress and inflammation. In our present study, we wished to further examine the effects of RES on prosurvival (Akt-1, GSK-3β) and stress signaling (p38-MAPK, ERK 1/2, and MKP-1) pathways, on oxidative stress (iNOS and COX-2 activity, and ROS formation), and ultimately on left ventricular function, hypertrophy and fibrosis in a murine, isoproterenol- (ISO-) induced postinfarction heart failure model. RES treatment improved left ventricular function and decreased interstitial fibrosis, cardiac hypertrophy, and the level of plasma BNP induced by ISO treatment. ISO also increased the activation of p38-MAPK, ERK 1/2 (Thr183/Tyr185), COX-2, iNOS, and ROS formation and decreased the phosphorylation of Akt-1, GSK-3β, and MKP-1, all of which were favorably influenced by RES. According to our results, regulation of these pathways may also contribute to the beneficial effects of RES in HF.

  8. Fold catastrophe model of dynamic pillar failure in asymmetric mining

    Energy Technology Data Exchange (ETDEWEB)

    Yue Pan; Ai-wu Li; Yun-song Qi [Qingdao Technological University, Qingdao (China). College of Civil Engineering

    2009-01-15

    A rock burst disaster not only destroys the pit facilities and results in economic loss but it also threatens the life of the miners. Pillar rock burst has a higher frequency of occurrence in the pit compared to other kinds of rock burst. Understanding the cause, magnitude and prevention of pillar rock burst is a significant undertaking. Equations describing the bending moment and displacement of the rock beam in asymmetric mining have been deduced for simplified asymmetric beam-pillar systems. Using the symbolic operation software MAPLE 9.5 a catastrophe model of the dynamic failure of an asymmetric rock-beam pillar system has been established. The differential form of the total potential function deduced from the law of conservation of energy was used for this deduction. The critical conditions and the initial and final positions of the pillar during failure have been given in analytical form. The amount of elastic energy released by the rock beam at the instant of failure is determined as well. A diagrammatic form showing the pillar failure was plotted using MATLAB software. This graph contains a wealth of information and is important for understanding the behavior during each deformation phase of the rock-beam pillar system. The graphic also aids in distinguishing the equivalent stiffness of the rock beam in different directions. 11 refs., 8 figs.

  9. The modelling and control of failure in bi-material ceramic laminates

    International Nuclear Information System (INIS)

    Phillipps, A.J.; Howard, S.J.; Clegg, W.J.; Clyne, T.W.

    1993-01-01

    Recent experimental and theoretical work on simple, single phase, laminated systems has indicated that failure resistant ceramics can be produced using an elegant method that avoids many of the problems and limitations of comparable fibrous ceramic composites. Theoretical work on these laminated systems has shown good agreement with experiment and simulated the effects of material properties and laminate structure on the composite performance. This work has provided guidelines for optimised laminate performance. In the current study, theoretical work has been simply extended to predict the behaviour of bi-material laminates with alternating layers of weak and strong material with different stiffnesses. Expressions for the strain energy release rates of internal advancing cracks are derived and combined with existing criteria to predict the failure behaviour of these laminates during bending. The modelling indicates three modes of failure dictated by the relative proportions, thicknesses and interfacial properties of the weak and strong phases. A critical percentage of strong phase is necessary to improve failure behaviour, in an identical argument to that for fibre composites. Incorporation of compliant layers is also investigated and implications for laminate design discussed. (orig.)

  10. A Panel Analysis Of UK Industrial Company Failure

    OpenAIRE

    Natalia Isachenkova; John Hunter

    2002-01-01

    We examine the failure determinants for large quoted UK industrial firms using a panel data set comprising 539 firms observed over the period 1988-93. The empirical design employs data from company accounts and is based on Chamberlain's conditional binomial logit model, which allows for unobservable, firm-specific, time-invariant factors associated with failure risk. We find a noticeable degree of heterogeneity across the sample companies. Our panel results show that, after controlling for...

  11. Association between Functional Variables and Heart Failure after Myocardial Infarction in Rats

    Energy Technology Data Exchange (ETDEWEB)

    Polegato, Bertha F.; Minicucci, Marcos F.; Azevedo, Paula S.; Gonçalves, Andréa F.; Lima, Aline F.; Martinez, Paula F.; Okoshi, Marina P.; Okoshi, Katashi; Paiva, Sergio A. R.; Zornoff, Leonardo A. M., E-mail: lzornoff@fmb.unesp.br [Faculdade de Medicina de Botucatu - Universidade Estadual Paulista ' Júlio de mesquita Filho' - UNESP Botucatu, SP (Brazil)

    2016-02-15

    Heart failure prediction after acute myocardial infarction may have important clinical implications. The aim was to analyze the functional echocardiographic variables associated with heart failure in an infarction model in rats. The animals were divided into two groups: control and infarction. Subsequently, the infarcted animals were divided into groups with and without heart failure. The predictive values were assessed by logistic regression. The cutoff values predictive of heart failure were determined using ROC curves. Six months after surgery, 88 infarcted animals and 43 control animals were included in the study. Myocardial infarction increased left cavity diameters and the mass and wall thickness of the left ventricle. Additionally, myocardial infarction resulted in systolic and diastolic dysfunction, characterized by lower values of the area variation fraction, posterior wall shortening velocity, and E-wave deceleration time, associated with higher values of the E/A ratio and isovolumic relaxation time adjusted by heart rate. Among the infarcted animals, 54 (61%) developed heart failure. Rats with heart failure had higher left cavity mass index and diameter, associated with worsening of functional variables. The area variation fraction, the E/A ratio, E-wave deceleration time, and isovolumic relaxation time adjusted by heart rate were functional variables predictive of heart failure. The cutoff values associated with heart failure were: area variation fraction < 31.18%; E/A > 3.077; E-wave deceleration time < 42.11; and isovolumic relaxation time adjusted by heart rate < 69.08. In rats followed for 6 months after myocardial infarction, the area variation fraction, E/A ratio, E-wave deceleration time, and isovolumic relaxation time adjusted by heart rate are predictors of heart failure onset.
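
    ROC-derived cutoffs like those quoted are typically chosen by maximizing the Youden index; the sketch below does this for one variable on synthetic stand-in data (the study's raw measurements are not available here).

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(9)

# Synthetic stand-in data: lower "area variation fraction" in the HF group,
# with group sizes matching the abstract (54 HF vs 34 non-HF infarcted rats).
hf = rng.normal(27.0, 4.0, 54)    # rats that developed heart failure
no = rng.normal(36.0, 5.0, 34)    # infarcted rats without heart failure
y = np.concatenate([np.ones(54), np.zeros(34)])
x = np.concatenate([hf, no])

# Negate the variable so that a higher score indicates heart failure, then pick
# the threshold maximizing the Youden index (sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(y, -x)
cut = -thresholds[np.argmax(tpr - fpr)]
print(f"Youden-optimal cutoff: area variation fraction < {cut:.2f}%")
```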

  12. Using the failure mode and effects analysis model to improve parathyroid hormone and adrenocorticotropic hormone testing

    Directory of Open Access Journals (Sweden)

    Magnezi R

    2016-12-01

    Background: Risk management in health care systems applies to all hospital employees and directors, as they deal with human life and emergency routines. There is a constant need to decrease risk and increase patient safety in the hospital environment. The purpose of this article is to review the laboratory testing procedures for parathyroid hormone and adrenocorticotropic hormone (which are characterized by short half-lives), to track failure modes and risks, and to offer solutions to prevent them. During a routine quality improvement review at the Endocrine Laboratory in Tel Hashomer Hospital, we discovered these tests are frequently repeated unnecessarily due to multiple failures. The repetition of the tests inconveniences patients and leads to extra work for the laboratory and logistics personnel as well as the nurses and doctors who have to perform many tasks with limited resources. Methods: A team of eight staff members accompanied by the Head of the Endocrine Laboratory performed the analysis. The failure mode and effects analysis (FMEA) model was used to analyze the laboratory testing procedure and was designed to simplify the process steps and indicate and rank possible failures. Results: A total of 23 failure modes were found within the process, 19 of which were ranked by level of severity. The FMEA model prioritizes failures by their risk priority number (RPN). For example, the most serious failure was the delay after the samples were collected from the department (RPN = 226.1). Conclusion: This model helped us to visualize the process in a simple way. After analyzing the information, solutions were proposed to prevent failures, and a method to completely avoid the top four problems was also developed.
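
    The FMEA prioritization itself is simple arithmetic: each failure mode's risk priority number is the product of its severity, occurrence, and detection scores, and modes are ranked by RPN. The entries below are illustrative (the study's fractional RPNs suggest team scores were averaged).

```python
from dataclasses import dataclass

# RPN = severity * occurrence * detection, each commonly scored on a 1-10 scale
# (10 = most severe, most frequent, hardest to detect). Scores are invented.
@dataclass
class FailureMode:
    name: str
    severity: int
    occurrence: int
    detection: int

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("delay after sample collection", 8, 7, 4),
    FailureMode("sample stored at room temperature", 9, 4, 5),
    FailureMode("mislabeled tube", 10, 2, 3),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name:<38} RPN = {m.rpn}")
```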

  13. The development and application of overheating failure model of FBR steam generator tubes. 3

    International Nuclear Information System (INIS)

    Miyake, Osamu; Hamada, Hirotsugu; Tanabe, Hiromi; Wada, Yusaku; Miyakawa, Akira; Okabe, Ayao; Nakai, Ryodai; Hiroi, Hiroshi

    2002-03-01

    The model has been developed for the assessment of overheating tube failure in an event of a sodium-water reaction accident in fast breeder reactor steam generators (SGs). The model has been applied to the Monju SG studies. Major results obtained in the studies are as follows: 1. To evaluate the structural integrity of the tube material, the strength standard for 2.25Cr-1Mo steel was established taking account of time-dependent effects based on high-temperature (700-1200°C) creep data. This standard has been validated with tube rupture simulation test data. 2. The conditions for overheating by the high temperature reaction were determined using the SWAT-3 experimental data. Realistic local heating conditions (reaction zone temperature and related heat transfer conditions) for the sodium-water reaction were proposed in the form of a cosine-shaped temperature profile. 3. For the cooling effects inside the target tubes, LWR studies of critical heat flux (CHF) and post-CHF heat transfer correlations were examined and incorporated in the model. 4. The model has been validated with experimental data obtained from SWAT-3 and LLTR. The results were satisfactory, with conservatism. The PFR superheater leak event in 1987 was studied, and the cause of the event and the effectiveness of the improvements made after the leak event could be identified by the analysis. 5. The model has been applied to the Monju SG studies. It consequently revealed that no tube failure occurs at 100%, 40%, and 10% water flow operating conditions when an initial leak is detected by the cover gas pressure detection system. (author)

  14. Discordance between 'actual' and 'scheduled' check-in times at a heart failure clinic.

    Science.gov (United States)

    Gorodeski, Eiran Z; Joyce, Emer; Gandesbery, Benjamin T; Blackstone, Eugene H; Taylor, David O; Tang, W H Wilson; Starling, Randall C; Hachamovitch, Rory

    2017-01-01

    A 2015 Institute of Medicine statement, "Transforming Health Care Scheduling and Access: Getting to Now", has increased concerns regarding patient wait times. Although waiting times have been widely studied, little attention has been paid to the role of patient arrival times as a component of this phenomenon. To this end, we investigated patterns of patient arrival at scheduled ambulatory heart failure (HF) clinic appointments and studied their predictors. We hypothesized that patients are more likely to arrive later than scheduled, with progressively later arrivals later in the day. Using a business intelligence database we identified 6,194 unique patients who visited the Cleveland Clinic Main Campus HF clinic between January 2015 and January 2017. This clinic served both as a tertiary referral center and a community HF clinic. Transplant and left ventricular assist device (LVAD) visits were excluded. Punctuality was defined as the difference between 'actual' and 'scheduled' check-in times, whereby negative values (i.e., early punctuality) indicate patients who checked in early. Contrary to our hypothesis, we found that patients checked in late only a minority of the time (38% of visits). Additionally, examining punctuality by appointment hour slot, we found that patients scheduled after 8 AM had progressively earlier check-in times as the day progressed (P < .001 for trend). In both a Random Forest-regression framework and linear regression models, the most important risk-adjusted predictors of early punctuality were: later-in-the-day appointment hour slot, patient having previously been to the hospital, age in the early 70s, and white race. Patients attending a mixed-population ambulatory HF clinic check in earlier than scheduled times, with progressively discrepant intervals throughout the day. This finding may have significant implications for provider utilization and resource planning in order to maximize clinic efficiency. The impact of elective early arrival on...

  15. Comparison of linear-elastic-plastic, and fully plastic failure models in the assessment of piping integrity

    International Nuclear Information System (INIS)

    Streit, R.D.

    1981-01-01

    The failure evaluation of pressurized water reactor (PWR) primary coolant loop pipe is often based on a plastic limit load criterion, i.e., failure occurs when the stress on the pipe section exceeds the material flow stress. However, the piping system must in addition be safe against crack propagation at stresses less than those leading to plastic instability. In this paper, elastic, elastic-plastic, and fully-plastic failure models are evaluated, and the requirements for piping integrity based on these models are compared. The model yielding the more critical criterion for the given geometry and loading conditions defines the appropriate failure criterion. The pipe geometry and loading used in this study were chosen based on an evaluation of a guillotine break in a PWR primary coolant loop. It is assumed that the piping may contain cracks. Since a deep circumferential crack can lead to a guillotine pipe break without prior leaking, and thus without warning, it is the focus of the failure model comparison study. The hot leg pipe, a 29 in. I.D. by 2.5 in. wall thickness stainless steel pipe, was modeled in this investigation. Cracks up to 90% through the wall were considered. The loads considered in this evaluation result from the internal pressure, dead weight, and seismic stresses. For the case considered, the internal pressure contributes the most to the failure loading. The maximum moment stresses due to the dead weight and seismic moments are simply added to the pressure stress. Thus, with the circumferential crack geometry and uniform pressure stress, the problem is axisymmetric. It is analyzed using NIKE2D, an implicit, finite-deformation, finite element code for analyzing two-dimensional elastic-plastic problems. (orig./GL)

  17. Development of failure diagnosis method based on transient information of nuclear power plant

    International Nuclear Information System (INIS)

    Washio, Takashi; Kitamura, Masaharu; Sugiyama, Kazusuke

    1987-01-01

    This paper proposes a new method of failure diagnosis for nuclear power plants (NPPs). The transient behavior of an NPP contains ample failure information, even for a limited period of time from the failure onset. We aimed to develop a diagnosis system with a high capability of identifying the failure cause and of estimating failure severity. The Walsh function transformation of transient time-series data and the reduction of the Walsh coefficients into ternary-valued amplitude indicators were utilized to extract the essential characteristics of a failure. The correspondences between transient characteristics and causes were summarized in a failure symptom database. A method of ternary tree search using an information measure as a heuristic strategy was adopted for efficient retrieval of failure causes from the database. Through numerical experiments using a simulation model of an NPP, the diagnostic capability of the system was proved to be satisfactory. (author)
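
    The two signal-processing steps the method relies on, a Walsh(-Hadamard) transform of the transient followed by reduction of the coefficients to ternary amplitude indicators, can be sketched as follows; the threshold rule and the toy transient are invented for illustration:

```python
# Sketch of the two steps: Walsh-Hadamard transform of a transient,
# then reduction of the coefficients to ternary {-1, 0, +1} indicators.
# The thresholding rule and toy signal are illustrative only.
import numpy as np
from scipy.linalg import hadamard

def ternary_walsh_signature(series, threshold=0.2):
    n = 1 << int(np.log2(len(series)))   # truncate to a power of two
    x = np.asarray(series[:n], dtype=float)
    coeffs = hadamard(n) @ x / n         # Walsh-Hadamard coefficients
    scale = max(np.max(np.abs(coeffs)), 1e-12)
    # Map each coefficient to -1, 0, +1 relative to the dominant one.
    return np.sign(coeffs) * (np.abs(coeffs) > threshold * scale)

t = np.linspace(0, 1, 64)
transient = np.exp(-3 * t) * np.sin(8 * np.pi * t)   # toy failure transient
print(ternary_walsh_signature(transient))
```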

  18. The Impact of Preradiation Residual Disease Volume on Time to Locoregional Failure in Cutaneous Merkel Cell Carcinoma—A TROG Substudy

    Energy Technology Data Exchange (ETDEWEB)

    Finnigan, Renee [Division of Cancer Services, Princess Alexandra Hospital, University of Queensland, Brisbane (Australia); Hruby, George [Department of Radiation Oncology, Sydney Cancer Centre, University of Sydney, Sydney (Australia); Wratten, Chris [Calvary Mater Newcastle Hospital, Newcastle (Australia); Keller, Jacqui; Tripcony, Lee; Dickie, Graeme [Cancer Care Services, Royal Brisbane and Women's Hospital, Brisbane (Australia); Rischin, Danny [Department of Medical Oncology, Peter MacCallum Cancer Centre, University of Melbourne, Melbourne (Australia); Poulsen, Michael, E-mail: michael_poulsen@health.qld.gov.au [Division of Cancer Services, Princess Alexandra Hospital, University of Queensland, Brisbane (Australia)

    2013-05-01

    Purpose: This study evaluated the impact of margin status and gross residual disease in patients treated with chemoradiation therapy for high-risk stage I and II Merkel cell cancer (MCC). Methods and Materials: Data were pooled from 3 prospective trials in which patients were treated with 50 Gy in 25 fractions to the primary lesion and draining lymph nodes, and with 2 schedules of carboplatin-based chemotherapy. Time to locoregional failure was analyzed according to the burden of disease at the time of radiation therapy, comparing patients with negative margins, involved margins, or macroscopic disease. Results: Analysis was performed on 88 patients, of whom 9 had microscopically positive resection margins and 26 had macroscopic residual disease. The majority of gross disease was confined to nodal regions. The 5-year time to locoregional failure, time to distant failure, time to progression, and disease-specific survival rates for the whole group were 73%, 69%, 62%, and 66%, respectively. The hazard ratio for macroscopic disease at the primary site or the nodes was 1.25 (95% confidence interval 0.57-2.77), P=.58. Conclusions: No statistically significant differences in time to locoregional failure were identified between patients with negative margins and those with microscopic or gross residual disease. These results must, however, be interpreted with caution because of the limited sample size.

  19. Failure Mechanism of Rock Bridge Based on Acoustic Emission Technique

    Directory of Open Access Journals (Sweden)

    Guoqing Chen

    2015-01-01

    Acoustic emission (AE) is widely used in various fields as a reliable nondestructive examination technology. Two experimental tests were carried out in a rock mechanics laboratory: (1) small-scale direct shear tests of rock bridges with different lengths and (2) a large-scale landslide model with a locked section. The relationship between AE event count and record time was analyzed during the tests. AE source location was performed, together with a comparative analysis against the actual failure mode. In both the small-scale tests and the large-scale landslide model test, the AE technique accurately located the AE source points, reflecting the generation and expansion of internal cracks in the rock samples. The large-scale landslide model test with a locked section showed that a rock bridge in a rocky slope has typical brittle failure behavior. The two tests based on the AE technique well revealed the rock failure mechanism in rocky slopes and clarified the cause of high-speed, long-distance sliding of rocky slopes.

  20. Remaining useful life estimation for deteriorating systems with time-varying operational conditions and condition-specific failure zones

    Directory of Open Access Journals (Sweden)

    Li Qi

    2016-06-01

    Dynamic time-varying operational conditions pose a great challenge to the estimation of system remaining useful life (RUL) for deteriorating systems. This paper presents a method based on probabilistic and stochastic approaches to estimate system RUL for periodically monitored degradation processes with dynamic time-varying operational conditions and condition-specific failure zones. The method assumes that the degradation rate is influenced by the specific operational condition and, moreover, that the transitions between different operational conditions play the most important role in affecting the degradation process. These operational conditions are assumed to evolve as a discrete-time Markov chain (DTMC). The failure thresholds are also determined by specific operational conditions and are described as different failure zones. The 2008 PHM Conference Challenge Data, which contains rich sensory signals related to the degradation process of a commercial turbofan engine, is utilized to illustrate the method. An RUL estimation method using the measurements of a single sensor was developed first, and then multiple vital sensors were selected through a particular optimization procedure to increase the prediction accuracy. The effectiveness and advantages of the proposed method are presented in a comparison with existing methods for the same dataset.
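
    A minimal sketch of the degradation setup the paper describes, with operational conditions evolving as a DTMC and each condition carrying its own degradation rate and failure zone, might look like this (all transition probabilities, rates, and thresholds are invented):

```python
# Minimal sketch (invented rates/thresholds) of the described setup:
# operational conditions evolving as a discrete-time Markov chain, each
# condition with its own degradation rate and its own failure threshold
# ("condition-specific failure zone").
import numpy as np

P = np.array([[0.90, 0.10],      # DTMC transition matrix
              [0.05, 0.95]])     # 2 operational conditions
rate = np.array([0.8, 2.0])      # mean degradation increment per step
zone = np.array([100.0, 80.0])   # failure threshold in each condition

def simulate_rul(n_runs=2_000, seed=1):
    rng = np.random.default_rng(seed)
    lifetimes = []
    for _ in range(n_runs):
        cond, x, t = 0, 0.0, 0
        while x < zone[cond]:
            x += rng.gamma(shape=2.0, scale=rate[cond] / 2.0)
            cond = rng.choice(2, p=P[cond])
            t += 1
        lifetimes.append(t)
    return np.array(lifetimes)

life = simulate_rul()
print(f"mean lifetime ~ {life.mean():.1f} steps, "
      f"P10/P90 = {np.percentile(life, [10, 90])}")
```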

  1. Accelerated reliability demonstration under competing failure modes

    International Nuclear Information System (INIS)

    Luo, Wei; Zhang, Chun-hua; Chen, Xun; Tan, Yuan-yuan

    2015-01-01

    The conventional reliability demonstration tests are difficult to apply to products with competing failure modes due to the complexity of the lifetime models. This paper develops a testing methodology based on reliability target allocation for reliability demonstration under competing failure modes at accelerated conditions. The specified reliability at mission time and the risk caused by sampling of the reliability target for products are allocated for each failure mode. The risk caused by degradation measurement fitting of the target for a product involving performance degradation is equally allocated to each degradation failure mode. According to the allocated targets, the accelerated life reliability demonstration test (ALRDT) plans for the failure modes are designed. The accelerated degradation reliability demonstration test plans and the associated ALRDT plans for the degradation failure modes are also designed. Next, the test plan and the decision rules for the products are designed. Additionally, the effects of the discreteness of sample size and accepted number of failures for failure modes on the actual risks caused by sampling for the products are investigated.
    Highlights:
    • Accelerated reliability demonstration under competing failure modes is studied.
    • The method is based on the reliability target allocation involving the risks.
    • The test plan for the products is based on the plans for all the failure modes.
    • Both failure modes and degradation failure modes are considered.
    • The error of actual risks caused by sampling for the products is small enough.
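
    A standard building block behind such demonstration test plans (a textbook result, not the paper's allocation scheme) is the zero-failure "success run" sample size: the smallest n demonstrating reliability R at mission time with consumer's risk beta satisfies R^n <= beta:

```python
# Success-run (zero-failure) demonstration test: smallest sample size n
# such that observing n successes demonstrates reliability R with
# consumer's risk beta, i.e. R**n <= beta.
import math

def success_run_sample_size(R, beta):
    return math.ceil(math.log(beta) / math.log(R))

# e.g. demonstrate R = 0.95 at mission time with 10% consumer's risk
print(success_run_sample_size(0.95, 0.10))   # -> 45
```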

  2. Development of simplified 1D and 2D models for studying a PWR lower head failure under severe accident conditions

    International Nuclear Information System (INIS)

    Koundy, V.; Dupas, J.; Bonneville, H.; Cormeau, I.

    2005-01-01

    In the study of severe accidents in nuclear pressurized water reactors, scenarios that describe the relocation of significant quantities of liquid corium to the bottom of the lower head are investigated from the mechanical point of view. In these scenarios, there is a risk of a breach and the possibility of a large quantity of corium being released from the lower head. This may lead to direct containment heating or an ex-vessel steam explosion. These issues are important due to their early containment failure potential. Since the TMI-2 accident, many theoretical and experimental investigations relating to lower head mechanical behaviour under severe thermo-mechanical loading in the event of a core meltdown accident have been performed. IRSN participated actively in the one-fifth-scale USNRC/SNL LHF and OECD LHF (OLHF) programs. Within the framework of these programs, two simplified models were developed by IRSN: the first is a simplified 1D approach based on the theory of pressurized spherical shells, and the second is a simplified 2D model based on the theory of shells of revolution under symmetric loading. The mathematical formulation of both models and the creep constitutive equations used are presented in detail in this paper. The models were used to interpret some of the OLHF program experiments, and the calculation results were quite consistent with the experimental data. The two simplified models have been used to simulate the thermo-mechanical behaviour of a 900 MWe pressurized water reactor lower head under severe accident conditions leading to failure. The average transient heat flux produced by the corium relocated at the bottom of the lower head has been determined using the IRSN HARAR code. Two different methods, both taking into account the ablation of the internal surface, are used to determine the temperature profiles across the lower head wall, and their effect on the time to failure is discussed. Using these simplified models

  3. Reliability modeling of a hard real-time system using the path-space approach

    International Nuclear Information System (INIS)

    Kim, Hagbae

    2000-01-01

    A hard real-time system, such as a fly-by-wire system, fails catastrophically (e.g., losing stability) if its control inputs are not updated by its digital controller computer within a certain timing constraint called the hard deadline. To assess and validate such systems' reliabilities using a semi-Markov model that explicitly contains the deadline information, we propose a path-space approach deriving the upper and lower bounds of the probability of system failure. These bounds are derived using only simple parameters, and they are especially suitable for highly reliable systems which should recover quickly. Analytical bounds are derived for both exponential and Weibull failure distributions, which are commonly encountered, and have proven effective through numerical examples, while considering three repair strategies: repair-as-good-as-new, repair-as-good-as-old, and repair-better-than-old.
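
    A toy illustration of why the deadline matters (all parameters invented): a fault becomes catastrophic when the recovery takes longer than the hard deadline, and that probability differs between the exponential and Weibull models the paper derives bounds for:

```python
# Toy illustration (invented parameters): a fault is catastrophic if the
# recovery takes longer than the hard deadline, and that probability
# differs between exponential and Weibull recovery-time models.
from math import gamma
from scipy import stats

deadline = 0.05          # hard deadline, seconds
mean_recovery = 0.02     # mean recovery time, seconds

p_exp = stats.expon(scale=mean_recovery).sf(deadline)

k = 2.0                                   # Weibull shape
lam = mean_recovery / gamma(1 + 1 / k)    # scale giving the same mean
p_wbl = stats.weibull_min(k, scale=lam).sf(deadline)

print(f"P(recovery > deadline): exponential {p_exp:.3e}, Weibull {p_wbl:.3e}")
```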

  4. USGS approach to real-time estimation of earthquake-triggered ground failure - Results of 2015 workshop

    Science.gov (United States)

    Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.

    2016-03-30

    The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.

  5. Tuning critical failure with viscoelasticity: How aftershocks inhibit criticality in an analytical mean field model of fracture.

    Science.gov (United States)

    Baro Urbea, J.; Davidsen, J.

    2017-12-01

    The hypothesis of critical failure relates the presence of an ultimate stability point in the structural constitutive equation of materials to a divergence of characteristic scales in the microscopic dynamics responsible for deformation. Avalanche models involving critical failure have determined universality classes in different systems: from slip events in crystalline and amorphous materials to the jamming of granular media or the fracture of brittle materials. However, not all empirical failure processes exhibit the trademarks of critical failure. As an example, the statistical properties of ultrasonic acoustic events recorded during the failure of porous brittle materials are stationary, except for variations in the activity rate that can be interpreted in terms of aftershock and foreshock activity (J. Baró et al., PRL 2013). The rheological properties of materials introduce dissipation, usually reproduced in atomistic models as a hardening of the coarse-grained elements of the system. If the hardening is associated with a relaxation process, the same mechanism is able to generate temporal correlations. We report the analytic solution of a mean field fracture model exemplifying how criticality and temporal correlations are tuned by transient hardening. We provide a physical meaning to the conceptual model by deriving the constitutive equation from the explicit representation of the transient hardening in terms of a generalized viscoelasticity model. The rate of 'aftershocks' is controlled by the temporal evolution of the viscoelastic creep. In the quasistatic limit, the moment release is invariant to rheology. Therefore, the lack of criticality is explained by the increase of the activity rate close to failure, i.e., 'foreshocks'. Finally, the avalanche propagation can be reinterpreted as a pure mathematical problem in terms of a stochastic counting process. The statistical properties depend only on the distance to a critical point, which is universal for any

  6. Component failure data base of TRIGA reactors

    International Nuclear Information System (INIS)

    Djuricic, M.

    2004-10-01

    This compilation provides failure data, such as first criticality and component type description (reactor component, population, cumulative calendar time, cumulative operating time, demands, failure mode, failures, failure rate, failure probability), as well as specific information on each type of component of TRIGA Mark-II reactors in Austria, Bangladesh, Germany, Finland, Indonesia, Italy, Slovenia and Romania. (nevyjel)
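
    The two point estimates such component tables typically report can be reproduced directly from the tabulated counts; the component record below is invented for illustration:

```python
# The two point estimates such component records support, computed from
# an invented example record (counts are illustrative only).
failures = 3
operating_hours = 125_000.0   # cumulative operating time
demands = 4_200               # cumulative demands

failure_rate = failures / operating_hours   # failures per operating hour
failure_prob = failures / demands           # failures per demand
print(f"failure rate = {failure_rate:.2e}/h, "
      f"per-demand probability = {failure_prob:.2e}")
```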

  7. A phenomenological variational multiscale constitutive model for intergranular failure in nanocrystalline materials

    KAUST Repository

    Siddiq, A.; El Sayed, Tamer S.

    2013-01-01

    We present a variational multiscale constitutive model that accounts for intergranular failure in nanocrystalline fcc metals due to void growth and coalescence in the grain boundary region. Following previous work by the authors, a nanocrystalline

  8. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2011-01-01

    Although there have been many studies on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-inhabited by humans. The reliability of localization is highly dependent upon the developer's experience, because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, the semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly present the usefulness of the proposed solutions.

  9. Observation Likelihood Model Design and Failure Recovery Scheme Toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2010-12-01

    Although there have been many studies on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-inhabited by humans. The reliability of localization is highly dependent upon the developer's experience, because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, the semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly present the usefulness of the proposed solutions.
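
    For the first of the three questions, one classic way to design a range-sensor observation likelihood is the four-component beam model from probabilistic robotics; the mixture weights and noise parameters below are illustrative, and this is not necessarily the authors' exact model:

```python
# Classic four-component beam model for a range-sensor observation
# likelihood (hit/short/random/max); weights and noise are illustrative.
import numpy as np

def beam_likelihood(z, z_expected, z_max=10.0,
                    w=(0.7, 0.1, 0.1, 0.1), sigma=0.1, lam=1.0):
    w_hit, w_short, w_rand, w_max = w
    # hit: Gaussian around the expected range
    p_hit = np.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / (
        sigma * np.sqrt(2 * np.pi))
    # short: unexpected obstacles in front of the expected range
    p_short = lam * np.exp(-lam * z) if z < z_expected else 0.0
    # rand: uniform clutter; max: sensor max-range failures
    p_rand = 1.0 / z_max
    p_max = 1.0 if np.isclose(z, z_max) else 0.0
    return w_hit * p_hit + w_short * p_short + w_rand * p_rand + w_max * p_max

print(beam_likelihood(z=4.1, z_expected=4.0))
```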

  10. Failure time series prediction in industrial maintenance using neural networks; Previsao de series temporais de falhas em manutencao industrial usando redes neurais

    Energy Technology Data Exchange (ETDEWEB)

    Torres Junior, Rubiao G.; Machado, Maria Augusta S. [Instituto Brasileiro de Mercado de Capitais (IBMEC), Rio de Janeiro, RJ (Brazil); Souza, Reinaldo C. [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2005-07-01

    The objective of this work is the application of two failure prediction models in industrial maintenance with the use of Artificial Neural Networks (ANN). A characteristic of the modern industrial environment is strong competition, which leads companies to search for cost minimization methods. Data gathering and maintenance data treatment thus become extremely important in this scenario, as they reflect the real repair needs of the equipment and plant systems. The objective therefore becomes keeping the system fully active in a continuous manner, in the required period, without problems in its integrated parts. A daily time series is modeled based on maintenance intervention pause data from a five-year period, derived from many production systems in the finalization areas of PETROFLEX Ind. and Com. S.A. The purpose is to introduce models based on neural networks and verify their capacity to predict system pauses, so as to intervene with adequate timing before the system fails, extend the operational period and consequently increase its availability. The results obtained in this work demonstrate the applicability of neural networks to the prediction of pauses in PETROFLEX industrial area maintenance. The ANN's prediction capacity on data with a strong non-linear component, where other statistical techniques have shown little efficiency, has also been confirmed. Discovering neural models to predict failure time series has enabled a breakthrough in this research field, especially given market demand. It is without doubt a technique that will evolve in the industrial maintenance area, supporting important management decisions. Prediction techniques such as the ones illustrated in this study work side by side with maintenance planning and, if carefully implemented and followed up, can in the medium run supply a substantial increase in the available operational hours. (author)
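
    The general approach, turning a daily series into lagged inputs for a neural network, can be sketched as follows; the synthetic series, window length, and network architecture are invented, not the PETROFLEX study's:

```python
# Hedged sketch: daily series -> lagged inputs -> neural network forecast.
# The synthetic data and architecture are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
days = np.arange(730)
# synthetic daily count of maintenance pauses with weekly seasonality
series = 5 + 2 * np.sin(2 * np.pi * days / 7) + rng.poisson(1.0, days.size)

LAGS = 14
X = np.array([series[i:i + LAGS] for i in range(series.size - LAGS)])
y = series[LAGS:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"test MAE = {mae:.2f} pauses/day")
```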

  11. Beneficial aspects of real time flow measurements for the management of acute right ventricular heart failure following continuous flow ventricular assist device implantation

    Directory of Open Access Journals (Sweden)

    Spiliopoulos Sotirios

    2012-11-01

    Background: Optimal management of acute right heart failure following the implantation of a left ventricular assist device requires a reliable estimation of left ventricular preload and contractility. This is possible through real-time pump blood flow measurements. Clinical case: We implanted a continuous-flow left ventricular assist device in a 66-year-old female patient with end-stage heart failure due to dilated cardiomyopathy. Real-time pump blood flow was directly measured by an ultrasonic flow probe placed around the outflow graft. Diagnosis: The progressive decline of real-time flow and the loss of pulsatility were associated with an increase in central venous pressure, inotropic therapy, and progressive renal failure, suggesting the presence of acute right heart failure. The diagnosis was validated by echocardiography and thermodilution measurements. Treatment: Temporary mechanical circulatory support of the right ventricle was successfully performed. Real-time flow measurement proved to be a useful tool for the diagnosis and, ultimately, for the management of right heart failure, including weaning from extracorporeal membrane oxygenation.

  12. Nurses' decision making in heart failure management based on heart failure certification status.

    Science.gov (United States)

    Albert, Nancy M; Bena, James F; Buxbaum, Denise; Martensen, Linda; Morrison, Shannon L; Prasun, Marilyn A; Stamp, Kelly D

    Research findings on the value of nurse certification have been based on subjective perceptions or biased by correlations of certification status and global clinical factors; in heart failure, the value of certification is unknown. Objective: to examine the value of certification based on nurses' decision-making. Cross-sectional study of nurses who completed heart failure clinical vignettes that reflected decision-making in clinical heart failure scenarios. Statistical tests included multivariable linear, logistic and proportional odds logistic regression models. Of the nurses (N = 605), 29.1% were heart failure certified, 35.0% were certified in another specialty/job role, and 35.9% were not certified. In multivariable modeling, nurses certified in heart failure (versus not heart failure certified) had higher clinical vignette scores (p = 0.002), reflecting more evidence-based decision-making; nurses with another specialty/role certification (versus no certification) did not (p = 0.62). Heart failure certification, but not certification in other specialty/job roles, was associated with decisions that reflected delivery of high-quality care. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Periodic imperfect preventive maintenance with two categories of competing failure modes

    Energy Technology Data Exchange (ETDEWEB)

    Zequeira, R.I. [ISTIT FRE CNRS 2732-Equipe LM2S, Universite de Technologie de Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes (France)]. E-mail: romulo.zequeira@utt.fr; Berenguer, C. [ISTIT FRE CNRS 2732-Equipe LM2S, Universite de Technologie de Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes (France)]. E-mail: christophe.berenguer@utt.fr

    2006-04-15

    A maintenance policy is studied for a system with two types of failure modes: maintainable and non-maintainable. The quality of maintenance actions is modelled by its effect on the system failure rate. Preventive maintenance actions restore the system to a condition between as good as new and as bad as immediately before the maintenance action. The model presented permits studying the equipment condition improvement (improvement factor) as a function of the time of the preventive maintenance action. The determination of the maintenance policy which minimizes the cost rate over an infinite time span is examined. Conditions are given under which a unique optimal policy exists.

  14. Periodic imperfect preventive maintenance with two categories of competing failure modes

    International Nuclear Information System (INIS)

    Zequeira, R.I.; Berenguer, C.

    2006-01-01

    A maintenance policy is studied for a system with two types of failure modes: maintainable and non-maintainable. The quality of maintenance actions is modelled by its effect on the system failure rate. Preventive maintenance actions restore the system to a condition between as good as new and as bad as immediately before the maintenance action. The model presented permits studying the equipment condition improvement (improvement factor) as a function of the time of the preventive maintenance action. The determination of the maintenance policy which minimizes the cost rate over an infinite time span is examined. Conditions are given under which a unique optimal policy exists.
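
    A numerical sketch of the kind of optimization this model enables (all hazard rates, costs, and the improvement factor are invented): choose the preventive maintenance period T minimizing the long-run cost rate when PM rejuvenates only the maintainable failure mode:

```python
# Invented-parameter sketch: optimize the PM period T when PM partially
# rejuvenates only the maintainable failure mode; failures are fixed by
# minimal repair (expected repairs = integral of the hazard rate).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

h_m = lambda t: 0.002 * t        # maintainable-mode hazard rate
h_n = lambda t: 0.0005 * t       # non-maintainable mode, unaffected by PM
rho = 0.6                        # improvement factor: PM removes 60% of
                                 # the maintainable mode's accumulated age
C_pm, C_min = 1.0, 5.0           # PM cost and minimal-repair cost

def cost_rate(T, horizon_cycles=50):
    cost, age_m, t_abs = 0.0, 0.0, 0.0
    for _ in range(horizon_cycles):
        n_m = quad(h_m, age_m, age_m + T)[0]   # expected minimal repairs
        n_n = quad(h_n, t_abs, t_abs + T)[0]
        cost += C_min * (n_m + n_n) + C_pm
        age_m = (1 - rho) * (age_m + T)        # imperfect PM rejuvenation
        t_abs += T
    return cost / (horizon_cycles * T)

res = minimize_scalar(cost_rate, bounds=(5.0, 200.0), method="bounded")
print(f"optimal PM period ~ {res.x:.1f}, cost rate {res.fun:.4f}")
```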

  15. An attempt to calibrate and validate a simple ductile failure model against axial-torsion experiments on Al 6061-T651

    Energy Technology Data Exchange (ETDEWEB)

    Reedlunn, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Wei -Yang [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-01-01

    This report details a work in progress. We have attempted to calibrate and validate a von Mises plasticity model with the Johnson-Cook failure criterion (Johnson & Cook, 1985) against a set of experiments on various specimens of Al 6061-T651. As will be shown, the effort was not successful, despite considerable attention to detail. When the model was compared against axial-torsion experiments on tubes, it overpredicted failure by 3× in tension, and never predicted failure in torsion, even when the tube was twisted 4× further than in the experiment. While this result is unfortunate, it is not surprising: ductile failure is not well understood. In future work, we will explore whether more sophisticated material models of plasticity and failure improve the predictions. Selecting the appropriate advanced material model and interpreting its results are not trivial exercises, so it is worthwhile to fully investigate the behavior of a simple plasticity model before moving on to an anisotropic yield surface or a similarly complicated model.
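
    For reference, the Johnson-Cook failure criterion named above has the standard form eps_f = [D1 + D2 exp(D3 sigma*)] [1 + D4 ln(strain-rate ratio)] [1 + D5 T*], with damage accumulated as D = sum(d_eps / eps_f); the sketch below uses placeholder parameters, not a calibrated Al 6061-T651 set:

```python
# Standard Johnson-Cook failure strain as a function of stress
# triaxiality, strain rate, and homologous temperature. The D-parameter
# values are placeholders, not a calibrated Al 6061-T651 set.
import math

def jc_failure_strain(triax, eps_rate=1.0, T_star=0.0,
                      D=(0.07, 0.45, -1.0, 0.01, 0.0), eps_rate0=1.0):
    D1, D2, D3, D4, D5 = D
    return ((D1 + D2 * math.exp(D3 * triax))
            * (1.0 + D4 * math.log(max(eps_rate / eps_rate0, 1e-12)))
            * (1.0 + D5 * T_star))

# tension (triaxiality ~ 1/3) fails at lower strain than shear (~ 0)
print(jc_failure_strain(1.0 / 3.0), jc_failure_strain(0.0))
```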

  16. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    Science.gov (United States)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.

  17. Survival analysis with covariates in combination with multinomial analysis to parametrize time to event for multi-state models

    NARCIS (Netherlands)

    Feenstra, T.L.; Postmus, D.; Quik, E.H.; Langendijk, H.; Krabbe, P.F.M.

    Objectives: Recent ISPOR good practice guidelines, as well as the literature, encourage the use of a single distribution rather than the latent-failure approach to model time to event for patient-level simulation models with multiple competing outcomes. The aim was to apply the preferred method of a single

  18. Survival analysis with covariates in combination with multinomial analysis to parametrize time to event for multi-state models

    NARCIS (Netherlands)

    Feenstra, T.L.; Postmus, D.; Quik, E.H.; Langendijk, H.; Krabbe, P.F.M.

    2013-01-01

    Objectives: Recent ISPOR Good practice guidelines as well as literature encourage to use a single distribution rather than the latent failure approach to model time to event for patient level simulation models with multiple competing outcomes. Aim was to apply the preferred method of a single

  19. Control of a maintenance system when failure and repair times have phase type distributions

    Science.gov (United States)

    De Cassia Meneses Rodrigues, Rita

    1990-09-01

    In the machine repair model discussed, there are M + R identical machines: M operating and R spares. All machines are independent of one another. When an operating machine fails, it is sent to a single-server repair station and immediately replaced by a spare machine, if one is available. The server has two available service types to choose from. There are waiting costs, repair costs, lost production costs, and switch-over costs. The following decision problem is treated: obtain a stationary policy which determines the service type as a function of the state of the system in order to minimize the long-run average cost when failure and repair times have second-order Coxian distributions. This control problem is represented by a semi-Markov decision process. The policy-iteration algorithm and the value-iteration algorithm are used to obtain the optimal policy. Numerical results are given for these two optimization methods.
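
    A toy value-iteration sketch in the spirit of this control problem (a simplified uniformized discrete-time version with exponential rather than Coxian times; all rates and costs are invented):

```python
# Toy value iteration for choosing a repair type per state; a simplified
# uniformized, discounted version of the control problem (exponential
# times instead of Coxian; invented rates and costs).
import numpy as np

M, R = 5, 3                      # operating machines and spares
S = M + R + 1                    # states: 0..M+R failed machines
lam = 0.1                        # per-machine failure rate
mu = {0: 0.5, 1: 1.0}            # repair rate of service type 0 / 1
c_repair = {0: 1.0, 1: 3.0}      # cost rate of each service type
c_lost = 10.0                    # lost-production cost per short machine

def uniformized_value_iteration(beta=0.99, iters=5000):
    Lambda = M * lam + max(mu.values())        # uniformization constant
    V = np.zeros(S)
    policy = np.zeros(S, dtype=int)
    for _ in range(iters):
        Vn = np.empty(S)
        for s in range(S):
            up = min(M, M + R - s)             # machines currently running
            best = None
            for a in (0, 1):
                cost = c_repair[a] * (s > 0) + c_lost * (M - up)
                rate_f = up * lam              # a failure occurs
                rate_r = mu[a] if s > 0 else 0.0
                stay = Lambda - rate_f - rate_r
                q = cost + beta * (rate_f * V[min(s + 1, S - 1)]
                                   + rate_r * V[s - 1 if s else 0]
                                   + stay * V[s]) / Lambda
                if best is None or q < best:
                    best, policy[s] = q, a
            Vn[s] = best
        V = Vn
    return policy

print("repair type by number of failed machines:",
      uniformized_value_iteration())
```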

  20. Failure Models and Criteria for FRP Under In-Plane or Three-Dimensional Stress States Including Shear Non-Linearity

    Science.gov (United States)

    Pinho, Silvestre T.; Davila, C. G.; Camanho, P. P.; Iannucci, L.; Robinson, P.

    2005-01-01

    A set of three-dimensional failure criteria for laminated fiber-reinforced composites, denoted LaRC04, is proposed. The criteria are based on physical models for each failure mode and take into consideration non-linear matrix shear behaviour. The model for matrix compressive failure is based on the Mohr-Coulomb criterion and it predicts the fracture angle. Fiber kinking is triggered by an initial fiber misalignment angle and by the rotation of the fibers during compressive loading. The plane of fiber kinking is predicted by the model. LaRC04 consists of 6 expressions that can be used directly for design purposes. Several applications involving a broad range of load combinations are presented and compared to experimental data and other existing criteria. Predictions using LaRC04 correlate well with the experimental data, arguably better than most existing criteria. The good correlation seems to be attributable to the physical soundness of the underlying failure models.
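
    The Mohr-Coulomb idea behind the matrix-compression model can be illustrated generically (this is a sketch of the concept, not the full LaRC04 expressions; strengths and friction coefficients are invented): scan candidate fracture planes and fail when the friction-corrected shear tractions exceed the shear strengths:

```python
# Generic Mohr-Coulomb sketch of matrix compressive failure: scan
# candidate fracture planes; compression on a plane adds frictional
# resistance to its shear tractions. Not the full LaRC04 expressions;
# material constants are invented.
import numpy as np

def matrix_compression_FI(sigma2, tau12, S_T=50.0, S_L=75.0,
                          eta_T=0.3, eta_L=0.3):
    alphas = np.radians(np.arange(0, 91))            # candidate plane angles
    s_n = sigma2 * np.cos(alphas) ** 2               # normal traction
    t_T = -sigma2 * np.sin(alphas) * np.cos(alphas)  # transverse shear
    t_L = tau12 * np.cos(alphas)                     # longitudinal shear
    fi = (np.maximum(np.abs(t_T) + eta_T * s_n, 0) / S_T) ** 2 \
       + (np.maximum(np.abs(t_L) + eta_L * s_n, 0) / S_L) ** 2
    i = np.argmax(fi)
    return fi[i], np.degrees(alphas[i])              # index, fracture angle

fi, angle = matrix_compression_FI(sigma2=-120.0, tau12=20.0)
print(f"failure index {fi:.2f} on plane at {angle:.0f} deg")
```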

  1. Applying the Seattle Heart Failure Model in the Office Setting in the Era of Electronic Medical Records.

    Science.gov (United States)

    Williams, Brent A; Agarwal, Shikhar

    2018-02-23

    Prediction models such as the Seattle Heart Failure Model (SHFM) can help guide management of heart failure (HF) patients, but the SHFM has not been validated in the office environment. This retrospective cohort study assessed the predictive performance of the SHFM among patients with new or pre-existing HF in the context of an office visit. Methods and Results: SHFM elements were ascertained through electronic medical records at an office visit. The primary outcome was all-cause mortality. A "warranty period" for the baseline SHFM risk estimate was sought by examining predictive performance over time through a series of landmark analyses. Discrimination and calibration were estimated according to the proposed warranty period. Low- and high-risk thresholds were proposed based on the distribution of SHFM estimates. Among 26,851 HF patients, 14,380 (54%) died over a mean 4.7-year follow-up period. The SHFM lost predictive performance over time, with C = 0.69 within 3 months of baseline and C < 0.65 beyond 12 months. The diminishing predictive value was attributed to modifiable SHFM elements. Discrimination (C = 0.66) and calibration for 12-month mortality were acceptable. A low-risk threshold of ∼5% mortality risk within 12 months reflects the 10% of HF patients in the office setting with the lowest risk. The SHFM has utility in the office environment.

  2. Non-integrated electricity suppliers: the failure of an organisational model

    International Nuclear Information System (INIS)

    Boroumand, R.H.

    2009-01-01

    In the reference model of market liberalization, the canonical business model is the pure electricity retailer. But bankruptcies, mergers, and vertical integration are indicative of the failure of this organizational model and of its incapacity to efficiently manage the combination of sourcing and market risks in a setting of fierce price competition. Because of the structural dimension of electricity's volume risk, a supplier's level of risk exposure is unknown ex ante and is only revealed ex post, when consumption is known. Sourcing and selling portfolios of hedging contracts are incomplete risk management tools. Consequently, physical hedging is an essential complement to portfolios of contracts to overcome the pure supplier's curse. (author)

  3. A probabilistic model for estimating the waiting time until the simultaneous collapse of two contingencies

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1991-06-01

    The Double Contingency Principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative, intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive to search for a quantitative, probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies as a function of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess effectiveness of the DCP. 3 refs., 1 fig

  4. A probabilistic model for estimating the waiting time until the simultaneous collapse of two contingencies

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1991-01-01

    The Double Contingency Principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative, intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive to search for a quantitative, probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies as a function of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess effectiveness of the DCP. (Author)

  5. A probabilistic model for estimating the waiting time until the simultaneous collapse of two contingencies

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1992-01-01

    The double contingency principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative and intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive for a search for a quantitative and probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies, as functions of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess the effectiveness of the DCP. (Author)
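
    The Markov model described in these records can be implemented directly: with exponential failure and recovery rates for two independent contingencies, first-step analysis gives the mean time until both are failed simultaneously (the rates below are example values):

```python
# Markov model of two independent contingencies with exponential failure
# rates lam and recovery rates mu; solve for the mean time until both are
# failed simultaneously, starting from both intact. Rates are examples.
import numpy as np

def mean_time_to_double_failure(lam1, mu1, lam2, mu2):
    # transient states: 0 = both up, 1 = only #1 down, 2 = only #2 down
    Q = np.array([
        [-(lam1 + lam2), lam1,           lam2          ],
        [mu1,            -(mu1 + lam2),  0.0           ],
        [mu2,            0.0,            -(mu2 + lam1) ],
    ])
    # expected absorption times m solve Q m = -1 (first-step analysis)
    return np.linalg.solve(Q, -np.ones(3))[0]

# e.g. contingencies failing ~ once/year, recovered in ~ 1 day (hours)
lam, mu = 1 / 8760.0, 1 / 24.0
print(f"mean time to simultaneous collapse ~ "
      f"{mean_time_to_double_failure(lam, mu, lam, mu):.3e} hours")
```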

  6. Establishment of a rat model of early-stage liver failure and Th17/Treg imbalance

    OpenAIRE

    LI Dong; LU Zhonghua; GAN Jianhe

    2016-01-01

    Objective: To investigate the methods for establishing a rat model of early-stage liver failure and the changes in Th17, Treg, and Th17/Treg after dexamethasone and thymosin interventions. Methods: A total of 64 rats were randomly divided into a carbon tetrachloride (CCl4) group and an endotoxin [lipopolysaccharide (LPS)]/D-galactosamine (D-GalN) combination group to establish the rat model of early-stage liver failure. The activities of the rats and changes in liver function and whole blood Th17 and ...

  7. Risk stratification in middle-aged patients with congestive heart failure: prospective comparison of the Heart Failure Survival Score (HFSS) and a simplified two-variable model.

    Science.gov (United States)

    Zugck, C; Krüger, C; Kell, R; Körber, S; Schellberg, D; Kübler, W; Haass, M

    2001-10-01

    The performance of a US-American scoring system (Heart Failure Survival Score, HFSS) was prospectively evaluated in a sample of ambulatory patients with congestive heart failure (CHF). Additionally, it was investigated whether the HFSS might be simplified by assessment of the distance ambulated during a 6-min walk test (6'WT) instead of determination of peak oxygen uptake (peak VO₂). In 208 middle-aged CHF patients (age 54 ± 10 years, 82% male, NYHA class 2.3 ± 0.7; follow-up 28 ± 14 months) the seven variables of the HFSS: CHF aetiology; heart rate; mean arterial pressure; serum sodium concentration; intraventricular conduction time; left ventricular ejection fraction (LVEF); and peak VO₂, were determined. Additionally, a 6'WT was performed. The HFSS allowed discrimination between patients at low, medium and high risk, with mortality rates of 16, 39 and 50%, respectively. However, the prognostic power of the HFSS was not superior to a two-variable model consisting only of LVEF and peak VO₂. The areas under the receiver operating curves (AUC) for prediction of 1-year survival were even higher for the two-variable model (0.84 vs. 0.74, P<0.05). Replacing peak VO₂ with the 6'WT resulted in a similar AUC (0.83). The HFSS continued to predict survival when applied to this patient sample. However, the HFSS was inferior to a two-variable model containing only LVEF and either peak VO₂ or the 6'WT. As the 6'WT requires no sophisticated equipment, a simplified two-variable model containing only LVEF and 6'WT may be more widely applicable, and is therefore recommended.

  8. A Failure Criterion for Concrete

    DEFF Research Database (Denmark)

    Ottosen, N. S.

    1977-01-01

    A four-parameter failure criterion containing all three stress invariants explicitly is proposed for short-time loading of concrete. It corresponds to a smooth convex failure surface with curved meridians, which open in the negative direction of the hydrostatic axis, and a trace in the deviatoric plane that changes from nearly triangular to nearly circular with increasing hydrostatic pressure.
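
    For reference, the criterion has the form f = a·J2/fc² + λ·√(J2)/fc + b·I1/fc - 1, with λ a function of cos 3θ; the sketch below implements this form with approximate parameter values (calibrated values depend on the tensile-to-compressive strength ratio):

```python
# Four-parameter Ottosen-type criterion: f = a*J2/fc^2 + lam*sqrt(J2)/fc
# + b*I1/fc - 1, with lam depending on cos(3*theta). Parameter values are
# approximate; calibrated sets depend on the ft/fc ratio.
import numpy as np

def ottosen_f(sig, fc=30.0, a=1.28, b=3.20, k1=11.74, k2=0.98):
    s1, s2, s3 = np.sort(sig)[::-1]           # principal stresses
    I1 = s1 + s2 + s3
    d = np.array([s1, s2, s3]) - I1 / 3.0     # deviatoric part
    J2 = 0.5 * np.dot(d, d)
    J3 = d.prod()
    cos3t = 1.5 * np.sqrt(3.0) * J3 / J2 ** 1.5 if J2 > 0 else 1.0
    cos3t = np.clip(cos3t, -1.0, 1.0)
    if cos3t >= 0:
        lam = k1 * np.cos(np.arccos(k2 * cos3t) / 3.0)
    else:
        lam = k1 * np.cos(np.pi / 3.0 - np.arccos(-k2 * cos3t) / 3.0)
    return a * J2 / fc**2 + lam * np.sqrt(J2) / fc + b * I1 / fc - 1.0

# f < 0 inside the surface; uniaxial compression at -fc sits near f = 0
print(ottosen_f(np.array([0.0, 0.0, -30.0])))
```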

  9. Modelling software failures of digital I and C in probabilistic safety analyses based on the TELEPERM® XS operating experience

    International Nuclear Information System (INIS)

    Jockenhoevel-Barttfeld, Mariana; Taurines, Andre; Baeckstroem, Ola; Holmberg, Jan-Erik; Porthin, Markus; Tyrvaeinen, Tero

    2015-01-01

    Digital instrumentation and control (I and C) systems appear as upgrades in existing nuclear power plants (NPPs) and in new plant designs. In order to assess the impact of digital system failures, quantifiable reliability models are needed, along with data for digital systems, that are compatible with existing probabilistic safety assessments (PSA). The paper focuses on the modelling of software failures of digital I and C systems in probabilistic assessments. An analysis of software faults, failures and effects is presented to derive relevant failure modes of system and application software for the PSA. The estimations of software failure probabilities are based on an analysis of the operating experience of TELEPERM® XS (TXS). For the assessment of application software failures, the analysis combines the use of TXS operating experience at the application-function level with conservative engineering judgments. Failure probabilities of failure to actuate on demand and of spurious actuation of a typical reactor protection application are estimated. Moreover, the paper gives guidelines for the modelling of software failures in the PSA. The strategy presented in this paper is generic and can be applied to different software platforms and their applications.

  10. Association between prehospital time interval and short-term outcome in acute heart failure patients.

    Science.gov (United States)

    Takahashi, Masashi; Kohsaka, Shun; Miyata, Hiroaki; Yoshikawa, Tsutomu; Takagi, Atsutoshi; Harada, Kazumasa; Miyamoto, Takamichi; Sakai, Tetsuo; Nagao, Ken; Sato, Naoki; Takayama, Morimasa

    2011-09-01

    Acute heart failure (AHF) is one of the most frequently encountered cardiovascular conditions that can seriously affect the patient's prognosis. However, the importance of early triage and treatment initiation in the setting of AHF has not been recognized. The Tokyo Cardiac Care Unit Network Database prospectively collected information on emergency admissions to acute cardiac care facilities in 2005-2007 from 67 participating hospitals in the Tokyo metropolitan area. We analyzed records of 1,218 AHF patients transported to medical centers via emergency medical services (EMS). AHF was defined as rapid onset or change in the signs and symptoms of heart failure, resulting in the need for urgent therapy. Patients with acute coronary syndrome were excluded from this analysis. Logistic regression analysis was performed to calculate the risk-adjusted in-hospital mortality. A majority of the patients were elderly (76.1 ± 11.5 years old) and male (54.1%). The overall in-hospital mortality rate was 6.0%. The median time interval between symptom onset and EMS arrival (response time) was 64 minutes (interquartile range [IQR] 26-205 minutes), and that between EMS arrival and ER arrival (transportation time) was 27 minutes (IQR 9-78 minutes). The risk-adjusted mortality increased with transportation time, but did not correlate with the response time. Those who took >45 minutes to arrive at the medical centers were at a higher risk for in-hospital mortality (odds ratio 2.24, 95% confidence interval 1.17-4.31; P = .015). Transportation time correlated with risk-adjusted mortality, and steps should be taken to reduce the EMS transfer time to improve the outcome in AHF patients. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. A Microstructure-Based Model to Characterize Micromechanical Parameters Controlling Compressive and Tensile Failure in Crystallized Rock

    Science.gov (United States)

    Kazerani, T.; Zhao, J.

    2014-03-01

    A discrete element model is proposed to examine rock strength and failure. The model is implemented in UDEC, which is extended for this purpose. The material is represented as a collection of irregular-sized deformable particles interacting at their cohesive boundaries. The interface between two adjacent particles is viewed as a flexible contact whose stress-displacement law is assumed to control the material fracture and fragmentation process. To reproduce rock anisotropy, an innovative orthotropic cohesive law is developed for the contacts, which allows the interfacial shear and tensile behaviours to be different from each other. The model is applied to a crystallized igneous rock, and the individual and interactional effects of the microstructural parameters on the material compressive and tensile failure response are examined. A new methodical calibration process is also established. It is shown that the model successfully reproduces the rock mechanical behaviour quantitatively and qualitatively. Ultimately, the model is used to understand how and under what circumstances micro-tensile and micro-shear cracking mechanisms control the material failure at different loading paths.

  12. Economic impact of heart failure according to the effects of kidney failure.

    Science.gov (United States)

    Sicras Mainar, Antoni; Navarro Artieda, Ruth; Ibáñez Nolla, Jordi

    2015-01-01

    To evaluate the use of health care resources and their cost according to the effects of kidney failure in heart failure patients during a 2-year follow-up in a population setting. Observational retrospective study based on a review of medical records. The study included patients ≥ 45 years treated for heart failure from 2008 to 2010. The patients were divided into 2 groups according to the presence/absence of kidney failure. Main outcome variables were comorbidity, clinical status (functional class, etiology), metabolic syndrome, costs, and new cases of cardiovascular events and kidney failure. The cost model included direct and indirect health care costs. Statistical analysis included multiple regression models. The study recruited 1600 patients (prevalence, 4.0%; mean age, 72.4 years; women, 59.7%). Of these patients, 70.1% had hypertension, 47.1% had dyslipidemia, and 36.2% had diabetes mellitus. We analyzed 433 patients (27.1%) with kidney failure and 1167 (72.9%) without kidney failure. Patients with kidney failure were associated with functional class III-IV (54.1% vs 40.8%) and metabolic syndrome (65.3% vs 51.9%, P<.01). The average unit cost was €10,711.40. The corrected cost in the presence of kidney failure was €14,868.20 vs €9,364.50 (P=.001). During follow-up, 11.7% of patients developed ischemic heart disease, 18.8% developed kidney failure, and 36.1% developed heart failure exacerbation. Comorbidity associated with heart failure is high. The presence of kidney failure increases the use of health resources and leads to higher costs within the National Health System. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.

  13. Elastic deformation and failure in protein filament bundles: Atomistic simulations and coarse-grained modeling.

    Science.gov (United States)

    Hammond, Nathan A; Kamm, Roger D

    2008-07-01

    The synthetic peptide RAD16-II has shown promise in tissue engineering and drug delivery. It has been studied as a vehicle for cell delivery and controlled release of IGF-1 to repair infarcted cardiac tissue, and as a scaffold to promote capillary formation in an in vitro model of angiogenesis. The structure of RAD16-II is hierarchical, with monomers forming long beta-sheets that pair together to form filaments; filaments form bundles approximately 30-60 nm in diameter; and branching networks of filament bundles form macroscopic gels. We investigate the mechanics of shearing between the two beta-sheets constituting one filament, and between cohered filaments of RAD16-II. This shear loading is found in filament bundle bending or in tensile loading of fibers composed of partial-length filaments. Molecular dynamics simulations show that time to failure is a stochastic function of applied shear stress, and that for a given loading time the behavior is elastic for sufficiently small shear loads. We propose a coarse-grained model based on Langevin dynamics that matches the molecular dynamics results and facilitates extending simulations in space and time. The model treats a filament as an elastic string of particles, each having a potential energy that is a periodic function of its position relative to the neighboring filament. With insight from these simulations, we discuss strategies for strengthening RAD16-II and similar materials.
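
    A minimal version of the coarse-grained picture, an elastic string of particles in a potential periodic in the position relative to the neighboring filament, integrated with overdamped Langevin dynamics, can be sketched as follows (all constants are illustrative; the drive is set above the depinning force so the string slides):

```python
# Minimal overdamped Langevin sketch: an elastic string of particles in a
# periodic "substrate" potential (the neighboring filament), driven by a
# shear force. All constants are illustrative.
import numpy as np

N, k_el = 32, 10.0          # particles in the string, elastic stiffness
U0, period = 1.0, 1.0       # periodic substrate potential
kT, gamma_d, dt = 0.1, 1.0, 2e-3
tau = 7.0                   # applied shear force per particle (> depinning)

rng = np.random.default_rng(0)
x = np.zeros(N)
for step in range(50_000):
    lap = np.zeros(N)               # elastic coupling along the string
    lap[1:] += x[:-1] - x[1:]
    lap[:-1] += x[1:] - x[:-1]
    f = (k_el * lap
         - U0 * (2 * np.pi / period) * np.sin(2 * np.pi * x / period)
         + tau)
    x += (f / gamma_d) * dt + np.sqrt(2 * kT * dt / gamma_d) * rng.normal(size=N)

print(f"mean slip after the run: {x.mean():.1f} periods")
```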

  14. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waler, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1977-01-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure-rate parameter λ. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for λ. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this paper is to present a methodology which can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate λ simultaneously satisfies the probability statements P(λ < 1.0 × 10⁻³) = 0.50 and P(λ < 1.0 × 10⁻⁵) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure-rate percentiles illustrated above, one can use the induced negative-log gamma prior distribution which satisfies the probability statements P(R(t₀) < 0.99) = 0.50 and P(R(t₀) < 0.99999) = 0.95 for some operating time t₀. Also, the paper includes graphs for selected percentiles which assist an engineer in applying the methodology.
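
    The paper's worked example can be reproduced numerically by solving for the gamma parameters that match the two stated percentiles (the solver setup below is one straightforward way to do it, not the paper's graphical method):

```python
# Find the gamma prior whose percentiles match the stated failure-rate
# beliefs P(lambda < 1e-3) = 0.50 and P(lambda < 1e-5) = 0.05, by solving
# the two percentile equations numerically.
import numpy as np
from scipy import optimize, stats

def percentile_mismatch(params):
    shape, scale = np.exp(params)            # keep both positive
    g = stats.gamma(shape, scale=scale)
    return [g.cdf(1e-3) - 0.50, g.cdf(1e-5) - 0.05]

sol = optimize.fsolve(percentile_mismatch, x0=np.log([1.0, 1e-3]))
shape, scale = np.exp(sol)
g = stats.gamma(shape, scale=scale)
print(f"shape = {shape:.3f}, scale = {scale:.2e}")
print(f"check: P(<1e-3) = {g.cdf(1e-3):.3f}, P(<1e-5) = {g.cdf(1e-5):.3f}")
```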

  15. Day vs night: Does time of presentation matter in acute heart failure? A secondary analysis from the RELAX-AHF trial

    NARCIS (Netherlands)

    Pang, Peter S.; Teerlink, John R.; Boer-Martins, Leandro; Gimpelewicz, Claudio; Davison, Beth A.; Wang, Yi; Voors, Adriaan A.; Severin, Thomas; Ponikowski, Piotr; Hua, Tsushung A.; Greenberg, Barry H.; Filippatos, Gerasimos; Felker, G. Michael; Cotter, Gad; Metra, Marco

    Background: Signs and symptoms of heart failure can occur at any time. Differences between acute heart failure (AHF) patients who present at nighttime vs daytime and their outcomes have not been well studied. Our objective was to determine if there are differences in baseline characteristics and

  16. Failure Diameter Resolution Study

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-19

    Previously, the SURFplus reactive burn model was calibrated for the TATB-based explosive PBX 9502. The calibration was based on fitting Pop plot data, the failure diameter and the limiting detonation speed, and curvature-effect data for small curvature. The model failure diameter is determined using 2-D simulations of an unconfined rate stick to find the minimum diameter for which a detonation wave propagates. Here we examine the effect of mesh resolution on an unconfined rate stick with a diameter (10 mm) slightly greater than the measured failure diameter (8 to 9 mm).

  17. A competing risk model of first failure site after definitive (chemo) radiation therapy for locally advanced non-small cell lung cancer

    DEFF Research Database (Denmark)

    Nygård, Lotte; Vogelius, Ivan R; Fischer, Barbara M

    2018-01-01

    INTRODUCTION: The aim of the study was to build a model of first failure site and lesion-specific failure probability after definitive chemo-radiotherapy for inoperable non-small cell lung cancer (NSCLC). METHODS: We retrospectively analyzed 251 patients receiving definitive chemo-radiotherapy. In case of loco-regional failure, multivariable logistic regression was applied to assess the risk of each lesion being the first site of failure. The two models were used in combination to predict lesion failure probability accounting for competing events. RESULTS: Adenocarcinoma had a lower hazard ratio (HR) of loco-regional (LR

  18. Prediction of yield and long-term failure of oriented polypropylene: kinetics and anisotropy

    NARCIS (Netherlands)

    van Erp, T.B.; Reynolds, C.T.; Peijs, T.; van Dommelen, J.A.W.; Govaert, L.E.

    2009-01-01

    The time-dependent yield and failure behavior of off-axis loaded, uniaxially oriented polypropylene tape is investigated. The yield and failure behavior is described with an anisotropic viscoplastic model. A viscoplastic flow rule is used with an equivalent stress based on Hill's anisotropic yield

  19. Reducing failures rate within the project documentation using Building Information Modelling, especially Level of Development

    Directory of Open Access Journals (Sweden)

    Prušková Kristýna

    2018-01-01

    This paper focuses on the differences between traditional modelling in 2D software and modelling within BIM technology. The research uncovers failures connected to the traditional way of designing and constructing project documentation, and reveals mismatches within the project documentation. A solution within Building Information Modelling technology is outlined. As a reference, the design of a specific building is used, with project documentation constructed in both ways: through traditional modelling and using BIM technology, especially Level of Development. The output of this paper points to the benefits of using advanced technology in building design, namely Building Information Modelling and especially Level of Development, which leads to a reduced failure rate within the project documentation.

  20. New finite element-based modeling of reactor core support plate failure

    Energy Technology Data Exchange (ETDEWEB)

    Pandazis, Peter; Lovasz, Liviusz [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH, Garching (Germany). Forschungszentrum]; Babcsany, Boglarka [Budapest Univ. of Technology and Economics, Budapest (Hungary). Inst. of Nuclear Techniques]; Hajas, Tamas

    2017-12-15

    ATHLET-CD is the severe accident module of the code system AC², designed to simulate core degradation phenomena, including fission product release and transport in the reactor circuit, as well as the late-phase processes in the lower plenum. In a severe accident, the reactor core degrades and the fuel assemblies start to melt. The evolution of such processes is usually accompanied by the failure of the core support plate and relocation of the molten core to the lower plenum. Currently, the criterion for the failure of the support plate applied by ATHLET-CD is a user-defined signal, which can be a specific time or a process variable like mass, temperature, etc. A new method, based on an FEM approach, was developed that could lead in the future to a more realistic criterion for the failure of the core support plate. This paper presents the basic idea and theory of this new method as well as preliminary verification calculations and an outlook on the planned future development.

  1. Intelligent Design and Intelligent Failure

    Science.gov (United States)

    Jerman, Gregory

    2015-01-01

    Good Evening, my name is Greg Jerman and for nearly a quarter century I have been performing failure analysis on NASA's aerospace hardware. During that time I had the distinct privilege of keeping the Space Shuttle flying for two thirds of its history. I have analyzed a wide variety of failed hardware from simple electrical cables to cryogenic fuel tanks to high temperature turbine blades. During this time I have found that for all the time we spend intelligently designing things, we need to be equally intelligent about understanding why things fail. The NASA Flight Director for Apollo 13, Gene Kranz, is best known for the expression "Failure is not an option." However, NASA history is filled with failures both large and small, so it might be more accurate to say failure is inevitable. It is how we react and learn from our failures that makes the difference.

  2. A model to predict failure of irradiated U–Mo dispersion fuel

    Energy Technology Data Exchange (ETDEWEB)

    Burkes, Douglas E., E-mail: Douglas.Burkes@pnnl.gov; Senor, David J.; Casella, Andrew M.

    2016-12-15

    Highlights: • Simple model to predict failure of dispersion fuel meat designs. • Evaluated as a function of fabrication parameters and irradiation conditions. • Predictions compare well with experimental measurements of miniature fuel plates. • Interaction layer formation reduces matrix strength and increases temperature. • Si additions to the matrix appear effective only at moderate heat flux and burnup. - Abstract: Numerous global programs are focused on the continued development of existing and new research and test reactor fuels to achieve maximum attainable uranium loadings to support the conversion of a number of the world's remaining high-enriched uranium fueled reactors to low-enriched uranium fuel. Some of these programs are focused on development and qualification of a fuel design that consists of a uranium–molybdenum (U–Mo) alloy dispersed in an aluminum matrix as one option for reactor conversion. The current paper extends a failure model originally developed for UO₂-stainless steel dispersion fuels and uses currently available thermal–mechanical property information for the materials of interest in the currently proposed design. A number of fabrication and irradiation parameters were investigated to understand the conditions at which failure of the matrix, classified as the onset of pore formation in the matrix, might occur. The results compared well with experimental observations published as part of the Reduced Enrichment for Research and Test Reactors (RERTR)-6 and -7 mini-plate experiments. Fission rate, a function of the ²³⁵U enrichment, appeared to be the most influential parameter in premature failure, mainly as a result of increased interaction layer formation and operational temperature, which coincidentally decreased the strength of the matrix and caused more rapid fission gas production and recoil into the surrounding matrix material. Addition of silicon to the matrix appeared effective at reducing the rate of

  3. Signal analysis for failure detection

    International Nuclear Information System (INIS)

    Parpaglione, M.C.; Perez, L.V.; Rubio, D.A.; Czibener, D.; D'Attellis, C.E.; Brudny, P.I.; Ruzzante, J.E.

    1994-01-01

    Several methods for the analysis of acoustic emission signals are presented. They are mainly oriented to the detection of changes in noisy signals and the characterization of higher-amplitude discrete pulses or bursts. The aim was to relate changes and events with failure, cracking or wear in materials, the final goal being to obtain automatic means of detecting such changes and/or events. Performance evaluation was carried out using both simulated and laboratory test signals. The methods presented are the following: 1. Application of the Hopfield Neural Network (NN) model for classifying faults in pipes and detecting wear of a bearing. 2. Application of the Kohonen and Back Propagation Neural Network models to the same problem. 3. Application of Kalman filtering to determine the time of occurrence of bursts. 4. Application of a bank of Kalman filters (KF) for failure detection in pipes. 5. Study of the amplitude distribution of signals for detecting changes in their shape. 6. Application of the entropy distance to measure differences between signals. (author). 10 refs, 11 figs
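
    As an illustration of method 3 above, the sketch below flags burst onsets where the one-step Kalman innovation is too large to be explained by background noise. The scalar random-walk signal model, the noise variances and the alarm threshold are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def detect_bursts(y, q=1e-4, r=1.0, thresh=4.0):
    """Flag burst onsets where the normalized Kalman innovation is large.

    Assumed scalar random-walk model: x_k = x_{k-1} + w, y_k = x_k + v,
    with process variance q and measurement variance r.
    """
    x, p = y[0], r                   # initial state estimate and covariance
    onsets = []
    for k, yk in enumerate(y[1:], start=1):
        p += q                       # predict covariance
        s = p + r                    # innovation variance
        nu = yk - x                  # innovation (one-step residual)
        if abs(nu) / np.sqrt(s) > thresh:
            onsets.append(k)         # burst: residual not explained by noise
        g = p / s                    # Kalman gain
        x += g * nu                  # update state
        p *= (1.0 - g)               # update covariance
    return onsets

# Example: background noise with a burst injected at sample 500
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 1000)
sig[500:520] += 8.0
print(detect_bursts(sig)[:3])
```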

  4. Heart failure re-admission: measuring the ever shortening gap between repeat heart failure hospitalizations.

    Directory of Open Access Journals (Sweden)

    Jeffrey A Bakal

    Full Text Available Many quality-of-care and risk prediction metrics rely on time to first rehospitalization even though heart failure (HF) patients may undergo several repeat hospitalizations. The aim of this study is to compare repeat hospitalization models. Using a population-based cohort of 40,667 patients, we examined both HF and all-cause re-hospitalizations using up to five years of follow-up. Two models were examined: the gap-time model, which estimates the adjusted time between hospitalizations, and a multistate model, which considered patients to be in one of four states: community-dwelling, in hospital for HF, in hospital for any reason, or dead. The transition probabilities and times were then modeled using patient characteristics and the number of repeat hospitalizations. We found that during the five years of follow-up roughly half of the patients returned for a subsequent hospitalization for each repeat hospitalization. Additionally, we noted that the unadjusted time between hospitalizations was reduced ∼40% between each successive hospitalization. After adjustment, each additional hospitalization was associated with a 28-day (95% CI: 22-35) reduction in time spent out of hospital. A similar pattern was seen when considering the four-state model. A large proportion of patients had multiple repeat hospitalizations. Extending the gap between hospitalizations should be an important goal of treatment evaluation.
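
    A minimal sketch of a gap-time analysis of recurrent hospitalizations is given below, assuming the Python lifelines package and hypothetical episode-level data; a full Prentice-Williams-Peterson gap-time model would additionally stratify on episode number (lifelines supports this through the strata argument of fit).

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical episode-level data: one row per gap between hospitalizations.
# 'gap_days' is the time from the previous discharge to the next admission
# (or censoring), 'event' marks an observed re-hospitalization, and
# 'episode' counts prior hospitalizations for that patient.
df = pd.DataFrame({
    "gap_days": [320, 180, 95, 60, 400, 210, 150],
    "event":    [1,   1,   1,  0,  1,   1,   0],
    "episode":  [1,   2,   3,  4,  1,   2,   3],
    "age":      [71,  71,  71, 71, 64,  64,  64],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="gap_days", event_col="event")
cph.print_summary()   # a positive coefficient on 'episode' means gaps shrink
```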

  5. Failure propagation tests and analysis at PNC

    International Nuclear Information System (INIS)

    Tanabe, H.; Miyake, O.; Daigo, Y.; Sato, M.

    1984-01-01

    Failure propagation tests have been conducted using the Large Leak Sodium Water Reaction Test Rig (SWAT-1) and the Steam Generator Safety Test Facility (SWAT-3) at PNC in order to establish the safety design of the LMFBR prototype Monju steam generators. Test objectives are to provide data for selecting a design basis leak (DBL), data on the time history of failure propagations, data on the mechanism of the failures, and data on re-use of tubes in the steam generators that have suffered leaks. Eighteen fundamental tests have been performed in an intermediate leak region using the SWAT-1 test rig, and ten failure propagation tests have been conducted in the region from a small leak to a large leak using the SWAT-3 test facility. From the test results it was concluded that a dominant mechanism was tube wastage, and it took more than one minute until each failure propagation occurred. Also, the total leak rate in full sequence simulation tests including a water dump was far less than that of one double-ended-guillotine (DEG) failure. Using such experimental data, a computer code, LEAP (Leak Enlargement and Propagation), has been developed for the purpose of estimating the possible maximum leak rate due to failure propagation. This paper describes the results of the failure propagation tests and the model structure and validation studies of the LEAP code. (author)

  6. Transit-time flow measurement as a predictor of coronary bypass graft failure at one year angiographic follow-up

    DEFF Research Database (Denmark)

    Lehnert, Per; Møller, Christian H; Damgaard, Sune

    2015-01-01

    BACKGROUND: Transit-time flow measurement (TTFM) is a commonly used intraoperative method for evaluation of coronary artery bypass graft (CABG) anastomoses. This study was undertaken to determine whether TTFM can also be used to predict graft patency at one year postsurgery. METHODS: Three hundred forty-five CABG patients with intraoperative graft flow measurements and one year angiographic follow-up were analyzed. Graft failure was defined as more than 50% stenosis, including the "string sign." Logistic regression analysis was used to analyze the risk of graft failure after one year based on graft vessel type, anastomotic configuration, and coronary artery size. RESULTS: Nine hundred eighty-two coronary anastomoses were performed, of which 12% had signs of graft failure at one year angiographic follow-up. In internal mammary arteries (IMAs), analysis showed a 4% decrease in graft failure...

  7. Evaluation of mean time between forced outage for reactor protection system using RBD and failure rate

    International Nuclear Information System (INIS)

    Lee, D. Y.; Park, J. H.; Hwang, I. K.; Cha, K. H.; Choi, J. K.; Lee, K. Y.; Park, J. K.

    2001-01-01

    The design life of nuclear power plants (NPPs) under recent construction is about fifty to sixty years. However, control system equipment typically operates without failure for at most five to ten years. Design for diversity and an adequate maintenance strategy are therefore required for the NPP protection system in order to use control equipment whose lifetime is shorter than the design life of the NPP. The Fault Tree Analysis (FTA) technique, which has been applied in Probabilistic Safety Assessment (PSA), has been introduced to quantitatively evaluate the reliability of NPP I&C systems. The FTA, however, cannot properly consider the effect of maintenance. In this work, we review quantitative reliability evaluation techniques using reliability block diagrams and failure rates, and apply them to the evaluation of the mean time between forced outages for the reactor protection system.
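
    The sketch below illustrates the reliability-block-diagram idea the record refers to: compose exponential blocks in series and parallel and integrate the system reliability function to approximate the mean time to failure. The block structure and failure rates are hypothetical, and the record's mean time between forced outages additionally accounts for maintenance and repair, which this sketch ignores.

```python
import numpy as np

def series(*rels):
    """Series blocks: the system works only if all blocks work."""
    return lambda t: np.prod([r(t) for r in rels], axis=0)

def parallel(*rels):
    """Parallel (redundant) blocks: the system fails only if all blocks fail."""
    return lambda t: 1.0 - np.prod([1.0 - r(t) for r in rels], axis=0)

def exp_block(lam):
    """Exponential block with constant failure rate lam (per hour)."""
    return lambda t: np.exp(-lam * t)

# Hypothetical protection channel: two redundant trains, each a series
# of a sensor, a logic unit, and an actuation relay.
train = series(exp_block(1e-5), exp_block(2e-5), exp_block(5e-6))
system = parallel(train, train)

# MTTF = integral of R(t) dt, here by the trapezoid rule
t = np.linspace(0.0, 2.0e6, 200_001)
r = system(t)
mttf = float(((r[:-1] + r[1:]) / 2.0 * np.diff(t)).sum())
print(f"approximate MTTF: {mttf:,.0f} hours")
```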

  8. On the estimation of failure rates for living PSAs in the presence of model uncertainty

    International Nuclear Information System (INIS)

    Arsenis, S.P.

    1994-01-01

    The estimation of failure rates of heterogeneous Poisson components from data on times operated to failure is reviewed. Particular emphasis is given to the lack of knowledge of the form of the mixing distribution, or population variability curve. A new nonparametric empirical Bayes estimator is proposed which generalizes the estimator of Robbins to different observation times for the components. The behavior of the estimator is discussed by reference to two samples typically drawn from the CEDB, a component event database designed and operated by the Ispra JRC.
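
    For the equal-observation-time special case, Robbins' classical nonparametric empirical Bayes estimator for Poisson data can be written in a few lines. The sketch below uses invented failure counts; the paper's actual contribution, handling unequal observation times, is not reproduced here.

```python
from collections import Counter

def robbins(counts):
    """Robbins empirical Bayes estimate of each component's Poisson mean.

    For equal observation times, E[lambda | x] ~= (x + 1) * f(x + 1) / f(x),
    where f is the empirical frequency of the observed failure counts.
    """
    freq = Counter(counts)
    return {x: (x + 1) * freq.get(x + 1, 0) / freq[x] for x in sorted(freq)}

# Hypothetical failure counts for 12 similar components over one year
counts = [0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 5]
print(robbins(counts))
```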

  9. Information Technology Management System: an Analysis on Computational Model Failures for Fleet Management

    Directory of Open Access Journals (Sweden)

    Jayr Figueiredo de Oliveira

    2013-10-01

    Full Text Available This article proposes an information technology model to evaluate fleet management failure. Qualitative research, conducted as a case study within an interstate transport company in São Paulo State, sought to establish a relationship between computer tools and the need for valid, trustworthy information, available within an acceptable timeframe, for decision making, reliability, availability and system management. Additionally, the study aimed to provide relevant and precise information in order to minimize and mitigate failures that may occur and that would otherwise compromise the functioning of the entire operational organization.

  10. Economic sustainability in franchising: a model to predict franchisor success or failure

    OpenAIRE

    Calderón Monge, Esther; Pastor Sanz, Ivan; Huerta Zavala, Pilar Angélica

    2017-01-01

    As a business model, franchising makes a major contribution to gross domestic product (GDP). A model that predicts franchisor success or failure is therefore necessary to ensure economic sustainability. In this study, such a model was developed by applying Lasso regression to a sample of franchises operating between 2002 and 2013. For franchises with the highest likelihood of survival, the franchise fees and the ratio of company-owned to franchised outlets were suited to the age ...
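
    A hedged sketch of an L1-penalized (Lasso-type) classifier for franchisor survival is shown below, assuming scikit-learn; the features and data are invented for illustration, while the original study applied Lasso regression to real franchise data from 2002-2013.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical franchise features: [franchise fee (k), royalty %, age (years),
# ratio of company-owned to franchised outlets]; y = 1 if still operating.
X = np.array([[30, 5, 12, 0.2], [45, 6, 3, 0.9], [25, 4, 20, 0.1],
              [60, 8, 2, 1.5], [35, 5, 15, 0.3], [50, 7, 4, 1.1]])
y = np.array([1, 0, 1, 0, 1, 0])

# The L1 penalty drives uninformative coefficients to exactly zero,
# which is the variable-selection behaviour Lasso is used for here.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
model.fit(X, y)
print(model.named_steps["logisticregression"].coef_)
```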

  11. Semiconductor failure threshold estimation problem in electromagnetic assessment

    International Nuclear Information System (INIS)

    Enlow, E.W.; Wunsch, D.C.

    1984-01-01

    Present semiconductor failure models for predicting the one-microsecond square-wave power failure level, used in system electromagnetic (EM) assessments and hardening design, are incomplete. For the majority of device types there is insufficient data readily available in a composite data source to quantify the model parameters, and the inaccuracy of the models causes complications in defining adequate hardness margins and quantifying EM performance. This paper presents new semiconductor failure models using a generic approach that integrates and simplifies many present models. The generic approach uses two categorical models: one for diodes and transistors, and one for integrated circuits. The models were constructed from a large database of semiconductor failure data. The approach used for constructing the diode and transistor failure level models is based on device rated power; the models are simple to use and universally applicable. The model predicts the value of the 1 microsecond failure power to be used in the power failure models P = K·t^(-1/2) or P = K1·t^(-1) + K2·t^(-1/2) + K3.
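
    The two power-failure forms quoted above are easy to evaluate directly; the sketch below does so with hypothetical constants (the K values must be fitted to device data, e.g. from rated power, as the paper proposes).

```python
def failure_power_simple(k, t):
    """P = K * t**-0.5  (single-term form quoted in the abstract)."""
    return k * t ** -0.5

def failure_power_general(k1, k2, k3, t):
    """P = K1/t + K2/sqrt(t) + K3  (three-term form from the abstract)."""
    return k1 / t + k2 / t ** 0.5 + k3

# Hypothetical constants chosen so both forms agree at t = 1 microsecond
t_us = 1.0
p1 = failure_power_simple(k=150.0, t=t_us)
p2 = failure_power_general(k1=50.0, k2=50.0, k3=50.0, t=t_us)
print(p1, p2)   # 150.0 150.0 (watts, illustrative only)
```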

  12. Discordance between 'actual' and 'scheduled' check-in times at a heart failure clinic.

    Directory of Open Access Journals (Sweden)

    Eiran Z Gorodeski

    Full Text Available A 2015 Institute of Medicine statement, "Transforming Health Care Scheduling and Access: Getting to Now", has increased concerns regarding patient wait times. Although waiting times have been widely studied, little attention has been paid to the role of patient arrival times as a component of this phenomenon. To this end, we investigated patterns of patient arrival at scheduled ambulatory heart failure (HF) clinic appointments and studied their predictors. We hypothesized that patients are more likely to arrive later than scheduled, with progressively later arrivals later in the day. Using a business intelligence database we identified 6,194 unique patients that visited the Cleveland Clinic Main Campus HF clinic between January 2015 and January 2017. This clinic served both as a tertiary referral center and a community HF clinic. Transplant and left ventricular assist device (LVAD) visits were excluded. Punctuality was defined as the difference between 'actual' and 'scheduled' check-in times, whereby negative values (i.e., early punctuality) indicate patients who checked in early. Contrary to our hypothesis, we found that patients checked in late only a minority of the time (38% of visits). Additionally, examining punctuality by appointment hour slot, we found that patients scheduled after 8 AM had progressively earlier check-in times as the day progressed (P < .001 for trend). In both a Random Forest regression framework and linear regression models, the most important risk-adjusted predictors of early punctuality were: a later appointment hour slot, the patient having previously been to the hospital, age in the early 70s, and white race. Patients attending a mixed-population ambulatory HF clinic check in earlier than scheduled, with progressively discrepant intervals throughout the day. This finding may have significant implications for provider utilization and resource planning in order to maximize clinic efficiency. The impact of elective early

  13. Metabolic determinants of electrical failure in ex-vivo canine model of cardiac arrest: evidence for the protective role of inorganic pyrophosphate.

    Directory of Open Access Journals (Sweden)

    Junko Shibayama

    Full Text Available Deterioration of ventricular fibrillation (VF) into asystole or severe bradycardia (electrical failure) heralds a fatal outcome of cardiac arrest. The role of metabolism in the timing of electrical failure remains unknown. To determine metabolic factors of early electrical failure in an ex-vivo canine model of cardiac arrest (VF + global ischemia), metabolomic screening was performed in left ventricular biopsies collected before and after 0.3, 2, 5, 10 and 20 min of VF and global ischemia. Electrical activity was monitored via plunge needle electrodes and pseudo-ECG. Four out of nine hearts exhibited electrical failure at 10.1±0.9 min (early-asys), while 5/9 hearts maintained VF for at least 19.7 min (late-asys). As compared to late-asys, early-asys hearts had more ADP, less phosphocreatine, and higher levels of lactate at some time points during VF/ischemia (all comparisons p<0.05). Pre-ischemic samples from late-asys hearts contained ∼25 times more inorganic pyrophosphate (PPi) than early-asys hearts. A mechanistic role of PPi in cardioprotection was then tested by monitoring mitochondrial membrane potential (ΔΨ) during 20 min of simulated-demand ischemia using the potentiometric probe TMRM in rabbit adult ventricular myocytes incubated with PPi versus a control group. Untreated myocytes experienced significant loss of ΔΨ, while in the PPi-treated myocytes ΔΨ was relatively maintained throughout 20 min of simulated-demand ischemia as compared to control (p<0.05). A high tissue level of PPi may prevent ΔΨm loss and electrical failure at the early phase of ischemic stress. The link between the two protective effects may involve decreased rates of mitochondrial ATP hydrolysis and lactate accumulation.

  14. Estimating flood inundation caused by dam failures

    Energy Technology Data Exchange (ETDEWEB)

    Mocan, N. [Crozier and Associates Inc., Collingwood, ON (Canada); Joy, D.M. [Guelph Univ., ON (Canada). School of Engineering; Rungis, G. [Grand River Conservation Authority, Cambridge, ON (Canada)

    2006-01-15

    Recent advancements in modelling inundation due to dam failures have allowed easier and more illustrative analyses of potential outcomes. This paper described new model and mapping capabilities available using the HEC-RAS hydraulic model in concert with geographic information systems (GIS). The study area was the upper reaches of Canagagigue Creek and the Woolwich Dam near Elmira, Ontario. A hydraulic analysis of a hypothetical dam failure was developed based on the summer probable maximum flood (PMF) event. Limits extended from Woolwich Dam to downstream of the Town of Elmira. An incoming summer PMF hydrograph was set as the upstream boundary condition in the upstream model. Simulation parameters include simulation time-step; implicit weighting factor; water surface calculation tolerance; and output calculation interval. Peak flows were presented, as well as corresponding flood inundation results through the Town of Elmira. The hydraulic model results were exported to a GIS in order to develop inundation maps for emergency management planning. Results from post-processing included inundation maps for each of the simulated time-steps as well as an inundation animation for the duration of the dam breach. It was concluded that the modelling tools presented in the study can be applied to other dam safety assessment projects in order to develop effective and efficient emergency preparedness plans through public consultation and the establishment of impact zones. 1 tab., 2 figs.

  15. A review of macroscopic ductile failure criteria.

    Energy Technology Data Exchange (ETDEWEB)

    Corona, Edmundo; Reedlunn, Benjamin

    2013-09-01

    The objective of this work was to describe several of the ductile failure criteria commonly used to solve practical problems. The following failure models were considered: equivalent plastic strain, equivalent plastic strain in tension, maximum shear, Mohr-Coulomb, Wellman's tearing parameter, Johnson-Cook and BCJ MEM. The document presents the main characteristics of each failure model as well as sample failure predictions for simple proportional loading stress histories in three dimensions and in plane stress. Plasticity calculations prior to failure were conducted with a simple, linear hardening, J2 plasticity model. The resulting failure envelopes were plotted in principal stress space and plastic strain space, where the dependence on stress triaxiality and Lode angle are clearly visible. This information may help analysts select a ductile fracture model for a practical problem and help interpret analysis results.
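
    Johnson-Cook is one of the criteria reviewed; the sketch below evaluates the Johnson-Cook failure-strain locus as a function of stress triaxiality, the dependence the report highlights. The damage constants and the rate and temperature terms are illustrative values, not those from the report.

```python
import numpy as np

def johnson_cook_failure_strain(eta, d=(0.05, 3.44, -2.12, 0.002, 0.61),
                                strain_rate=1.0, ref_rate=1.0, t_hom=0.0):
    """Johnson-Cook failure strain versus stress triaxiality eta.

    eps_f = [d1 + d2*exp(d3*eta)] * [1 + d4*ln(rate/ref)] * [1 + d5*T*],
    with T* the homologous temperature. Constants here are illustrative.
    """
    d1, d2, d3, d4, d5 = d
    return ((d1 + d2 * np.exp(d3 * eta))
            * (1.0 + d4 * np.log(strain_rate / ref_rate))
            * (1.0 + d5 * t_hom))

# Failure strain drops as triaxiality rises from shear (0) toward tension
for eta in (0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0):
    print(f"eta = {eta:.2f} -> eps_f = {johnson_cook_failure_strain(eta):.3f}")
```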

  16. Time to failure and neuromuscular response to intermittent isometric exercise at different levels of vascular occlusion: a randomized crossover study

    Directory of Open Access Journals (Sweden)

    Mikhail Santos Cerqueira

    2017-04-01

    Full Text Available Objectives: The purpose of this study was to investigate the effects of different vascular occlusion levels (total occlusion (TO), partial occlusion (PO) or free flow (FF)) during intermittent isometric handgrip exercise (IIHE) on the time to failure (TF) and on the recovery of the maximum voluntary isometric force (MVIF), median frequency (EMGFmed) and peak of the EMG signal (EMGpeak) after failure. Methods: Thirteen healthy men (21 ± 1.71 years) carried out an IIHE until failure at 45% of MVIF with TO, PO or FF. Occlusion pressure was determined prior to the exercise. The MVIF, EMGFmed and EMGpeak were measured before and after exercise. Results: TF (in seconds) was significantly different (p < 0.05) among all investigated conditions: TO (150 ± 68), PO (390 ± 210) and FF (510 ± 240). The MVIF was lower immediately after IIHE, remaining lower eleven minutes after failure in all cases (p < 0.05) when compared to pre-exercise. There was a greater force reduction (p < 0.05) one minute after failure in the PO (-45.8%) and FF (-39.9%) conditions when compared to TO (-28.1%). Only the PO condition caused lower MVIF (p < 0.05) than TO eleven minutes after the task failure. PO caused a greater reduction in EMGFmed compared to TO and a greater increase in EMGpeak compared to TO and FF (p < 0.05). Conclusions: TO during IIHE led to a shorter time to failure but a faster MVIF recovery, while PO seems to be associated with a slower neuromuscular recovery compared to the other conditions.

  17. Voltage stress effects on microcircuit accelerated life test failure rates

    Science.gov (United States)

    Johnson, G. M.

    1976-01-01

    The applicability of the Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface-related failure data resulted in bimodal failure distributions comprising two lognormal distributions: a 'freak' distribution observed early in time, and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
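
    The temperature acceleration implied by the Arrhenius model, and a simple Eyring-type extension with a multiplicative voltage term, can be sketched as below; the activation energy and voltage exponent are assumed values, not those fitted in the study.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7):
    """Acceleration factor between junction temperatures (Celsius in, AF out)."""
    tu, ts = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / tu - 1.0 / ts))

def eyring_af(t_use_c, t_stress_c, v_use, v_stress, ea_ev=0.7, gamma=0.2):
    """Eyring-type AF with an exponential voltage term (assumed form)."""
    return (arrhenius_af(t_use_c, t_stress_c, ea_ev)
            * math.exp(gamma * (v_stress - v_use)))

# A life test at 250 C / 15 V accelerating a 55 C / 5 V use condition
print(f"Arrhenius AF: {arrhenius_af(55, 250):.3g}")
print(f"Eyring AF:    {eyring_af(55, 250, 5, 15):.3g}")
```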

  18. A Large-scale Finite Element Model on Micromechanical Damage and Failure of Carbon Fiber/Epoxy Composites Including Thermal Residual Stress

    Science.gov (United States)

    Liu, P. F.; Li, X. K.

    2018-06-01

    The purpose of this paper is to study the micromechanical progressive failure properties of carbon fiber/epoxy composites with thermal residual stress by finite element analysis (FEA). Composite microstructures with a hexagonal fiber distribution are used for the representative volume element (RVE), where an initial fiber breakage is assumed. Fiber breakage with random fiber strength is predicted using Monte Carlo simulation, progressive matrix damage is predicted by proposing a continuum damage mechanics model, and interface failure is simulated using Xu and Needleman's cohesive model. Temperature-dependent thermal expansion coefficients for the epoxy matrix are used. The FEA, performed with numerical codes developed for the ANSYS finite element software, is divided into two steps: (1) thermal residual stresses due to the mismatch between fiber and matrix are calculated; (2) a longitudinal tensile load is then exerted on the RVE to perform progressive failure analysis of the carbon fiber/epoxy composite. Numerical convergence problems are resolved by properly introducing a viscous damping effect. The extended Mori-Tanaka method, which accounts for interface debonding, is used to obtain the homogenized mechanical responses of the composite. Three main results are obtained from the FEA: (1) the real-time matrix cracking, fiber breakage and interface debonding with increasing tensile strain are simulated; (2) the stress concentration coefficients on neighbouring fibers near the initial broken fiber and the axial fiber stress distribution along the broken fiber are predicted and compared with results from the global and local load-sharing models based on shear-lag theory; (3) the tensile strength of the composite predicted by FEA is compared with those from shear-lag theory and experiments. Finally, the tensile stress-strain curve of the composite from FEA is applied to the progressive failure analysis of a composite pressure vessel.
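
    The Monte Carlo treatment of random fiber strength mentioned above typically samples per-fiber strengths from a Weibull distribution; a minimal sketch with hypothetical Weibull parameters follows.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Weibull strength statistics for carbon fiber segments:
# shape m (scatter) and scale s0 (characteristic strength, MPa).
m, s0, n_fibers = 5.0, 4000.0, 10_000
strengths = s0 * rng.weibull(m, size=n_fibers)

# Fraction of broken fibers as the applied fiber-direction stress rises
for stress in (1000.0, 2000.0, 3000.0, 4000.0):
    broken = np.mean(strengths < stress)
    print(f"{stress:5.0f} MPa -> {broken:6.2%} fibers failed")
```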

  19. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
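
    A toy version of the simulation idea, exponential disk failures and repairs within one redundancy group, with data loss when more failures overlap than the redundancy tolerates, is sketched below; the real framework models interconnections and failure propagation that this ignores, and a production study would average many samples rather than print one.

```python
import heapq, random

def simulate_mttdl(n_disks=10, tolerate=1, mttf_h=1.0e6, mttr_h=24.0, seed=1):
    """Monte Carlo time-to-data-loss for one redundancy group (toy version).

    Disks fail with exponential MTTF and are repaired with exponential MTTR;
    data is lost when more than `tolerate` disks are down at once.
    """
    rng = random.Random(seed)
    events = [(rng.expovariate(1.0 / mttf_h), i) for i in range(n_disks)]
    heapq.heapify(events)
    down = set()
    while events:
        t, disk = heapq.heappop(events)
        if disk in down:                       # repair completes
            down.remove(disk)
            heapq.heappush(events, (t + rng.expovariate(1.0 / mttf_h), disk))
        else:                                  # disk fails
            down.add(disk)
            if len(down) > tolerate:
                return t                       # data loss
            heapq.heappush(events, (t + rng.expovariate(1.0 / mttr_h), disk))

print(f"one sample TTDL: {simulate_mttdl():,.0f} hours")
```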

  20. FEAT - FAILURE ENVIRONMENT ANALYSIS TOOL (UNIX VERSION)

    Science.gov (United States)

    Pack, G.

    1994-01-01

    The Failure Environment Analysis Tool, FEAT, enables people to see and better understand the effects of failures in a system. FEAT uses digraph models to determine what will happen to a system if a set of failure events occurs and to identify the possible causes of a selected set of failures. Failures can be user-selected from either engineering schematic or digraph model graphics, and the effects or potential causes of the failures will be color highlighted on the same schematic or model graphic. As a design tool, FEAT helps design reviewers understand exactly what redundancies have been built into a system and where weaknesses need to be protected or designed out. A properly developed digraph will reflect how a system functionally degrades as failures accumulate. FEAT is also useful in operations, where it can help identify causes of failures after they occur. Finally, FEAT is valuable both in conceptual development and as a training aid, since digraphs can identify weaknesses in scenarios as well as hardware. Digraph models for use with FEAT are generally built with the Digraph Editor, a Macintosh-based application which is distributed with FEAT. The Digraph Editor was developed specifically with the needs of FEAT users in mind and offers several time-saving features. It includes an icon toolbox of components required in a digraph model and a menu of functions for manipulating these components. It also offers FEAT users a convenient way to attach a formatted textual description to each digraph node. FEAT needs these node descriptions in order to recognize nodes and propagate failures within the digraph. FEAT users store their node descriptions in modelling tables using any word processing or spreadsheet package capable of saving data to an ASCII text file. From within the Digraph Editor they can then interactively attach a properly formatted textual description to each node in a digraph. Once descriptions are attached to them, a selected set of nodes can be
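
    The core of the forward analysis FEAT performs can be pictured as reachability over a directed graph, as in the sketch below. Real FEAT digraphs also encode redundancy (AND relationships), which plain reachability deliberately ignores here, so this simplification overstates failure effects.

```python
from collections import defaultdict, deque

def failure_effects(edges, failed):
    """Propagate failures through a digraph: everything reachable from the
    failed set is potentially affected (a simplified forward analysis)."""
    adj = defaultdict(list)
    for src, dst in edges:
        adj[src].append(dst)
    seen, queue = set(failed), deque(failed)
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - set(failed)

# Toy schematic: a pump feeds two redundant valves feeding one actuator
edges = [("bus", "pump"), ("pump", "valveA"), ("pump", "valveB"),
         ("valveA", "actuator"), ("valveB", "actuator")]
print(failure_effects(edges, {"pump"}))   # {'valveA', 'valveB', 'actuator'}
```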

  1. A hybrid feature selection and health indicator construction scheme for delay-time-based degradation modelling of rolling element bearings

    Science.gov (United States)

    Zhang, Bin; Deng, Congying; Zhang, Yi

    2018-03-01

    Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotary machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, its degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extraction of the bearing health relevant information from condition monitoring sensor data. Effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.
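
    A common way to build a bearing health indicator from condition monitoring data, consistent in spirit with the scheme described, is to extract time-domain features per vibration window and measure drift from a healthy baseline. The sketch below is illustrative only, not the paper's hybrid scheme; the features, baseline length and synthetic data are assumptions.

```python
import numpy as np

def window_features(x):
    """Classic time-domain condition indicators for one vibration window."""
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    crest = np.max(np.abs(x)) / rms
    return np.array([rms, kurtosis, crest])

def health_indicator(windows):
    """Fuse features into one indicator: distance from a healthy baseline
    (the first few windows), in baseline standard deviations."""
    feats = np.array([window_features(w) for w in windows])
    mu, sd = feats[:5].mean(axis=0), feats[:5].std(axis=0) + 1e-12
    return np.linalg.norm((feats - mu) / sd, axis=1)

# Synthetic run-to-failure record: noise plus slowly growing impulses
rng = np.random.default_rng(3)
windows = [rng.normal(0, 1, 2048) + (k / 40.0) * (rng.random(2048) < 0.01)
           for k in range(40)]
print(np.round(health_indicator(windows), 2))
```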

  2. Sensitivity analysis of repairable redundant system with switching failure and geometric reneging

    Directory of Open Access Journals (Sweden)

    Chandra Shekhar

    2017-09-01

    Full Text Available This study deals with the performance modeling and reliability analysis of a redundant machining system composed of several functional machines. To analyze the more realistic scenarios, the concepts of switching failure and geometric reneging are included. The time-to-breakdown and repair time of operating and standby machines are assumed to follow the exponential distribution. For the quantitative assessment of the machine interference problem, various performance measures such as mean-time-to-failure, reliability, reneging rate, etc. have been formulated. To show the practicability of the developed model, a numerical illustration has been presented. For the practical justification and validity of the results established, the sensitivity analysis of reliability indices has been presented by varying different system descriptors.

  3. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate for the EBS is not expected to decrease with time. In this paper, we use an IFR distribution to develop a reliability model for the EBS
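
    The IFR/DFR distinction is easy to see from the Weibull hazard rate h(t) = (β/η)(t/η)^(β−1), which increases in time for shape β > 1 and decreases for β < 1; a short sketch with illustrative parameters:

```python
import numpy as np

def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

t = np.array([10.0, 100.0, 1000.0])             # years, illustrative
print(weibull_hazard(t, beta=2.5, eta=500.0))   # rising hazard: IFR (beta > 1)
print(weibull_hazard(t, beta=0.7, eta=500.0))   # falling hazard: DFR (beta < 1)
```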

  5. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waller, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1976-07-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure rate lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this report is to present a methodology that can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate lambda simultaneously satisfies the probability statements P(lambda < 1.0 × 10⁻³) = 0.50 and P(lambda < 1.0 × 10⁻⁵) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure rate percentiles illustrated above, it is possible to use the induced negative-log gamma prior distribution which satisfies the probability statements P(R(t₀) < 0.99) = 0.50 and P(R(t₀) < 0.99999) = 0.95, for some operating time t₀. The report also includes graphs for selected percentiles which assist an engineer in applying the procedure. 28 figures, 16 tables
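
    Matching a gamma prior to the two quoted failure-rate percentiles reduces to a one-dimensional root search, sketched below with SciPy; the bracketing interval for the shape parameter is an assumption, and the report's graphical procedure is not reproduced.

```python
from scipy.stats import gamma
from scipy.optimize import brentq

def fit_gamma_two_percentiles(x50, x05):
    """Find (shape, scale) so P(lambda < x50) = 0.50 and P(lambda < x05) = 0.05.

    For a fixed shape a, the scale that hits the median is x50/ppf(0.5, a);
    a 1-D root search on the shape then matches the 5th percentile.
    """
    def mismatch(a):
        scale = x50 / gamma.ppf(0.50, a)
        return gamma.cdf(x05, a, scale=scale) - 0.05
    a = brentq(mismatch, 0.05, 50.0)   # bracketing interval assumed
    return a, x50 / gamma.ppf(0.50, a)

shape, scale = fit_gamma_two_percentiles(x50=1.0e-3, x05=1.0e-5)
print(f"shape = {shape:.4f}, scale = {scale:.3e}")
```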

  6. Analysis and Characterization of Damage and Failure Utilizing a Generalized Composite Material Model Suitable for Use in Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Khaled, Bilal; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in state-of-the-art composite impact models is under development. In particular, a next generation composite impact material model, jointly developed by the FAA and NASA, is being implemented into the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage, and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure, as opposed to specifying discrete input parameters (such as modulus and strength). The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in the various coordinate directions. Because the plasticity and damage models are uncoupled, test procedures and methods both to characterize the damage model and to convert the material stress-strain curves from the true (damaged) stress space to the effective (undamaged) stress space have been developed. A methodology has been developed to input the experimentally determined composite failure surface in a tabulated manner. An analytical approach is then utilized to track how close the current stress state is to the failure surface.

  7. Neurological Disorders in a Murine Model of Chronic Renal Failure

    Directory of Open Access Journals (Sweden)

    Jean-Marc Chillon

    2014-01-01

    Full Text Available Cardiovascular disease is highly prevalent in patients with chronic renal failure (CRF). However, data on the impact of CRF on the cerebral circulatory system are scarce, despite the fact that stroke is the third most common cause of cardiovascular death in people with CRF. In the present study, we examined the impact of CRF on behavior (anxiety, recognition) and ischemic stroke severity in a well-defined murine model of CRF. We did not observe any significant increase in anxiety in CRF mice compared with non-CRF mice. In contrast, CRF mice showed lower levels of anxiety in some tests. Recognition was not impaired (vs. controls) after 6 weeks of CRF but was impaired after 10 weeks of CRF. Chronic renal failure enhances the severity of ischemic stroke, as evaluated by the infarct volume size in CRF mice after 34 weeks of CRF. Furthermore, neurological test results in non-CRF mice tended to improve in the days following ischemic stroke, whereas the results in CRF mice tended to worsen. In conclusion, we showed that a murine model of CRF is suitable for evaluating uremic toxicity and the associated neurological disorders. Our data confirm the role of uremic toxicity in the genesis of neurological abnormalities (other than anxiety).

  8. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)

  9. Numerical modelling of solid transport caused by an extreme flood: Case of the Hamiz dam failure (Algeria

    Directory of Open Access Journals (Sweden)

    Haddad Ali

    2017-07-01

    Full Text Available The study of solid transport caused by the flow of an extreme flood, such as the propagation of a dam failure wave, aims to simulate the hydrodynamic behaviour of the solid particles contained in the valley during the flood passage. To this end, we developed a numerical model based on the resolution of the one-dimensional Saint Venant–Exner equations by an implicit finite difference scheme. The numerical stability of the liquid phase calculation is checked by the Courant number, and the De Vries condition is used for the solid phase. The model was applied to the Hamiz dam (Algeria), which is built in a semi-arid zone and presents a major risk of failure. The simulation of several dam failure scenarios allowed us to map the sediment transport in the valley induced by the dam failure flood.

  10. Estimation of mean time to failure of a near surface radioactive waste repository for PWR power stations

    International Nuclear Information System (INIS)

    Aguiar, Lais A. de; Frutuoso e Melo, P.F.; Alvim, Antonio C.M.

    2007-01-01

    This work aims at estimating the mean time to failure (MTTF) of each barrier of a near surface radioactive waste repository. It is assumed that surface water infiltrates through the barriers, reaching the matrix where radionuclides are contained, releasing them to the environment. Radioactive wastes considered in this work are low and medium level wastes (produced during operation of a PWR nuclear power station) fixed on cement. The repository consists of 6 saturated porous media barriers (top cover, upper layer, packages, basis, repository walls and geosphere). It has been verified that the mean time to failure (MTTF) of each barrier increases for radionuclides having a higher retardation factor (Fr), and also that the MTTF for concrete is larger for nickel, while for the geosphere, plutonium gives the largest MTTF. (author)

  11. Using Probablilistic Risk Assessment to Model Medication System Failures in Long-Term Care Facilities

    National Research Council Canada - National Science Library

    Comden, Sharon C; Marx, David; Murphy-Carley, Margaret; Hale, Misti

    2005-01-01

    .... Discussion: The models provide contextual maps of the errors and behaviors that lead to medication delivery system failures, including unanticipated risks associated with regulatory practices and common...

  12. Influence of reinforcement's corrosion into hyperstatic reinforced concrete beams: a probabilistic failure scenarios analysis

    Directory of Open Access Journals (Sweden)

    G. P. PELLIZZER

    Full Text Available This work studies the mechanical effects of reinforcement corrosion in hyperstatic (statically indeterminate) reinforced concrete beams. The focus is the probabilistic determination of how individual failure scenarios, as well as global failure, change over time. The limit state functions adopted describe analytically the bending and shear resistance of reinforced concrete rectangular cross sections as functions of steel and concrete strength and section dimensions. Empirical laws that penalize the steel yield stress and the reinforcement area over time were incorporated, in addition to Fick's law, which models chloride penetration into the concrete pores. Reliability theory was applied using the Monte Carlo simulation method, which assesses each individual probability of failure. The probability of global structural failure was determined based on the concept of a failure tree. The results for a hyperstatic reinforced concrete beam showed that reinforcement corrosion changes the failure modes: failure modes that are unimportant in the design phase become important after corrosion starts.
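
    The Monte Carlo step described above can be sketched as follows: sample the random variables entering a limit state function g = R − S and estimate the probability of failure as the fraction of samples with g < 0. The bending limit state, distributions and corrosion law below are all invented for illustration, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

def prob_failure(years):
    """Monte Carlo estimate of Pf for a toy bending limit state g = R - S.

    Resistance degrades through an assumed exponential corrosion law
    on the reinforcement area As(t).
    """
    as_t = 1e-3 * np.exp(-0.005 * years)      # corroded steel area, m^2
    fy = rng.normal(500e6, 30e6, n)           # steel yield stress, Pa
    lever = rng.normal(0.45, 0.01, n)         # internal lever arm, m
    load = rng.normal(180e3, 25e3, n)         # applied moment, N*m
    g = as_t * fy * lever - load              # limit state function
    return np.mean(g < 0.0)

for years in (0, 20, 40):
    print(f"t = {years:2d} y -> Pf ~ {prob_failure(years):.2e}")
```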

  13. In the Dark Shadow of the Supercycle Tailings Failure Risk & Public Liability Reach All Time Highs

    Directory of Open Access Journals (Sweden)

    Lindsay Newland Bowker

    2017-10-01

    Full Text Available This is the third in a series of independent research papers attempting to improve the quality of descriptive data and analysis of tailings facility failures globally, focusing on the relative occurrence, severity and root causes of these failures. This paper updates previously published failure data through 2010 with both additional pre-2010 data and additional data for 2010-2015. All three papers have explored the connection between high-public-consequence failure trends and mining economics trends, especially grade, costs to produce and price. This work, the third paper, looks more deeply at that connection through several autopsies of the dysfunctional economics of the period 2000-2010, in which the greatest and longest price increase in recorded history co-occurred across all commodities, a phenomenon sometimes called a supercycle. That high-severity failures reached all-time highs in the same decade that prices rose to levels unprecedented since 1916 challenges many fundamental beliefs and assumptions that have governed modern mining operations, investment decisions, and regulation. It is from waste management in mining, a non-revenue-producing, cost-incurring part of every operation, that virtually all severe environmental and community damages arise. These damages are now more frequently at a scale and of a nature that is non-remediable and beyond any possibility of clean-up or reclamation. The authors have jointly undertaken this work in the public interest, without funding from the mining industry, regulators, non-governmental organizations, or any other source.

  14. Failure detection by adaptive lattice modelling using Kalman filtering methodology : application to NPP

    International Nuclear Information System (INIS)

    Ciftcioglu, O.

    1991-03-01

    Detection of failure in the operational status of an NPP is described. The method uses a lattice form of signal modelling established by means of Kalman filtering methodology. In this approach each lattice parameter is considered to be a state, and the minimum variance estimate of the states is performed adaptively by optimal parameter estimation, with fast convergence and favourable statistical properties. In particular, the state covariance is also the covariance of the error committed by that estimate of the state value, and the Mahalanobis distance formed for pattern comparison follows a χ² distribution for normally distributed signals. Failure detection is performed through a decision-making process based on probabilistic assessments of the statistical information provided. The failure detection system is implemented in the multi-channel signal environment of the Borssele NPP and its favourable features are demonstrated. (author). 29 refs.; 7 figs.
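
    The χ² property mentioned above gives a natural alarm threshold for the Mahalanobis distance; a minimal sketch follows, with hypothetical feature vectors standing in for the lattice parameters and SciPy assumed.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_alarm(features, mean, cov, alpha=0.001):
    """Flag feature vectors whose squared Mahalanobis distance from the
    reference (no-failure) pattern exceeds the chi-square threshold."""
    inv = np.linalg.inv(cov)
    d2 = np.einsum("ij,jk,ik->i", features - mean, inv, features - mean)
    return d2 > chi2.ppf(1.0 - alpha, df=len(mean))

# Reference pattern estimated from normal operation (invented parameters)
rng = np.random.default_rng(0)
normal = rng.multivariate_normal([0.8, -0.3, 0.1], 0.01 * np.eye(3), 500)
mean, cov = normal.mean(axis=0), np.cov(normal.T)

test = np.vstack([normal[:3], [[0.4, 0.2, 0.5]]])   # last row: degraded state
print(mahalanobis_alarm(test, mean, cov))           # [False False False  True]
```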

  15. Adjustment and Characterization of an Original Model of Chronic Ischemic Heart Failure in Pig

    Directory of Open Access Journals (Sweden)

    Laurent Barandon

    2010-01-01

    Full Text Available We present and characterize an original experimental model for creating chronic ischemic heart failure in the pig. Two ameroid constrictors were placed around the LAD and the circumflex artery. Two months after surgery, pigs presented poor LV function associated with severe mitral valve insufficiency. Echocardiography analysis showed substantial anomalies in radial and circumferential deformations, on both the anterior and lateral surfaces of the heart. These functional anomalies were coupled with perfusion anomalies observed in echocardiography after injection of contrast medium. Histological analysis showed no evidence of myocardial infarction. Our findings suggest that we were able to create and stabilize a chronic ischemic heart failure model in the pig. This model represents a useful tool for the development of new medical or surgical treatments in this field.

  16. Continuous-Time Semi-Markov Models in Health Economic Decision Making : An Illustrative Example in Heart Failure Disease Management

    NARCIS (Netherlands)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease

  17. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    Science.gov (United States)

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
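
    The simplest building block of the analysis, a maximum-likelihood exponential fit to inter-event times with its AIC, is sketched below on invented age gaps; the paper's full machinery additionally handles age-dating uncertainty and the open intervals before the first and after the last event, which this ignores.

```python
import numpy as np

def exp_fit(gaps):
    """Maximum-likelihood exponential fit to inter-event times.

    Returns (rate, log-likelihood, AIC with k = 1 parameter).
    """
    lam = 1.0 / np.mean(gaps)                        # MLE rate
    ll = len(gaps) * np.log(lam) - lam * np.sum(gaps)
    return lam, ll, 2.0 * 1 - 2.0 * ll

# Hypothetical inter-event times (kyr) between dated mass-transport deposits
gaps = np.array([12.0, 5.0, 21.0, 9.0, 15.0, 7.0, 18.0])
lam, ll, aic = exp_fit(gaps)
print(f"rate = {lam:.3f} per kyr, "
      f"mean return time = {1.0 / lam:.1f} kyr, AIC = {aic:.1f}")
```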

  18. 20 CFR 404.1267 - Failure to make timely payments-for wages paid prior to 1987.

    Science.gov (United States)

    2010-04-01

    ... Governments If A State Fails to Make Timely Payments-for Wages Paid Prior to 1987 § 404.1267 Failure to make... to the State under the other provision of the Social Security Act. [53 FR 32976, Aug. 29, 1988, as... paid prior to 1987. 404.1267 Section 404.1267 Employees' Benefits SOCIAL SECURITY ADMINISTRATION...

  19. Application of different failure criteria in fuel pin modelling and consequences for overpower transients in LMFBRs

    International Nuclear Information System (INIS)

    Kuczera, B.; Royl, P.

    1975-01-01

    The CAPRI-2 code system for the analysis of hypothetical core disruptive accidents in LMFBRs has recently been coupled with the transient deformation model BREDA-2. The new code system determines thermal and mechanical loads under transient conditions for both fresh and irradiated fuel and cladding, taking into account fuel restructuring as well as effects from fission gas and fuel and clad swelling. The system has been used for the analysis of mild uncontrolled overpower transients in the SNR-300 to predict failure, and to initialize and calculate the subsequent fuel-coolant interaction (FCI). Thirteen channels have been coupled by point kinetics for the whole-core analysis. Three failure mechanisms and their influence on the accident sequence have been investigated: clad melt-through; clad burst caused by internal pressure build-up; and clad straining due to differential thermal expansion between the fuel and clad cylinders. The results of these analyses show that each failure mechanism leads to a rather different failure and accident sequence. There is still a lack of experimental data from which failure thresholds can be derived. To obtain better predictions from the applied models, an improved understanding of fission gas release and its relation to fuel porosity, as well as better experimental data on fluence- and temperature-dependent rupture strains of the cladding material, are needed.

  20. A model for predicting embankment slope failures in clay-rich soils; A Louisiana example

    Science.gov (United States)

    Burns, S. F.

    2015-12-01

    It is well known that smectite-rich soils significantly reduce the stability of slopes. The question is how much smectite in the soil causes slope failures. A study of over 100 sites in north and south Louisiana, USA, compared slopes that failed during a major El Niño winter (heavy rainfall) in 1982-1983 to similar slopes that did not fail. Soils in the slopes were tested for per cent clay, liquid limit, plasticity index and semi-quantitative clay mineralogy. Slopes at High Risk of failure (85-90% chance of failure 8-15 years after construction) contained soils with a liquid limit > 54%, a plasticity index > 29%, and clay content > 47%. Slopes at Intermediate Risk (50-55% chance of failure in 8-15 years) contained soils with a liquid limit between 36% and 54%, a plasticity index between 16% and 19%, and clay content between 32% and 47%. Slopes at Low Risk of failure contained soils below these ranges (liquid limit < 36%, plasticity index < 16%, clay content < 32%). The practical recommendation is to test soil characteristics before construction. If the soils fall into the Low Risk classification, construct the embankment normally. If the soils fall into the High Risk classification, one will need to use lime stabilization or heat treatments to prevent failures. Soils in the Intermediate Risk class will have to be evaluated on a case by case basis.

  1. Definition of ACLF and inclusion criteria for extra-hepatic organ failure.

    Science.gov (United States)

    Wang, Xiaojing; Sarin, Shiv Kumar; Ning, Qin

    2015-07-01

    A prominent characteristic of ACLF is rapid hepatic disease progression with subsequent extra-hepatic organ failure, manifesting as either hepatic coma or hepatorenal syndrome, which is associated with a high mortality rate in a short time. The APASL definition mainly emphasizes recognizing patients with hepatic failure. These patients may subsequently develop extra-hepatic multisystem organ failure leading to high mortality. It is therefore worthwhile to identify the short interim period between the development of liver failure and the onset of extra-hepatic organ failure, the potential therapeutic 'golden window.' Interventions during this period may prevent the development of complications and eventually change the course of the illness. Organ failure is suggested to be a central component of ACLF and may behave differently from chronic decompensated liver disease. Clear and practical criteria for the inclusion of organ failure are urgently needed so that patients with these life-threatening complications can be treated in a timely and appropriate manner. Recent studies suggested that the scoring systems evaluating organ failure [acute physiology, age and chronic health evaluation (APACHE) and sequential organ failure assessment (SOFA) scores] work better than those addressing the severity of liver disease [Child-Pugh and model of end-stage liver disease (MELD) scores] in ACLF. However, a key problem remains that the former scoring systems are reflective of organ failure and not predictive, thus limiting their value as an early indication for intervention.

  2. Computer-assisted imaging algorithms facilitate histomorphometric quantification of kidney damage in rodent renal failure models

    Directory of Open Access Journals (Sweden)

    Marcin Klapczynski

    2012-01-01

    Introduction: Surgical 5/6 nephrectomy and adenine-induced kidney failure in rats are frequently used models of progressive renal failure. In both models, rats develop significant morphological changes in the kidneys, and quantification of these changes can be used to measure the efficacy of prophylactic or therapeutic approaches. In this study, the Aperio Genie Pattern Recognition technology, along with the Positive Pixel Count, Nuclear and Rare Event algorithms, was used to quantify histological changes in both rat renal failure models. Methods: Analysis was performed on digitized slides of whole kidney sagittal sections stained with either hematoxylin and eosin or immunohistochemistry with an anti-nestin antibody to identify glomeruli, regenerating tubular epithelium, and tubulointerstitial myofibroblasts. An anti-polymorphonuclear neutrophil (PMN) antibody was also used to investigate neutrophil tissue infiltration. Results: Image analysis allowed for rapid and accurate quantification of relevant histopathologic changes such as increased cellularity and expansion of glomeruli, renal tubular dilatation and degeneration, tissue inflammation, and mineral aggregation. The algorithms provided reliable and consistent results in both control and experimental groups and presented a quantifiable degree of damage associated with each model. Conclusion: These algorithms represent useful tools for the uniform and reproducible characterization of common histomorphologic features of renal injury in rats.

  3. Definition of containment failure

    International Nuclear Information System (INIS)

    Cybulskis, P.

    1982-01-01

    Core meltdown accidents of the types considered in probabilistic risk assessments (PRAs) have been predicted to lead to pressures that will challenge the integrity of containment structures. Review of a number of PRAs indicates considerable variation in the predicted probability of containment failure as a function of pressure. Since the results of PRAs are sensitive to the prediction of the occurrence and the timing of containment failure, better understanding of realistic containment capabilities and a more consistent approach to the definition of containment failure pressures are required. Additionally, since the size and location of the failure can also significantly influence the prediction of reactor accident risk, further understanding of likely failure modes is required. The thresholds and modes of containment failure may not be independent.

  4. Early Treatment Outcome in Failure to Thrive: Predictions from a Transactional Model.

    Science.gov (United States)

    Drotar, Dennis

    Children diagnosed with environmentally based failure to thrive early during their first year of life were seen at 12 and 18 months for assessment of psychological development (cognition, language, symbolic play, and behavior during testing). Based on a transactional model of outcome, factors reflecting biological vulnerability (wasting and…

  5. Organization-and-technological model of medical care delivered to patients with chronic heart failure

    Directory of Open Access Journals (Sweden)

    Kiselev A.R.

    2014-09-01

    An organization-and-technological model of medical care delivered to patients with chronic heart failure, based on the IDEF0 methodology and consistent with clinical guidelines, is presented.

  6. Constitutive modeling of void-growth-based tensile ductile failures with stress triaxiality effects

    KAUST Repository

    Mora Cordova, Angel

    2014-07-01

    In most metals and alloys, the evolution of voids has been generally recognized as the basic failure mechanism. Furthermore, stress triaxiality has been found to influence void growth dramatically. Besides strain intensity, it is understood to be the most important factor that controls the initiation of ductile fracture. We include sensitivity to stress triaxiality in a variational porous plasticity model, which was originally derived for hydrostatic expansion. Under loading conditions other than hydrostatic deformation, we allow the critical pressure for voids to be exceeded so that the growth due to plasticity becomes dependent on the stress triaxiality. The limitations of the spherical void growth assumption are investigated. Our improved constitutive model is validated through good agreement with experimental data. Its capacity for reproducing realistic failure patterns is also indicated by a numerical simulation of a compact tension (CT) test. © 2013 Elsevier Inc.
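
    For readers unfamiliar with the driving quantity, stress triaxiality is the ratio of hydrostatic (mean) stress to the von Mises equivalent stress. A short generic sketch of that standard continuum-mechanics definition, not the authors' code:

```python
# Generic definition of stress triaxiality: eta = sigma_mean / sigma_vonMises.
import numpy as np

def triaxiality(sigma: np.ndarray) -> float:
    """Stress triaxiality of a 3x3 Cauchy stress tensor."""
    mean = np.trace(sigma) / 3.0
    dev = sigma - mean * np.eye(3)                 # deviatoric part
    von_mises = np.sqrt(1.5 * np.tensordot(dev, dev))
    return mean / von_mises

print(triaxiality(np.diag([200.0, 0.0, 0.0])))      # uniaxial tension: 1/3
print(triaxiality(np.diag([400.0, 150.0, 150.0])))  # notched (constrained): ~0.93
```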

  7. Enhanced stability of steep channel beds to mass failure and debris flow initiation

    Science.gov (United States)

    Prancevic, J.; Lamb, M. P.; Ayoub, F.; Venditti, J. G.

    2015-12-01

    Debris flows dominate bedrock erosion and sediment transport in very steep mountain channels, and are often initiated from failure of channel-bed alluvium during storms. While several theoretical models exist to predict mass failures, few have been tested because observations of in-channel bed failures are extremely limited. To fill this gap in our understanding, we performed laboratory flume experiments to identify the conditions necessary to initiate bed failures in non-cohesive sediment of different sizes (D = 0.7 mm to 15 mm) on steep channel-bed slopes (S = 0.45 to 0.93) and in the presence of water flow. In beds composed of sand, failures occurred under sub-saturated conditions on steep bed slopes (S > 0.5) and under super-saturated conditions at lower slopes. In beds of gravel, however, failures occurred only under super-saturated conditions at all tested slopes, even those approaching the dry angle of repose. Consistent with theoretical models, mass failures under super-saturated conditions initiated along a failure plane approximately one grain-diameter below the bed surface, whereas the failure plane was located near the base of the bed under sub-saturated conditions. However, all experimental beds were more stable than predicted by 1-D infinite-slope stability models. In partially saturated sand, enhanced stability appears to result from suction stress. Enhanced stability in gravel may result from turbulent energy losses in pores or increased granular friction for failures that are shallow with respect to grain size. These grain-size dependent effects are not currently included in stability models for non-cohesive sediment, and they may help to explain better the timing and location of debris flow occurrence.

  8. Time-dependent crack growth and fracture in concrete

    International Nuclear Information System (INIS)

    Zhou Fan Ping.

    1992-02-01

    The objectives of this thesis are to study time-dependent fracture behaviour in concrete. The thesis consists of an experimental study, constitutive modelling and numerical analysis. The experimental study was undertaken to investigate the influences of time on material properties of the fracture process zone and on crack growth and fracture in plain concrete structures. The experiments include tensile relaxation tests, bending tests on notched beams to determine fracture energy at varying deflection rates, and sustained bending and compact tension tests. From the tensile relaxation tests, the envelope of the σ-w relation does not seem to be influenced by holding periods, though some local detrimental effect does occur. Fracture energy seems to decrease as rates become slower. In the sustained loading tests, deformation (deflection or CMOD) growth curves display three stages, as usually observed in a creep rupture test. The secondary stage dominates the whole failure lifetime, and the secondary deformation rate appears to have good correlation with the failure lifetime. A crack model for time-dependent fracture is proposed, applying the idea of the Fictitious Crack Model. In this model, a modified Maxwell model is introduced for the fracture process zone, incorporating the static σ-w curve as a failure criterion, based on the observations of the tensile relaxation tests. The time-dependent σ-w curve is expressed in an incremental law. The proposed model has been implemented in a finite element program and applied to simulating sustained flexural and compact tension tests. Numerical analysis includes simulations of crack growth, load-CMOD curves, stress-failure lifetime curves, size effects on failure life, etc. The numerical results indicate that the model seems able to properly predict the main features of time-dependent fracture behaviour in concrete, as compared with the experimental results. 97 refs.

  9. Methods for dependency estimation and system unavailability evaluation based on failure data statistics

    International Nuclear Information System (INIS)

    Azarm, M.A.; Hsu, F.; Martinez-Guridi, G.; Vesely, W.E.

    1993-07-01

    This report introduces a new perspective on the basic concept of dependent failures where the definition of dependency is based on clustering in failure times of similar components. This perspective has two significant implications: first, it relaxes the conventional assumption that dependent failures must be simultaneous and result from a severe shock; second, it allows the analyst to use all the failures in a time continuum to estimate the potential for multiple failures in a window of time (e.g., a test interval), therefore arriving at a more accurate value for system unavailability. In addition, the models developed here provide a method for plant-specific analysis of dependency, reflecting the plant-specific maintenance practices that reduce or increase the contribution of dependent failures to system unavailability. The proposed methodology can be used for screening analysis of failure data to estimate the fraction of dependent failures among the failures. In addition, the proposed method can evaluate the impact of the observed dependency on system unavailability and plant risk. The formulations derived in this report have undergone various levels of validation through computer simulation studies and pilot applications. The pilot applications of these methodologies showed that the contribution of dependent failures of diesel generators in one plant was negligible, while in another plant it was quite significant. It also showed that in the plant with a significant contribution of dependency to Emergency Power System (EPS) unavailability, the contribution changed with time. Similar findings were reported for the containment fan cooler breakers. Drawing such conclusions about system performance would not have been possible with any other reported dependency methodologies.

  10. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, a combined response surface and importance sampling method was applied to calculate the parameter failure probability of a thermodynamic system. A mathematical model was presented for parameter failure of the physical process in the thermodynamic system, from which the combined response surface and importance sampling algorithm was established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system were also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, because it achieves satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method alone. (authors)
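
    A hedged sketch of the general idea: a least-squares quadratic response surface combined with a shifted-density importance-sampling estimate on a toy two-variable limit state. The surrogate basis, the shift point, and the limit state are illustrative assumptions, not the authors' formulation.

```python
# Quadratic response surface fitted by least squares, then importance
# sampling with a density shifted to the design point of a toy limit state
# g(x) < 0 == failure.
import numpy as np

rng = np.random.default_rng(0)

def g(x):                       # stand-in for an expensive system model
    return 5.0 - x[..., 0] - 2.0 * x[..., 1]

# 1) Fit a quadratic response surface from a small experimental design.
X = 2.0 * rng.normal(size=(30, 2))
def basis(x):
    return np.column_stack([np.ones(len(x)), x, x**2,
                            (x[:, 0] * x[:, 1])[:, None]])
coef, *_ = np.linalg.lstsq(basis(X), g(X), rcond=None)

# 2) Importance sampling around the design point of the toy g (here (1, 2)).
shift = np.array([1.0, 2.0])
Z = rng.normal(size=(100_000, 2)) + shift
w = np.exp(-0.5 * (Z**2).sum(1)) / np.exp(-0.5 * ((Z - shift)**2).sum(1))
pf = np.mean((basis(Z) @ coef < 0) * w)
print(f"estimated failure probability: {pf:.3e}")   # exact for this g: ~1.27e-2
```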

  11. Color Shift Failure Prediction for Phosphor-Converted White LEDs by Modeling Features of Spectral Power Distribution with a Nonlinear Filter Approach

    Directory of Open Access Journals (Sweden)

    Jiajie Fan

    2017-07-01

    With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from the luminous efficiency to color quality. However, most of the current available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source at a range of visible wavelength, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED's optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc)-white LEDs, and also can estimate the residual color life.

  12. Color Shift Failure Prediction for Phosphor-Converted White LEDs by Modeling Features of Spectral Power Distribution with a Nonlinear Filter Approach.

    Science.gov (United States)

    Fan, Jiajie; Mohamed, Moumouni Guero; Qian, Cheng; Fan, Xuejun; Zhang, Guoqi; Pecht, Michael

    2017-07-18

    With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from the luminous efficiency to color quality. However, most of the current available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source at a range of visible wavelength, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED's optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc)-white LEDs, and also can estimate the residual color life.
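
    The feature-extraction step can be illustrated with a two-peak fit to a synthetic white-LED spectrum; the choice of two Gaussian peaks (blue chip plus phosphor) is an assumption here, since the abstract does not specify the paper's exact peak functions.

```python
# Sketch of the SPD feature-extraction step: fit a white-LED spectrum as a
# blue-chip peak plus a phosphor peak (two Gaussians, an assumption).
import numpy as np
from scipy.optimize import curve_fit

def two_peak(wl, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian peaks over wavelength wl (nm)."""
    return (a1 * np.exp(-0.5 * ((wl - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((wl - mu2) / s2) ** 2))

np.random.seed(0)
wl = np.linspace(380, 780, 401)
spd = two_peak(wl, 1.0, 450, 10, 0.6, 560, 50) + 0.01 * np.random.randn(wl.size)

params, _ = curve_fit(two_peak, wl, spd, p0=[1, 450, 15, 0.5, 550, 60])
print("fitted peak features:", np.round(params, 2))
# Tracking these six features over aging time with a state-space model and
# a particle filter yields the color-shift prognostic described above.
```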

  13. Application of mobile blood purification system in the treatment of acute renal failure dog model in the field environment

    Directory of Open Access Journals (Sweden)

    Zhi-min ZHANG

    2014-01-01

    Objective: To evaluate the stability, safety and efficacy of a mobile blood purification system in the treatment of an acute renal failure dog model in the field environment. Methods: The acute renal failure model was established in 4 dogs by bilateral nephrectomy, and was thereafter treated with the mobile blood purification system. The evaluation of the functional indices of the mobile blood purification system was performed after a short-time (2 hours) and a conventional (4 hours) dialysis treatment. Results: The mobile blood purification system ran stably in the field environment at a blood flow of 150-180ml/min, a dialysate velocity of 2000ml/h, a replacement fluid velocity of 2000ml/h, and an ultrafiltration rate of 100-200ml/h. All the functions of the alarming system worked well, including the static upper limit alarm of ultrafiltration pressure (>100mmHg), the upper limit alarm of ambulatory arterial pressure (>400mmHg), the upper limit alarm of ambulatory venous pressure (>400mmHg), the bubble alarm of vascular access, the bubble alarm during the infusion of solutions, the pressure alarm at the substitution pump segment and the blood leaking alarm. The vital signs of the 4 dogs with acute renal failure kept stable during the treatment. After the treatment, a remarkable decrease was seen in the levels of serum urea nitrogen, creatinine and serum potassium (P<0.05). Conclusions: The mobile blood purification system runs normally even in a field environment. It is a flexible and portable device with a great performance in safety and stability in the treatment of acute renal failure. DOI: 10.11855/j.issn.0577-7402.2013.12.15

  14. A joint model for longitudinal and time-to-event data to better assess the specific role of donor and recipient factors on long-term kidney transplantation outcomes.

    Science.gov (United States)

    Fournier, Marie-Cécile; Foucher, Yohann; Blanche, Paul; Buron, Fanny; Giral, Magali; Dantan, Etienne

    2016-05-01

    In renal transplantation, serum creatinine (SCr) is the main biomarker routinely measured to assess patient health, with chronic increases being strongly associated with long-term graft failure risk (death with a functioning graft or return to dialysis). Joint modeling may be useful to identify the specific role of risk factors in the chronic evolution of kidney transplant recipients: some can be related to the SCr evolution, finally leading to graft failure, whereas others can be associated with graft failure without any modification of SCr. Sample data for 2749 patients transplanted between 2000 and 2013 with a functioning kidney at 1-year post-transplantation were obtained from the DIVAT cohort. A shared random effect joint model for longitudinal SCr values and time to graft failure was performed. We show that graft failure risk depended on both the current value and the slope of the SCr. Patients with a deceased-donor graft seemed to have a higher SCr increase, as did patients with a history of diabetes, while no significant association of these two features with graft failure risk was found. Patients with a second graft were at higher risk of graft failure, independent of changes in SCr values. Anti-HLA immunization was associated with both processes simultaneously. Joint models for repeated and time-to-event data bring new opportunities to improve the epidemiological knowledge of chronic diseases. For instance, in renal transplantation, several features should receive additional attention, as we demonstrated their correlation with graft failure risk was independent of the SCr evolution.
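
    Fitting the full shared-random-effect joint model requires specialized software (e.g., the R packages JM or joineRML). A commonly used, cruder alternative, sketched below with lifelines on invented data, treats SCr as a time-varying covariate in a Cox model; unlike a joint model, this ignores measurement error in SCr and the informative nature of dropout, which is precisely what motivates joint modeling.

```python
# Rough stand-in for the joint model: SCr as a time-varying covariate in a
# Cox model, long (start, stop] format. All data below are invented.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.DataFrame({
    "id":            [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4],
    "start":         [0, 1, 2, 0, 1, 2, 0, 1, 0, 1, 2],
    "stop":          [1, 2, 2.5, 1, 2, 3, 1, 1.6, 1, 2, 3],
    "scr":           [110, 150, 200, 100, 105, 110, 140, 120, 130, 135, 128],
    "graft_failure": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="graft_failure",
        start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratio for the current SCr value
```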

  15. Effect of eplerenone on serum TNF-α levels in adriamycin induced heart failure male rat models

    International Nuclear Information System (INIS)

    Xuan Nan; Song Liping; Xing Haiyan

    2009-01-01

    Objective: To investigate the effect of eplerenone on serum TNF-α levels in adriamycin-induced heart failure male rat models. Methods: Forty male rat models of adriamycin-induced heart failure were prepared with weekly intraperitoneal injections of adriamycin (4 mg/kg) for six weeks. Twenty surviving models were randomly divided into two groups: (1) an eplerenone-treated group, n=10, treated with gavage of eplerenone 200 mg/kg/d for 12 weeks; (2) a non-treated group, n=10. All the surviving models (group (1) n=8, group (2) n=6) were sacrificed after 12 weeks, with left ventricular hemodynamic function parameters tested and serum TNF-α levels measured. Ten male rats without adriamycin administration served as controls. Results: Left ventricular hemodynamic parameters in the non-treated group were significantly worse than those in controls (P<0.05). The parameters in the eplerenone-treated group were significantly better than those in the non-treated group (P<0.05). The serum TNF-α levels in the non-treated group were significantly higher than those in controls (P<0.05). TNF-α levels in the eplerenone group were significantly lower than those in the non-treated group (P<0.05). Conclusion: Eplerenone could reduce the serum TNF-α levels in rat models of heart failure. (authors)

  16. Prestudy - Development of trend analysis of component failure

    International Nuclear Information System (INIS)

    Poern, K.

    1995-04-01

    The Bayesian trend analysis model that has been used for the computation of initiating event intensities (I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of aging it is often more relevant to use times between failures for a given component as input, where by 'time' is meant a quantity that best characterizes the age of the component (calendar time, operating time, number of activations, etc). Therefore, it has been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application to an introductory ageing analysis of centrifugal pumps defined in Table 5 of the T-book. The application in turn directs attention to the need for further development of both the trend model and the data base.
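
    As a point of contrast with the Bayesian model, the classical Laplace test gives a quick frequentist check for trend in a sequence of failure times; a positive statistic suggests deteriorating (aging) behaviour. The pump failure times below are invented.

```python
# Classical Laplace trend test on failure times: U > 0 suggests aging,
# U < 0 improvement. A standard check, not the report's Bayesian model.
import numpy as np

def laplace_trend(event_times, horizon: float) -> float:
    t = np.asarray(event_times, dtype=float)
    return (t.mean() - horizon / 2.0) / (horizon * np.sqrt(1.0 / (12.0 * t.size)))

pump_failures = [310.0, 720.0, 1050.0, 1280.0, 1460.0, 1600.0]  # hours
u = laplace_trend(pump_failures, horizon=1700.0)
print(f"Laplace U = {u:.2f}")   # |U| > 1.96 -> significant trend at the 5% level
```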

  17. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    Energy Technology Data Exchange (ETDEWEB)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture-mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.

  18. Utility of the Seattle Heart Failure Model in patients with advanced heart failure.

    Science.gov (United States)

    Kalogeropoulos, Andreas P; Georgiopoulou, Vasiliki V; Giamouzis, Grigorios; Smith, Andrew L; Agha, Syed A; Waheed, Sana; Laskar, Sonjoy; Puskas, John; Dunbar, Sandra; Vega, David; Levy, Wayne C; Butler, Javed

    2009-01-27

    The aim of this study was to validate the Seattle Heart Failure Model (SHFM) in patients with advanced heart failure (HF). The SHFM was developed primarily from clinical trial databases and extrapolated the benefit of interventions from published data. We evaluated the discrimination and calibration of SHFM in 445 advanced HF patients (age 52 +/- 12 years, 68.5% male, 52.4% white, ejection fraction 18 +/- 8%) referred for cardiac transplantation. The primary end point was death (n = 92), urgent transplantation (n = 14), or left ventricular assist device (LVAD) implantation (n = 3); a secondary analysis was performed on mortality alone. Patients were receiving optimal therapy (angiotensin-II modulation 92.8%, beta-blockers 91.5%, aldosterone antagonists 46.3%), and 71.0% had an implantable device (defibrillator 30.4%, biventricular pacemaker 3.4%, combined 37.3%). During a median follow-up of 21 months, 109 patients (24.5%) had an event. Although discrimination was adequate (c-statistic >0.7), the SHFM overall underestimated absolute risk (observed vs. predicted event rate: 11.0% vs. 9.2%, 21.0% vs. 16.6%, and 27.9% vs. 22.8% at 1, 2, and 3 years, respectively). Risk underprediction was more prominent in patients with an implantable device. The SHFM had different calibration properties in white versus black patients, leading to net underestimation of absolute risk in blacks. Race-specific recalibration improved the accuracy of predictions. When analysis was restricted to mortality, the SHFM exhibited better performance. In patients with advanced HF, the SHFM offers adequate discrimination, but absolute risk is underestimated, especially in blacks and in patients with devices. This is more prominent when including transplantation and LVAD implantation as an end point.
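
    The two properties evaluated, discrimination and calibration, can be reproduced on any risk model's output. A minimal sketch on simulated predictions, with binary AUC standing in for the survival c-statistic and calibration-in-the-large as the observed-versus-predicted comparison:

```python
# Discrimination and calibration checks on simulated (not SHFM) output.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
predicted_risk = rng.uniform(0.02, 0.45, size=445)   # model's 1-year risks
events = rng.random(445) < 1.2 * predicted_risk      # truth: model underpredicts

print(f"c-statistic (discrimination): {roc_auc_score(events, predicted_risk):.2f}")
print(f"predicted event rate: {predicted_risk.mean():.1%}")
print(f"observed event rate:  {events.mean():.1%}")  # higher -> underestimation
```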

  19. The failure trace archive : enabling comparative analysis of failures in diverse distributed systems

    NARCIS (Netherlands)

    Kondo, D.; Javadi, B.; Iosup, A.; Epema, D.H.J.

    2010-01-01

    With the increasing functionality and complexity of distributed systems, resource failures are inevitable. While numerous models and algorithms for dealing with failures exist, the lack of public trace data sets and tools has prevented meaningful comparisons. To facilitate the design, validation, and comparison of fault-tolerant models and algorithms, the Failure Trace Archive collects and makes publicly available failure traces from diverse distributed systems.

  20. Use of quinine and mortality-risk in patients with heart failure

    DEFF Research Database (Denmark)

    Gjesing, Anne; Gislason, Gunnar H.; Christensen, Stefan B.

    2015-01-01

    PURPOSE: Leg cramps are common in patients with heart failure. Quinine is frequently prescribed in low doses to these patients, but the safety of this practice is unknown. We studied the outcomes associated with use of quinine in a nationwide cohort of patients with heart failure. METHODS: Through individual-level linkage of Danish national registries, we identified patients discharged from first-time hospitalization for heart failure in 1997-2010. We estimated the risk of mortality associated with quinine treatment by time-dependent Poisson regression models. RESULTS: A total of 135 529 patients were included, with 14 510 patients (11%) using quinine at some point. During a median follow-up time of 989 days (interquartile range 350-2004), 88 878 patients (66%) died. Patients receiving quinine had a slightly increased mortality risk, adjusted incidence rate ratio (IRR) 1.04 (95% confidence interval [CI] …).
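
    The incidence-rate-ratio machinery here is standard Poisson regression with a person-time offset. A minimal sketch on invented aggregate counts; the study's time-dependent modeling would additionally split each patient's person-time across quinine-use states.

```python
# Poisson regression with a person-time offset; counts are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

agg = pd.DataFrame({
    "quinine_use":  [0, 1],
    "deaths":       [80_000, 8_878],
    "person_years": [310_000.0, 32_000.0],
})

fit = smf.glm("deaths ~ quinine_use", data=agg,
              family=sm.families.Poisson(),
              offset=np.log(agg["person_years"])).fit()
print(f"IRR for quinine use: {np.exp(fit.params['quinine_use']):.3f}")
```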

  1. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

    A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  2. Failure modes and effects criticality analysis and accelerated life testing of LEDs for medical applications

    Science.gov (United States)

    Sawant, M.; Christou, A.

    2012-12-01

    While use of LEDs in fiber optics and lighting applications is common, their use in medical diagnostic applications is not very extensive. Since the precise value of light intensity will be used to interpret patient results, understanding failure modes [1-4] is very important. We used the Failure Modes and Effects Criticality Analysis (FMECA) tool to identify the critical failure modes of the LEDs. FMECA involves identification of various failure modes, their effects on the system (LED optical output in this context), their frequency of occurrence, severity and the criticality of the failure modes. The competing failure modes/mechanisms were degradation of: the active layer (where electron-hole recombination occurs to emit light), the electrodes (providing electrical contact to the semiconductor chip), the indium tin oxide (ITO) surface layer (used to improve current spreading and light extraction), the plastic encapsulation (protective polymer layer), and the packaging (bond wires, heat sink separation). A FMECA table is constructed and the criticality is calculated by estimating the failure effect probability (β), the failure mode ratio (α), the failure rate (λ) and the operating time. Once the critical failure modes were identified, the next steps were generating prior time-to-failure distributions and comparing them with our accelerated life test data. To generate the prior distributions, data and results from previous investigations were utilized [5-33], where reliability test results of similar LEDs were reported. From the graphs or tabular data, we extracted the time required for the optical power output to reach 80% of its initial value. This is our failure criterion for the medical diagnostic application. Analysis of published data for different LED materials (AlGaInP, GaN, AlGaAs), semiconductor structures (DH, MQW) and modes of testing (DC, pulsed) was carried out. The data was categorized according to the materials system and LED structure, such as AlGaInP-DH-DC, Al
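
    The criticality number sketched here follows the usual MIL-STD-1629A form Cm = β·α·λ·t, matching the quantities listed in the abstract; the mode ratios and failure rates below are illustrative placeholders, not the paper's data.

```python
# MIL-STD-1629A-style failure-mode criticality: Cm = beta * alpha * lambda * t.
def criticality(beta: float, alpha: float, failure_rate: float,
                hours: float) -> float:
    """beta: failure-effect probability, alpha: failure-mode ratio,
    failure_rate: failures/hour, hours: operating time."""
    return beta * alpha * failure_rate * hours

modes = {  # hypothetical LED failure modes: (beta, alpha, lambda per hour)
    "active-layer degradation": (1.0, 0.40, 2e-6),
    "electrode degradation":    (0.8, 0.25, 2e-6),
    "encapsulant yellowing":    (0.5, 0.20, 2e-6),
}
for name, (b, a, lam) in modes.items():
    print(f"{name:26s} Cm = {criticality(b, a, lam, 10_000):.2e}")
```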

  3. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Yoshitaka; Ohtani, Masanori [Institute of Nuclear Safety System, Inc., Mihama, Fukui (Japan); Fujita, Yushi [TECNOVA Corp., Tokyo (Japan)

    2002-09-01

    In the nuclear power plant, much knowledge is acquired through probabilistic safety assessment (PSA) of a severe accident, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM using the decision-making failure probability of an emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment in PSA. However, there has been no suitable quantification method for PSA so far to obtain the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based errors. In this work, we developed a new method for quantification of the decision-making failure probability of an emergency organization deciding an AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) It could quantify the decision-making failure probability adjusted to PSA for general analysts, who do not necessarily possess professional human factors knowledge, by choosing suitable values for a basic failure probability and an error factor. (2) The decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and in the range of 0.10 to 0.19 using the detailed evaluation method, as the result of a trial evaluation based on severe accident analysis of a typical PWR plant; in a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) The failure probability using the screening evaluation method exceeded that using the detailed evaluation method with 99% probability theoretically, and the failure probability of AM in this study exceeded it in 100% of cases. From this result, it was shown that the decision-making failure probability was more conservative than the detailed evaluation method, and the screening evaluation method satisfied

  4. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka; Ohtani, Masanori; Fujita, Yushi

    2002-01-01

    In the nuclear power plant, much knowledge is acquired through probabilistic safety assessment (PSA) of a severe accident, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM using the decision-making failure probability of an emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment in PSA. However, there has been no suitable quantification method for PSA so far to obtain the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based errors. In this work, we developed a new method for quantification of the decision-making failure probability of an emergency organization deciding an AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) It could quantify the decision-making failure probability adjusted to PSA for general analysts, who do not necessarily possess professional human factors knowledge, by choosing suitable values for a basic failure probability and an error factor. (2) The decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and in the range of 0.10 to 0.19 using the detailed evaluation method, as the result of a trial evaluation based on severe accident analysis of a typical PWR plant; in a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) The failure probability using the screening evaluation method exceeded that using the detailed evaluation method with 99% probability theoretically, and the failure probability of AM in this study exceeded it in 100% of cases. From this result, it was shown that the decision-making failure probability was more conservative than the detailed evaluation method, and the screening evaluation method satisfied

  5. ASSESSING THE NON-FINANCIAL PREDICTORS OF THE SUCCESS AND FAILURE OF YOUNG FIRMS IN THE NETHERLANDS

    Directory of Open Access Journals (Sweden)

    Philip VERGAUWEN

    2005-01-01

    In this study, the Lussier (1995) success and failure prediction model is improved and tested on a sample of Dutch firms. A clearly defined business plan and work experience are added as variables, and, contrary to previous research, the discrete variables are dealt with appropriately this time. The results of this improved model show that product/service timing, planning, management experience, knowledge of marketing, economic timing, professional advice, and having a business partner are predictors of success and failure for young firms in the Netherlands.

  6. DCDS: A Real-time Data Capture and Personalized Decision Support System for Heart Failure Patients in Skilled Nursing Facilities.

    Science.gov (United States)

    Zhu, Wei; Luo, Lingyun; Jain, Tarun; Boxer, Rebecca S; Cui, Licong; Zhang, Guo-Qiang

    2016-01-01

    Heart disease is the leading cause of death in the United States. Heart failure disease management can improve health outcomes for elderly community-dwelling patients with heart failure. This paper describes DCDS, a real-time data capture and personalized decision support system for a Randomized Controlled Trial Investigating the Effect of a Heart Failure Disease Management Program (HF-DMP) in Skilled Nursing Facilities (SNF), a study funded by the NIH National Heart, Lung, and Blood Institute (NHLBI). The HF-DMP involves proactive weekly monitoring, evaluation, and management, following national HF guidelines. DCDS collects a wide variety of data, including 7 elements considered standard of care for patients with heart failure: documentation of left ventricular function, tracking of weight and symptoms, medication titration, discharge instructions, a 7-day follow-up appointment post SNF discharge, and patient education. We present the design and implementation of DCDS and describe our preliminary testing results.

  7. Optimal selective renewal policy for systems subject to propagated failures with global effect and failure isolation phenomena

    International Nuclear Information System (INIS)

    Maaroufi, Ghofrane; Chelbi, Anis; Rezg, Nidhal

    2013-01-01

    This paper considers a selective maintenance policy for multi-component systems for which a minimum level of reliability is required for each mission. Such systems need to be maintained between consecutive missions. The proposed strategy aims at selecting the components to be maintained (renewed) after the completion of each mission such that a required reliability level is warranted up to the next stop with the minimum cost, taking into account the time period allotted for maintenance between missions and the possibility to extend it while paying a penalty cost. This strategy is applied to binary-state systems subject to propagated failures with global effect, and failure isolation phenomena. A set of rules to reduce the solutions space for such complex systems is developed. A numerical example is presented to illustrate the modeling approach and the use of the reduction rules. Finally, the Monte-Carlo simulation is used in combination with the selective maintenance optimization model to deal with a number of successive missions

  8. Workflow interruptions, social stressors from supervisor(s) and attention failure in surgery personnel.

    Science.gov (United States)

    Pereira, Diana; Müller, Patrick; Elfering, Achim

    2015-01-01

    Workflow interruptions and social stressors among surgery personnel may cause attention failure at work, which may increase rumination about work issues during leisure time. The test of these assumptions should contribute to the understanding of exhaustion in surgery personnel and patient safety. Workflow interruptions and supervisor-related social stressors were tested as predictors of attention failure, which in turn predicts work-related rumination during leisure time. One hundred ninety-four theatre nurses, anaesthetists and surgeons from a Swiss university hospital participated in a cross-sectional survey. The participation rate was 58%. Structural equation modelling confirmed both indirect paths, from workflow interruptions and from social stressors via attention failure to rumination (both p < 0.05). The direct effects of workflow interruptions and social stressors on rumination could not be empirically supported. Workflow interruptions and social stressors at work are likely to trigger attention failure in surgery personnel. Work redesign and team intervention could help surgery personnel to maintain a high level of quality and patient safety and detach from work-related issues to recover during leisure time.

  9. Nonfasting Triglycerides, Low-Density Lipoprotein Cholesterol, and Heart Failure Risk

    DEFF Research Database (Denmark)

    Varbo, Anette; Nordestgaard, Børge G

    2018-01-01

    OBJECTIVE: The prevalence of heart failure is increasing in the aging population, and heart failure is a disease with large morbidity and mortality. There is, therefore, a need for identifying modifiable risk factors for prevention. We tested the hypothesis that high concentrations of nonfasting triglycerides and low-density lipoprotein cholesterol are associated with higher risk of heart failure in the general population. APPROACH AND RESULTS: We included 103 860 individuals from the Copenhagen General Population Study and 9694 from the Copenhagen City Heart Study in 2 prospective observational association studies. Nonfasting triglycerides and low-density lipoprotein cholesterol were measured at baseline. Individuals were followed for ≤23 years, during which time 3593 were diagnosed with heart failure. Hazard ratios were estimated using Cox proportional hazard regression models. In the Copenhagen …

  10. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage, unlike operational reliability: possible initial failures exist that are normally neglected in storage reliability models. In this paper, a new integrated technique is proposed to estimate and predict the storage reliability of products with possible initial failures: a non-parametric measure based on E-Bayesian estimates of current failure probabilities is combined with a parametric measure based on the exponential reliability function. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that reliability test data of stored products, including items unexamined before and during the storage process, are available, providing more accurate estimates of both the initial failure probability and the storage failure probability. For storage reliability prediction, which is the main concern in this field, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method, and a detailed comparison between the proposed and traditional methods, examining the rationality of assessment and prediction of storage reliability, is investigated. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying production quality.
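
    The parametric step can be sketched as fitting R(t) = (1 − p0)·exp(−λt) to pointwise reliability estimates at the test times; the data below are invented, and the paper's E-Bayesian estimator for the pointwise reliabilities is not reproduced here.

```python
# Fit R(t) = (1 - p0) * exp(-lam * t), with p0 the initial failure probability.
import numpy as np
from scipy.optimize import curve_fit

def storage_reliability(t, p0, lam):
    return (1.0 - p0) * np.exp(-lam * t)

t_years = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
r_hat = np.array([0.96, 0.95, 0.93, 0.89, 0.85, 0.82])   # estimated R(t_i)

(p0, lam), _ = curve_fit(storage_reliability, t_years, r_hat,
                         p0=[0.02, 0.02], bounds=([0.0, 0.0], [1.0, 1.0]))
print(f"initial failure probability p0 = {p0:.3f}, rate lam = {lam:.3f}/yr")
print(f"predicted R(10 yr) = {storage_reliability(10.0, p0, lam):.3f}")
```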

  11. Mechanical characterization and modeling of the deformation and failure of the highly crosslinked RTM6 epoxy resin

    Science.gov (United States)

    Morelle, X. P.; Chevalier, J.; Bailly, C.; Pardoen, T.; Lani, F.

    2017-08-01

    The nonlinear deformation and fracture of RTM6 epoxy resin is characterized as a function of strain rate and temperature under various loading conditions involving uniaxial tension, notched tension, uniaxial compression, torsion, and shear. The parameters of the hardening law depend on the strain-rate and temperature. The pressure-dependency and hardening law, as well as four different phenomenological failure criteria, are identified using a subset of the experimental results. Detailed fractography analysis provides insight into the competition between shear yielding and maximum principal stress driven brittle failure. The constitutive model and a stress-triaxiality dependent effective plastic strain based failure criterion are readily introduced in the standard version of Abaqus, without the need for coding user subroutines, and can thus be directly used as an input in multi-scale modeling of fibre-reinforced composite material. The model is successfully validated against data not used for the identification and through the full simulation of the crack propagation process in the V-notched beam shear test.

  12. Sensor failure and multivariable control for airbreathing propulsion systems. Ph.D. Thesis - Dec. 1979 Final Report

    Science.gov (United States)

    Behbehani, K.

    1980-01-01

    A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time-varying and time-invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. The failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed, and the effects of model degradation are studied.
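
    A textbook GLR detector for a hard-over (bias-jump) failure maximizes the likelihood ratio over the unknown onset time. The sketch below is a generic mean-jump version on Gaussian residuals, not the thesis' engine-specific formulation.

```python
# GLR test for a bias jump: residuals are N(0, sigma^2) before an unknown
# onset k and N(nu, sigma^2) after; maximize the log-likelihood ratio over k.
import numpy as np

def glr_mean_jump(residuals, sigma: float):
    """Return (max GLR statistic, estimated failure onset index)."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    best, k_hat = 0.0, -1
    for k in range(n):
        s = r[k:].sum()
        stat = s * s / (2.0 * sigma**2 * (n - k))
        if stat > best:
            best, k_hat = stat, k
    return best, k_hat

rng = np.random.default_rng(2)
r = rng.normal(0.0, 1.0, 200)
r[120:] += 3.0                              # simulated hard-over bias at i=120
stat, k = glr_mean_jump(r, sigma=1.0)
print(f"GLR = {stat:.1f}, declared onset index = {k}")
```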

  13. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need to not only capture the complex dynamic behavior of the system components, but must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution for modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis.
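
    The discrete-time idea can be shown in miniature: discretize each component's failure time into n intervals plus a "survives the mission" state, and combine the component distributions at the system node. The brute-force enumeration below for a two-component parallel system stands in for proper BN inference over conditional probability tables.

```python
# Discrete-time reliability of a parallel pair: fails when both fail.
import numpy as np

n, mission, lam1, lam2 = 10, 1000.0, 1e-3, 5e-4
edges = np.linspace(0.0, mission, n + 1)

def discretize(lam):
    """[P(fail in interval 0..n-1), P(survive mission)] for an exponential."""
    cdf = 1.0 - np.exp(-lam * edges)
    return np.append(np.diff(cdf), np.exp(-lam * mission))

p1, p2 = discretize(lam1), discretize(lam2)

p_sys = np.zeros(n + 1)
for i in range(n + 1):
    for j in range(n + 1):
        if i == n or j == n:
            k = n                  # a surviving component keeps the system up
        else:
            k = max(i, j)          # fails in the later component's interval
        p_sys[k] += p1[i] * p2[j]

print(f"P(system fails during mission) = {1.0 - p_sys[n]:.4f}")
# closed-form check: (1 - e^(-lam1*T)) * (1 - e^(-lam2*T)) = 0.2487
```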

  14. JACoW Online analysis for anticipated failure diagnostics of the CERN cryogenic systems

    CERN Document Server

    Gayet, Philippe; Bradu, Benjamin; Cirillo, Roberta

    2018-01-01

    The cryogenic system is one of the most critical components of the CERN Large Hadron Collider (LHC) and its associated experiments ATLAS and CMS. In the past years, the cryogenic team has improved the maintenance plan and the operation procedures and achieved a very high reliability. However, as the recovery time after failure remains the major issue for cryogenic availability, new developments must take place. A new online diagnostic tool is being developed to identify and anticipate failures of cryogenic field equipment, based on the acquired knowledge on dynamic simulation of cryogenic equipment and on previous data analytic studies. After having identified the most critical components, we will develop their associated models together with the signatures of their failure modes. The proposed tools will detect deviations between the actual systems and their models or identify preliminary failure signatures. This information will allow the operation team to take early mitigating actions before the failure occurs.

  15. Large animal model of functional tricuspid regurgitation in pacing induced end-stage heart failure.

    Science.gov (United States)

    Malinowski, Marcin; Proudfoot, Alistair G; Langholz, David; Eberhart, Lenora; Brown, Michael; Schubert, Hans; Wodarek, Jeremy; Timek, Tomasz A

    2017-06-01

    Functional tricuspid regurgitation (FTR) is common in patients with advanced heart failure and frequently complicates left ventricular assist device implantation, yet remains poorly understood. We set out to establish a large animal model of FTR that could serve as a research platform to investigate the pathogenesis of FTR associated with end-stage heart failure. Through a right thoracotomy, ten adult sheep underwent implantation of a pacemaker with an epicardial LV lead, five sonomicrometry crystals on the right ventricle, and left and right ventricular telemetry pressure sensors during a beating-heart off-pump procedure. After 5 ± 1 days of recovery, baseline haemodynamic, echocardiographic and sonomicrometry data were collected. Animals were paced thereafter at a rate of 220-240 beats/min until the development of heart failure and concomitant tricuspid regurgitation. Three animals died during the early recovery period and one during the pacing phase. Six surviving animals were paced for a mean of 14 ± 5 days. Cardiac function was significantly depressed compared to baseline, with LV ejection fraction falling from 69 ± 2% to 22 ± 4% (P < 0.05), and with dilatation of the tricuspid annulus (from 29.5 ± 1.6 to 36.5 ± 4.5 mm; P = 0.01) and the right ventricle (from 21.9 ± 0.2 to 30.3 ± 0.6 mm; P = 0.03). Sonomicrometry-derived contractility of the RV free wall was depressed, and at least moderate tricuspid insufficiency developed in all animals. Biventricular dysfunction, tricuspid annular dilatation and significant FTR were observed in our model of ovine tachycardia-induced cardiomyopathy. This animal model reflects the clinical situation of end-stage heart failure patients presenting for mechanical support. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  16. Failure analysis of the cement mantle in total hip arthroplasty with an efficient probabilistic method.

    Science.gov (United States)

    Kaymaz, Irfan; Bayrak, Ozgu; Karsan, Orhan; Celik, Ayhan; Alsaran, Akgun

    2014-04-01

    Accurate prediction of the long-term behaviour of cemented hip implants is very important, not only for patient comfort but also for the elimination of revision operations due to implant failure. Therefore, a more realistic computer model was generated and then used for both deterministic and probabilistic analyses of the hip implant in this study. The deterministic failure analysis was carried out for the most common failure states of the cement mantle. On the other hand, most of the design parameters of the cemented hip are inherently uncertain quantities. Therefore, a probabilistic failure analysis was also carried out, considering the fatigue failure of the cement mantle since it is the most critical failure state. However, probabilistic analysis generally requires a large amount of time; thus, a response surface method proposed in this study was used to reduce the computation time for the analysis of the cemented hip implant. The results demonstrate that using an efficient probabilistic approach can significantly reduce the computation time for the failure probability of the cement from several hours to minutes. The results also show that even though the deterministic failure analyses do not indicate any failure of the cement mantle, with high safety factors, the probabilistic analysis predicts a failure probability of the cement mantle of 8%, which must be considered when evaluating the success of cemented hip implants.

  17. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating from failure-on-demand data the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best.
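
    The "prior matching moments" estimator that performs well at small N is easy to state: treat the observed failure-on-demand fractions as draws from a beta(a, b) distribution and match their sample mean and variance. The sketch below uses invented demand data, and this crude version ignores the binomial sampling noise in the fractions, which the marginal-based methods account for.

```python
# Method-of-moments fit of a beta prior to failure-on-demand fractions.
import numpy as np

k = np.array([0, 1, 0, 2, 0, 1, 0, 0])            # failures on demand
n = np.array([50, 60, 40, 80, 55, 70, 45, 65])    # demands
p = k / n

m, v = p.mean(), p.var(ddof=1)
common = m * (1.0 - m) / v - 1.0
a_hat, b_hat = m * common, (1.0 - m) * common
print(f"beta prior: a = {a_hat:.3f}, b = {b_hat:.3f}")
print(f"implied mean failure-on-demand probability = {a_hat / (a_hat + b_hat):.4f}")
```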

  18. Modification of meander migration by bank failures

    Science.gov (United States)

    Motta, D.; Langendoen, E. J.; Abad, J. D.; García, M. H.

    2014-05-01

    Meander migration and planform evolution depend on the resistance to erosion of the floodplain materials. To date, research to quantify meandering river adjustment has largely focused on resistance to erosion properties that vary horizontally. This paper evaluates the combined effect of horizontal and vertical floodplain material heterogeneity on meander migration by simulating fluvial erosion and cantilever and planar bank mass failure processes responsible for bank retreat. The impact of stream bank failures on meander migration is conceptualized in our RVR Meander model through a bank armoring factor associated with the dynamics of slump blocks produced by cantilever and planar failures. Simulation periods smaller than the time to cutoff are considered, such that all planform complexity is caused by bank erosion processes and floodplain heterogeneity and not by cutoff dynamics. Cantilever failure continuously affects meander migration, because it is primarily controlled by the fluvial erosion at the bank toe. Hence, it impacts migration rates and meander shapes through the horizontal and vertical distribution of erodibility of floodplain materials. Planar failures are more episodic. However, in floodplain areas characterized by less cohesive materials, they can affect meander evolution in a sustained way and produce preferential migration patterns. Model results show that besides the hydrodynamics, bed morphology and horizontal floodplain heterogeneity, floodplain stratigraphy can significantly affect meander evolution, both in terms of migration rates and planform shapes. Specifically, downstream meander migration can either increase or decrease with respect to the case of a homogeneous floodplain; lateral migration generally decreases as result of bank protection due to slump blocks; and the effect on bend skewness depends on the location and volumes of failed bank material caused by cantilever and planar failures along the bends, with possible achievement of

  19. Morning surge of ventricular arrhythmias in a new arrhythmogenic canine model of chronic heart failure is associated with attenuation of time-of-day dependence of heart rate and autonomic adaptation, and reduced cardiac chaos.

    Science.gov (United States)

    Zhu, Yujie; Hanafy, Mohamed A; Killingsworth, Cheryl R; Walcott, Gregory P; Young, Martin E; Pogwizd, Steven M

    2014-01-01

    Patients with chronic heart failure (CHF) exhibit a morning surge in ventricular arrhythmias, but the underlying cause remains unknown. The aim of this study was to determine if heart rate dynamics, autonomic input (assessed by heart rate variability (HRV)) and nonlinear dynamics as well as their abnormal time-of-day-dependent oscillations in a newly developed arrhythmogenic canine heart failure model are associated with a morning surge in ventricular arrhythmias. CHF was induced in dogs by aortic insufficiency & aortic constriction, and assessed by echocardiography. Holter monitoring was performed to study time-of-day-dependent variation in ventricular arrhythmias (PVCs, VT), traditional HRV measures, and nonlinear dynamics (including detrended fluctuation analysis α1 and α2 (DFAα1 & DFAα2), correlation dimension (CD), and Shannon entropy (SE)) at baseline, as well as 240 days (240 d) and 720 days (720 d) following CHF induction. LV fractional shortening was decreased at both 240 d and 720 d. Both PVCs and VT increased with CHF duration and showed a morning rise (2.5-fold & 1.8-fold increase at 6 AM-noon vs midnight-6 AM) during CHF. The morning rise in HR at baseline was significantly attenuated by 52% with development of CHF (at both 240 d & 720 d). Morning rise in the ratio of low frequency to high frequency (LF/HF) HRV at baseline was markedly attenuated with CHF. DFAα1, DFAα2, CD and SE all decreased with CHF by 31, 17, 34 and 7%, respectively. Time-of-day-dependent variations in LF/HF, CD, DFAα1 and SE, observed at baseline, were lost during CHF. Thus in this new arrhythmogenic canine CHF model, attenuated morning HR rise, blunted autonomic oscillation, decreased cardiac chaos and complexity of heart rate, as well as aberrant time-of-day-dependent variations in many of these parameters were associated with a morning surge of ventricular arrhythmias.

  1. Sudden cardiac death and pump failure death prediction in chronic heart failure by combining ECG and clinical markers in an integrated risk model

    Science.gov (United States)

    Orini, Michele; Mincholé, Ana; Monasterio, Violeta; Cygankiewicz, Iwona; Bayés de Luna, Antonio; Martínez, Juan Pablo

    2017-01-01

    Background: Sudden cardiac death (SCD) and pump failure death (PFD) are common endpoints in chronic heart failure (CHF) patients, but prevention strategies are different. Currently used tools to specifically predict these endpoints are limited. We developed risk models to specifically assess SCD and PFD risk in CHF by combining ECG markers and clinical variables. Methods: The relation of clinical and ECG markers with SCD and PFD risk was assessed in 597 patients enrolled in the MUSIC (MUerte Súbita en Insuficiencia Cardiaca) study. ECG indices included: turbulence slope (TS), reflecting autonomic dysfunction; T-wave alternans (TWA), reflecting ventricular repolarization instability; and T-peak-to-end restitution (ΔαTpe) and T-wave morphology restitution (TMR), both reflecting changes in dispersion of repolarization due to heart rate changes. Standard clinical indices were also included. Results: The indices with the greatest SCD prognostic impact were gender, New York Heart Association (NYHA) class, left ventricular ejection fraction, TWA, ΔαTpe and TMR. For PFD, the indices were diabetes, NYHA class, ΔαTpe and TS. Using a model with only clinical variables, the hazard ratios (HRs) for SCD and PFD for patients in the high-risk group (fifth quintile of risk score) with respect to patients in the low-risk group (first and second quintiles of risk score) were both greater than 4. HRs for SCD and PFD increased to 9 and 11 when using a model including only ECG markers, and to 14 and 13 when combining clinical and ECG markers. Conclusion: The inclusion of ECG markers capturing complementary pro-arrhythmic and pump failure mechanisms into risk models based only on standard clinical variables substantially improves prediction of SCD and PFD in CHF patients. PMID:29020031
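
    The study's code is not published, but the general recipe (a proportional-hazards risk score built from clinical plus ECG covariates, with patients stratified into quintiles of the score) can be sketched with the open-source lifelines library. The input file and all column names are hypothetical:

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical cohort file: one row per patient, covariates numerically encoded
    df = pd.read_csv("chf_cohort.csv")

    clinical = ["gender", "nyha_class", "lvef", "diabetes"]
    ecg = ["ts", "twa", "delta_alpha_tpe", "tmr"]    # TS, TWA, (Δα)Tpe, TMR

    cph = CoxPHFitter()
    cph.fit(df[clinical + ecg + ["followup_years", "scd_event"]],
            duration_col="followup_years", event_col="scd_event")
    cph.print_summary()

    # Stratify patients into quintiles of the fitted risk score, as in the paper:
    # quintile 5 = high risk, quintiles 1-2 = low risk
    df["risk"] = cph.predict_partial_hazard(df)
    df["quintile"] = pd.qcut(df["risk"], 5, labels=False) + 1
    ```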

  2. Distributed fault-tolerant time-varying formation control for high-order linear multi-agent systems with actuator failures.

    Science.gov (United States)

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-11-01

    This paper investigates the fault-tolerant time-varying formation control problem for high-order linear multi-agent systems in the presence of actuator failures. Firstly, a fully distributed formation control protocol is presented to compensate for the influences of both bias faults and loss-of-effectiveness faults. Using adaptive online updating strategies, no global knowledge about the communication topology is required and the bounds of the actuator failures can be unknown. Then an algorithm is proposed to determine the control parameters of the fault-tolerant formation protocol, where the time-varying formation feasibility conditions and an approach to expand the feasible formation set are given. Furthermore, the stability of the proposed algorithm is proven using Lyapunov-like theory. Finally, two simulation examples are given to demonstrate the effectiveness of the theoretical results.
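
    As a much-simplified illustration of consensus-based time-varying formation control under a loss-of-effectiveness actuator fault, the sketch below uses single-integrator agents and a fixed gain; the paper's protocol additionally handles high-order dynamics, bias faults and fully distributed adaptive gains, none of which are reproduced here:

    ```python
    import numpy as np

    N, dt, steps = 4, 0.01, 2000
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]])        # undirected ring communication topology
    x = np.random.randn(N, 2)           # 2-D positions of single-integrator agents
    rho = np.ones(N)
    rho[2] = 0.4                        # agent 2 keeps only 40% actuator effectiveness
    k = 2.0                             # fixed consensus gain (adaptive in the paper)

    def formation(t):
        """Desired time-varying formation: a slowly rotating square."""
        ang = 0.2 * t + np.arange(N) * np.pi / 2
        h = np.stack([np.cos(ang), np.sin(ang)], axis=1)
        hdot = 0.2 * np.stack([-np.sin(ang), np.cos(ang)], axis=1)
        return h, hdot

    for s in range(steps):
        h, hdot = formation(s * dt)
        u = hdot.copy()                              # formation-derivative feedforward
        for i in range(N):
            for j in range(N):
                if A[i, j]:
                    u[i] -= k * ((x[i] - h[i]) - (x[j] - h[j]))
        x += dt * rho[:, None] * u                   # the fault scales the applied input

    h, _ = formation(steps * dt)
    err = (x - h) - (x - h).mean(axis=0)             # formation error up to a common shift
    print("residual formation error:", np.linalg.norm(err))
    ```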

  3. Fuel and coolant motions following pin failure: EPIC models and the PBE-5S experiment

    International Nuclear Information System (INIS)

    Garner, P.L.; Abramson, P.B.

    1979-01-01

    The EPIC computer code has been used to analyze the post-fuel-pin-failure behavior in the PBE-5S experiment performed at Sandia Laboratories. The effects of modeling uncertainties on the calculation are examined. The calculations indicate that the majority of the piston motion observed in the test is due to the initial pressurization of the coolant channel by fuel vapor at cladding failure. A more definitive analysis requires improvements in calculational capabilities and experiment diagnostics.

  4. On the importance of analyzing flood defense failures

    Directory of Open Access Journals (Sweden)

    Özer Işıl Ece

    2016-01-01

    Full Text Available Flood defense failures are rare events, but when they do occur they lead to significant amounts of damage. The defenses are usually designed for rather low-frequency hydraulic loading and as such are typically at least high enough to prevent overflow. When they fail, flood defenses like levees built with modern design codes usually fail either due to wave overtopping or due to geotechnical failure mechanisms such as instability or internal erosion. Subsequently, geotechnical failures could trigger an overflow, causing the breach to grow in size. Not only are the conditions relevant for these failure mechanisms highly uncertain; the model uncertainty in geomechanical models, internal erosion models and breach models is also high compared to other structural models. Hence, there is a need for better validation and calibration of models or, in other words, better insight into model uncertainty. As scale effects typically play an important role and full-scale testing is challenging and costly, historic flood defense failures can be used to provide insights into the real failure processes and conditions. The recently initiated SAFElevee project at Delft University of Technology aims to exploit this source of information by performing back analysis of levee failures at different levels of detail. Besides detailed process-based analyses, the project aims to investigate spatial and temporal patterns in deformation as a function of the hydrodynamic loading using satellite radar interferometry (i.e., PS-InSAR) in order to examine its relation with levee failure mechanisms. The project aims to combine probabilistic approaches with the mechanics of the various relevant failure mechanisms to reduce model uncertainty and propose improvements to assessment and design models. This paper describes the approach of the study to levee breach analysis and the use of satellites for breach initiation analysis, both adopted within the SAFElevee project.

  5. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma prior are used for failure rate estimation, and binomial data with beta prior are used for failure probability per demand estimation. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well as, if not better than, the alternative more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
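
    A minimal sketch of the prior-moment-matching idea for the gamma-Poisson case: estimate a gamma prior from the between-plant moments of the raw rates k_i/T_i (subtracting the average Poisson noise), then shrink each plant's estimate toward the prior. This illustrates the general approach rather than the published PREB algorithm, and the multi-plant data are invented:

    ```python
    import numpy as np

    def gamma_prior_moment_match(k, T):
        """Empirical-Bayes gamma prior for failure rates via moment matching:
        equate the prior mean/variance to the between-plant moments of the
        raw rate estimates k_i / T_i."""
        lam = k / T
        m = lam.mean()
        v = lam.var(ddof=1) - np.mean(k / T**2)   # subtract average Poisson noise
        if v <= 0:
            raise ValueError("no between-plant variability beyond Poisson noise")
        beta = m / v          # gamma rate parameter
        alpha = m * beta      # gamma shape parameter
        return alpha, beta

    # Hypothetical multi-plant data: k_i failures observed over T_i years
    k = np.array([0, 1, 5, 2, 0, 8], dtype=float)
    T = np.array([10.0, 8.0, 12.0, 9.0, 15.0, 11.0])
    a, b = gamma_prior_moment_match(k, T)
    post_mean = (a + k) / (b + T)     # shrunken plant-specific rate estimates
    print("prior:", round(a, 3), round(b, 3), "posterior means:", post_mean.round(4))
    ```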

  6. Failure rate modeling using fault tree analysis and Bayesian network: DEMO pulsed operation turbine study case

    International Nuclear Information System (INIS)

    Dongiovanni, Danilo Nicola; Iesmantas, Tomas

    2016-01-01

    Highlights: • RAMI (Reliability, Availability, Maintainability and Inspectability) assessment of the secondary heat transfer loop for a DEMO nuclear fusion plant. • Definition of a fault tree for a nuclear steam turbine operated in pulsed mode. • Turbine failure rate models updated by means of a Bayesian network reflecting the fault tree analysis in the considered scenario. • Sensitivity analysis on system availability performance. - Abstract: Availability will play an important role in the Demonstration Power Plant (DEMO) success from an economic and safety perspective. Availability performance is commonly assessed by Reliability, Availability, Maintainability, Inspectability (RAMI) analysis, strongly relying on the accurate definition of system component failure modes (FM) and failure rates (FR). Little component experience is available in fusion applications, therefore requiring the adaptation of literature FRs to fusion plant operating conditions, which may differ in several aspects. As a possible solution to this problem, a new methodology to extrapolate/estimate component failure rates under different operating conditions is presented. The DEMO balance-of-plant nuclear steam turbine operated in pulsed mode is considered as the study case. The methodology moves from the definition of a fault tree taking into account failure modes possibly enhanced by pulsed operation. The fault tree is then translated into a Bayesian network. A statistical model for the turbine system failure rate in terms of subcomponents' FRs is hence obtained, allowing for sensitivity analyses on the structured mixture of literature and unknown FR data, for which plausible value intervals are investigated to assess their impact on the whole turbine system FR. Finally, the impact of the resulting turbine system FR on plant availability is assessed exploiting a Reliability Block Diagram (RBD) model for a typical secondary cooling system implementing a Rankine cycle. Mean inherent availability
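
    The fault-tree-to-Bayesian-network translation can be sketched for a toy two-gate tree using the pgmpy library; the gate structure, component names and probabilities below are invented for illustration and are not the paper's DEMO turbine model:

    ```python
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Hypothetical top event: TURBINE fails if BEARING fails OR (BLADE AND CYCLING)
    bn = BayesianNetwork([("BLADE", "AND1"), ("CYCLING", "AND1"),
                          ("BEARING", "TURBINE"), ("AND1", "TURBINE")])

    def leaf(name, p_fail):                  # basic event: state 0 = ok, 1 = failed
        return TabularCPD(name, 2, [[1 - p_fail], [p_fail]])

    def gate(name, parents, kind):           # deterministic OR/AND gates as CPDs
        rows = {"OR":  [[1, 0, 0, 0], [0, 1, 1, 1]],
                "AND": [[1, 1, 1, 0], [0, 0, 0, 1]]}[kind]
        return TabularCPD(name, 2, rows, evidence=parents, evidence_card=[2, 2])

    bn.add_cpds(leaf("BLADE", 0.02), leaf("CYCLING", 0.30), leaf("BEARING", 0.01),
                gate("AND1", ["BLADE", "CYCLING"], "AND"),
                gate("TURBINE", ["BEARING", "AND1"], "OR"))
    assert bn.check_model()

    p_top = VariableElimination(bn).query(["TURBINE"]).values[1]
    print(f"P(turbine failure) = {p_top:.4f}")   # 1 - (1-0.01)*(1-0.02*0.30) ≈ 0.0159
    ```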

  8. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Science.gov (United States)

    Anggraeni, Novia Antika

    2015-04-01

    The testing of eruption time prediction is an effort to support volcanic disaster mitigation, especially on inhabited volcano slopes such as those of Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as the degree of seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the materials Failure Forecast Method (FFM), a predictive method introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990-2012. The data were plotted as graphs of inverse seismic energy rate versus time; the FFM graphical technique approach uses simple linear regression. Data quality control, used to increase the time precision, employs the correlation coefficient of the inverse seismic energy rate versus time. From the results of the graph analysis, the predicted eruption times deviate from the actual eruption times by between -2.86 and 5.49 days.
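
    A minimal sketch of the graphical technique described above: fit a straight line to the inverse seismic-energy rate versus time and extrapolate to its zero crossing, taken as the predicted eruption time, with the correlation coefficient serving as the quality control. The data values are invented:

    ```python
    import numpy as np

    # Hypothetical accelerating precursor: time in days, seismic energy rate
    # in arbitrary units per day. In Voight's theory (alpha = 2) the inverse
    # rate decreases linearly and reaches zero at the time of failure.
    t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
    rate = np.array([1.0, 1.3, 1.8, 2.6, 4.1, 7.5, 16.0])

    inv = 1.0 / rate
    slope, intercept = np.polyfit(t, inv, 1)     # simple linear regression
    t_eruption = -intercept / slope              # zero crossing of the inverse rate

    r = np.corrcoef(t, inv)[0, 1]                # quality control on linearity
    print(f"predicted eruption day: {t_eruption:.1f} (r = {r:.3f})")
    ```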

  9. SOR-ring failure

    International Nuclear Information System (INIS)

    Kitamura, Hideo

    1981-01-01

    It was in the autumn of 1976 that the SOR-ring (synchrotron radiation storage ring) commenced regular operation. Since then, operation has been interrupted by failures of the SOR-ring itself for a total of about 8 weeks. Failures and accidents have occurred most often in the vacuum system. These failure experiences are described for the vacuum, electromagnet, radio-frequency acceleration and beam transport systems, together with the resulting interruption periods. Eleven failures in the vacuum system are reported, such as bellows breakage in a heating-evacuating period, leakage from the bellows of straight-through valves (made in the U.S.A. and Japan), and leakage from the joint flange of the vacuum system. The longest interruption was 5 weeks, due to the failure of a domestically manufactured straight-through valve. The failures of the electromagnet system include breakage in a cooling-water system, a short circuit of a winding in the Q-magnet power transformer, and the blowing of a fuse protecting the deflection magnet power source at a current below its rating. The failures of the RF acceleration system include the breakage of an output electron tube, the breakage of a cavity ceramic, RF voltage fluctuation due to contact deterioration at a cavity electrode, and the failure of a grid bias power source. It is necessary to select highly reliable components for the vacuum system, because vacuum system failures require a longer time for recovery and are very likely to induce secondary and tertiary failures. (Wakatsuki, Y.)

  10. The impact of the time interval on in-vitro fertilisation success after failure of the first attempt.

    Science.gov (United States)

    Bayoglu Tekin, Y; Ceyhan, S T; Kilic, S; Korkmaz, C

    2015-05-01

    The aim of this study was to identify the optimal time interval for in-vitro fertilisation that would increase treatment success after failure of the first attempt. This retrospective study evaluated 454 consecutive cycles of 227 infertile women who had two consecutive attempts within a 6-month period at an IVF centre. Data were collected on duration of stimulation, consumption of gonadotropin, numbers of retrieved oocytes, mature oocytes, fertilised eggs and good-quality embryos on day 3/5 following oocyte retrieval, and clinical and ongoing pregnancy rates. There were significant increases in clinical pregnancy rates at 2-, 3- and 4-month intervals. The maximum increase was after two menstrual cycles (p = 0.001). The highest rate of ongoing pregnancy was in women who had the second attempt after the next menstrual cycle following failure of IVF (27.2%). After IVF failure, initiating the next attempt within 2-4 months increases the clinical pregnancy rates.

  11. Inflammation, Self-Regulation, and Health: An Immunologic Model of Self-Regulatory Failure.

    Science.gov (United States)

    Shields, Grant S; Moons, Wesley G; Slavich, George M

    2017-07-01

    Self-regulation is a fundamental human process that refers to multiple complex methods by which individuals pursue goals in the face of distractions. Whereas superior self-regulation predicts better academic achievement, relationship quality, financial and career success, and lifespan health, poor self-regulation increases a person's risk for negative outcomes in each of these domains and can ultimately presage early mortality. Given its centrality to understanding the human condition, a large body of research has examined cognitive, emotional, and behavioral aspects of self-regulation. In contrast, relatively little attention has been paid to specific biologic processes that may underlie self-regulation. We address this latter issue in the present review by examining the growing body of research showing that components of the immune system involved in inflammation can alter neural, cognitive, and motivational processes that lead to impaired self-regulation and poor health. Based on these findings, we propose an integrated, multilevel model that describes how inflammation may cause widespread biobehavioral alterations that promote self-regulatory failure. This immunologic model of self-regulatory failure has implications for understanding how biological and behavioral factors interact to influence self-regulation. The model also suggests new ways of reducing disease risk and enhancing human potential by targeting inflammatory processes that affect self-regulation.

  12. Serviceability Assessment for Cascading Failures in Water Distribution Network under Seismic Scenario

    Directory of Open Access Journals (Sweden)

    Qing Shuang

    2016-01-01

    Full Text Available The stability of water service is a hot point in industrial production, public safety, and academic research. The paper establishes a service evaluation model for the water distribution network (WDN). The serviceability is measured in three aspects: (1) the functionality of structural components under the disaster environment; (2) the recognition of the cascading failure process; and (3) the calculation of system reliability. The node and edge failures in the WDN are interrelated under seismic excitations. The cascading failure process is modeled through the balance of water supply and demand. The matrix-based system reliability (MSR) method is used to represent the system events and calculate the nonfailure probability. An example is used to illustrate the proposed method. The cascading failure processes with different node failures are simulated. The serviceability is analyzed, and the critical node can be identified. The results show that an aged network has a greater influence on system service under a seismic scenario, and that maintenance could improve the disaster resistance of the WDN. Priority should be given to controlling the time between the initial failure and the first secondary failure, since taking post-disaster emergency measures within this period can largely limit the spread of the cascade effect through the whole WDN.
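
    As a toy illustration of this kind of serviceability calculation, the sketch below enumerates pipe failure states of a small invented network and accumulates the probability of the states in which the demand node stays connected to the source; the paper's MSR method organizes exactly this enumeration into event and probability vectors:

    ```python
    from itertools import product
    import networkx as nx

    edges = [("src", "a"), ("src", "b"), ("a", "b"), ("a", "d"), ("b", "d")]
    p_fail = {e: 0.05 for e in edges}       # assumed seismic failure probability per pipe

    reliability = 0.0
    for states in product([0, 1], repeat=len(edges)):    # 1 = pipe survives
        prob = 1.0
        g = nx.Graph()
        g.add_nodes_from(["src", "a", "b", "d"])
        for e, s in zip(edges, states):
            prob *= (1 - p_fail[e]) if s else p_fail[e]
            if s:
                g.add_edge(*e)
        if nx.has_path(g, "src", "d"):                   # demand node still served
            reliability += prob

    print(f"P(demand node 'd' is served) = {reliability:.5f}")
    ```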

  13. (m, M) Machining system with two unreliable servers, mixed spares and common-cause failure

    OpenAIRE

    Jain, Madhu; Mittal, Ragini; Kumari, Rekha

    2015-01-01

    This paper deals with a multi-component machine repair model having a provision of warm standby units and a repair facility consisting of two heterogeneous servers (primary and secondary) to repair the failed units. The failure of operating and standby units may occur individually or due to some common cause. The primary server may fail partially before failing fully, whereas the secondary server is subject to complete failure only. The life times of servers and operating/standby units and their re...
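
    A far simpler relative of this model, sketched under stated assumptions: a machine-repair queue with identical repairable units, warm standbys and two identical servers, solved as a birth-death chain. The paper's heterogeneous servers, partial server failure and common-cause failures are all omitted:

    ```python
    import numpy as np

    m, S, c = 5, 2, 2             # operating units, warm standbys, repair servers
    lam, nu, mu = 0.1, 0.02, 1.0  # operating/standby failure rates, repair rate
    K = m + S                     # total units

    def birth(n):                 # aggregate failure rate with n units down
        oper = min(m, K - n)
        stby = max(0, K - n - m)
        return oper * lam + stby * nu

    def death(n):                 # aggregate repair rate with n units down
        return min(n, c) * mu

    # product-form steady-state solution of the birth-death chain
    w = np.ones(K + 1)
    for n in range(1, K + 1):
        w[n] = w[n - 1] * birth(n - 1) / death(n)
    p = w / w.sum()

    print("P(all units up) =", p[0].round(4),
          " E[units down] =", (np.arange(K + 1) * p).sum().round(3))
    ```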

  14. Life time test of a partial model of HTGR helium-helium heat exchanger

    International Nuclear Information System (INIS)

    Kitagawa, Masaki; Hattori, Hiroshi; Ohtomo, Akira; Teramae, Tetsuo; Hamanaka, Junichi; Itoh, Mitsuyoshi; Urabe, Shigemi

    1984-01-01

    The authors proposed a design guide for HTGR components and applied it to the design and construction of the 1.5 MWt helium-helium heat exchanger test loop for nuclear steel making, under the financial support of the Japanese Ministry of International Trade and Industry. In order to assure that the design method covers all conceivable failure modes and has enough safety margin, a series of life time tests of partial models may be needed. For this project, three types of model tests were performed. A life time test of a partial model consisting of the center manifold pipe and eight heat exchanger tubes is described in this report. A damage criterion with a set of material constants and a simplified method of stress-strain analysis for a stub tube under three-dimensional load were newly developed and used to predict the lives of each tube. The predicted lives were compared with the experimental lives, and good agreement was found between the two. The life time test model was evaluated according to the proposed design guide, and it was found that the guide has a safety factor of approximately 200 in life for this particular model. (author)

  15. Modelling of Diffuse Failure and Fluidization in geo materials and Geo structures; Modelizacion de la rotura y fluidificacion en geomateriales y geoestructuras

    Energy Technology Data Exchange (ETDEWEB)

    Pastor, M.

    2013-06-01

    Failure of geostructures is caused by changes in effective stresses induced by external loads (earthquakes, for instance), changes in the pore pressures (rain), in the geometry (erosion), or in material properties (chemical attack, degradation, weathering). Landslides can be analysed as the failure of a geostructure, the slope. There exist many alternative classifications of landslides, but we will consider here a simple classification into slides and flows. In the case of slides, the failure consists of the movement of a part of the slope, with deformations that concentrate in a narrow zone, the failure surface. This can be idealized as localized failure, and it is typical of overconsolidated or dense materials exhibiting softening. On the other hand, flows are made of fluidized materials flowing in a fluid-like manner. This mechanism of failure is known as diffuse failure, and it has received much less attention from researchers. Modelling of diffuse failure of slopes is complex, because difficulties appear in the mathematical, constitutive and numerical models, which have to account for a phase transition. This work deals with modelling, and we will present here some tools recently developed by the author and the group to which he belongs. (Author)

  16. Fracture Failure of Reinforced Concrete Slabs Subjected to Blast Loading Using the Combined Finite-Discrete Element Method

    Directory of Open Access Journals (Sweden)

    Z. M. Jaini

    Full Text Available Numerical modeling of fracture failure is challenging due to various issues in the constitutive law and in the transition from continuum to discrete bodies. Therefore, this study presents the application of the combined finite-discrete element method to investigate the fracture failure of reinforced concrete slabs subjected to blast loading. In the numerical modeling, the interaction of non-uniform blast loading with the concrete slab was modeled by combining the finite element method with a rotating-crack approach and the discrete element method to model cracking, fracture onset and post-failure behavior. A time-varying pressure-time history based on the mapping method was adopted to define the blast loading. The Mohr-Coulomb criterion with Rankine cut-off and the von Mises criterion were applied to the concrete and the steel reinforcement, respectively. The results for scabbing, spalling and fracture show a reliable prediction of damage and fracture.

  17. Updating the FORECAST formative evaluation approach and some implications for ameliorating theory failure, implementation failure, and evaluation failure

    Science.gov (United States)

    Katz, Jason; Wandersman, Abraham; Goodman, Robert M.; Griffin, Sarah; Wilson, Dawn K.; Schillaci, Michael

    2013-01-01

    Historically, there has been considerable variability in how formative evaluation has been conceptualized and practiced. FORmative Evaluation Consultation And Systems Technique (FORECAST) is a formative evaluation approach that develops a set of models and processes that can be used across settings and times, while allowing for local adaptations and innovations. FORECAST integrates specific models and tools to improve limitations in program theory, implementation, and evaluation. In the period since its initial use in a federally funded community prevention project in the early 1990s, evaluators have incorporated important formative evaluation innovations into FORECAST, including the integration of feedback loops and proximal outcome evaluation. In addition, FORECAST has been applied in a randomized community research trial. In this article, we describe updates to FORECAST and the implications of FORECAST for ameliorating failures in program theory, implementation, and evaluation. PMID:23624204

  18. Failure to recall.

    Science.gov (United States)

    Laming, Donald

    2009-01-01

    Mathematical analysis shows that if the pattern of rehearsal in free-recall experiments (of necessity, the pattern observed when participants rehearse aloud) be continued without any further interruption by stimuli (as happens during recall), it terminates with the retrieval of the same 1 word over and over again. Such a terminal state is commonly reached before some of the words in the list have been retrieved even once; those words are not recalled. The 1 minute frequently allowed for recall in free-recall experiments is ample time for retrieval to seize up in this way. The author proposes a model that represents the essential features of the pattern of rehearsal; validates that model by reference to the overt rehearsal data from B. B. Murdock, Jr., and J. Metcalfe (1978) and the recall data from B. B. Murdock, Jr., and R. Okada (1970); demonstrates the long-term properties of continued sequences of retrievals and, also, a fundamental relation linking recall to the total time of presentation; and, finally, compares failure to recall in free-recall experiments with forgetting in general.

  19. Modeling Marrow Failure and MDS for Novel Therapeutics

    Science.gov (United States)

    2017-03-01

    Clonal evolution is a potentially life-threatening long-term complication of inherited and acquired marrow failure. The risk of early progression to myelodysplastic syndrome (MDS) and leukemia is also markedly elevated in patients with inherited marrow failure syndromes compared to age-matched controls.

  20. Clinical risk analysis with failure mode and effect analysis (FMEA) model in a dialysis unit.

    Science.gov (United States)

    Bonfant, Giovanna; Belfanti, Pietro; Paternoster, Giuseppe; Gabrielli, Danila; Gaiter, Alberto M; Manes, Massimo; Molino, Andrea; Pellu, Valentina; Ponzetti, Clemente; Farina, Massimo; Nebiolo, Pier E

    2010-01-01

    The aim of clinical risk management is to improve the quality of care provided by health care organizations and to assure patients' safety. Failure mode and effect analysis (FMEA) is a tool employed for clinical risk reduction. We applied FMEA to chronic hemodialysis outpatients. FMEA steps: (i) Process study: we recorded phases and activities. (ii) Hazard analysis: we listed activity-related failure modes and their effects, described control measures, assigned severity, occurrence and detection scores for each failure mode, and calculated the risk priority numbers (RPNs) by multiplying the 3 scores. The total RPN is calculated by adding the single failure mode RPNs. (iii) Planning: we performed an RPN prioritization on a priority matrix taking into account the 3 scores, analyzed failure mode causes, made recommendations and planned new control measures. (iv) Monitoring: after failure mode elimination or reduction, we compared the resulting RPN with the previous one. Our failure modes with the highest RPNs came from communication and organization problems. Two tools were created to improve information flow: "dialysis agenda" software and nursing datasheets. We scheduled nephrological examinations, and we changed both the medical and the nursing organization. The total RPN value decreased from 892 to 815 (8.6%) after reorganization. Employing FMEA, we worked on a few critical activities and reduced patients' clinical risk. A priority matrix also takes into account the weight of the control measures: we believe this evaluation is quick, because of the simple priority selection, and that it decreases action times.
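
    The RPN bookkeeping described above is straightforward to sketch; the failure modes and scores below are invented and are not taken from the unit's actual analysis:

    ```python
    # Each failure mode gets severity (S), occurrence (O) and detection (D)
    # scores; RPN = S * O * D, and the total RPN is the sum over failure modes.
    failure_modes = [
        ("prescription not transmitted to nurses", 7, 4, 5),
        ("wrong dialyzer mounted",                 9, 2, 3),
        ("vascular access not inspected",          8, 3, 4),
    ]

    rpns = {name: s * o * d for name, s, o, d in failure_modes}
    total_rpn = sum(rpns.values())

    # Rank failure modes by RPN to pick candidates for new control measures
    for name, rpn in sorted(rpns.items(), key=lambda kv: -kv[1]):
        print(f"RPN {rpn:4d}  {name}")
    print("total RPN:", total_rpn)
    ```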