WorldWideScience

Sample records for failure time model

  1. Software reliability growth models with normal failure time distributions

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and has sufficient flexibility to fit software failure data. In particular, we consider the parameter estimation algorithm for the SRGM with a normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. Numerical experiments investigate the fitting ability of the SRGMs with normal distributions on 16 sets of failure time data collected in real software projects.
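
As a concrete illustration of this family of models, the sketch below fits a normal-distribution SRGM to failure times by direct maximum likelihood over the NHPP log-likelihood, log L = sum_i log(omega * f(t_i)) - omega * F(T). It substitutes a generic optimizer for the paper's EM algorithm, and the data and starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical failure times (days) and end of the observation window.
t = np.array([2.0, 5.0, 9.0, 14.0, 20.0, 27.0, 35.0, 44.0, 54.0, 65.0])
T = 70.0

def neg_log_lik(params):
    # NHPP with mean value function m(t) = omega * Phi((t - mu)/sigma),
    # so log L = sum_i log(omega * phi_i) - m(T).
    log_omega, mu, log_sigma = params
    omega, sigma = np.exp(log_omega), np.exp(log_sigma)
    lam = omega * norm.pdf(t, loc=mu, scale=sigma)   # intensity at each failure
    return -(np.sum(np.log(lam)) - omega * norm.cdf(T, loc=mu, scale=sigma))

res = minimize(neg_log_lik, x0=[np.log(len(t)), t.mean(), np.log(t.std())],
               method="Nelder-Mead")
omega, mu, sigma = np.exp(res.x[0]), res.x[1], np.exp(res.x[2])
print(f"omega = {omega:.1f} expected total faults, mu = {mu:.1f}, sigma = {sigma:.1f}")
```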

  2. Evolutionary neural network modeling for software cumulative failure time prediction

    Tian Liang; Noore, Afzel

    2005-01-01

An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modified Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of the proposed approach are more accurate in predicting software cumulative failure time than existing approaches.

  3. Reliability physics and engineering time-to-failure modeling

    McPherson, J W

    2013-01-01

Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: materials/device degradation; degradation kinetics; time-to-failure modeling; statistical tools; failure-rate modeling; accelerated testing; ramp-to-failure testing; important failure mechanisms for integrated circuits; important failure mechanisms for mechanical components; conversion of dynamic stresses into static equivalents; small design changes producing major reliability improvements; screening methods; heat generation and dissipation; sampling plans and confidence intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  4. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  5. Models and analysis for multivariate failure time data

    Shih, Joanna Huang

The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at their estimates. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the
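
To make the two-stage idea concrete, the sketch below estimates a Clayton copula's dependence parameter on simulated, uncensored pairs: the margins are first estimated empirically (a stand-in for the Kaplan-Meier step of the third procedure, since no censoring is simulated), and the copula likelihood is then maximized with the margins held fixed. The Clayton family and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n, theta_true = 500, 2.0
# Simulate Clayton-dependent pairs via the conditional distribution method.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)
x, y = -np.log(1 - u), -np.log(1 - v)      # bivariate failure times, exponential margins

# Stage 1: empirical margins (stand-in for Kaplan-Meier; no censoring here).
uh, vh = rankdata(x) / (n + 1), rankdata(y) / (n + 1)

def neg_log_clayton(theta):
    # Clayton density: c(u,v) = (1+theta)(uv)^(-1-theta)(u^-theta + v^-theta - 1)^(-2-1/theta)
    s = uh ** (-theta) + vh ** (-theta) - 1
    ll = (np.log(1 + theta) - (1 + theta) * (np.log(uh) + np.log(vh))
          - (2 + 1 / theta) * np.log(s))
    return -ll.sum()

# Stage 2: maximize the copula likelihood with the margins fixed.
res = minimize_scalar(neg_log_clayton, bounds=(0.01, 20), method="bounded")
print(f"theta_hat = {res.x:.2f} (true value {theta_true})")
```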

  6. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  7. Real Time Fire Reconnaissance Satellite Monitoring System Failure Model

    Nino Prieto, Omar Ariosto; Colmenares Guillen, Luis Enrique

    2013-09-01

In this paper the Real Time Fire Reconnaissance Satellite Monitoring System is presented. This architecture is a legacy of the Detection System for Real-Time Physical Variables, which is undergoing a patent process in Mexico. The design methodology is Structured Analysis for Real Time (SA-RT) [8], and the software is specified in LACATRE (Langage d'aide à la Conception d'Application multitâche Temps Réel) [9,10], a Real Time formal language. The system failure model is analyzed, and the proposal is based on AltaRica, a formal language for the design of critical systems and risk assessment. This formal architecture uses satellites as input sensors and was adapted from the original model, a design pattern for physical-variable detection in Real Time. The original design monitors events such as natural disasters and supports health-related applications, such as sickness monitoring and prevention, for example the Real Time Diabetes Monitoring System. Some related work has been presented at the Mexican Space Agency (AEM) Creation and Consultation Forums (2010-2011), and at the international congress of the Mexican Aerospace Science and Technology Society (SOMECYTA) held in San Luis Potosí, México (2012). This architecture will allow Real Time Fire Satellite Monitoring, which will reduce the damage and danger caused by the fires that consume the forests and tropical forests of Mexico. This new proposal permits a new system that impacts disaster prevention by combining national and international technologies and cooperation for the benefit of humankind.

  8. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Qihong Duan

    2010-01-01

In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation-maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data through the first time the chain reaches the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
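
The object being estimated here is a phase-type distribution: the law of the first passage time of a CTMC into its absorbing state. Below is a minimal sketch of evaluating its density f(t) = alpha * exp(St) * s and distribution function, with an illustrative upper triangular subgenerator rather than estimated values.

```python
import numpy as np
from scipy.linalg import expm

# A 3-phase (m = 3) upper triangular subgenerator S; the absorbing state
# represents failure. All values are illustrative only.
S = np.array([[-1.5,  1.0,  0.2],
              [ 0.0, -0.8,  0.5],
              [ 0.0,  0.0, -0.4]])
alpha = np.array([1.0, 0.0, 0.0])   # initial distribution over transient states
s = -S @ np.ones(3)                 # exit (failure) rates into the absorbing state

def pdf(t):
    # Phase-type density: f(t) = alpha * exp(S t) * s
    return alpha @ expm(S * t) @ s

def cdf(t):
    # F(t) = 1 - alpha * exp(S t) * 1
    return 1.0 - alpha @ expm(S * t) @ np.ones(3)

for t in (0.5, 1.0, 2.0, 5.0):
    print(f"t={t}: f={pdf(t):.4f}, F={cdf(t):.4f}")
```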

  9. rpsftm: An R package for rank preserving structural failure time models

    Allison, A.; White, I. R.; Bond, S.

    2017-01-01

    Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due ...

  10. Development of a subway operation incident delay model using accelerated failure time approaches.

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models, including log-logistic, lognormal and Weibull models with fixed and random parameters, are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, a Weibull model with gamma heterogeneity is considered for comparison of model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating subway incident delay. The results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption and crashes involving a casualty. Vehicle failure has the least impact on the increase of subway operation incident delay. Based on these results, several possible measures, such as the use of short-distance wireless communication technology (e.g., WiFi and ZigBee), are suggested to shorten the delay caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Time to failure of hierarchical load-transfer models of fracture

    Vázquez-Prada, M; Gómez, J B; Moreno, Y

    1999-01-01

The time to failure, T, of dynamical models of fracture for a hierarchical load-transfer geometry is studied. Using a probabilistic strategy and juxtaposing hierarchical structures of height n, we devise an exact method to compute T for structures of height n+1. Bounding T for large n, we are able to deduce that the time to failure tends to a nonzero value as n tends to infinity. This numerical conclusion is deduced for both power law and exponential breakdown rules.

  12. Real-time sensor failure detection by dynamic modelling of a PWR plant

    Turkcan, E.; Ciftcioglu, O.

    1992-06-01

Signal validation and sensor failure detection is an important problem in real-time nuclear power plant (NPP) surveillance. Although conventional sensor redundancy is, in a way, a solution, identification of the faulty sensor is necessary for further preventive actions to be taken. A comprehensive solution is to verify each sensor reading against its model-based estimated counterpart in real time. Such a realization is accomplished by means of dynamic state estimation using the Kalman filter modelling technique. The method is investigated on real-time data from the steam generator of the Borssele nuclear power plant, and it has proved satisfactory for real-time sensor failure detection as well as for model validation. (author). 5 refs.; 6 figs.; 1 tab
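
A minimal sketch of the underlying idea follows, with a toy one-dimensional plant standing in for the steam-generator dynamics: a Kalman filter produces a model-based estimate for each reading, and a sensor fault is flagged when the normalized innovation (reading minus prediction) becomes statistically implausible. The model, noise levels, injected bias, and threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy scalar plant: x_k = a*x_{k-1} + w_k,  sensor: z_k = x_k + v_k
a, q, r = 0.95, 0.01, 0.04
x_true, x_est, p = 0.0, 0.0, 1.0
std_innov = []
for k in range(300):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    z = x_true + rng.normal(0, np.sqrt(r))
    if k >= 200:
        z += 2.0                       # injected sensor bias fault
    x_pred, p_pred = a * x_est, a * a * p + q
    s = p_pred + r                     # innovation variance
    nu = z - x_pred                    # innovation: reading minus model estimate
    gain = p_pred / s
    x_est = x_pred + gain * nu
    p = (1 - gain) * p_pred
    std_innov.append(nu / np.sqrt(s))

# Under a healthy sensor the standardized innovations are ~N(0,1) and white;
# flag the first reading whose innovation is implausibly large.
alarm = int(np.argmax(np.abs(np.array(std_innov)) > 4.0))
print("sensor fault flagged at step:", alarm)
```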

  13. A delay time model with imperfect and failure-inducing inspections

    Flage, Roger

    2014-01-01

    This paper presents an inspection-based maintenance optimisation model where the inspections are imperfect and potentially failure-inducing. The model is based on the basic delay-time model in which a system has three states: perfectly functioning, defective and failed. The system is deteriorating through these states and to reveal defective systems, inspections are performed periodically using a procedure by which the system fails with a fixed state-dependent probability; otherwise, an inspection identifies a functioning system as defective (false positive) with a fixed probability and a defective system as functioning (false negative) with a fixed probability. The system is correctively replaced upon failure or preventively replaced either at the N'th inspection time or when an inspection reveals the system as defective, whichever occurs first. Replacement durations are assumed to be negligible and costs are associated with inspections, replacements and failures. The problem is to determine the optimal inspection interval T and preventive age replacement limit N that jointly minimise the long run expected cost per unit of time. The system may also be thought of as a passive two-state system subject to random demands; the three states of the model are then functioning, undetected failed and detected failed; and to ensure the renewal property of replacement cycles the demand process generating the ‘delay time’ is then restricted to the Poisson process. The inspiration for the presented model has been passive safety critical valves as used in (offshore) oil and gas production and transportation systems. In light of this the passive system interpretation is highlighted, as well as the possibility that inspection-induced failures are associated with accidents. Two numerical examples are included, and some potential extensions of the model are indicated
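
A renewal-reward Monte Carlo sketch of one simplified variant of such a policy is given below: exponential defect arrivals and delay times are assumed, false positives are omitted, the inspection-induced failure probability is taken as state-independent, and all rates and costs are invented. It grid-searches the inspection interval T and the replacement limit N for the lowest long-run cost per unit of time.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost_rate(T, N, n_cycles=20000):
    """Long-run expected cost per unit time, by renewal-reward simulation, for a
    simplified delay-time policy: defect arrival ~ exp(lam), delay ~ exp(mu),
    inspections at T, 2T, ..., NT, imperfect and potentially failure-inducing."""
    lam, mu = 1 / 50, 1 / 10           # assumed defect-arrival and delay rates
    p_induce, p_fn = 0.02, 0.1         # induced-failure and false-negative probs
    c_i, c_p, c_f = 1.0, 20.0, 100.0   # inspection / preventive / failure costs
    tot_cost = tot_time = 0.0
    for _ in range(n_cycles):
        u = rng.exponential(1 / lam)             # defect arrival
        fail = u + rng.exponential(1 / mu)       # failure time if never detected
        cost, end = 0.0, 0.0
        for k in range(1, N + 1):
            tk = k * T
            if fail < tk:                        # failure before inspection k
                cost, end = cost + c_f, fail
                break
            cost += c_i
            if rng.random() < p_induce:          # inspection induces failure
                cost, end = cost + c_f, tk
                break
            if (tk > u and rng.random() > p_fn) or k == N:
                cost, end = cost + c_p, tk       # preventive replacement
                break
        tot_cost, tot_time = tot_cost + cost, tot_time + end
    return tot_cost / tot_time

best = min((cost_rate(T, N), T, N) for T in (5, 10, 15, 20) for N in (2, 3, 4, 6))
print("min cost rate %.3f at T=%d, N=%d" % best)
```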

  14. Semiparametric Bayesian analysis of accelerated failure time models with cluster structures.

    Li, Zhaonan; Xu, Xinyi; Shen, Junshan

    2017-11-10

In this paper, we develop a Bayesian semiparametric accelerated failure time model for survival data with cluster structures. Our model allows distributional heterogeneity across clusters and accommodates their relationships through a density ratio approach. Moreover, a nonparametric mixture of Dirichlet processes prior is placed on the baseline distribution to yield full distributional flexibility. We illustrate through simulations that our model can greatly improve estimation accuracy by effectively pooling information from multiple clusters, while taking into account the heterogeneity in their random error distributions. We also demonstrate the implementation of our method using an analysis of the Mayo Clinic trial in primary biliary cirrhosis. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre

    Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip

    2013-04-01

Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate Rock Physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, including data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in

  16. Goodness-of-fit test for the accelerated failure time model based on martingale residuals

    Novák, Petr

    2013-01-01

Vol. 49, No. 1 (2013), pp. 40-59. ISSN 0023-5954. R&D Projects: GA MŠk(CZ) 1M06047. Grant - others: GA MŠk(CZ) SVV 261315/2011. Keywords: accelerated failure time model; survival analysis; goodness-of-fit. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.563, year: 2013. http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  17. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  18. rpsftm: An R Package for Rank Preserving Structural Failure Time Models.

    Allison, Annabel; White, Ian R; Bond, Simon

    2017-12-04

Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due to Robins and Tsiatis (1991) and has been developed by White et al. (1997, 1999). The method is randomisation based and uses only the randomised treatment group, observed event times, and treatment history in order to estimate a causal treatment effect. The treatment effect, ψ, is estimated by balancing counterfactual event times (those that would be observed if no treatment were received) between treatment groups. G-estimation is used to find the value of ψ such that a test statistic Z(ψ) = 0. This is usually the test statistic used in the intention-to-treat analysis, for example the log rank test statistic. We present an R package, rpsftm, that implements the method.
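
A minimal sketch of the g-estimation step on simulated data follows. Censoring is ignored (the real method requires re-censoring of the counterfactual times), a two-sample t-type statistic stands in for the log rank statistic, and the switching pattern and effect size are invented.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
n, psi_true = 400, -0.5                 # exp(-psi) ~ 1.65-fold life extension
arm = rng.integers(0, 2, n)             # randomised arm: 1 = experimental
u = rng.exponential(10, n)              # counterfactual untreated event times

# Experimental patients are on treatment throughout; 30% of controls switch
# onto treatment at time s (if still event-free by then).
s = rng.uniform(2, 8, n)
switch = (arm == 0) & (rng.random(n) < 0.3) & (u > s)
t_off = np.where(arm == 1, 0.0, np.where(switch, s, u))
t_on = np.where(arm == 1, u * np.exp(-psi_true),
                np.where(switch, (u - s) * np.exp(-psi_true), 0.0))

def z(psi):
    # Counterfactual event times U(psi) = T_off + exp(psi) * T_on; a t-type
    # statistic stands in for the log rank statistic used by the package.
    u_psi = t_off + np.exp(psi) * t_on
    a, b = u_psi[arm == 1], u_psi[arm == 0]
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

# G-estimation: psi_hat solves Z(psi) = 0, where randomisation balances U(psi).
psi_hat = brentq(z, -2.0, 2.0)
print(f"psi_hat = {psi_hat:.2f} (true value {psi_true})")
```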

  19. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  20. The Use of Conditional Probability Integral Transformation Method for Testing Accelerated Failure Time Models

    Abdalla Ahmed Abdel-Ghaly

    2016-06-01

This paper suggests the use of the conditional probability integral transformation (CPIT) method as a goodness-of-fit (GOF) technique in the field of accelerated life testing (ALT), specifically for validating the underlying distributional assumption in the accelerated failure time (AFT) model. The method is based on transforming the data into independent and identically distributed (i.i.d.) Uniform(0, 1) random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributional assumptions in the AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that this method performs well in the case of the exponential and lognormal distributions. Finally, a real-life example is provided to illustrate the application of the proposed procedure.
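
The transform-then-test idea can be illustrated with a plain (unconditional) probability integral transform, with a Kolmogorov-Smirnov statistic standing in for the modified Watson statistic. Note that with an estimated parameter the KS null distribution is only approximate, which is precisely the problem the conditional transformation is designed to avoid; the data and candidate distribution below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
t = rng.weibull(1.6, 200) * 30          # simulated failure times; truth is Weibull

# Fit the hypothesised exponential model by maximum likelihood.
lam_hat = 1 / t.mean()
u = 1 - np.exp(-lam_hat * t)            # probability integral transform F(t; lam_hat)

# Under the hypothesised model, u should look Uniform(0,1).
stat, p_value = stats.kstest(u, "uniform")
print(f"KS stat = {stat:.3f}, p = {p_value:.4f}")  # small p => reject exponential
```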

  1. The failure of earthquake failure models

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differs or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.

  2. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  3. Mediation Analysis with Survival Outcomes: Accelerated Failure Time Versus Proportional Hazards Models

    Lois A Gelfand

    2016-03-01

Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  4. A multi-component and multi-failure mode inspection model based on the delay time concept

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure, for all defects. This is an approximation, but it has been proven to be valid when the number of components is large. However, for a system with just a few key components and subject to few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than for individual components, we then formulate the inspection model for the case where the time to the next inspection from the point of a component failure renewal is random. This complicates the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison to the analytical results. A numerical example is presented to demonstrate the model.

  5. Semiparametric accelerated failure time cure rate mixture models with competing risks.

    Choi, Sangbum; Zhu, Liang; Huang, Xuelin

    2018-01-15

Modern medical treatments have substantially improved survival rates for many chronic diseases and have generated considerable interest in developing cure fraction models for survival data with a non-ignorable cured proportion. Statistical analysis of such data may be further complicated by competing risks that involve multiple types of endpoints. Regression analysis of competing risks is typically undertaken via a proportional hazards model specified on the cause-specific hazard or the subdistribution hazard. In this article, we propose an alternative approach that treats competing events as distinct outcomes in a mixture. We consider semiparametric accelerated failure time models for the cause-conditional survival function that are combined through a multinomial logistic model within the cure-mixture modeling framework. The cure-mixture approach to competing risks provides a means to determine the overall effect of a treatment and insights into how this treatment modifies the components of the mixture in the presence of a cure fraction. The regression and nonparametric parameters are estimated by a nonparametric kernel-based maximum likelihood estimation method. Variance estimation is achieved through resampling methods for the kernel-smoothed likelihood function. Simulation studies show that the procedures work well in practical settings. Application to a sarcoma study demonstrates the use of the proposed method for competing risk data with a cure fraction. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Application of accelerated failure time models for breast cancer patients' survival in Kurdistan Province of Iran.

    Karimi, Asrin; Delpisheh, Ali; Sayehmiri, Kourosh

    2016-01-01

Breast cancer is the most common cancer and the second most common cause of cancer-induced mortality in Iranian women. There has been rapid development in hazard models and survival analysis in the last decade. The aim of this study was to evaluate the prognostic factors of overall survival (OS) in breast cancer patients using accelerated failure time (AFT) models. This was a retrospective analytic cohort study. A total of 313 women with a pathologically proven diagnosis of breast cancer who had been treated during a 7-year period (from January 2006 to March 2014) in Sanandaj City, Kurdistan Province of Iran, were recruited. Performance among the AFT models was assessed using goodness-of-fit methods. Discrimination among the exponential, Weibull, generalized gamma, log-logistic, and log-normal distributions was done using the Akaike information criterion and maximum likelihood. The 5-year OS was 75% (95% CI = 74.57-75.43). The main results in terms of survival were found for the different categories of the clinical stage covariate, tumor metastasis, and relapse of cancer. Survival times in breast cancer patients without tumor metastasis and without relapse were 4-fold and 2-fold longer than in patients with metastasis and relapse, respectively. One of the most important undermining prognostic factors in breast cancer is metastasis; hence, knowledge of the mechanisms of metastasis is necessary to prevent its occurrence, to treat metastatic breast cancer, and ultimately to extend the lifetime of patients.

  7. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Jiang Feng

    2010-12-01

Background: Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousands (or hundreds) of genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results: The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions: Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulated and real data. The proposed method performs superbly in our limited computational studies.
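
The computational point, that the dual problem only ever touches an n × n Gram matrix even when m is in the thousands, can be sketched with plain kernel ridge regression on log failure times (censoring handling omitted, all data simulated; the aim here is the n × n computation, not predictive accuracy).

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 100, 5000                        # n samples, ultra-high dimension m
X = rng.normal(size=(n, m))
log_t = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.3, n)  # simulated log times

def rbf_gram(A, B, gamma=1.0 / (2 * m)):
    # Gram matrix of the RBF kernel; nothing of size m x m is ever formed.
    d = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d)

# Kernel ridge in the dual: solve the n x n system (K + lambda*I) alpha = y.
K = rbf_gram(X, X)
alpha = np.linalg.solve(K + 1.0 * np.eye(n), log_t)

X_new = rng.normal(size=(3, m))
print("predicted log failure times:", rbf_gram(X_new, X) @ alpha)
```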

  8. Accelerated failure time models for semi-competing risks data in the presence of complex censoring.

    Lee, Kyu Ha; Rondeau, Virginie; Haneuse, Sebastien

    2017-12-01

    Statistical analyses that investigate risk factors for Alzheimer's disease (AD) are often subject to a number of challenges. Some of these challenges arise due to practical considerations regarding data collection such that the observation of AD events is subject to complex censoring including left-truncation and either interval or right-censoring. Additional challenges arise due to the fact that study participants under investigation are often subject to competing forces, most notably death, that may not be independent of AD. Towards resolving the latter, researchers may choose to embed the study of AD within the "semi-competing risks" framework for which the recent statistical literature has seen a number of advances including for the so-called illness-death model. To the best of our knowledge, however, the semi-competing risks literature has not fully considered analyses in contexts with complex censoring, as in studies of AD. This is particularly the case when interest lies with the accelerated failure time (AFT) model, an alternative to the traditional multiplicative Cox model that places emphasis away from the hazard function. In this article, we outline a new Bayesian framework for estimation/inference of an AFT illness-death model for semi-competing risks data subject to complex censoring. An efficient computational algorithm that gives researchers the flexibility to adopt either a fully parametric or a semi-parametric model specification is developed and implemented. The proposed methods are motivated by and illustrated with an analysis of data from the Adult Changes in Thought study, an on-going community-based prospective study of incident AD in western Washington State. © 2017, The International Biometric Society.

  9. Time shift in slope failure prediction between unimodal and bimodal modeling approaches

    Ciervo, Fabio; Casini, Francesca; Nicolina Papa, Maria; Medina, Vicente

    2016-04-01

Together with the need to use more appropriate mathematical expressions for describing hydro-mechanical soil processes, a challenging issue is the need to consider the effects induced by terrain heterogeneities on the physical mechanisms: taking into account how heterogeneities affect time-dependent hydro-mechanical variables would improve the predictive capacity of models, such as the ones used in early warning systems. The presence of heterogeneities in partially saturated slopes results in irregular propagation of the moisture and suction front. To mathematically represent the "dual implication" generally induced by heterogeneities in the hydraulic behaviour of the terrain, several bimodal hydraulic models have been presented in the literature to replace the conventional sigmoidal/unimodal functions; this presupposes that the scale of the macrostructure is comparable with the local scale (Darcy scale), so that the Richards model can be assumed adequate to mathematically reproduce the processes. The purpose of this work is to focus on the differences in simulating flow infiltration processes and slope stability conditions that originate from preliminary choices of hydraulic models, and contextually between different approaches to evaluate the factor of safety (FoS). In particular, the results of two approaches are compared. The first one includes the conventional expression of the FoS under saturated conditions and the widely used hydraulic model of van Genuchten-Mualem. The second approach includes a generalized FoS equation for the infinite-slope model under variably saturated soil conditions (Lu and Godt, 2008) and the bimodal Romano et al. (2011) functions to describe the hydraulic response. The extension of the above-mentioned approach to the bimodal context is based on an analytical method to assess the effects of the hydraulic properties on soil shear, developed by integrating a bimodal lognormal hydraulic function

  10. Modeling Epidemic Network Failures

    Ruepp, Sarah Renée; Fagertun, Anna Manolova

    2013-01-01

This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur, resulting in an epidemic. We model the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate the SID model's behavior and impact on network performance, as well as the severity of the infection spreading. The simulations are carried out in OPNET Modeler. The model provides an important input to epidemic connection recovery mechanisms, and can, due to its flexibility and versatility, be used to evaluate multiple epidemic scenarios in various network types.
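
A generic compartmental sketch of a Susceptible-Infected-Disabled failure epidemic is shown below; the transition structure and rates are assumptions for illustration, not the calibrated model from the record above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed S-I-D structure: susceptible nodes become infected by contact,
# infected nodes either recover or become disabled, disabled nodes are repaired.
beta, gamma, delta, rho = 0.4, 0.1, 0.15, 0.02

def sid(t, y):
    s, i, d = y
    return [-beta * s * i + gamma * i + rho * d,   # recovery and repair feed back
            beta * s * i - (gamma + delta) * i,
            delta * i - rho * d]

sol = solve_ivp(sid, [0, 400], [0.99, 0.01, 0.0])
s, i, d = sol.y[:, -1]
print(f"long-run fractions: S={s:.2f}, I={i:.2f}, D={d:.2f}")
```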

  11. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  12. Ductile failure modeling

    Benzerga, Ahmed Amine; Leblond, Jean Baptiste; Needleman, Alan

    2016-01-01

Ductile fracture of structural metals occurs mainly by the nucleation, growth and coalescence of voids. Here an overview of continuum models for this type of failure is given. The most widely used current framework is described and its limitations discussed. Much work has focused on extending void growth models to account for non-spherical initial void shapes and for shape changes during growth. This includes cases of very low stress triaxiality, where the voids can close up to micro-cracks during the failure process. The void growth models have also been extended to consider the effect of plastic anisotropy, or the influence of nonlocal effects that bring a material size scale into the models. Often the voids are not present in the material from the beginning, and realistic nucleation models are important. The final failure process by coalescence of neighboring voids is an issue that has been given...

  13. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions, by a structured continuous-time Markov chain (CTMC). The versatility of phase-type distributions enhances the flexibility and practicality of the models, allowing studies in reliability engineering to advance beyond previous work. This study attempts to solve a redundancy allocation problem (RAP) by using these new models. The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, the experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximation error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distributions are used for components. • A reliability model for nonrepairable systems is developed using a Markov chain. • The system is composed of heterogeneous components. • The model provides the real value of standby system reliability, not an approximation. • A redundancy allocation problem is used to show the usefulness of the model.

  14. Characterization and modeling of SET/RESET cycling induced read-disturb failure time degradation in a resistive switching memory

    Su, Po-Cheng; Hsu, Chun-Chi; Du, Sin-I.; Wang, Tahui

    2017-12-01

Read-operation-induced disturbance of the SET-state in a tungsten oxide resistive switching memory is investigated. We observe that the reduction of oxygen vacancy density during read-disturb follows a power-law dependence on cumulative read-disturb time. Our study shows that the SET-state read-disturb immunity progressively degrades by orders of magnitude as the SET/RESET cycle number increases. To explore the cause of the read-disturb degradation, we perform a constant voltage stress to emulate high-field stress effects in SET/RESET cycling. We find that the read-disturb failure time degradation is attributed to high-field stress-generated oxide traps. Since the stress-generated traps may substitute for some of the oxygen vacancies in forming conductive percolation paths in a switching dielectric, a stressed cell has a reduced oxygen vacancy density in the SET-state, which in turn results in a shorter read-disturb failure time. We develop an analytical read-disturb degradation model including both cycling-induced oxide trap creation and read-disturb-induced oxygen vacancy reduction. Our model can well reproduce the measured read-disturb failure time degradation in a cycled cell without using fitting parameters.
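
The reported power-law degradation can be illustrated by fitting a line on log-log axes; the cycle counts and failure times below are hypothetical numbers, not measurements from the paper.

```python
import numpy as np

# Hypothetical read-disturb failure times (s) after N SET/RESET cycles.
cycles = np.array([1e0, 1e1, 1e2, 1e3, 1e4])
t_fail = np.array([3.2e5, 9.8e4, 2.7e4, 8.5e3, 2.4e3])

# A power law t_fail = A * N^b is a straight line on log-log axes:
# log10 t_fail = log10 A + b * log10 N
b, logA = np.polyfit(np.log10(cycles), np.log10(t_fail), 1)
print(f"slope b = {b:.2f}  (failure time ~ cycles^{b:.2f})")
print(f"extrapolated t_fail at 1e5 cycles: {10 ** (logA + b * 5):.0f} s")
```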

  15. The Influence of Temperature on Time-Dependent Deformation and Failure in Granite: A Mesoscale Modeling Approach

    Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick

    2017-09-01

    An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. We anticipate that the

  16. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small-probability event. Such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a very few mechanical model computations. The efficiency of the method is first proved on two academic applications. It is then conducted for assessing the reliability of a challenging aerospace case study submitted to fatigue.
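
The importance-sampling half of AK-IS can be sketched in isolation (the adaptive Kriging metamodel stage is omitted): samples are drawn around the FORM design point and reweighted by the ratio of the nominal density to the shifted density. The linear limit state below is a textbook test case with a known answer, chosen only so the estimate can be checked.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
beta_t = 4.0                            # target reliability index; exact P_f = Phi(-4)
g = lambda u: beta_t - (u[:, 0] + u[:, 1]) / np.sqrt(2)   # failure when g <= 0

n = 20000
u_star = np.full(2, beta_t / np.sqrt(2))       # FORM design point on g = 0
u = rng.normal(size=(n, 2)) + u_star           # sample the shifted density
# Importance weights: nominal standard-normal density over shifted density.
logw = -0.5 * (u ** 2).sum(1) + 0.5 * ((u - u_star) ** 2).sum(1)
pf = np.mean((g(u) <= 0) * np.exp(logw))
print(f"IS estimate: {pf:.2e}, exact: {norm.cdf(-beta_t):.2e}")
```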

  17. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

Continuous-time state transition models may end up having large, unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-day decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.
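
A minimal simulation sketch of the semi-Markov idea follows: sojourn times drawn from Weibull distributions make the exit hazard depend on time since entry into the state (exponential sojourns would collapse this to a standard Markov model). States, scales, and transition probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_one(horizon=120.0):
    """One patient trajectory (months) in a toy semi-Markov model.
    States: 0 = stable heart failure, 1 = hospitalised, 2 = dead."""
    state, t, alive_time = 0, 0.0, 0.0
    while state != 2:
        if state == 0:
            soj = rng.weibull(1.5) * 24   # hazard of leaving "stable" rises with time
            nxt = 1 if rng.random() < 0.7 else 2
        else:
            soj = rng.weibull(0.8) * 3    # early discharge or death more likely
            nxt = 0 if rng.random() < 0.8 else 2
        if t + soj >= horizon:
            return alive_time + (horizon - t)
        alive_time += soj
        t += soj
        state = nxt
    return alive_time

mean_surv = np.mean([simulate_one() for _ in range(20000)])
print(f"mean survival within 10 years: {mean_surv:.1f} months")
```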

  18. Failure analysis of real-time systems

    Jalashgar, A.; Stoelen, K.

    1998-01-01

This paper highlights essential aspects of real-time software systems that are strongly related to failures and their course of propagation. The significant influence of means-oriented and goal-oriented system views on describing, understanding and analysing those aspects is elaborated. The importance of performing failure analysis prior to reliability analysis of real-time systems is equally addressed. Problems of software reliability growth models that take the properties of such systems into account are discussed. Finally, the paper presents a preliminary study of a goal-oriented approach to model the static and dynamic characteristics of real-time systems, so that the corresponding analysis can be based on a more descriptive and informative picture of failures, their effects and the possibility of their occurrence. (author)

  19. The comparison of proportional hazards and accelerated failure time models in analyzing the first birth interval survival data

    Faruk, Alfensi

    2018-03-01

    Survival analysis is a branch of statistics focussed on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular choice for analyzing the effects of several covariates on survival time. However, the assumption of proportional hazards in the PH model is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. The accelerated failure time (AFT) models, on the other hand, do not assume proportional hazards and can be used as an alternative to the PH model when its assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate among the considered models. Results of the best fitted model (the log-normal AFT model) showed that covariates such as women's educational level, husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
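
    The record's workflow (test the PH assumption, then compare AFT candidates by AIC) can be sketched with the third-party lifelines library; the data frame and column names below are invented placeholders, and the exponential AFT arises as the special case of the Weibull AFT with shape fixed at 1.

```python
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter, LogNormalAFTFitter

# Placeholder data: T = first-birth interval (months), E = event indicator,
# plus two invented covariates standing in for the survey variables.
df = pd.DataFrame({
    "T": [24, 36, 18, 60, 30, 48, 27, 55, 33, 41],
    "E": [1, 1, 1, 0, 1, 0, 1, 1, 1, 0],
    "education": [1, 2, 0, 2, 1, 2, 0, 1, 2, 1],
    "urban": [0, 1, 0, 1, 1, 1, 0, 0, 1, 0],
})

# Step 1: fit a Cox PH model and test the proportional-hazards assumption.
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.check_assumptions(df, p_value_threshold=0.05)

# Step 2: if PH is violated, compare AFT candidates by AIC (lower is better).
for Fitter in (WeibullAFTFitter, LogNormalAFTFitter):
    aft = Fitter().fit(df, duration_col="T", event_col="E")
    print(Fitter.__name__, "AIC =", round(aft.AIC_, 1))
```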

  20. Modeling and real time simulation of an HVDC inverter feeding a weak AC system based on commutation failure study.

    Mankour, Mohamed; Khiat, Mounir; Ghomri, Leila; Chaker, Abdelkader; Bessalah, Mourad

    2018-06-01

    This paper presents the modeling and study of a 12-pulse HVDC (High Voltage Direct Current) link based on real-time simulation, where the HVDC inverter is connected to a weak AC system. To study the dynamic performance of the HVDC link, two severe kinds of disturbance are applied at the HVDC converters: the first is a single-phase-to-ground AC fault and the second is a DC-link-to-ground fault. The study is based on two modes of analysis: the first tests the performance of the DC control, and the second focuses on the effect of the protection function on the system behavior. The real-time simulation takes into account the strength of the AC system to which the inverter is connected, relative to the capacity of the DC link. The results are validated on the RT-LAB platform using the digital real-time simulator Hypersim (OP-5600). They show the effect of the DC control and the influence of the protection function in reducing the probability of commutation failures, and in helping the inverter recover from commutation failure even when the DC control fails to eliminate it. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Damage-Based Time-Dependent Modeling of Paraglacial to Postglacial Progressive Failure of Large Rock Slopes

    Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.

    2018-01-01

    Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.

  2. Development of container failure models

    Garisto, N.C.

    1990-01-01

    In order to produce a complete performance assessment for a Canadian waste vault, some prediction of container failure times is required. Data are limited; however, the effects of various possible failure scenarios on the rest of the vault model can be tested. For titanium and copper, the two materials considered in the Canadian program, data are available on the frequency of failures due to manufacturing defects, and there is also an estimate of the expected size of such defects. It can be shown that the consequences of such small defects, in terms of the dose to humans, are acceptable. It is not clear, from a modelling point of view, whether titanium or copper is preferable.

  3. A Validation Study of the Rank-Preserving Structural Failure Time Model: Confidence Intervals and Unique, Multiple, and Erroneous Solutions.

    Ouwens, Mario; Hauch, Ole; Franzén, Stefan

    2018-05-01

    The rank-preserving structural failure time model (RPSFTM) is used in health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival if only the reference treatment had been used) and assumes that, at randomization, the counterfactual survival distributions for the investigational and reference arms are identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. The objective here was to evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in the percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different, crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. Distributions other than the exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition for acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared with the time direct starters stay on investigational treatment.
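
    One common parameterization of the RPSFTM counterfactual is U(psi) = T_off + exp(psi) * T_on, with psi found by g-estimation: search for the value at which the randomized arms look identical on U(psi). The sketch below runs that grid search with a log-rank test on invented, censoring-free data (the lifelines library is assumed available); the multiple-solution behaviour reported in the record would appear as several sign changes of the statistic.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 400
arm = rng.integers(0, 2, n)                 # 1 = investigational, 0 = reference
t_on = np.where(arm == 1, rng.exponential(12.0, n),   # time on the drug
                rng.exponential(4.0, n))              # switchers' time on it
t_off = rng.exponential(10.0, n)                      # time on reference therapy
T = t_on + t_off                                      # observed survival time

def signed_stat(psi):
    U = t_off + np.exp(psi) * t_on          # counterfactual survival at this psi
    r = logrank_test(U[arm == 1], U[arm == 0])
    # Crude sign convention: direction of the arm difference in mean U.
    return np.sign(U[arm == 1].mean() - U[arm == 0].mean()) * np.sqrt(r.test_statistic)

grid = np.linspace(-1.0, 1.0, 41)
z = np.array([signed_stat(p) for p in grid])
crossings = grid[:-1][np.sign(z[:-1]) != np.sign(z[1:])]   # candidate psi solutions
print("candidate psi values:", np.round(crossings, 2))
```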

  4. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper considers only single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework discriminates between the use of statistical distributions to represent the time to failure ('renewal approach') and the use of stochastic point processes ('repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
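
    A typical first step in this framework is a trend test on the failure history. A hedged sketch of the Laplace test for a time-truncated window (0, T]: under a homogeneous Poisson process the statistic is approximately standard normal, with positive values suggesting ageing and negative values reliability growth. The failure log is invented.

```python
import numpy as np
from scipy.stats import norm

def laplace_trend(times, T):
    """Laplace trend statistic for cumulative failure times observed on (0, T]."""
    t = np.asarray(times, dtype=float)
    n = len(t)
    u = (t.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))
    return u, 2.0 * norm.sf(abs(u))        # two-sided p-value

failures = [80, 210, 330, 520, 640, 760, 850, 920, 965, 990]  # invented, hours
u, p = laplace_trend(failures, T=1000.0)
print(f"U = {u:.2f}, p = {p:.3f}")
print("ageing" if u > 2 else "reliability growth" if u < -2 else
      "no significant trend: a renewal model may be acceptable")
```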

  6. Uncertainties in container failure time predictions

    Williford, R.E.

    1990-01-01

    Stochastic variations in the local chemical environment of a geologic waste repository can cause corresponding variations in container corrosion rates and failure times, and thus in radionuclide release rates. This paper addresses how well the future variations in repository chemistries must be known in order to predict container failure times that are bounded by a finite time period within the repository lifetime. Preliminary results indicate that a 5000 year scatter in predicted container failure times requires that repository chemistries be known to within ±10% over the repository lifetime. These are small uncertainties compared to current estimates. 9 refs., 3 figs

  7. An analytical model for interactive failures

    Sun Yong; Ma Lin; Mathew, Joseph; Zhang Sheng

    2006-01-01

    In some systems, failures of certain components can interact with each other and accelerate the failure rates of these components. Such failures are defined as interactive failures. Interactive failure is a prevalent cause of failure in complex systems, particularly mechanical systems. The failure risk of an asset will be underestimated if the interactive effect is ignored; when failure risk is assessed, interactive failures of an asset therefore need to be considered. However, the literature is silent on previous research work in this field. This paper introduces the concept of interactive failure, develops an analytical model to analyse this type of failure quantitatively, and verifies the model using case studies and experiments.

  9. A Prognostic Model for Estimating the Time to Virologic Failure in HIV-1 Infected Patients Undergoing a New Combination Antiretroviral Therapy Regimen

    Micheli Valeria

    2011-06-01

    Background: HIV-1 genotypic susceptibility scores (GSSs) were proven to be significant prognostic factors of fixed time-point virologic outcomes after combination antiretroviral therapy (cART) switch/initiation. However, their relative hazard for the time to virologic failure has not been thoroughly investigated, and an expert system able to predict how long a new cART regimen will remain effective has never been designed. Methods: We analyzed patients of the Italian ARCA cohort starting a new cART from 1999 onwards, either after virologic failure or as treatment-naïve. The endpoint was the time to virologic failure, measured from the 90th day after treatment start and defined as the first HIV-1 RNA > 400 copies/ml, censoring at the last available HIV-1 RNA before treatment discontinuation. We assessed the relative hazard/importance of GSSs according to distinct interpretation systems (Rega, ANRS and HIVdb) and other covariates by means of Cox regression and random survival forests (RSF). Prediction models were validated via the bootstrap and the c-index measure. Results: The dataset included 2337 regimens from 2182 patients, of whom 733 were previously treatment-naïve. We observed 1067 virologic failures over 2820 person-years. Multivariable analysis revealed that low GSSs of cART were independently associated with the hazard of virologic failure, along with several other covariates. Evaluation of predictive performance yielded a modest ability of the Cox regression to predict the virologic endpoint (c-index ≈ 0.70), while RSF showed a better performance (c-index ≈ 0.73). Conclusions: GSSs of cART and several other covariates were investigated using linear and non-linear survival analysis. RSF models are a promising approach for the development of a reliable system that predicts time to virologic failure better than Cox regression. Such models might represent a significant improvement over the current methods for monitoring and optimization of cART.

  10. Failure probabilistic model of CNC lathes

    Wang Yiqiang; Jia Yazhou; Yu Junyi; Zheng Yuhua; Yi Shangfeng

    1999-01-01

    A field failure analysis of computerized numerical control (CNC) lathes is described. Field failure data was collected over a period of two years on approximately 80 CNC lathes. A coding system to code failure data was devised and a failure analysis data bank of CNC lathes was established. The failure position and subsystem, failure mode and cause were analyzed to indicate the weak subsystems of a CNC lathe. Also, a failure probabilistic model of CNC lathes was analyzed by fuzzy multicriteria comprehensive evaluation.

  11. Computational Models of Rock Failure

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the mathematics. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, to highlight some of the dangers and risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of rock failure.

  12. The Statistical Analysis of Failure Time Data

    Kalbfleisch, John D

    2011-01-01

    Contains additional discussion and examples on left truncation, as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulations in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.

  13. Success and failure of dead-time models as applied to hybrid pixel detectors in high-flux applications

    Sobott, B. A.; Broennimann, Ch.; Schmitt, B.; Trueb, P.; Schneebeli, M.; Lee, V.; Peake, D. J.; Elbracht-Leong, S.; Schubert, A.; Kirby, N.; Boland, M. J.; Chantler, C. T.; Barnea, Z.; Rassool, R. P.

    2013-01-01

    Detector response functionals are found to have useful but also limited application to synchrotron studies where bunched fills are becoming common. By matching the detector response function to the source temporal structure, substantial improvements in efficiency, count rate and linearity are possible. The performance of a single-photon-counting hybrid pixel detector has been investigated at the Australian Synchrotron. Results are compared with the body of accepted analytical models previously validated with other detectors. Detector functionals are valuable for empirical calibration. It is shown that matching the detector dead-time with the temporal synchrotron source structure leads to substantial improvements in count rate and linearity of response. Standard implementations are linear up to ∼0.36 MHz pixel^-1; the optimized linearity in this configuration has an extended range up to ∼0.71 MHz pixel^-1; these are further correctable with a transfer function to ∼1.77 MHz pixel^-1. This new approach has wide application both in high-accuracy fundamental experiments and in standard crystallographic, X-ray fluorescence and other X-ray measurements. The explicit use of data variance (rather than N^1/2 noise) and direct measures of goodness-of-fit (the reduced chi-squared, χ_r^2) are introduced, raising issues not encountered in previous literature for any detector, and suggesting that these inadequacies of models may apply to most detector types. Specifically, parametrization of models with non-physical values can lead to remarkable agreement for a range of count rates, pulse frequencies and temporal structures. However, especially when the dead-time is nearly resonant with the temporal structure, limitations of these classical models become apparent. Further, a lack of agreement at extreme count rates was evident.
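
    The classical analytical models being stress-tested in this record are the two textbook dead-time responses; a hedged numeric sketch of both (continuous-source forms only, with an invented dead-time, not the bunched-fill corrections studied by the authors):

```python
import numpy as np

TAU = 120e-9   # assumed dead-time per event, seconds (invented)

def nonparalyzable(n_true, tau=TAU):
    # Arrivals during processing are lost but do not extend the dead period.
    return n_true / (1.0 + n_true * tau)

def paralyzable(n_true, tau=TAU):
    # Every arrival restarts the dead period, so the response can saturate.
    return n_true * np.exp(-n_true * tau)

for n in (0.2e6, 1.0e6, 5.0e6):   # true rates, counts/s
    print(f"true {n/1e6:4.1f} MHz -> non-paralyzable {nonparalyzable(n)/1e6:5.2f} MHz, "
          f"paralyzable {paralyzable(n)/1e6:5.2f} MHz")
```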

  14. Timing analysis of PWR fuel pin failures

    Jones, K.R.; Wade, N.L.; Katsma, K.R.; Siefken, L.J.; Straka, M.

    1992-09-01

    Research has been conducted to develop and demonstrate a methodology for calculation of the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) four-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system availability, and main coolant pump trip on these times. The analysis was performed using the following codes: FRAPCON-2, for the calculation of steady-state fuel behavior; SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1, for the calculation of the transient thermal-hydraulic conditions in the reactor system; and FRAP-T6, for the calculation of transient fuel behavior. In addition to the calculation of fuel pin failure timing, this analysis provides a comparison of the predicted results of SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1 for large-break LOCA analysis. Using SCDAP/RELAP5/MOD3 thermal-hydraulic data, the shortest time intervals calculated between initiation of containment isolation and fuel pin failure are 10.4 seconds and 19.1 seconds for the B&W and W plants, respectively. Using data generated by TRAC-PF1/MOD1, the shortest intervals are 10.3 seconds and 29.1 seconds for the B&W and W plants, respectively. These intervals are for a double-ended, offset-shear, cold leg break, using the technical specification maximum peaking factor and applied to fuel with maximum design burnup. Using peaking factors commensurate with actual burnups would result in longer intervals for both reactor designs. This document also contains appendices A through J of this report.

  15. Routine maintenance prolongs ESP time between failures

    Hurst, T.; Lannom, R.W.; Divine, D.L.

    1992-01-01

    This paper reports that routine maintenance of electric submersible motors (ESPs) significantly lengthened the mean time between motor failures (MTBF), decreased operating costs, and extended motor run life in the Sacroc Unit of the Kelly-Snyder field in West Texas. After the oil price boom of the early 1980s, rapidly eroding profit margins from producing properties caused a much stronger focus on reducing operating costs. In Sacroc, ESP operating life and repair costs became a major target of cost reduction efforts. The routine ESP maintenance program has been in place for over 3 years.

  16. Generic Sensor Failure Modeling for Cooperative Systems

    Jäger, Georg; Zug, Sebastian

    2018-01-01

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data of a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques. PMID:29558435

  17. Resolving epidemic network failures through differentiated repair times

    Fagertun, Anna Manolova; Ruepp, Sarah Renée; Manzano, Marc

    2015-01-01

    In this study, the authors investigate epidemic failure spreading in large-scale transport networks under a generalised multi-protocol label switching control plane. By evaluating the effect of the epidemic failure spreading on the network, they design several strategies for cost-effective network performance improvement via differentiated repair times. First, they identify the most vulnerable and the most strategic nodes in the network. Then, via extensive event-driven simulations, they show that strategic placement of resources for improved failure recovery has better performance than randomly assigning lower repair times among the network nodes. They believe that the event-driven simulation model can be highly beneficial for network providers, since it could be used during the network planning process to facilitate cost-effective network survivability design.

  18. Weibull Parameters Estimation Based on Physics of Failure Model

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used for degradation modeling and failure criteria determination. The time-dependent accumulated damage is assumed to be linearly proportional to the time-dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of the Weibull distribution.
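
    The damage bookkeeping described here (rainflow counts fed through Miner's rule against an S-N curve) can be sketched as follows; the Basquin constants and the pre-counted cycle histogram are invented, and the rainflow-counting step itself is skipped.

```python
# Miner's rule: failure is predicted when D = sum(n_i / N_i) reaches 1,
# where N_i is the S-N life at stress range S_i. Basquin-type curve assumed:
A, m = 1.0e12, 4.0                  # invented S-N constants, N(S) = A * S**-m

def cycles_to_failure(stress_range_mpa):
    return A * stress_range_mpa ** (-m)

# Invented rainflow histogram: (stress range in MPa, cycles counted per year).
spectrum = [(20.0, 5.0e4), (35.0, 8.0e3), (50.0, 900.0), (80.0, 40.0)]

damage_per_year = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"damage per year = {damage_per_year:.3e}")
print(f"predicted characteristic life = {1.0 / damage_per_year:.1f} years")
```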

  19. A bivariate model for analyzing recurrent multi-type automobile failures

    Sunethra, A. A.; Sooriyarachchi, M. R.

    2017-09-01

    The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures, where failures can occur due to various failure modes and are repetitive, such that more than one failure can arise from each mode. In analysing such automobile failures, both the time and the type of failure serve as response variables. These two responses are highly correlated, since the timing of failures is associated with the mode of failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models; a bivariate model of time and type of failure therefore becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, the data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. This study therefore proposes a bivariate model with time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models, for joining the responses and for representing the correlated data, respectively. The model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions suitable for the two responses of time to failure and type of failure were the Weibull distribution and the multinomial distribution, respectively. The proposed bivariate model was programmed in the SAS procedure PROC NLMIXED by user programming of appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and it was identified that better performance is secured by the bivariate model.

  20. Ductile shear failure or plug failure of spot welds modelled by modified Gurson model

    Nielsen, Kim Lau; Tvergaard, Viggo

    2010-01-01

    For resistance spot welded shear-lab specimens, interfacial failure under ductile shearing or ductile plug failure are analyzed numerically, using a shear modified Gurson model. The interfacial shear failure occurs under very low stress triaxiality, where the original Gurson model would predict...

  1. On a Stochastic Failure Model under Random Shocks

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moments of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but with a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
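
    A hedged simulation of the delayed-failure shock mechanism described here: shocks arrive from a nonhomogeneous Poisson process (generated by thinning), each shock triggers failure only after a random delay, and the system fails at the earliest shock-plus-delay time. The intensity function and delay distribution are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def nhpp_shocks(rate_fn, rate_max, horizon):
    """Simulate NHPP event times on [0, horizon] by thinning."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > horizon:
            return np.array(times)
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)

def system_failure_time(horizon=50.0):
    shocks = nhpp_shocks(lambda t: 0.1 + 0.01 * t, rate_max=0.6, horizon=horizon)
    if len(shocks) == 0:
        return np.inf                      # no shock arrived within the horizon
    delays = rng.exponential(5.0, size=len(shocks))   # random triggering delays
    return (shocks + delays).min()

samples = np.array([system_failure_time() for _ in range(5000)])
finite = samples[np.isfinite(samples)]
print(f"runs with at least one shock: {len(finite) / len(samples):.2f}")
print(f"mean time to failure in those runs: {finite.mean():.1f}")
```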

  2. A Markov Model for Common-Cause Failures

    Platz, Ole

    1984-01-01

    A continuous-time four-state Markov chain is shown to cover several of the models that have been used for describing dependencies between failures of components in redundant systems. Among these are the models derived by Marshall and Olkin and by Freund, and models for one-out-of-three and two-out-of-three systems.

  3. Stochastic failure modelling of unidirectional composite ply failure

    Whiteside, M.B.; Pinho, S.T.; Robinson, P.

    2012-01-01

    Stochastic failure envelopes are generated through parallelised Monte Carlo Simulation of a physically based failure criteria for unidirectional carbon fibre/epoxy matrix composite plies. Two examples are presented to demonstrate the consequence on failure prediction of both statistical interaction of failure modes and uncertainty in global misalignment. Global variance-based Sobol sensitivity indices are computed to decompose the observed variance within the stochastic failure envelopes into contributions from physical input parameters. The paper highlights a selection of the potential advantages stochastic methodologies offer over the traditional deterministic approach.

  4. Integrated Logistics Support Analysis of the International Space Station Alpha, Background and Summary of Mathematical Modeling and Failure Density Distributions Pertaining to Maintenance Time Dependent Parameters

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

    The process of predicting the values of maintenance time-dependent variable parameters, such as the mean time between failures (MTBF), over time must be one that will not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as life cycle costs, spares calculation, etc. A minor deviation in the values of maintenance time-dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability and maintenance support costs. There are two types of parameters in the logistics and maintenance world: fixed and variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast; they normally follow a linear path and do not change randomly. However, the variable parameters subject to study in this report, such as MTBF, do not follow a linear path and normally fall within the distribution curves discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time-dependent variable arisings and events with a high confidence level. This, in turn, translates into tremendous cost savings and improved availability all around.

  5. Accelerated failure time regression for backward recurrence times and current durations

    Keiding, N; Fine, J P; Hansen, O H

    2011-01-01

    Backward recurrence times in stationary renewal processes and current durations in dynamic populations observed at a cross-section may yield estimates of underlying interarrival times or survival distributions under suitable stationarity assumptions. Regression models have been proposed for these situations, but accelerated failure time models have the particularly attractive feature that they are preserved when going from the backward recurrence times to the underlying survival distribution of interest. This simple fact has recently been noticed in a sociological context and is here illustrated by a study of current duration of time to pregnancy.

  6. An interval-valued reliability model with bounded failure rates

    Kozine, Igor; Krymsky, Victor

    2012-01-01

    The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of the failure probability density function.

  7. Analysis of terminated TOP accidents in the FTR using the Los Alamos failure model

    Mast, P.K.; Scott, J.H.

    1978-01-01

    A new fuel pin failure model (the Los Alamos Failure Model), based on a linear life fraction rule failure criterion, has been developed and is reported herein. Excellent agreement between calculated and observed failure time and location has been obtained for a number of TOP TREAT tests. Because of the nature of the failure criterion used, the code has also been used to investigate the extent of cladding damage incurred in terminated as well as unterminated TOP transients in the FTR

  8. Failure diagnosis using discrete event models

    Sampath, M.; Sengupta, R.; Lafortune, S.; Teneketzis, D.; Sinnamohideen, K.

    1994-01-01

    We propose a Discrete Event Systems (DES) approach to the failure diagnosis problem. We present a methodology for modeling physical systems in a DES framework. We discuss the notion of diagnosability and present the construction procedure of the diagnoser. Finally, we illustrate our approach using a Heating, Ventilation and Air Conditioning (HVAC) system

  9. Cladding failure probability modeling for risk evaluations of fast reactors

    Mueller, C.J.; Kramer, J.M.

    1987-01-01

    This paper develops the methodology to incorporate cladding failure data and associated modeling into risk evaluations of liquid metal-cooled fast reactors (LMRs). Current US innovative designs for metal-fueled pool-type LMRs take advantage of inherent reactivity feedback mechanisms to limit reactor temperature increases in response to classic anticipated-transient-without-scram (ATWS) initiators. Final shutdown without reliance on engineered safety features can then be accomplished if sufficient time is available for operator intervention to terminate fission power production and/or provide auxiliary cooling prior to significant core disruption. Coherent cladding failure under the sustained elevated temperatures of ATWS events serves as one indicator of core disruption. In this paper we combine uncertainties in cladding failure data with uncertainties in calculations of ATWS cladding temperature conditions to calculate probabilities of cladding failure as a function of the time for accident recovery

  11. A real-time expert system for nuclear power plant failure diagnosis and operational guide

    Naito, N.; Sakuma, A.; Shigeno, K.; Mori, N.

    1987-01-01

    A real-time expert system (DIAREX) has been developed to diagnose plant failure and to offer a corrective operational guide for boiling water reactor (BWR) power plants. The failure diagnosis model used in DIAREX was systematically developed, based mainly on deep knowledge, to cover heuristics. Complex paradigms for knowledge representation were adopted, i.e., the process representation language and the failure propagation tree. The system is composed of a knowledge base, knowledge base editor, preprocessor, diagnosis processor, and display processor. The DIAREX simulation test has been carried out for many transient scenarios, including multiple failures, using a real-time full-scope simulator modeled after the 1100-MW(electric) BWR power plant. Test results showed that DIAREX was capable of diagnosing a plant failure quickly and of providing a corrective operational guide with a response time fast enough to offer valuable information to plant operators

  12. Experimental models of hepatotoxicity related to acute liver failure

    Maes, Michaël [Department of In Vitro Toxicology and Dermato-Cosmetology, Vrije Universiteit Brussel, Brussels (Belgium); Vinken, Mathieu, E-mail: mvinken@vub.ac.be [Department of In Vitro Toxicology and Dermato-Cosmetology, Vrije Universiteit Brussel, Brussels (Belgium); Jaeschke, Hartmut [Department of Pharmacology, Toxicology and Therapeutics, University of Kansas Medical Center, Kansas City (United States)

    2016-01-01

    Acute liver failure can be the consequence of various etiologies, with most cases arising from drug-induced hepatotoxicity in Western countries. Despite advances in this field, the management of acute liver failure continues to be one of the most challenging problems in clinical medicine. The availability of adequate experimental models is of crucial importance to provide a better understanding of this condition and to allow identification of novel drug targets, testing the efficacy of new therapeutic interventions and acting as models for assessing mechanisms of toxicity. Experimental models of hepatotoxicity related to acute liver failure rely on surgical procedures, chemical exposure or viral infection. Each of these models has a number of strengths and weaknesses. This paper specifically reviews commonly used chemical in vivo and in vitro models of hepatotoxicity associated with acute liver failure. - Highlights: • The murine APAP model is very close to what is observed in patients. • The Gal/ET model is useful to study TNFα-mediated apoptotic signaling mechanisms. • Fas receptor activation is an effective model of apoptosis and secondary necrosis. • The ConA model is a relevant model of auto-immune hepatitis and viral hepatitis. • Multiple time point evaluation needed in experimental models of acute liver injury.

  13. Predicting water main failures using Bayesian model averaging and survival modelling approach

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM), owing to their use of a Weibull distribution for the baseline hazard function and their treatment of model uncertainties. - Highlights: • Prioritize rehabilitation and replacements (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure

  14. Brittle Creep Failure, Critical Behavior, and Time-to-Failure Prediction of Concrete under Uniaxial Compression

    Yingchong Wang

    2015-01-01

    Understanding the time-dependent brittle deformation behavior of concrete as a main building material is fundamental for lifetime prediction and engineering design. Herein, we present experimental measurements of brittle creep failure, critical behavior, and the dependence of the time-to-failure on the secondary creep rate of concrete under sustained uniaxial compression. A complete evolution process of creep failure is achieved. Three typical creep stages are observed: the primary (decelerating), secondary (steady-state) and tertiary (accelerating) creep stages. The time-to-failure shows sample-specificity although all samples exhibit a similar creep process. All specimens exhibit a critical power-law behavior with an exponent of −0.51 ± 0.06, approximately equal to the theoretical value of −1/2. All samples have a long-term secondary stage characterized by a constant strain rate that dominates the lifetime of a sample. The average creep rate, expressed as the total creep strain over the lifetime (tf − t0) of each specimen, shows a power-law dependence on the secondary creep rate with an exponent of −1. This could provide a clue to the prediction of the time-to-failure of concrete, based on monitoring of the creep behavior at the steady stage.
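
    The record's exponent of −1 linking lifetime to the secondary creep rate is a Monkman-Grant-type relation, and checking it amounts to a log-log fit. The creep-rate/lifetime pairs below are invented stand-ins for the concrete measurements.

```python
import numpy as np

# Invented (secondary creep rate [1/s], time-to-failure [s]) pairs.
rate = np.array([1e-8, 5e-8, 2e-7, 1e-6, 4e-6])
t_f = np.array([9.5e5, 2.1e5, 4.6e4, 1.1e4, 2.4e3])

# Fit log10(t_f) = a + b * log10(rate); a Monkman-Grant law predicts b near -1.
b, a = np.polyfit(np.log10(rate), np.log10(t_f), 1)
print(f"fitted exponent b = {b:.2f}")
print(f"extrapolated t_f at a monitored rate of 5e-7/s: "
      f"{10 ** (a + b * np.log10(5e-7)):.2e} s")
```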

  15. Failure modes and natural control time for distributed vibrating systems

    Reid, R.M.

    1994-01-01

    The eigenstructure of the Gram matrix of frequency exponentials is used to study linear vibrating systems of hyperbolic type with distributed control. Using control norm as a practical measure of controllability and the vibrating string as a prototype, it is demonstrated that hyperbolic systems have a natural control time, even when only finitely many modes are excited. For shorter control times there are identifiable control failure modes which can be steered to zero only with very high cost in control norm. Both natural control time and the associated failure modes are constructed for linear fluids, strings, and beams, making note of the essential algorithms and Mathematica code, and displaying results graphically

  16. Modelling the failure modes in geobag revetments.

    Akter, A; Crapper, M; Pender, G; Wright, G; Wong, W S

    2012-01-01

    In recent years, sand-filled geotextile bags (geobags) have been used as a means of long-term riverbank revetment stabilization. However, despite their deployment in a significant number of locations, the failure modes of such structures are not well understood. Three interactions influence geobag performance: geobag-geobag, geobag-water flow and geobag-water flow-riverbank. The aim of the research reported here is to develop a detailed understanding of the failure mechanisms in a geobag revetment using a discrete element model (DEM) validated by laboratory data. The laboratory-measured velocity data were used to prepare a mapped velocity field for a coupled DEM simulation of geobag revetment failure. The validated DEM model could identify the critical bag location well at varying water depths. Toe scour, one of the major instability factors in revetments, and its influence on the bottom-most layer of bags were also reasonably represented in the DEM model. It is envisaged that the use of a DEM model will provide more detail on geobag revetment performance in riverbanks.

  17. MATHEMATICAL MODEL OF WEAR CHARACTER FAILURE IN AIRCRAFT OPERATION

    Радько, Олег Віталійович; Молдован, Володимир Дмитрович

    2016-01-01

    In this paper, a mathematical model of failures associated with wear during aircraft operation is developed. The distribution function, distribution density and failure rate of the gamma distribution are calculated at low coefficients of variation and a relatively low value of average wear rate for the current time, which varies quite widely. The results coincide well with the physical concepts and can be used to build different models of aircraft. The gamma distribution is a pretty good model for...

  18. A quasi-independence model to estimate failure rates

    Colombo, A.G.

    1988-01-01

    The use of a quasi-independence model to estimate failure rates is investigated. Gate valves of nuclear plants are considered, and two qualitative covariates are taken into account: plant location and reactor system. Independence between the two covariates and an exponential failure model are assumed. The failure rate of the components of a given system and plant is assumed to be constant, but it may vary from one system to another and from one plant to another. This leads to the analysis of a contingency table. A particular feature of the model is the different operating time of the components in the various cells, which can also be equal to zero. The concept of independence of the covariates is then replaced by that of quasi-independence. The latter definition, however, is used in a broader sense than usual. Suitable statistical tests are discussed and a numerical example illustrates the use of the method. (author)
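
    The multiplicative (quasi-independence) structure over a plant x system table of failure counts with unequal operating times can be fitted as a Poisson regression with operating time as exposure; a hedged sketch with invented counts (the statsmodels library is assumed available), in which zero-exposure cells are dropped since they carry no information:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented contingency data: failures and component operating hours per cell.
df = pd.DataFrame({
    "plant":    ["P1", "P1", "P2", "P2", "P3", "P3"],
    "system":   ["S1", "S2", "S1", "S2", "S1", "S2"],
    "failures": [4, 7, 2, 9, 0, 5],
    "hours":    [2.0e4, 3.5e4, 1.1e4, 4.0e4, 0.0, 2.6e4],
})
df = df[df["hours"] > 0]

# Quasi-independence: rate(plant, system) = a_plant * b_system.
fit = smf.glm("failures ~ C(plant) + C(system)", data=df,
              family=sm.families.Poisson(), exposure=df["hours"]).fit()
print(np.exp(fit.params))   # multiplicative plant and system factors per hour
```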

  19. Reliability modelling for wear out failure period of a single unit system

    Arekar, Kirti; Ailawadi, Satish; Jain, Rinku

    2012-01-01

    The present paper deals with two time-shifted density models for the wear-out failure period of a single-unit system. The study considered the time-shifted Gamma and Normal distributions. Wear-out failures occur as a result of deterioration processes or mechanical wear, and their probability of occurrence increases with time. The failure rate as a function of time decreases in the early failure period and increases in the wear-out period. Failure rates for the time-shifted distributions and an expression for m...

  1. The hot (invisible?) hand: can time sequence patterns of success/failure in sports be modeled as repeated random independent trials?

    Yaari, Gur; Eisenmann, Shmuel

    2011-01-01

    The long lasting debate initiated by Gilovich, Vallone and Tversky in [Formula: see text] is revisited: does a "hot hand" phenomenon exist in sports? Hereby we come back to one of the cases analyzed by the original study, but with a much larger data set: all free throws taken during five regular seasons ([Formula: see text]) of the National Basketball Association (NBA). Evidence supporting the existence of the "hot hand" phenomenon is provided. However, while statistical traces of this phenomenon are observed in the data, an open question still remains: are these non random patterns a result of "success breeds success" and "failure breeds failure" mechanisms or simply "better" and "worse" periods? Although free throws data is not adequate to answer this question in a definite way, we speculate based on it, that the latter is the dominant cause behind the appearance of the "hot hand" phenomenon in the data.
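
    A hedged sketch of the style of test used in this debate: compare the number of runs in a make/miss sequence against its permutation distribution; fewer runs than chance would suggest streak-like clustering. The sequence is simulated from independent trials, not NBA data.

```python
import numpy as np

rng = np.random.default_rng(4)

def n_runs(seq):
    """Number of maximal constant runs in a 0/1 sequence."""
    return 1 + int(np.sum(seq[1:] != seq[:-1]))

shots = (rng.random(500) < 0.75).astype(int)   # simulated 75% free-throw shooter

observed = n_runs(shots)
perm = np.array([n_runs(rng.permutation(shots)) for _ in range(4000)])
p_streaky = np.mean(perm <= observed)          # small p => fewer runs than chance
print(f"runs = {observed}, permutation p(streakiness) = {p_streaky:.3f}")
```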

  2. Modeling discrete time-to-event data

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular demography, econometrics, epidemiology and clinical research. Although there is a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the-art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real-life applications, and relationships to survival analysis in continuous time are explained.

  3. Prediction of dynamic expected time to system failure

    Oh, Deog Yeon; Lee, Chong Chul [Korea Nuclear Fuel Co., Ltd., Taejon (Korea, Republic of)

    1997-12-31

    The mean time to failure (MTTF), expressing the mean value of the system life, is a measure of system effectiveness. To estimate the remaining life of a component and/or system, the dynamic mean time to failure concept is suggested. It is a time-dependent property depending on the status of the components. The Kalman filter is used to estimate the reliability of components using on-line information (directly measured sensor output or device-specific diagnostics in the intelligent sensor) in the form of a numerical value (state factor). This factor considers the persistency of the fault condition and the confidence level in measurement. For a complex system with many components, the calculated reliabilities of the components are combined, which results in the dynamic MTTF of the system. Illustrative examples are discussed. The results show that the dynamic MTTF can express the component and system failure behaviour well, whether or not any kind of failure has occurred. 9 refs., 6 figs. (Author)
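
    One simple reading of a dynamic MTTF is the mean residual life of the system: the integral of the system reliability from the current time onward, divided by the current reliability. The series structure and component hazards below are invented, and the record's Kalman-filtered state factors are replaced by fixed parameters.

```python
import numpy as np

def r_sys(t):
    r1 = np.exp(-t / 800.0)                 # exponential component, MTTF 800 h
    r2 = np.exp(-((t / 1200.0) ** 1.8))     # Weibull component, ageing (shape > 1)
    return r1 * r2                          # series system: both must survive

def mean_residual_life(t0, horizon=2.0e4, n=200_000):
    """Dynamic MTTF at t0: integral of R(s) over [t0, inf) divided by R(t0)."""
    s = np.linspace(t0, horizon, n)
    return np.trapz(r_sys(s), s) / r_sys(t0)

for t in (0.0, 500.0, 1500.0):
    print(f"remaining-life estimate at t = {t:6.0f} h: {mean_residual_life(t):7.1f} h")
```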

  5. Study on real-time elevator brake failure predictive system

    Guo, Jun; Fan, Jinwei

    2013-10-01

    This paper presents a real-time failure-prediction system for elevator brakes. By inspecting the running state of the brake coil with a high-precision, long-range laser triangulation non-contact measurement sensor, the displacement curve of the coil is gathered without interfering with the original system. By analyzing the displacement data with the diagnostic algorithm, hidden dangers in the brake system can be discovered in time, and the corresponding accidents can thus be avoided.

  6. A simple approach to modeling ductile failure.

    Wellman, Gerald William

    2012-06-01

    Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.

  7. Hydraulic mechanism and time-dependent characteristics of loose gully deposits failure induced by rainfall

    Yong Wu

    2015-12-01

    Failure of loose gully deposits under the effect of rainfall contributes to the potential risk of debris flow. In the past decades, research on the hydraulic mechanism and time-dependent characteristics of loose deposit failure has frequently been reported; however, adequate measures for reducing debris flow are not yet available in practice. In this context, a time-dependent model was established to determine the changes in the water table of loose deposits using hydraulic and topographic theories, and the variation in water table with elapsed time was analyzed. Formulas for calculating the hydrodynamic and hydrostatic pressures on each strip and block unit of the deposit were proposed, and the slope stability and failure risk of the loose deposits were assessed based on the time-dependent hydraulic characteristics of the established model. Finally, the failure mechanism of deposits based on infinite slope theory was illustrated with an example, calculating the sliding force, anti-sliding force and residual sliding force applied to each slice. The results indicate that failure of gully deposits under the effect of rainfall is the result of a continuously increasing hydraulic pressure and water table. The time-dependent characteristics of loose deposit failure are determined by the hydraulic properties, the drainage area of interest, the rainfall pattern, and the rainfall duration and intensity.
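
    For a planar slide, the strip-wise force balance described here reduces to the classic infinite-slope factor of safety with seepage, which makes the role of a rising water table explicit. All soil parameters below are invented; m is the fraction of the failure-plane depth that is saturated.

```python
import numpy as np

c_eff   = 5.0e3            # effective cohesion, Pa (invented)
phi_eff = np.radians(32)   # effective friction angle (invented)
gamma   = 19.0e3           # unit weight of deposit, N/m^3 (invented)
gamma_w = 9.81e3           # unit weight of water, N/m^3
z       = 4.0              # depth of potential failure plane, m (invented)
beta    = np.radians(35)   # slope angle (invented)

def factor_of_safety(m):
    """Infinite-slope FS with the water table at ratio m of depth z (0 dry, 1 full)."""
    resisting = c_eff + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi_eff)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

for m in (0.0, 0.5, 1.0):
    print(f"water-table ratio m = {m:.1f}: FS = {factor_of_safety(m):.2f}")
```

    With these invented numbers the slope sits near FS = 1 when dry and drops well below 1 when fully saturated, mirroring the record's conclusion that a rising water table drives failure.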

  8. On rate-state and Coulomb failure models

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time, and a return directly to the background rate. A Coulomb model requires that a fault remain locked during loading, that failure occur instantaneously, and that Δt be independent of t0. These characteristics imply an instantaneous, infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics may also constrain rate-state constitutive parameters: a may be lower than laboratory values, the stiffness may be high, and/or the normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified Coulomb model.
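
    The r(t)/r0 response discussed here can be illustrated with the standard Dieterich (1994) closed form for a stress step on top of constant-rate loading; the sketch below uses that published formula with illustrative parameter values, not values from this paper.

```python
import numpy as np

def seismicity_rate_ratio(t, dtau, a_sigma, stressing_rate):
    """Dieterich (1994) rate-state seismicity response r(t)/r0 to a
    coseismic stress step dtau applied at t = 0 on top of constant
    tectonic stressing. a_sigma is the product a*sigma; the values in
    the example call are illustrative."""
    t_a = a_sigma / stressing_rate             # aftershock relaxation time
    gamma = (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return 1.0 / gamma

t = np.logspace(-3, 2, 200)                    # years after the stress step
ratio = seismicity_rate_ratio(t, dtau=0.5, a_sigma=0.1, stressing_rate=0.01)
# Early times: large rate jump; intermediate times: Omori-like ~1/t decay;
# t >> t_a: return to the background rate (ratio -> 1).
print(ratio[0], ratio[-1])
```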

  9. Calculation of fuel pin failure timing under LOCA conditions

    Jones, K.R.; Wade, N.L.; Siefken, L.J.; Straka, M.; Katsma, K.R.

    1991-10-01

    The objective of this research was to develop and demonstrate a methodology for calculating the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) 4-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system (ECCS) availability, and main coolant pump trip on this interval. The analysis was performed using a four-code approach, comprising FRAPCON-2, SCDAP/RELAP5/MOD3, TRAC-PF1/MOD1, and FRAP-T6. In addition to the calculation of timing results, this analysis provided a comparison of the capabilities of SCDAP/RELAP5/MOD3 with TRAC-PF1/MOD1 for large-break LOCA analysis. This paper discusses the methodology employed and the code development efforts required to implement it. The shortest calculated time intervals between initiation of containment isolation and fuel pin failure were 11.4 s and 19.1 s for the B&W and W plants, respectively. The FRAP-T6 fuel pin failure times calculated using thermal-hydraulic data generated by SCDAP/RELAP5/MOD3 were more conservative than those calculated using data generated by TRAC-PF1/MOD1. 18 refs., 7 figs., 4 tabs

  10. MODELS OF INSULIN RESISTANCE AND HEART FAILURE

    Velez, Mauricio; Kohli, Smita; Sabbah, Hani N.

    2013-01-01

    The incidence of heart failure (HF) and diabetes mellitus is rapidly increasing and is associated with poor prognosis. In spite of advances in therapy, HF remains a major health problem with high morbidity and mortality. When HF and diabetes coexist, clinical outcomes are significantly worse. The relationship between these two conditions has been studied in various experimental models. However, the mechanisms for this interrelationship are complex, incompletely understood, and have become a matter of considerable clinical and research interest. There are only a few animal models that manifest both HF and diabetes. However, the translation of results from these models to human disease is limited, and new models are needed to expand our current understanding of this clinical interaction. In this review, we discuss mechanisms of insulin signaling and insulin resistance, the clinical association between insulin resistance and HF, and its proposed pathophysiologic mechanisms. Finally, we discuss available animal models of insulin resistance and HF and propose requirements for future new models. PMID:23456447

  11. Predicting kidney graft failure using time-dependent renal function covariates

    de Bruijne, Mattheus H. J.; Sijpkens, Yvo W. J.; Paul, Leendert C.; Westendorp, Rudi G. J.; van Houwelingen, Hans C.; Zwinderman, Aeilko H.

    2003-01-01

    Chronic rejection and recurrent disease are the major causes of late graft failure in renal transplantation. To assess outcome, most researchers use Cox proportional hazards analysis with time-fixed covariates. We developed a model adding time-dependent renal function covariates to improve the prediction of graft failure.

  12. A modified GO-FLOW methodology with common cause failure based on Discrete Time Bayesian Network

    Fan, Dongming; Wang, Zili; Liu, Linlin; Ren, Yi

    2016-01-01

    Highlights: • Identification of particular causes of failure for common cause failure analysis. • Comparison of two formalisms (GO-FLOW and Discrete Time Bayesian Network) and establishment of the correlation between them. • Mapping of the GO-FLOW model into a Bayesian network model. • Calculation of GO-FLOW models with common cause failures based on DTBN. - Abstract: The GO-FLOW methodology is a success-oriented system reliability modelling technique for multi-phase missions involving complex time-dependent, multi-state and common cause failure (CCF) features. However, the analysis algorithm cannot easily handle multiple shared signals and CCFs. In addition, the simulative algorithm is time consuming when many multi-state components exist in the model, and the multiple time points of phased-mission problems increase the difficulty of the analysis. In this paper, the Discrete Time Bayesian Network (DTBN) and the GO-FLOW methodology are integrated through unified mapping rules. Based on these rules, the GO-FLOW operators can be mapped into a DTBN; subsequently, a complete GO-FLOW model with complex characteristics (e.g. phased mission, multi-state, and CCF) can be converted to an isomorphic DTBN and easily analyzed by utilizing the DTBN. With mature algorithms and tools, the multi-phase mission reliability parameters can be efficiently obtained via the proposed approach without considering the shared signals and the various complex logic operations. Meanwhile, CCF can also be incorporated in the computing process.

  13. Morning surge of ventricular arrhythmias in a new arrhythmogenic canine model of chronic heart failure is associated with attenuation of time-of-day dependence of heart rate and autonomic adaptation, and reduced cardiac chaos.

    Zhu, Yujie; Hanafy, Mohamed A; Killingsworth, Cheryl R; Walcott, Gregory P; Young, Martin E; Pogwizd, Steven M

    2014-01-01

    Patients with chronic heart failure (CHF) exhibit a morning surge in ventricular arrhythmias, but the underlying cause remains unknown. The aim of this study was to determine if heart rate dynamics, autonomic input (assessed by heart rate variability (HRV)) and nonlinear dynamics, as well as their abnormal time-of-day-dependent oscillations, in a newly developed arrhythmogenic canine heart failure model are associated with a morning surge in ventricular arrhythmias. CHF was induced in dogs by aortic insufficiency and aortic constriction, and assessed by echocardiography. Holter monitoring was performed to study time-of-day-dependent variation in ventricular arrhythmias (PVCs, VT), traditional HRV measures, and nonlinear dynamics (including detrended fluctuation analysis α1 and α2 (DFAα1 & DFAα2), correlation dimension (CD), and Shannon entropy (SE)) at baseline, as well as 240 days (240 d) and 720 days (720 d) following CHF induction. LV fractional shortening was decreased at both 240 d and 720 d. Both PVCs and VT increased with CHF duration and showed a morning rise (2.5-fold and 1.8-fold increase at 6 AM-noon vs midnight-6 AM) during CHF. The morning rise in HR at baseline was significantly attenuated (by 52%) with development of CHF (at both 240 d and 720 d). The morning rise in the ratio of low frequency to high frequency (LF/HF) HRV at baseline was markedly attenuated with CHF. DFAα1, DFAα2, CD and SE all decreased with CHF, by 31, 17, 34 and 7%, respectively. Time-of-day-dependent variations in LF/HF, CD, DFAα1 and SE, observed at baseline, were lost during CHF. Thus, in this new arrhythmogenic canine CHF model, an attenuated morning HR rise, blunted autonomic oscillation, decreased cardiac chaos and complexity of heart rate, as well as aberrant time-of-day-dependent variations in many of these parameters, were associated with a morning surge of ventricular arrhythmias.

  15. Semiparametric regression analysis of failure time data with dependent interval censoring.

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent, or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating the model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Bounds for the time to failure of hierarchical systems of fracture

    Gómez, J.B.; Vázquez-Prada, M.; Moreno, Y.

    1999-01-01

    For years limited Monte Carlo simulations have led to the suspicion that the time to failure of hierarchically organized load-transfer models of fracture is nonzero for sets of infinite size. This fact could have profound significance in engineering practice and also in geophysics. Here, we develop an exact algebraic iterative method to compute the successive time intervals for individual breaking in systems of height n in terms of the information calculated in the previous height n - 1. As a byproduct of this method, rigorous lower and upper bounds for the time to failure of very large systems are obtained.

  17. The multi-class binomial failure rate model for the treatment of common-cause failures

    Hauptmanns, U.

    1995-01-01

    The impact of common cause failures (CCF) on PSA results for NPPs is in sharp contrast with the limited quality which can be achieved in their assessment. This is due to the dearth of observations and cannot be remedied in the short run. Therefore the methods employed for calculating failure rates should be devised so as to make the best use of the few available observations on CCF. The Multi-Class Binomial Failure Rate (MCBFR) model achieves this by assigning observed failures to different classes according to their technical characteristics and applying the BFR formalism to each of these. The results are hence determined by a superposition of BFR-type expressions for each class, each with its own coupling factor. The model thus obtained flexibly reproduces the dependence of CCF rates on failure multiplicity suggested by the observed failure multiplicities. This is demonstrated by evaluating CCFs observed for combined impulse pilot valves in German NPPs. (orig.) [de]
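
    For readers unfamiliar with the BFR formalism, a minimal sketch of the multiplicity-dependent event rates follows; the MCBFR model would apply this per failure class, each class with its own shock rate and coupling factor. All parameter values are illustrative.

```python
from math import comb

def bfr_multiplicity_rates(m, lam, mu, p, omega):
    """Event rates by failure multiplicity k under the Binomial Failure
    Rate model: independent failures at rate lam per component, nonlethal
    shocks at rate mu hitting each of m components with probability p,
    and lethal shocks at rate omega failing all m components at once."""
    rates = {}
    for k in range(1, m + 1):
        rate = mu * comb(m, k) * p**k * (1 - p)**(m - k)
        if k == 1:
            rate += m * lam      # independent single failures
        if k == m:
            rate += omega        # lethal shocks fail every component
        rates[k] = rate
    return rates

# A 4-train system with illustrative rates; the MCBFR result would be a
# superposition of such expressions, one per failure class.
print(bfr_multiplicity_rates(m=4, lam=1e-5, mu=1e-6, p=0.3, omega=1e-8))
```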

  18. Review of constitutive models and failure criteria for concrete

    Seo, Jeong Moon; Choun, Young Sun [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

    The general behavior, constitutive models, and failure criteria of concrete are reviewed. Current constitutive models cannot capture all aspects of the mechanical behavior of concrete. Among the available constitutive models, damage models are recommended for properly describing the structural behavior of concrete containment buildings, because failure modes and post-failure behavior are important in containment buildings. A constitutive model that can describe the behavior of concrete in tension is required, because containment buildings reach the failure state under ultimate internal pressure. Therefore, a thorough study of the behavior and models of concrete and reinforced concrete under tensile stress states has to be performed. There are two types of failure criteria for containment buildings: structural failure criteria and leakage failure criteria. For reinforced or prestressed concrete containment buildings, concrete cracking does not imply structural failure of the containment building, because the reinforcement or post-tensioning system is able to resist tensile stress up to the yield stress. Therefore, leakage failure criteria will be reached before structural failure criteria, and a strain-based failure criterion for concrete has to be established. 120 refs., 59 figs., 1 tab. (Author)

  19. Centrifuge model test of rock slope failure caused by seismic excitation. Plane failure of dip slope

    Ishimaru, Makoto; Kawai, Tadashi

    2008-01-01

    Recently, it has become necessary to assess quantitatively the seismic safety of critical facilities against earthquake-induced rock slope failure from the viewpoint of seismic PSA. Under these circumstances, it is essential to evaluate more accurately the possibility of rock slope failure and the potential failure boundary triggered by earthquake ground motions. The purpose of this study is to analyze the dynamic failure characteristics of rock slopes by centrifuge model tests, for verification and improvement of the analytical methods. We conducted a centrifuge model test using a dip slope model with discontinuities represented by Teflon sheets. The centrifugal acceleration was 50G, and the acceleration amplitude of the input sine waves was increased gradually at every step. The test results were compared with safety factors from a stability analysis based on the limit equilibrium concept. The main conclusions are as follows: (1) the slope model collapsed when excited by a sine wave of 400 gal, converted to real field scale; (2) the artificial discontinuities were considerably involved in the collapse, and the type of collapse was plane failure; (3) the response acceleration records observed at the slope model indicate that tension cracks were generated near the top of the slope model during excitation, and that these might be the cause of the collapse; (4) by considering the generation of tension cracks in the stability analysis, the agreement between the analytical and experimental results improved. From the obtained results, progressive failure needs to be considered in evaluating earthquake-induced rock slope failure. (author)

  20. [Hazard function and life table: an introduction to the failure time analysis].

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as part of regression analysis with limited dependent variables, as well as a special case of event history analysis and multistate demography. The ideas of the hazard function and failure time analysis, however, have not been properly introduced to, nor commonly discussed by, demographers in Japan. The concept of the hazard function is briefly described in comparison with life tables, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of the exponential distribution, the normal distribution, and proportional hazards models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
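
    For reference (not part of the original abstract), the standard identities linking the hazard function, density, and survival function, with the force of mortality playing the role of h(t) in a life table:

```latex
h(t) = \lim_{\Delta t \to 0}
       \frac{\Pr\bigl(t \le T < t + \Delta t \mid T \ge t\bigr)}{\Delta t}
     = \frac{f(t)}{S(t)},
\qquad
S(t) = \Pr(T > t) = \exp\!\left(-\int_0^t h(u)\,du\right).
% Exponential case: h(t) = \lambda (constant force of mortality).
% Proportional hazards: h(t \mid x) = h_0(t)\,\exp(\beta^{\top} x).
```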

  1. A new method for explicit modelling of single failure event within different common cause failure groups

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Redundancy and diversity are the main principles of safety systems in the nuclear industry. Implementation of safety component redundancy has been acknowledged as an effective approach for assuring high levels of system reliability. The existence of redundant components, identical in most cases, implies a probability of their simultaneous failure due to a shared cause, i.e., a common cause failure. This paper presents a new method for explicit modelling of a single component failure event within multiple common cause failure groups simultaneously. The method is based on a modification of the frequently utilised Beta Factor parametric model. The motivation for the development of this method lies in the fact that one of the most widely used software tools for fault tree and event tree modelling as part of probabilistic safety assessment does not offer the option of simultaneous assignment of a single failure event to multiple common cause failure groups. In that sense, the proposed method extends the explicit modelling of common cause failures. A standard standby safety system is selected as a case study for application and study of the proposed methodology. The results and insights indicate improved, more transparent and more comprehensive models within probabilistic safety assessment.
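
    A minimal sketch of the underlying idea, assuming a modified Beta Factor split in which a component's total failure rate is divided between an independent contribution and one contribution per CCF group it belongs to; the group names, rates, and beta values are hypothetical, not the paper's case study.

```python
def split_failure_rate(total_rate, betas):
    """Split a component's total failure rate into an independent part and
    one common cause contribution per CCF group. 'betas' maps a CCF group
    name to its beta factor; a single component may appear in several
    groups, which is the situation the paper's method addresses."""
    beta_sum = sum(betas.values())
    assert beta_sum < 1.0, "combined beta factors must stay below 1"
    independent = (1.0 - beta_sum) * total_rate
    ccf = {group: b * total_rate for group, b in betas.items()}
    return independent, ccf

# A pump that shares one CCF group with its redundant twin and another
# with a diverse component (illustrative rates and betas):
ind, ccf = split_failure_rate(2.0e-5, {"redundant_pumps": 0.05,
                                       "diverse_components": 0.02})
print(ind, ccf)
```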

  2. Reliability model for common mode failures in redundant safety systems

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)

  3. Data analysis using the Binomial Failure Rate common cause model

    Atwood, C.L.

    1983-09-01

    This report explains how to use the Binomial Failure Rate (BFR) method to estimate common cause failure rates. The entire method is described, beginning with the conceptual model, and covering practical issues of data preparation, treatment of variation in the failure rates, Bayesian estimation of the quantities of interest, checking the model assumptions for lack of fit to the data, and the ultimate application of the answers

  4. The Influence of a High Salt Diet on a Rat Model of Isoproterenol-Induced Heart Failure

    Rat models of heart failure (HF) show varied pathology and time to disease outcome, dependent on induction method. We found that subchronic (4 weeks) isoproterenol (ISO) infusion exacerbated cardiomyopathy in Spontaneously Hypertensive Heart Failure (SHHF) rats. Others have shown...

  5. Modeling Complex Time Limits

    Oleg Svatos

    2013-01-01

    Full Text Available In this paper we analyze the complexity of time limits found especially in the regulated processes of public administration. First we review the most popular process modeling languages. An example scenario, based on current Czech legislation, is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows the models to be kept simple and easy to understand.

  6. Total time on test processes and applications to failure data analysis

    Barlow, R.E.; Campo, R.

    1975-01-01

    This paper describes a new method for analyzing data. The method applies to non-negative observations such as times to failure of devices and survival times of biological organisms and involves a plot of the data. These plots are useful in choosing a probabilistic model to represent the failure behavior of the data. They also furnish information about the failure rate function and aid in its estimation. An important feature of these data plots is that incomplete data can be analyzed. The underlying random variables are, however, assumed to be independent and identically distributed. The plots have a theoretical basis, and converge to a transform of the underlying probability distribution as the sample size increases
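
    A small sketch of how the (scaled) total time on test plot is computed for a complete sample; the interpretation comments follow standard TTT-plot practice rather than anything specific to this paper.

```python
import numpy as np

def scaled_ttt(times):
    """Scaled total-time-on-test transform of a complete sample of
    non-negative failure times. Returns plot coordinates (u_i, v_i);
    a concave curve suggests an increasing failure rate, a convex one a
    decreasing rate, and the diagonal is consistent with exponentiality."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    # Total time on test at the i-th order statistic: the first i failure
    # times plus (n - i) unfailed units observed up to t_(i).
    ttt = np.cumsum(t) + (n - np.arange(1, n + 1)) * t
    u = np.arange(1, n + 1) / n
    v = ttt / ttt[-1]
    return u, v

rng = np.random.default_rng(1)
u, v = scaled_ttt(rng.weibull(2.0, size=200))  # IFR sample -> concave plot
print(np.round(v[:5], 3))
```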

  7. Universal failure model for multi-unit systems with shared functionality

    Volovoi, Vitali

    2013-01-01

    A Universal Failure Model (UFM) is proposed for complex systems that rely on a large number of entities for performing a common function. Economy of scale or other considerations may dictate the need to pool resources for common purpose, but the resulting strong coupling precludes the grouping of those components into modules. Existing system-level failure models rely on modularity for reducing modeling complexity, so the UFM will fill an important gap in constructing efficient system-level models. Conceptually, the UFM resembles cellular automata (CA) infused with realistic failure mechanisms. Components’ behavior is determined based on the balance between their strength (capacity) and their load (demand) share. If the load exceeds the components’ capacity, the component fails and its load share is distributed among its neighbors (possibly with a time delay and load losses). The strength of components can degrade with time if the load exceeds an elastic threshold. The global load (demand) carried by the system can vary over time, with the peak values providing shocks to the system (e.g., wind loads in civil structures, electricity demand, stressful activities to human bodies, or drought in an ecosystem). Unlike the models traditionally studied by CA, the focus of the presented model is on the system reliability, and specifically on the study of time-to-failure distributions, rather than steady-state patterns and average time-to-failure characteristics. In this context, the relationships between the types of failure distributions and the parameters of the failure model are discussed
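
    The following toy simulation (an assumption-laden sketch, not the paper's model) illustrates the ingredients named above: capacity versus load share, degradation above an elastic threshold, and load shedding to surviving neighbors after a failure.

```python
import numpy as np

def simulate_ufm(n=100, steps=500, elastic=0.8, wear=0.01, seed=0):
    """Toy load-sharing model in the spirit of the UFM: n components in a
    line share a global demand; a component whose load exceeds its
    capacity fails and sheds its load onto the nearest survivor. Loads
    above an elastic threshold also degrade capacity. All parameters are
    illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    capacity = rng.uniform(1.0, 2.0, n)
    alive = np.ones(n, dtype=bool)
    load = np.full(n, 0.9)                       # initial demand share
    for t in range(steps):
        load[alive] *= rng.uniform(0.95, 1.10)   # fluctuating global demand
        # Wear-out: loads above the elastic threshold degrade capacity
        stressed = alive & (load > elastic * capacity)
        capacity[stressed] -= wear
        failed = alive & (load > capacity)
        for i in np.flatnonzero(failed):
            alive[i] = False
            survivors = np.flatnonzero(alive)
            if survivors.size == 0:
                return t                         # system time to failure
            # Shed the failed component's load onto the nearest survivor
            nearest = survivors[np.argmin(np.abs(survivors - i))]
            load[nearest] += load[i]
            load[i] = 0.0
    return None                                  # survived the horizon

print(simulate_ufm())
```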

  8. Probability of Loss of Assured Safety in Systems with Multiple Time-Dependent Failure Modes: Incorporation of Delayed Link Failure in the Presence of Aleatory Uncertainty.

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sallaberry, Cedric Jean-Marie. [Engineering Mechanics Corp. of Columbus, OH (United States)

    2018-02-01

    Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.

  9. Pig models for the human heart failure syndrome

    Hunter, Ingrid; Terzic, Dijana; Zois, Nora Elisabeth

    2014-01-01

    Human heart failure remains a challenging illness despite advances in the diagnosis and treatment of heart failure patients. There is a need for further improvement of our understanding of the failing myocardium and its molecular deterioration. Porcine models provide an important research tool in this respect, as molecular changes can be examined in detail, which is simply not feasible in human patients. However, the human heart failure syndrome is based on symptoms and signs, where pig models mostly mimic the myocardial damage, but without decisive data on clinical presentation and, therefore, a heart ... to elucidate the human heart failure syndrome.

  10. A quasi-static algorithm that includes effects of characteristic time scales for simulating failures in brittle materials

    Liu, Jinxing; El Sayed, Tamer S.

    2013-01-01

    When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of T_elem, the characteristic releasing time of the internal forces of the broken elements, and T_lattice ...

  11. Strong exploration of a cast iron pipe failure model

    Moglia, M.; Davis, P.; Burn, S.

    2008-01-01

    A physical probabilistic failure model for buried cast iron pipes is described, based on the fracture mechanics of the pipe failure process. Such a model is useful in the asset management of buried pipelines. The model is applied within a Monte Carlo simulation framework after adding stochasticity to the input variables. Historical failure rates are calculated from a database of 81,595 pipes and their recorded failures, and model parameters are chosen to provide the best fit between historical and predicted failure rates. This yields an estimated corrosion rate distribution, which agrees well with experimental results. The first model design was deliberately kept simplistic in order to allow strong exploration of the model assumptions; accordingly, first runs of the initial model resulted in a poor quantitative and qualitative fit to the failure rates. However, by exploring natural additional assumptions, such as those relating to stochastic loads, a set of assumptions was found that improved the model to the point where an acceptable fit was achieved. The model bridges the gap between micro- and macro-level, and this is the novelty of the approach: data can be used both at the macro-level, in terms of failure rates, and at the micro-level, in terms of corrosion rates.
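
    A Monte Carlo sketch in the spirit of the described micro-to-macro bridge: sample corrosion rates and loads, fail a pipe when residual strength drops below its stress, and read off annual failure rates. All distributions and constants are illustrative assumptions, not the calibrated values from the paper.

```python
import numpy as np

def pipe_failure_rate(n=100_000, horizon=120, seed=0):
    """Monte Carlo sketch of a physical pipe failure model: corrosion
    thins the wall over time, and a pipe fails once its residual strength
    can no longer carry its (random) stress load."""
    rng = np.random.default_rng(seed)
    wall = rng.normal(10.0, 1.0, n)                      # wall thickness [mm]
    corr = rng.lognormal(mean=-1.5, sigma=0.6, size=n)   # corrosion rate [mm/yr]
    strength = rng.normal(200.0, 25.0, n)                # material strength [MPa]
    stress = rng.normal(60.0, 20.0, n)                   # service + soil loads [MPa]
    cum_failed = np.zeros(horizon)
    for year in range(horizon):
        residual = np.clip(1.0 - corr * year / wall, 0.0, 1.0)
        cum_failed[year] = np.mean(strength * residual < stress)
    # Annual failure rate = yearly increment of the cumulative fraction
    return np.diff(cum_failed, prepend=0.0)

rate = pipe_failure_rate()
print(rate[:5], rate[-5:])   # shape to compare against historical rates
```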

  12. A RAT MODEL OF HEART FAILURE INDUCED BY ISOPROTERENOL AND A HIGH SALT DIET

    Rat models of heart failure (HF) show varied pathology and time to disease outcome, dependent on induction method. We found that subchronic (4wk) isoproterenol (ISO) infusion in Spontaneously Hypertensive Heart Failure (SHHF) rats caused cardiac injury with minimal hypertrophy. O...

  13. Modeling the failure data of a repairable equipment with bathtub type failure intensity

    Pulcini, G.

    2001-01-01

    The paper deals with the reliability modeling of the failure process of large and complex repairable equipment whose failure intensity shows a bathtub-type non-monotonic behavior. A non-homogeneous Poisson process arising from the superposition of two power law processes is proposed, and the characteristics and mathematical details of the proposed model are illustrated. A graphical approach is also presented, which allows one to determine whether the proposed model can adequately describe a given set of failure data. A graphical method for obtaining crude but easy estimates of the model parameters is then illustrated, and more accurate estimates based on the maximum likelihood method are provided. Finally, two numerical applications are given to illustrate the proposed model and the estimation procedures
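
    A quick sketch of the superposed power-law intensity, whose sum of a decreasing (beta < 1) and an increasing (beta > 1) term produces the bathtub shape; the parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

def superposed_power_law_intensity(t, beta1, eta1, beta2, eta2):
    """Failure intensity of an NHPP formed by superposing two power law
    processes: one with beta1 < 1 (early failures) and one with beta2 > 1
    (wear-out), giving a bathtub-shaped intensity."""
    return (beta1 / eta1) * (t / eta1) ** (beta1 - 1) + \
           (beta2 / eta2) * (t / eta2) ** (beta2 - 1)

t = np.linspace(0.1, 10.0, 100)
lam = superposed_power_law_intensity(t, beta1=0.5, eta1=1.0, beta2=3.0, eta2=5.0)
print(t[np.argmin(lam)])   # bottom of the bathtub
```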

  14. A probability model for the failure of pressure containing parts

    Thomas, H.M.

    1978-01-01

    The model provides a method of estimating the order of magnitude of the leakage failure probability of pressure-containing parts. It is a fatigue-based model which makes use of the statistics available for both specimens and vessels. Some novel concepts are introduced, but essentially the model simply quantifies the obvious, i.e., that failure probability increases with increases in stress levels, number of cycles, volume of material and volume of weld metal. A further model based on fracture mechanics estimates the catastrophic fraction of leakage failures. (author)

  15. Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods

    Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed

    2018-04-01

    This study evaluated the failure probabilities of jack-up units within the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. The surface elevation for each sea state was represented by the Karhunen-Loeve expansion method, using the eigenfunctions of prolate spheroidal wave functions, in order to obtain the wave load. The stochastic wave load was propagated through a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability for excessive deck displacement within the framework of time-dependent reliability analysis was performed with Matlab codes developed on a personal computer. The results indicate that the failure probability increases with the severity of the sea state, i.e. with a longer return period. Although these results agree with those of a study of a similar jack-up model using a time-independent method at higher values of the maximum allowable deck displacement, they contrast at lower values of the criterion, where that study reported that the failure probability decreases with increasing severity of the sea state.

  16. Prediction of the time-dependent failure rate for normally operating components taking into account the operational history

    Vrbanic, I.; Simic, Z.; Sljivac, D.

    2008-01-01

    The prediction of the time-dependent failure rate has been studied, taking into account the operational history of a component, for applications such as system modeling in probabilistic safety analysis, in order to evaluate the impact of equipment aging and maintenance strategies on the risk measures considered. We have selected a time-dependent model for the failure rate which is based on the Weibull distribution and the principle of proportional age reduction by equipment overhauls. Estimation of the parameters that determine the failure rate is considered, including the definition of the operational history model and the likelihood function for the Bayesian analysis of parameters for normally operating repairable components. The operational history is provided as a time axis with defined times of overhauls and failures. A demonstration example is described, with prediction of future behavior for seven different operational histories. (orig.)
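
    A minimal sketch of the two ingredients named above, assuming the proportional age reduction convention in which each overhaul rewinds the accumulated age by a factor rho; the Weibull parameters and overhaul times are illustrative.

```python
def effective_age(t, overhauls, rho):
    """Effective component age at time t under proportional age reduction:
    each overhaul at time tau reduces the age accumulated since the
    previous overhaul by a factor rho in [0, 1]."""
    age, last = 0.0, 0.0
    for tau in sorted(overhauls):
        if tau > t:
            break
        age = (1.0 - rho) * (age + (tau - last))
        last = tau
    return age + (t - last)

def weibull_rate(t, beta, eta, overhauls=(), rho=0.0):
    """Weibull failure rate evaluated at the effective age, the model form
    described for normally operating repairable components."""
    a = effective_age(t, overhauls, rho)
    return (beta / eta) * (a / eta) ** (beta - 1)

print(weibull_rate(12.0, beta=2.5, eta=10.0))                           # no overhauls
print(weibull_rate(12.0, beta=2.5, eta=10.0, overhauls=[8.0], rho=0.6)) # rejuvenated
```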

  17. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated in artificial intelligence (AI) as a leading intelligent computational inference approach for modeling multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomy property of intelligent agents is defined as the agents' ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  18. Modelling the failure risk for water supply networks with interval-censored data

    García-Mora, B.; Debón, A.; Santamaría, C.; Carrión, A.

    2015-01-01

    In reliability analysis, some failures are not observed at the exact moment of occurrence; in such cases it can be more convenient to approximate them by a time interval. In this study, we used a generalized non-linear model developed for interval-censored data to treat the lifetime of a pipe from its time of installation until its failure. The aim of this analysis was to identify those network characteristics that may affect the risk of failure, and an exhaustive validation of the analysis was carried out. The results indicated that certain characteristics of the network increased the risk of pipe failure: greater length and pressure of the pipes, a small diameter, some materials used in the manufacture of the pipes, and the traffic on the street where the pipes are located. Once the model had been correctly fitted to our data, we also provided simple tables that allow companies to easily calculate a pipe's probability of failure in the future. - Highlights: • We model the first failure time in a water supply company from Spain. • We fit arbitrarily interval-censored data with a generalized non-linear model. • The results are validated. We provide simple tables to easily calculate probabilities of no failure at different times.

  19. A Zebrafish Heart Failure Model for Assessing Therapeutic Agents.

    Zhu, Xiao-Yu; Wu, Si-Qi; Guo, Sheng-Ya; Yang, Hua; Xia, Bo; Li, Ping; Li, Chun-Qi

    2018-03-20

    Heart failure is a leading cause of death, and the development of effective and safe therapeutic agents for heart failure has proven challenging. In this study, taking advantage of larval zebrafish, we developed a zebrafish heart failure model for drug screening and efficacy assessment. Zebrafish at 2 dpf (days postfertilization) were treated with verapamil at a concentration of 200 μM for 30 min, which was determined as the optimum condition for model development. Tested drugs were administered into zebrafish either by direct soaking or circulation microinjection. After treatment, zebrafish were randomly selected and subjected to either visual observation and image acquisition or video recording under a Zebralab Blood Flow System. The therapeutic effects of drugs on zebrafish heart failure were quantified by calculating the efficiency of heart dilatation, venous congestion, cardiac output, and blood flow dynamics. All 8 human heart failure therapeutic drugs (LCZ696, digoxin, irbesartan, metoprolol, qiliqiangxin capsule, enalapril, shenmai injection, and hydrochlorothiazide) showed significant preventive and therapeutic effects on zebrafish heart failure (p < 0.05). The zebrafish heart failure model developed and validated in this study could be used for in vivo heart failure studies and for rapid screening and efficacy assessment of preventive and therapeutic drugs.

  20. Failure Analysis of Nonvolatile Residue (NVR) Analyzer Model SP-1000

    Potter, Joseph C.

    2011-01-01

    National Aeronautics and Space Administration (NASA) subcontractor Wiltech contacted the NASA Electrical Lab (NE-L) and requested a failure analysis of a Solvent Purity Meter, model SP-1000, produced by the VerTis Instrument Company. The meter, used to measure the contaminants in a solvent in order to determine the relative contamination of spacecraft flight hardware and ground servicing equipment, had been inoperable and in storage for an unknown amount of time. NE-L was asked to troubleshoot the unit and determine what would be required to make it operational. Through the use of general troubleshooting processes and the review of a unit in service at the time of analysis, the unit was found to be repairable but would need the replacement of multiple components.

  1. Modelling bursty time series

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
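
    A toy simulation in the spirit of the task-list model described above: a fixed-size list with random priorities, always serving the highest-priority task and replacing it with a fresh one. The uniform priority distribution and list size are illustrative choices; the heavy-tailed waiting times are what make the output bursty.

```python
import numpy as np

def queue_waiting_times(list_size=100, steps=200_000, seed=0):
    """Serve-the-highest-priority queue: each step executes the top task
    and replaces it with a new task of fresh random priority. Returns the
    waiting times (steps spent on the list) of the served tasks, whose
    distribution develops a heavy tail."""
    rng = np.random.default_rng(seed)
    priority = rng.random(list_size)
    born = np.zeros(list_size, dtype=int)   # step at which each task entered
    waits = []
    for t in range(steps):
        i = int(np.argmax(priority))        # serve the top-priority task
        waits.append(t - born[i])
        priority[i] = rng.random()          # replace it with a new task
        born[i] = t
    return np.asarray(waits)

w = queue_waiting_times()
# Most tasks are served almost immediately; a few wait very long.
print(np.percentile(w, [50, 90, 99, 99.9]))
```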

  2. Machine learning in heart failure: ready for prime time.

    Awan, Saqib Ejaz; Sohel, Ferdous; Sanfilippo, Frank Mario; Bennamoun, Mohammed; Dwivedi, Girish

    2018-03-01

    The aim of this review is to present an up-to-date overview of the application of machine learning methods in heart failure including diagnosis, classification, readmissions and medication adherence. Recent studies have shown that the application of machine learning techniques may have the potential to improve heart failure outcomes and management, including cost savings by improving existing diagnostic and treatment support systems. Recently developed deep learning methods are expected to yield even better performance than traditional machine learning techniques in performing complex tasks by learning the intricate patterns hidden in big medical data. The review summarizes the recent developments in the application of machine and deep learning methods in heart failure management.

  3. Validation of the Seattle Heart Failure Model (SHFM) in Heart Failure Population

    Hussain, S.; Kayani, A.M.; Munir, R.

    2014-01-01

    Objective: To determine the effectiveness of the Seattle Heart Failure Model (SHFM) in predicting mortality in a Pakistani systolic heart failure cohort. Study Design: Cohort study. Place and Duration of Study: The Armed Forces Institute of Cardiology - National Institute of Heart Diseases, Rawalpindi, from March 2011 to March 2012. Methodology: One hundred and eighteen patients with heart failure (HF) from the registry were followed for one year. Their 1-year mortality was calculated using the SHFM software on their enrollment into the registry. After one year, the predicted 1-year mortality was compared with the actual 1-year mortality of these patients. Results: The mean age was 41.6 ± 14.9 years (16 - 78 years). There were 73.7% males and 26.3% females. One hundred and fifteen patients were in NYHA class III or IV. Mean ejection fraction in these patients was 23 ± 9.3%. Mean brain natriuretic peptide levels were 1230 ± 1214 pg/mL. Sensitivity of the model was 89.3%, with 71.1% specificity, 49% positive predictive value and 95.5% negative predictive value. The accuracy of the model was 75.4%. In ROC analysis, the AUC for the SHFM was 0.802 (p < 0.001). Conclusion: The SHFM was found to be reliable in predicting one-year mortality among Pakistani patients with heart failure. (author)

  4. VALIDATION OF SPRING OPERATED PRESSURE RELIEF VALVE TIME TO FAILURE AND THE IMPORTANCE OF STATISTICALLY SUPPORTED MAINTENANCE INTERVALS

    Gross, R; Stephen Harris, S

    2009-02-18

    The Savannah River Site operates a Relief Valve Repair Shop certified by the National Board of Pressure Vessel Inspectors to NB-23, The National Board Inspection Code. Local maintenance forces perform inspection, testing, and repair of approximately 1200 spring-operated relief valves (SORV) each year as the valves are cycled in from the field. The Site now has over 7000 certified test records in the Computerized Maintenance Management System (CMMS); a summary of that data is presented in this paper. In previous papers, several statistical techniques were used to investigate failure on demand and failure rates, including a quantal response method for predicting the failure probability as a function of time in service. The non-conservative failure mode for SORV is commonly termed 'stuck shut', defined by industry as the valve opening at greater than or equal to 1.5 times the cold set pressure. The actual time to failure is typically not known, only that failure occurred some time since the last proof test (censored data). This paper attempts to validate the assumptions underlying the statistical lifetime prediction results using Monte Carlo simulation. It employs an aging model for lift pressure as a function of set pressure, valve manufacturer, and a time-related aging effect. This paper attempts to answer two questions: (1) what is the predicted failure rate over the chosen maintenance/inspection interval; and (2) do we understand aging sufficiently to estimate risk when basing proof test intervals on proof test results?
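
    A Monte Carlo sketch of the stuck-shut criterion under a simple linear aging drift in the lift pressure ratio; the drift and scatter distributions are illustrative assumptions, not the fitted aging model from the paper.

```python
import numpy as np

def sorv_failure_prob(interval_years, n=100_000, seed=0):
    """Fraction of valves found 'stuck shut' at proof test, i.e. lift
    pressure at or above 1.5x the cold set pressure. The lift pressure
    ratio is modeled as 1 + linear aging drift + test scatter."""
    rng = np.random.default_rng(seed)
    drift = rng.normal(0.02, 0.01, n)           # ratio increase per year
    t = rng.uniform(0.0, interval_years, n)     # unknown time in service
    ratio = 1.0 + np.clip(drift, 0.0, None) * t + rng.normal(0.0, 0.08, n)
    return np.mean(ratio >= 1.5)

for years in (2, 5, 10):
    print(years, sorv_failure_prob(years))      # longer interval, higher risk
```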

  5. Travel time reliability modeling.

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  6. A Macaca mulatta model of fulminant hepatic failure

    Ping Zhou; Hong Bu; Jie Xia; Gang Guo; Li Li; Yu-Jun Shi; Zi-Xing Huang; Qiang Lu; Hong-Xia Li

    2012-01-01

    AIM: To establish an appropriate primate model of fulminant hepatic failure (FHF). METHODS: We have, for the first time, established a large animal model of FHF in Macaca mulatta by intraperitoneal infusion of amatoxin and endotoxin. Clinical features, biochemical indexes, histopathology and iconography were examined to dynamically investigate the progress and outcome of the animal model. RESULTS: Our results showed that the enzymes and serum bilirubin were markedly increased and the enzyme-bilirubin segregation emerged 36 h after toxin administration. Coagulation activity was significantly decreased. Gradually deteriorated parenchymal abnormality was detected by magnetic resonance imaging (MRI) and ultrasonography at 48 h. The liver biopsy showed marked hepatocyte steatosis and massive parenchymal necrosis at 36 h and 49 h, respectively. The autopsy showed typical yellow atrophy of the liver. Hepatic encephalopathy of the models was also confirmed by hepatic coma, MRI and pathological changes of cerebral edema. The lethal effects of the extrahepatic organ dysfunction were ruled out by their biochemical indices, imaging and histopathology. CONCLUSION: We have established an appropriate large primate model of FHF, which is closely similar to clinic cases, and can be used for investigation of the mechanism of FHF and for evaluation of potential medical therapies.

  7. Weibull Model Allowing Nearly Instantaneous Failures

    C. D. Lai

    2007-01-01

    ... expressed as a mixture of the uniform distribution and the Weibull distribution. Properties of the resulting distribution are derived; in particular, the probability density function, survival function, and hazard rate function are obtained. Some selected plots of these functions are also presented. An R script was written to fit the model parameters. An application of the modified model is illustrated.
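
    A sketch of the mixture hazard described above, assuming a Uniform(0, c) component for the nearly instantaneous failures mixed with a two-parameter Weibull; the abstract's R script is not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

def mixture_hazard(t, p, c, beta, eta):
    """Hazard h(t) = f(t) / S(t) of a mixture of Uniform(0, c) (weight p,
    the 'nearly instantaneous' failures) and Weibull(beta, eta) with
    weight 1 - p."""
    f_u = np.where((t >= 0) & (t <= c), 1.0 / c, 0.0)
    S_u = np.clip(1.0 - t / c, 0.0, 1.0)
    f_w = (beta / eta) * (t / eta) ** (beta - 1) * np.exp(-(t / eta) ** beta)
    S_w = np.exp(-(t / eta) ** beta)
    f = p * f_u + (1 - p) * f_w
    S = p * S_u + (1 - p) * S_w
    return f / S

t = np.linspace(0.01, 5, 500)
h = mixture_hazard(t, p=0.05, c=0.1, beta=2.0, eta=2.0)
# Early spike from the uniform component, then the Weibull hazard takes over.
print(h[0], h[-1])
```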

  8. Failure Recovery via RESTART: Wallclock Models

    Asmussen, Søren; Rønn-Nielsen, Anders

    A task such as the execution of a computer program or the transfer of a file on a communications link may fail and then needs to be restarted. Let the ideal task time be a constant $\\ell$ and the actual task time $X$, a random variable. Tail asymptotics for $\\mathbb{P}(X>x)$ is given under three ...
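
    A simulation sketch of the RESTART setting with a constant ideal task time and Poisson failures; the rates and sample size are illustrative, and the tail asymptotics themselves are what the paper derives analytically.

```python
import numpy as np

def restart_total_time(ideal, failure_rate, rng):
    """Total wallclock time X to complete a task of ideal length `ideal`
    under RESTART: failures arrive as a Poisson process, and any failure
    before completion forces a restart from scratch."""
    total = 0.0
    while True:
        fail_at = rng.exponential(1.0 / failure_rate)
        if fail_at >= ideal:
            return total + ideal    # this attempt completes
        total += fail_at            # wasted work; restart

rng = np.random.default_rng(42)
samples = np.array([restart_total_time(1.0, 0.5, rng) for _ in range(100_000)])
# Empirical distribution of X for inspection against the tail asymptotics.
print(samples.mean(), np.percentile(samples, 99.9))
```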

  9. Comparing risk of failure models in water supply networks using ROC curves

    Debon, A.; Carrion, A.; Cabrera, E.; Solano, H.

    2010-01-01

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.
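
    A minimal sketch of the model comparison step, assuming synthetic pipe records and scikit-learn; the covariates, coefficients, and the two candidate models are hypothetical stand-ins for the paper's real database.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical pipe records: columns are age [yr], length [m], diameter [mm].
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 80, n),
    rng.uniform(1, 500, n),
    rng.choice([80.0, 100.0, 150.0, 300.0], n),
])
# Synthetic ground truth: older, longer, thinner pipes fail more often.
logit = 0.04 * X[:, 0] + 0.004 * X[:, 1] - 0.01 * X[:, 2] - 1.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Compare two candidate risk-of-failure models by ROC AUC.
for name, cols in {"full GLM": [0, 1, 2], "age only": [0]}.items():
    model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
    auc = roc_auc_score(y, model.predict_proba(X[:, cols])[:, 1])
    print(name, round(auc, 3))   # higher AUC = better separation of failing pipes
```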

  11. Wood-adhesive bonding failure : modeling and simulation

    Zhiyong Cai

    2010-01-01

    The mechanism of wood bonding failure when exposed to wet conditions or wet/dry cycles is not fully understood and the role of the resulting internal stresses exerted upon the wood-adhesive bondline has yet to be quantitatively determined. Unlike previous modeling this study has developed a new two-dimensional internal-stress model on the basis of the mechanics of...

  12. Matching the results of a theoretical model with failure rates obtained from a population of non-nuclear pressure vessels

    Harrop, L.P.

    1982-02-01

    Failure rates for non-nuclear pressure vessel populations are often regarded as showing a decrease with time. Empirical evidence can be cited which supports this view. On the other hand theoretical predictions of PWR type reactor pressure vessel failure rates have shown an increasing failure rate with time. It is shown that these two situations are not necessarily incompatible. If adjustments are made to the input data of the theoretical model to treat a non-nuclear pressure vessel population, the model can produce a failure rate which decreases with time. These adjustments are explained and the results obtained are shown. (author)

  13. Failure Diameter of PBX 9502: Simulations with the SURFplus model

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-03

    SURFplus is a reactive burn model for high explosives aimed at modelling shock initiation and propagation of detonation waves. It utilizes the SURF model for the fast hot-spot reaction plus a slow reaction for the energy released by carbon clustering. A feature of the SURF model is that there is a partial decoupling between burn rate parameters and detonation wave properties. Previously, parameters for PBX 9502 that control shock initiation had been calibrated to Pop plot data (distance-of-run to detonation as a function of shock pressure initiating the detonation). Here burn rate parameters for the high pressure regime are adjusted to fit the failure diameter and the limiting detonation speed just above the failure diameter. Simulated results are shown for an unconfined rate stick when the 9502 diameter is slightly above and slightly below the failure diameter. Just above the failure diameter, in the rest frame of the detonation wave, the front is sonic at the PBX/air interface. As a consequence, the lead shock in the neighborhood of the interface is supported by the detonation pressure in the interior of the explosive rather than the reaction immediately behind the front. In the interior, the sonic point occurs near the end of the fast hot-spot reaction. Consequently, the slow carbon clustering reaction cannot affect the failure diameter. Below the failure diameter, the radial extent of the detonation front decreases starting from the PBX/air interface. That is, the failure starts at the PBX boundary and propagates inward to the axis of the rate stick.

  14. Failure Propagation Modeling and Analysis via System Interfaces

    Lin Zhao

    2016-01-01

    Full Text Available Safety-critical systems must be shown to be acceptably safe to deploy and use in their operational environment. One of the key concerns in developing safety-critical systems is to understand how the system behaves in the presence of failures, regardless of whether a failure is triggered by the external environment or caused by internal errors. Safety assessment at the early stages of system development involves analysis of potential failures and their consequences. Increasingly, for complex systems, model-based safety assessment is becoming more widely used. In this paper we propose an approach for safety analysis based on system interface models. By extending interaction models at the system interface level with failure modes, as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the failure analysis. We focus on fault modeling and on how to compute minimal cut sets. In particular, we explore a state space reconstruction strategy and a bounded searching technique to reduce the number of states that need to be analyzed, which remarkably improves the efficiency of the cut set searching algorithm.
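
    As a pocket illustration of cut set computation (not the paper's bounded-search algorithm), the following sketch expands a toy AND/OR fault tree top-down and filters non-minimal sets; the tree itself is hypothetical.

```python
from itertools import product

def minimal_cut_sets(node, tree):
    """Minimal cut sets of a small fault tree by top-down expansion.
    `tree` maps gate names to ('AND' | 'OR', [children]); names not in
    `tree` are basic events."""
    if node not in tree:                        # basic event (leaf)
        return [frozenset([node])]
    kind, children = tree[node]
    child_sets = [minimal_cut_sets(c, tree) for c in children]
    if kind == "OR":                            # any child's cut set suffices
        cuts = {cs for sets in child_sets for cs in sets}
    else:                                       # AND: combine one cut per child
        cuts = {frozenset().union(*combo) for combo in product(*child_sets)}
    # Discard any cut set that strictly contains another (minimality)
    return [c for c in cuts if not any(o < c for o in cuts)]

tree = {
    "TOP": ("OR", ["G1", "sensor_fault"]),
    "G1":  ("AND", ["pump_A_fails", "pump_B_fails"]),
}
print(minimal_cut_sets("TOP", tree))
# -> {'sensor_fault'} and {'pump_A_fails', 'pump_B_fails'}
```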

  15. Computational modeling for hexcan failure under core disruptive accident conditions

    Sawada, T.; Ninokata, H.; Shimizu, A. [Tokyo Institute of Technology (Japan)

    1995-09-01

    This paper describes the development of computational modeling for hexcan wall failures under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH has been analyzed by using the SIMMER-II code. The SIMBATH experiments were performed at KfK in Germany. The experiments used a thermite mixture to simulate fuel. The test geometry of SIMBATH ranged from single pin to 37-pin bundles. In this study, phenomena of hexcan wall failure found in a SIMBATH test were analyzed by SIMMER-II. Although the original model of SIMMER-II did not calculate any hexcan failure, several simple modifications made it possible to reproduce the hexcan wall melt-through observed in the experiment. In this paper the modifications and their significance are discussed for further modeling improvements.

  16. Toward a predictive model for the failure of elastomer seals.

    Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash; Baker Hughes Collaboration

    Nitrile butadiene rubber (NBR) and hydrogenated NBR (HNBR) are widely used elastomers, especially as seals in the oil and gas industry. During exposure to the extreme temperatures and pressures typical of well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. Using computer simulations, we investigate this problem at two different length- and time-scales. First, starting with our model of NBR based on the OPLS all-atom force-field, we develop a chemically-inspired description of HNBR, where C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends in the mass density and glass transition temperature for HNBR as a function of cross-link density, and for NBR as a function of the fraction of acrylonitrile in the copolymer. Second, a coarse-grained approach is taken in order to study mechanical behaviour and to overcome the length- and time-scale limitations inherent to the all-atom model. The effect of nanoparticle fillers added to the elastomer matrix is investigated. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions.

  17. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a}, and the applicable K{sub a} calibration relationship has been determined for both fully bonded and

  18. A cascading failure model for analyzing railway accident causation

    Liu, Jin-Tao; Li, Ke-Ping

    2018-01-01

    In this paper, a new cascading failure model is proposed for quantitatively analyzing railway accident causation. In the model, the loads of nodes are redistributed according to the strength of the causal relationships between the nodes. By analyzing the actual situation of the existing prevention measures, a critical threshold of the load parameter in the model is obtained. To verify the effectiveness of the proposed cascading model, simulation experiments of a train collision accident are performed. The results show that the cascading failure model can describe the cascading process of a railway accident more accurately than previous models, and can quantitatively analyze the sensitivities and the influence of the causes. In conclusion, this model can help reveal the latent rules of accident causation and so reduce the occurrence of railway accidents.

  19. Reaction Times to Consecutive Automation Failures: A Function of Working Memory and Sustained Attention.

    Jipp, Meike

    2016-12-01

    This study explored whether working memory and sustained attention influence cognitive lock-up, which is a delay in the response to consecutive automation failures. Previous research has demonstrated that the information that automation provides about failures, and the time pressure associated with a task, influence cognitive lock-up. Previous research has also demonstrated considerable variability in cognitive lock-up between participants, suggesting that individual differences might influence cognitive lock-up. The present study tested whether working memory-including flexibility in executive functioning-and sustained attention might be crucial in this regard. Eighty-five participants were asked to monitor automated aircraft functions. The experimental manipulation consisted of whether or not an initial automation failure was followed by a consecutive failure. Reaction times to the failures were recorded. Participants' working-memory and sustained-attention abilities were assessed with standardized tests. As expected, participants' reactions to consecutive failures were slower than their reactions to initial failures. In addition, working-memory and sustained-attention abilities enhanced the speed with which participants reacted to failures, more so for consecutive than for initial failures. The findings highlight that operators with better working memory and sustained attention have small advantages when initial failures occur, but their advantages increase across consecutive failures. The results stress the need to consider personnel-selection strategies to mitigate cognitive lock-up in general, and training procedures to enhance the performance of low-ability operators. © 2016, Human Factors and Ergonomics Society.

  20. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    Teverovsky, Alexander

    2017-01-01

    The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. In high-quality capacitors, the time to inception of intrinsic wear-out failures now far exceeds the mission duration of most high-reliability applications. However, in capacitors with defects, degradation processes can accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to reduced breakdown voltages and decreased times to failure is suggested. The effect of defect size is analyzed using a thermal runaway model of failure. The adequacy of highly accelerated life testing (HALT) for predicting reliability at normal operating conditions, and the limitations of voltage acceleration, are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.

  1. Modelling and Verifying Communication Failure of Hybrid Systems in HCSP

    Wang, Shuling; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    Hybrid systems are dynamic systems with interacting discrete computation and continuous physical processes. They have become ubiquitous in our daily life, e.g. automotive, aerospace and medical systems, and in particular, many of them are safety-critical. For a safety-critical hybrid system, in the presence of communication failure, the expected control from the controller will get lost and as a consequence the physical process cannot behave as expected. In this paper, we mainly consider the communication failure caused by the non-engagement of one party in a communication action, i.e. the communication itself fails to occur. To address this issue, this paper proposes a formal framework by extending HCSP, a formal modeling language for hybrid systems, for modeling and verifying hybrid systems in the absence of receiving messages due to communication failure. We present two inference systems...

  2. Fuzzy modeling of analytical redundancy for sensor failure detection

    Tsai, T.M.; Chou, H.P.

    1991-01-01

    Failure detection and isolation (FDI) in dynamic systems may be accomplished by testing the consistency of the system via analytically redundant relations. The redundant relation is basically a mathematical model relating system inputs and dissimilar sensor outputs from which information is extracted and subsequently examined for the presence of failure signatures. Performance of the approach is often jeopardized by inherent modeling error and noise interference. To mitigate such effects, techniques such as Kalman filtering, auto-regression-moving-average (ARMA) modeling in conjunction with probability tests are often employed. These conventional techniques treat the stochastic nature of uncertainties in a deterministic manner to generate best-estimated model and sensor outputs by minimizing uncertainties. In this paper, the authors present a different approach by treating the effect of uncertainties with fuzzy numbers. Coefficients in redundant relations derived from first-principle physical models are considered as fuzzy parameters and on-line updated according to system behaviors. Failure detection is accomplished by examining the possibility that a sensor signal occurred in an estimated fuzzy domain. To facilitate failure isolation, individual FDI monitors are designed for each interested sensor

  3. Gap timing and the spectral timing model.

    Hopson, J W

    1999-04-01

    A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process is effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  4. Most Probable Failures in LHC Magnets and Time Constants of their Effects on the Beam.

    Gomez Alonso, Andres

    2006-01-01

    During LHC operation, energies up to 360 MJ will be stored in each proton beam and over 10 GJ in the main electrical circuits. With such high energies, beam losses can quickly lead to serious equipment damage. The Machine Protection Systems have been designed to provide reliable protection of the LHC through detection of the failures leading to beam losses and fast dumping of the beams. In order to determine the protection strategies, it is important to know the time constants of the failure effects on the beam. In this report, we give an estimation of the time constants of quenches and powering failures in LHC magnets. The most critical failures are powering failures in certain normal-conducting circuits, leading to significant effects on the beam in ~1 ms. The failures in superconducting magnets leading to the fastest losses are quenches. In this case, the effects on the beam can be significant ~10 ms after the quench occurs.

  5. A physical probabilistic model to predict failure rates in buried PVC pipelines

    Davis, P.; Burn, S.; Moglia, M.; Gould, S.

    2007-01-01

    For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from large volumes of existing historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exists and confident forecasts of future pipe failures cannot be made from historical data alone. To solve this problem, this paper presents a physical probabilistic model, which has been developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, inherent defect size is treated as a stochastic variable, and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects, subjected to typical operating conditions. As with inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate to model uncertainty in predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length/per year) as a function of pipe age. To validate the model, predicted failure rates are compared to aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. In the absence of actual operating pressure data in the UKWIR database, typical
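
    A minimal Monte Carlo sketch of the approach described above: an inherent defect size is sampled from a 2-parameter Weibull distribution and mapped through a deliberately simplified fracture-mechanics surrogate to a time to brittle fracture, and the empirical failure rate is then tabulated as a function of pipe age. All parameter values and the lifetime surrogate are hypothetical, standing in for the paper's full combined pressure/deflection/residual-stress model:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Inherent defect depth (mm): 2-parameter Weibull (shape and scale are hypothetical)
        defect = rng.weibull(1.8, n) * 0.4

        # Illustrative surrogate for time to brittle fracture (years): larger inherent
        # defects fail sooner under the combined loading described in the paper.
        K_crit, C = 1.2, 80.0          # hypothetical toughness / crack-growth constants
        lifetime = C * (K_crit / np.sqrt(defect + 1e-9)) ** 2

        # Empirical failure rate per pipe per year as a function of age, i.e. a
        # nonparametric analogue of evaluating the fitted Weibull hazard function.
        for t in np.arange(0.0, 100.0, 5.0):
            at_risk = (lifetime >= t).sum()
            failing = ((lifetime >= t) & (lifetime < t + 5.0)).sum()
            if at_risk:
                print(f"age {t:5.1f}-{t+5:5.1f} y: rate = {failing / at_risk / 5.0:.2e} /pipe/year")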

  6. 2D Modeling of Flood Propagation due to the Failure of Way Ela Natural Dam

    Yakti Bagus Pramono

    2018-01-01

    Full Text Available Modeling of dam-break-induced flood propagation is needed to reduce the losses from any potential dam failure. On 25 July 2013, the failure of the Way Ela Natural Dam generated a flood that severely damaged houses and various public facilities. This study simulated the flooding induced by the failure of the Way Ela Natural Dam. A two-dimensional (2D) numerical model, HEC-RAS v.5, is used to simulate the overland flow. The dam failure itself is simulated using HEC-HMS v.4. The results of this study, the flood inundation, flood depth, and flood arrival time, are verified against available secondary data. This information is very important for proposing mitigation plans with respect to a possible dam break in the future.

  7. Convex models and probabilistic approach of nonlinear fatigue failure

    Qiu Zhiping; Lin Qiang; Wang Xiaojun

    2008-01-01

    This paper is concerned with the nonlinear fatigue failure problem with uncertainties in structural systems. In the present study, in order to solve the nonlinear problem by convex models, the theory of ellipsoidal algebra, together with ideas from interval analysis, is applied. In terms of the inclusion monotonic property of ellipsoidal functions, the nonlinear fatigue failure problem with uncertainties can be solved. A numerical example of a 25-bar truss structure is given to illustrate the efficiency of the presented method in comparison with the probabilistic approach

  8. A Study on Estimating the Next Failure Time of Compressor Equipment in an Offshore Plant

    SangJe Cho

    2016-01-01

    Full Text Available Offshore plant equipment usually has a long life cycle. During the O&M (Operation and Maintenance) phase, an accidental failure of offshore plant equipment can cause catastrophic damage, so extra effort is needed to manage critical offshore equipment. Nowadays, emerging ICTs (Information and Communication Technologies) make it possible to send health monitoring information to the administrator of an offshore plant, which has led to much interest in CBM (Condition-Based Maintenance). This study introduces three approaches for predicting the next failure time of offshore plant equipment (a gas compressor), with case studies, based on a finite-state continuous-time Markov model, a linear regression method, and their hybrid model.
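
    Of the three approaches, the linear-regression branch is the simplest to illustrate. The sketch below fits a linear trend to the times between successive failures and extrapolates one step ahead; the compressor failure history is hypothetical, and the Markov and hybrid variants are not shown:

        import numpy as np

        # Hypothetical failure history: cumulative operating hours at each failure
        failure_hours = np.array([510.0, 1180.0, 1690.0, 2450.0, 3020.0, 3660.0])
        tbf = np.diff(failure_hours)                 # times between failures

        # Ordinary least-squares trend over the failure index
        idx = np.arange(1, len(tbf) + 1)
        slope, intercept = np.polyfit(idx, tbf, 1)

        next_tbf = slope * (len(tbf) + 1) + intercept
        print(f"predicted next failure at ~{failure_hours[-1] + next_tbf:.0f} operating hours")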

  9. Asymptotic behavior of total times for jobs that must start over if a failure occurs

    Asmussen, Søren; Fiorini, Pierre; Lipsky, Lester

    the ready queue, or it may restart the task. The behavior of systems under the first two scenarios is well documented, but the third (RESTART) has resisted detailed analysis. In this paper we derive tight asymptotic relations between the distribution of task times without failures and the total time when including failures, for any failure distribution. In particular, we show that if the task time distribution has an unbounded support then the total time distribution H is always heavy-tailed. Asymptotic expressions are given for the tail of H in various scenarios. The key ingredients of the analysis

  10. Asymptotic behaviour of total times for jobs that must start over if a failure occurs

    Asmussen, Søren; Fiorini, Pierre; Lipsky, Lester

    2008-01-01

    the ready queue, or it may restart the task. The behavior of systems under the first two scenarios is well documented, but the third (RESTART) has resisted detailed analysis. In this paper we derive tight asymptotic relations between the distribution of task times without failures and the total time when including failures, for any failure distribution. In particular, we show that if the task-time distribution has an unbounded support, then the total-time distribution H is always heavy tailed. Asymptotic expressions are given for the tail of H in various scenarios. The key ingredients of the analysis
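
    The RESTART behavior described in these two records is easy to reproduce numerically. The sketch below draws a task time with unbounded support, runs it against Poisson failures, and restarts from scratch after each failure; the extreme upper quantiles of the resulting total-time sample illustrate the heavy tail derived in the paper. Rates and sample sizes are arbitrary:

        import numpy as np

        rng = np.random.default_rng(1)

        def restart_total_time(task_time, failure_rate):
            """Total completion time under RESTART: after each failure the whole
            task is redone from scratch; failures form a Poisson process."""
            total = 0.0
            while True:
                t_fail = rng.exponential(1.0 / failure_rate)
                if t_fail >= task_time:
                    return total + task_time      # task finishes before next failure
                total += t_fail                   # work lost, start over

        # Task times with unbounded support (exponential) -> heavy-tailed total time
        samples = [restart_total_time(rng.exponential(1.0), failure_rate=0.5)
                   for _ in range(200_000)]
        print("mean:", np.mean(samples), " 99.99th pct:", np.quantile(samples, 0.9999))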

  11. Clinical findings and survival time in dogs with advanced heart failure.

    Beaumier, Amelie; Rush, John E; Yang, Vicky K; Freeman, Lisa M

    2018-04-10

    Dogs with advanced heart failure are a clinical challenge for veterinarians, but there are no studies reporting the clinical features and outcome of this population. To describe clinical findings and outcome of dogs with advanced heart failure caused by degenerative mitral valve disease (DMVD). Fifty-four dogs with advanced heart failure because of DMVD. For study purposes, advanced heart failure was defined as recurrence of congestive heart failure signs despite receiving the initially prescribed dose of pimobendan, angiotensin-converting-enzyme inhibitor (ACEI), and furosemide >4 mg/kg/day. Data were collected for the time of diagnosis of Stage C heart failure and the time of diagnosis of advanced heart failure. Date of death was recorded. At the diagnosis of advanced heart failure, doses of pimobendan (n = 30), furosemide (n = 28), ACEI (n = 13), and spironolactone (n = 4) were increased, with ≥1 new medication added in most dogs. After the initial diagnosis of advanced heart failure, 38 (70%) dogs had additional medication adjustments (median = 2 [range, 0-27]), with the final total number of medications ranging from 2-10 (median = 5). Median survival time after diagnosis of advanced heart failure was 281 days (range, 3-885 days). Dogs receiving a furosemide dose >6.70 mg/kg/day had significantly longer median survival times (402 days [range, 3-885 days] versus 129 days [range, 9-853 days]; P = .017). Dogs with advanced heart failure can have relatively long survival times. Higher furosemide dose and non-hospitalization were associated with longer survival. Copyright © 2018 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  12. Discrete competing risk model with application to modeling bus-motor failure data

    Jiang, R.

    2010-01-01

    Failure data are often modeled using continuous distributions. However, a discrete distribution can be appropriate for modeling interval or grouped data. When failure data come from a complex system, a simple discrete model can be inappropriate for modeling such data. This paper presents two types of discrete distributions. One is formed by exponentiating an underlying distribution, and the other is a two-fold competing risk model. The paper focuses on two special distributions: (a) exponentiated Poisson distribution and (b) competing risk model involving a geometric distribution and an exponentiated Poisson distribution. The competing risk model has a decreasing-followed-by-unimodal mass function and a bathtub-shaped failure rate. Five classical data sets on bus-motor failures can be simultaneously and appropriately fitted by a general 5-parameter competing risk model with the parameters being functions of the number of successive failures. The lifetime and aging characteristics of the fitted distribution are analyzed.
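
    The structure of the second special distribution can be sketched numerically. Assuming the standard constructions in which an exponentiated distribution raises the baseline CDF to a power theta, and a two-fold competing-risk survival function is the product of the component survival functions, the following computes the mass function and discrete failure rate; all parameter values are hypothetical, and the resulting hazard shape (e.g., whether it is bathtub-like) depends on the parameter choice:

        import numpy as np
        from scipy.stats import poisson, geom

        k = np.arange(0, 40)

        # Exponentiated Poisson: CDF = F(k)**theta with F the Poisson CDF
        mu, theta = 15.0, 0.6
        F_ep = poisson.cdf(k, mu) ** theta

        # Geometric component (scipy's geom has support 1, 2, ...)
        F_g = geom.cdf(k, 0.04)

        # Competing risks: the unit fails at the first of the two latent times,
        # so the survival functions multiply.
        S = (1.0 - F_ep) * (1.0 - F_g)
        cdf = 1.0 - S
        pmf = np.diff(np.concatenate(([0.0], cdf)))

        S_prev = np.concatenate(([1.0], S[:-1]))
        hazard = pmf / S_prev              # discrete failure rate
        print(np.round(hazard[:10], 4))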

  13. Balancing burn-in and mission times in environments with catastrophic and repairable failures

    Bebbington, Mark; Lai, C.-D.; Zitikis, Ricardas

    2009-01-01

    In a system subject to both repairable and catastrophic (i.e., nonrepairable) failures, 'mission success' can be defined as operating for a specified time without a catastrophic failure. We examine the effect of a burn-in process of duration τ on the mission time x, and also on the probability of mission success, by introducing several functions and surfaces on the (τ,x)-plane whose extrema represent suitable choices for the best burn-in time, and the best burn-in time for a desired mission time. The corresponding curvature functions and surfaces provide information about probabilities and expectations related to these burn-in and mission times. Theoretical considerations are illustrated with both parametric and, separating the failures by failure mode, nonparametric analyses of a data set, and graphical visualization of results.
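
    A minimal sketch of the trade-off on the (τ,x)-plane: if catastrophic failure times follow a Weibull distribution with shape parameter below 1 (an infant-mortality regime, assumed here purely for illustration), the probability of mission success is the conditional survival S(τ+x)/S(τ), which improves with burn-in duration τ:

        import numpy as np

        # Catastrophic failure time ~ Weibull with shape < 1, so burn-in weeds out
        # early failures. Both parameters are hypothetical.
        shape, scale = 0.7, 2000.0
        S = lambda t: np.exp(-(t / scale) ** shape)   # survival function

        def mission_success(tau, x):
            """P(no catastrophic failure during mission x | unit survived burn-in tau)."""
            return S(tau + x) / S(tau)

        for tau in (0.0, 50.0, 200.0):
            print(f"burn-in {tau:6.1f} h: P(success, x=1000 h) = {mission_success(tau, 1000.0):.3f}")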

  14. Dam failure analysis/calibration using NWS models on dam failure in Alton, New Hampshire

    Capone, E.J.

    1998-01-01

    The State of New Hampshire Water Resources Board, the United States Geological Survey, and private concerns have compiled data on the cause of the catastrophic failure of the Bergeron Dam in Alton, New Hampshire in March of 1996. The data collected relate to the cause of the breach, the breach parameters, the soil characteristics of the failed section, and the limits of downstream flooding. Dam break modeling software was used to calibrate and verify the simulated flood wave caused by the Bergeron Dam breach. Several scenarios were modeled, using different degrees of detail concerning the topography and channel geometry of the affected areas. A sensitivity analysis of the important output parameters was completed. The relative importance of model parameters to the results was assessed against the background of observed historical events

  15. Multi-scale modeling of ductile failure in metallic alloys

    Pardoen, Th.; Scheyvaerts, F.; Simar, A.; Tekoglu, C.; Onck, P.R.

    2010-01-01

    Micro-mechanical models for ductile failure were developed in the seventies and eighties essentially to address cracking in structural applications and complement the fracture mechanics approach. Later, this approach became attractive to physical metallurgists interested in the prediction of failure during forming operations and as a guide for the design of more ductile and/or high-toughness microstructures. Nowadays, a realistic treatment of damage evolution in complex metallic microstructures is becoming feasible when sufficiently sophisticated constitutive laws are used within the context of a multilevel modelling strategy. The current understanding and the state-of-the-art models for the nucleation, growth and coalescence of voids are reviewed with a focus on the underlying physics. Considerations are made about the introduction of the different length scales associated with the microstructure and damage process. Two applications of the methodology are then described to illustrate the potential of the current models. The first application concerns the competition between intergranular and transgranular ductile fracture in aluminum alloys involving soft precipitate-free zones along the grain boundaries. The second application concerns the modeling of ductile failure in friction stir welded joints, a problem which also involves soft and hard zones, albeit at a larger scale. (authors)

  16. Multiscale modeling of ductile failure in metallic alloys

    Pardoen, Thomas; Scheyvaerts, Florence; Simar, Aude; Tekoğlu, Cihan; Onck, Patrick R.

    2010-04-01

    Micromechanical models for ductile failure were developed in the 1970s and 1980s essentially to address cracking in structural applications and complement the fracture mechanics approach. Later, this approach became attractive to physical metallurgists interested in the prediction of failure during forming operations and as a guide for the design of more ductile and/or high-toughness microstructures. Nowadays, a realistic treatment of damage evolution in complex metallic microstructures is becoming feasible when sufficiently sophisticated constitutive laws are used within the context of a multilevel modelling strategy. The current understanding and the state-of-the-art models for the nucleation, growth and coalescence of voids are reviewed with a focus on the underlying physics. Considerations are made about the introduction of the different length scales associated with the microstructure and damage process. Two applications of the methodology are then described to illustrate the potential of the current models. The first application concerns the competition between intergranular and transgranular ductile fracture in aluminum alloys involving soft precipitate-free zones along the grain boundaries. The second application concerns the modeling of ductile failure in friction stir welded joints, a problem which also involves soft and hard zones, albeit at a larger scale.

  17. Online Real-Time Tribology Failure Detection System, Phase I

    National Aeronautics and Space Administration — The investigation of the coating friction as a function of time is important for monitoring ball bearing health. Despite the importance of the subject matter, there is...

  18. Failure prediction using machine learning and time series in optical network.

    Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo

    2017-08-07

    In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
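
    A minimal sketch of the DES-SVM idea, on synthetic data and with assumed feature names (the paper's real monitoring parameters and training set are not reproduced here): double exponential smoothing extrapolates a degrading monitored quantity a few intervals ahead, and an SVM trained on labeled equipment states classifies the forecast state as healthy or at risk:

        import numpy as np
        from sklearn.svm import SVC

        def des_forecast(x, alpha=0.5, beta=0.3, horizon=5):
            """Double exponential smoothing (Holt): level + trend forecast."""
            level, trend = x[0], x[1] - x[0]
            for obs in x[1:]:
                prev = level
                level = alpha * obs + (1 - alpha) * (level + trend)
                trend = beta * (level - prev) + (1 - beta) * trend
            return level + trend * horizon

        # Hypothetical training data: [optical power (dBm), BER exponent] -> failure label
        rng = np.random.default_rng(2)
        X_ok  = rng.normal([-8.0, 12.0], 0.8, size=(200, 2))
        X_bad = rng.normal([-16.0, 6.0], 0.8, size=(200, 2))
        clf = SVC(kernel='rbf').fit(np.vstack([X_ok, X_bad]), [0] * 200 + [1] * 200)

        # Forecast a degrading power level a few intervals ahead, then classify the
        # forecast state to flag the failure risk before it materializes.
        power_history = -8.0 - 0.4 * np.arange(20) + rng.normal(0, 0.1, 20)
        p_hat = des_forecast(power_history)
        print("forecast power:", round(float(p_hat), 1),
              "risk flag:", clf.predict([[p_hat, 8.0]])[0])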

  19. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sallaberry, Cedric Jean-Marie. [Engineering Mechanics Corp. of Columbus, OH (United States)

    2018-02-01

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
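
    The sampling-based approximation mentioned at the end of the abstract can be illustrated compactly for the failure-time margin CDF of item (iii). With hypothetical normal distributions standing in for the WL and SL system failure-time CDFs, the margin CDF is approximated by its empirical counterpart:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000

        # Hypothetical link failure-time distributions in an accident environment:
        # the weak link (WL) is designed to fail well before the strong link (SL).
        t_wl = rng.normal(120.0, 10.0, n)     # WL system failure time (s)
        t_sl = rng.normal(180.0, 15.0, n)     # SL system failure time (s)

        margin = np.sort(t_sl - t_wl)         # failure-time margin; LOAS if margin <= 0
        cdf = np.arange(1, n + 1) / n         # empirical CDF of the margin

        print("P(LOAS) =", np.searchsorted(margin, 0.0) / n)
        print("median margin =", np.quantile(margin, 0.5), "s")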

  20. Cascading failures in interdependent systems under a flow redistribution model

    Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman

    2018-02-01

    Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by $\{L_{A,i}, C_{A,i}\}_{i=1}^{n}$ and $\{L_{B,i}, C_{B,i}\}_{i=1}^{n}$, respectively. When a line fails in system A, a fraction $a$ of its load is redistributed to alive lines in B, while the remaining $(1-a)$ fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with $b$ giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a fraction $p_1$ of lines in A and a fraction $p_2$ in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients $a$ and $b$, and robustness is maximized at non-trivial $a$, $b$ values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to an improved robustness for each individual network.
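
    A compact simulation of this redistribution rule is sketched below. Uniform random loads, a fixed capacity margin of 1.4 times the load, coupling coefficients a = b = 0.3, and a 10% random attack on system A are all illustrative assumptions, not values from the paper:

        import numpy as np

        def cascade(L_A, C_A, L_B, C_B, a, b, attacked_A, attacked_B):
            """Flow-redistribution cascade between interdependent systems A and B.
            When a line in A fails, fraction a of its load goes (equally) to alive
            lines in B and (1-a) to alive lines in A; symmetrically for B with b."""
            alive = {'A': np.ones(len(L_A), bool), 'B': np.ones(len(L_B), bool)}
            load = {'A': L_A.copy(), 'B': L_B.copy()}
            cap = {'A': C_A, 'B': C_B}
            frac = {'A': a, 'B': b}
            alive['A'][attacked_A] = False
            alive['B'][attacked_B] = False
            queue = [('A', i) for i in attacked_A] + [('B', i) for i in attacked_B]
            while queue:
                sys_, i = queue.pop()
                other = 'B' if sys_ == 'A' else 'A'
                shed = load[sys_][i]
                for tgt, share in ((other, frac[sys_]), (sys_, 1 - frac[sys_])):
                    idx = np.flatnonzero(alive[tgt])
                    if idx.size:
                        load[tgt][idx] += share * shed / idx.size
                # newly overloaded lines fail and are queued for redistribution
                for s in ('A', 'B'):
                    over = np.flatnonzero(alive[s] & (load[s] > cap[s]))
                    alive[s][over] = False
                    queue += [(s, j) for j in over]
            return alive['A'].mean(), alive['B'].mean()

        rng = np.random.default_rng(4)
        L = rng.uniform(0.5, 1.0, 500)
        print("surviving fraction (A, B):",
              cascade(L, L * 1.4, L.copy(), L * 1.4, a=0.3, b=0.3,
                      attacked_A=rng.choice(500, 50, False), attacked_B=[]))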

  1. Degradation failure model of self-healing metallized film pulse capacitor

    Sun Quan; Zhong Zheng; Zhou Jinglun; Zhao Jianyin; Wei Xiaofeng; Guo Liangfu; Zhou Pizhang; Li Yizheng; Chen Dehuai

    2004-01-01

    High energy density self-healing metallized film pulse capacitors have been applied in all kinds of laser facilities for their power conditioning systems, whose reliability and expense are directly affected by the reliability level of the capacitors. Based on the related research in the literature, this paper analyses the degradation mechanism of the capacitor and presents a new degradation failure model--the Gauss-Poisson model. The Gauss-Poisson model separates capacitor degradation into a gradual (natural) component and an abrupt (burst) component. Compared with the traditional Weibull failure model, the new model is more precise in evaluating the lifetime of the capacitor, and the life tests for this model are simpler to design and lower in cost in both time and expense. The Gauss-Poisson model promises to be a useful and widely applicable degradation failure model. (author)

  2. Pin failure modeling of the A series CABRI tests

    Young, M.F.; Portugal, J.L.

    1978-01-01

    The EXPAND pin failure model, a research tool designed to model pin failure under prompt burst conditions, has been used to predict failure conditions for several of the A series CABRI tests as part of the United States participation in the CABRI Joint Project. The Project is an international program involving France, Germany, England, Japan, and the United States, with the goal of obtaining experimental data relating to the safety of LMFBRs. The A series, designed to simulate high ramp rate TOP (transient overpower) conditions, initially utilizes single, fresh UO2 pins of the PHENIX type in a flowing sodium loop. The pins are preheated at constant power in the CABRI reactor to establish steady state conditions (480 W/cm at the axial peak) and then subjected to a power pulse of 14 ms to 24 ms duration

  3. Time-to-Furosemide Treatment and Mortality in Patients Hospitalized With Acute Heart Failure

    Matsue, Yuya; Damman, Kevin; Voors, Adriaan A.; Kagiyama, Nobuyuki; Yamaguchi, Tetsuo; Kuroda, Shunsuke; Okumura, Takahiro; Kida, Keisuke; Mizuno, Atsushi; Oishi, Shogo; Inuzuka, Yasutaka; Akiyama, Eiichi; Matsukawa, Ryuichi; Kato, Kota; Suzuki, Satoshi; Naruke, Takashi; Yoshioka, Kenji; Miyoshi, Tatsuya; Baba, Yuichi; Yamamoto, Masayoshi; Murai, Koji; Mizutani, Kazuo; Yoshida, Kazuki; Kitai, Takeshi

    2017-01-01

    BACKGROUND Acute heart failure (AHF) is a life-threatening disease requiring urgent treatment, including a recommendation for immediate initiation of loop diuretics. OBJECTIVES The authors prospectively evaluated the association between time-to-diuretic treatment and clinical outcome. METHODS

  4. Fold catastrophe model of dynamic pillar failure in asymmetric mining

    Yue Pan; Ai-wu Li; Yun-song Qi [Qingdao Technological University, Qingdao (China). College of Civil Engineering

    2009-01-15

    A rock burst disaster not only destroys pit facilities and results in economic loss but also threatens the lives of miners. Pillar rock burst has a higher frequency of occurrence in the pit than other kinds of rock burst. Understanding the cause, magnitude and prevention of pillar rock burst is a significant undertaking. Equations describing the bending moment and displacement of the rock beam in asymmetric mining have been deduced for simplified asymmetric beam-pillar systems. Using the symbolic computation software MAPLE 9.5, a catastrophe model of the dynamic failure of an asymmetric rock-beam pillar system has been established. The differential form of the total potential function, deduced from the law of conservation of energy, was used for this derivation. The critical conditions and the initial and final positions of the pillar during failure are given in analytical form. The amount of elastic energy released by the rock beam at the instant of failure is determined as well. A diagram of the pillar failure was plotted using MATLAB software. This graph contains a wealth of information and is important for understanding the behavior during each deformation phase of the rock-beam pillar system. The graphic also aids in distinguishing the equivalent stiffness of the rock beam in different directions. 11 refs., 8 figs.

  5. Modelling urban travel times

    Zheng, F.

    2011-01-01

    Urban travel times are intrinsically uncertain due to a lot of stochastic characteristics of traffic, especially at signalized intersections. A single travel time does not have much meaning and is not informative to drivers or traffic managers. The range of travel times is large such that certain

  6. Canonical failure modes of real-time control systems: insights from cognitive theory

    Wallace, Rodrick

    2016-04-01

    Newly developed necessary-conditions statistical models from cognitive theory are applied to a generalisation of the data-rate theorem for real-time control systems. Rather than degrading gracefully under stress, automata and man/machine cockpits appear prone to characteristic sudden failure under demanding fog-of-war conditions. Critical dysfunctions span a spectrum of phase-transition analogues, ranging from a ground state of 'all targets are enemies' to more standard data-rate instabilities. Insidious pathologies also appear possible, akin to the inattentional blindness consequent on overfocus on an expected pattern. Via no-free-lunch constraints, different equivalence classes of systems, having structure and function determined by 'market pressures' in a large sense, will be inherently unreliable under different but characteristic canonical stress landscapes, suggesting that deliberate induction of failure may often be relatively straightforward. Focusing on two recent military case histories, these results provide a caveat emptor against blind faith in the current path-dependent evolutionary trajectory of automation for critical real-time processes.

  7. Fission product release modelling for application of fuel-failure monitoring and detection - An overview

    Lewis, B.J., E-mail: lewibre@gmail.com [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, Kingston, Ontario, K7K 7B4 (Canada); Chan, P.K.; El-Jaby, A. [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, Kingston, Ontario, K7K 7B4 (Canada); Iglesias, F.C.; Fitchett, A. [Candesco Division of Kinectrics Inc., 26 Wellington Street East, 3rd Floor, Toronto, Ontario M5E 1S2 (Canada)

    2017-06-15

    A review of fission product release theory is presented in support of fuel-failure monitoring analysis for the characterization and location of defective fuel. This work describes: (i) the development of the steady-state Visual-DETECT code for coolant activity analysis to characterize failures in the core and the amount of tramp uranium; (ii) a generalization of this model in the STAR code for prediction of the time-dependent release of iodine and noble gas fission products to the coolant during reactor start-up, steady-state, shutdown, and bundle-shifting manoeuvres; (iii) an extension of the model to account for the release of fission products that are delayed-neutron precursors, for assessment of fuel-failure location; and (iv) a simplification of the steady-state model to assess the methodology proposed by WANO for a fuel reliability indicator for water-cooled reactors.

  8. Modeling dynamic effects of promotion on interpurchase times

    D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    In this paper we put forward a duration model to analyze the dynamic effects of marketing-mix variables on interpurchase times. We extend the accelerated failure-time model with an autoregressive structure. An important feature of our model is that it allows for different long-run and

  9. Introduction to Time Series Modeling

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  10. Competing approaches to analysis of failure times with competing risks.

    Farley, T M; Ali, M M; Slaymaker, E

    2001-12-15

    For the analysis of time to event data in contraceptive studies when individuals are subject to competing causes for discontinuation, some authors have recently advocated the use of the cumulative incidence rate as a more appropriate measure to summarize data than the complement of the Kaplan-Meier estimate of discontinuation. The former method estimates the rate of discontinuation in the presence of competing causes, while the latter is a hypothetical rate that would be observed if discontinuations for the other reasons could not occur. The difference between the two methods of analysis is the continuous time equivalent of a debate that took place in the contraceptive literature in the 1960s, when several authors advocated the use of net (adjusted or single decrement life table rates) rates in preference to crude rates (multiple decrement life table rates). A small simulation study illustrates the interpretation of the two types of estimate - the complement of the Kaplan-Meier estimate corresponds to a hypothetical rate where discontinuations for other reasons did not occur, while the cumulative incidence gives systematically lower estimates. The Kaplan-Meier estimates are more appropriate when estimating the effectiveness of a contraceptive method, but the cumulative incidence estimates are more appropriate when making programmatic decisions regarding contraceptive methods. Other areas of application, such as cancer studies, may prefer to use the cumulative incidence estimates, but their use should be determined according to the application. Copyright 2001 John Wiley & Sons, Ltd.
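
    The distinction is easy to reproduce on simulated data. The sketch below generates two competing exponential discontinuation causes, then computes both the complement of the Kaplan-Meier estimate for cause 1 (competing events treated as censoring) and the cumulative incidence for cause 1 on a coarse discrete time grid; the latter is systematically lower. All rates and the grid resolution are arbitrary:

        import numpy as np

        rng = np.random.default_rng(5)
        n = 20_000

        # Latent discontinuation times: cause 1 and a competing cause
        t1 = rng.exponential(40.0, n)
        t2 = rng.exponential(25.0, n)
        t = np.minimum(t1, t2)
        cause1 = t1 < t2

        km_complement, cum_incidence = [], []
        S = 1.0        # overall (any-cause) survival, used by the incidence integral
        S1 = 1.0       # "net" survival for cause 1 (competing causes censored)
        ci = 0.0
        for u in np.arange(0.0, 60.0, 1.0):
            at_risk = (t >= u).sum()
            d_any = ((t >= u) & (t < u + 1)).sum()
            d1 = ((t >= u) & (t < u + 1) & cause1).sum()
            if at_risk:
                ci += S * d1 / at_risk          # incidence weights by overall survival
                S *= 1 - d_any / at_risk
                S1 *= 1 - d1 / at_risk          # KM treats other causes as censoring
            km_complement.append(1 - S1)
            cum_incidence.append(ci)

        print("at t=36: 1-KM =", round(km_complement[36], 3),
              "| cumulative incidence =", round(cum_incidence[36], 3))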

  11. Upgrade of Common Cause Failure Modelling of NPP Krsko PSA

    Vukovic, I.; Mikulicic, V.; Vrbanic, I.

    2006-01-01

    Over the last thirty years, probabilistic safety assessment (PSA) has been increasingly applied in engineering practice. The various failure modes of the system of concern are mathematically and explicitly modelled by means of a fault tree structure. Statistical independence of the basic events from which the fault tree is built is not acceptable for the event category referred to as common cause failures (CCF). Based on an overview of the current international status of modelling of common cause failures in PSA, several steps were taken to establish the primary technical basis for the methodology and data used in the CCF model upgrade project for the NPP Krsko (NEK) PSA. As the primary technical basis for methodological aspects of CCF modelling in the Krsko PSA, the following documents were considered: NUREG/CR-5485, NUREG/CR-4780, and the Westinghouse Owners Group (WOG) documents WCAP-15674 and WCAP-15167. Use of these documents is supported by the most relevant guidelines and standards in the field, such as the ASME PRA Standard and NRC Regulatory Guide 1.200. The WCAP documents are in compliance with NUREG/CR-5485 and NUREG/CR-4780. Additionally, they provide the WOG perspective on CCF modelling, which is important to consider since NEK follows WOG practice in resolving many generic and regulatory issues. It is, therefore, desirable that the NEK CCF methodology and modelling be in general accordance with recommended WOG approaches. As the primary basis for the CCF data needed to estimate CCF model parameters and their uncertainty, the main documents used were: NUREG/CR-5497, NUREG/CR-6268, WCAP-15167, and WCAP-16187. Use of NUREG/CR-5497 and NUREG/CR-6268 as a source of data for CCF parameter estimation is supported by the most relevant industry and regulatory PSA guides and standards currently existing in the field, including WOG. However, the WCAP document WCAP-16187 has provided a basis for CCF parameter values specific to Westinghouse PWR plants. Many of the events from the NRC/INEEL database were re-classified in WCAP

  12. Evaluation of nuclear power plant component failure probability and core damage probability using simplified PSA model

    Shimada, Yoshio

    2000-01-01

    It is anticipated that changing the frequency of surveillance tests, preventive maintenance, or parts replacement of safety-related components may change component failure probabilities and, as a result, the core damage probability. It is also anticipated that the change differs depending on the initiating event frequency and the component type. This study assessed the change in core damage probability using a simplified PSA model capable of calculating core damage probability in a short time, developed by the US NRC to process accident sequence precursors, when various component failure probabilities are varied between 0 and 1 and when Japanese or American initiating event frequency data are used. As a result of the analysis: (1) The frequency of surveillance tests, preventive maintenance, or parts replacement of motor-driven pumps (high pressure injection pumps, residual heat removal pumps, auxiliary feedwater pumps) should be changed carefully, since the change in core damage probability is large when the base failure probability increases. (2) Core damage probability is insensitive to changes in surveillance test frequency for motor-operated valves and the turbine-driven auxiliary feedwater pump, since the change in core damage probability is small even when their failure probabilities change by about an order of magnitude. (3) The change in core damage probability is small when Japanese failure probability data are applied to the emergency diesel generator, even if the failure probability changes by an order of magnitude from the base value. On the other hand, when American failure probability data are applied, the increase in core damage probability is large when the failure probability increases. Therefore, when Japanese failure probability data are applied, core damage probability is insensitive to changes in surveillance test frequency, etc. (author)

  13. Application of nonhomogeneous Poisson process to reliability analysis of repairable systems of a nuclear power plant with rates of occurrence of failures time-dependent

    Saldanha, Pedro L.C.; Simone, Elaine A. de; Melo, Paulo Fernando F.F. e

    1996-01-01

    Aging denotes the continuous process by which the physical characteristics of a system, structure, or piece of equipment change with time or use. Its effects are increases in the failure probabilities of the system, structure, or equipment, and these are calculated using time-dependent failure rate models. The purpose of this paper is to present an application of the nonhomogeneous Poisson process as a model for studying rates of occurrence of failures when they are time-dependent. For this application, a reliability analysis of the service water pumps of a typical nuclear power plant is made, since the pumps are effectively repaired components. (author)
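
    One widely used nonhomogeneous Poisson process for a time-dependent rate of occurrence of failures (ROCOF) is the power-law (Crow-AMSAA) process, sketched below with a hypothetical failure history for a single repairable pump. The closed-form maximum-likelihood estimates shown are the standard ones for failure-truncated data, and a shape parameter above 1 indicates an increasing ROCOF, i.e. aging:

        import numpy as np

        # Hypothetical cumulative failure times (hours) of one repairable pump
        t = np.array([400.0, 950.0, 1800.0, 2500.0, 3100.0, 3550.0, 3900.0, 4150.0])
        T = t[-1]                      # failure-truncated observation window

        # Power-law NHPP intensity: w(t) = (beta/eta) * (t/eta)**(beta - 1)
        # MLEs for failure-truncated data (last failure time excluded from the sum):
        n = len(t)
        beta_hat = n / np.sum(np.log(T / t[:-1]))
        eta_hat = T / n ** (1.0 / beta_hat)

        rocof = lambda x: (beta_hat / eta_hat) * (x / eta_hat) ** (beta_hat - 1.0)
        print(f"beta = {beta_hat:.2f} (>1 means the ROCOF increases with time)")
        print(f"ROCOF at {T:.0f} h: {rocof(T):.2e} failures/h")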

  14. Murine Models of Heart Failure With Preserved Ejection Fraction

    Maria Valero-Muñoz, PhD

    2017-12-01

    Full Text Available Heart failure with preserved ejection fraction (HFpEF) is characterized by signs and symptoms of heart failure in the presence of a normal left ventricular ejection fraction. Despite accounting for up to 50% of all clinical presentations of heart failure, the mechanisms implicated in HFpEF are poorly understood, thus precluding effective therapy. The pathophysiological heterogeneity of the HFpEF phenotype also contributes to this disease and likely to the absence of evidence-based therapies. Limited access to human samples, and imperfect animal models that incompletely recapitulate the human HFpEF phenotype, have impeded our understanding of the mechanistic underpinnings of this disease. Aging and comorbidities such as atrial fibrillation, hypertension, diabetes and obesity, pulmonary hypertension, and renal dysfunction are highly associated with HFpEF, yet the relationships and contributions among them remain ill-defined. This review discusses some of the distinctive clinical features of HFpEF in association with these comorbidities and highlights the advantages and disadvantages of the murine models commonly used to study the HFpEF phenotype.

  15. Cardioprotective Effect of Resveratrol in a Postinfarction Heart Failure Model

    Adam Riba

    2017-01-01

    Full Text Available Despite great advances in therapies observed during the last decades, heart failure (HF) has remained a major health problem in western countries. In order to further improve symptoms and survival in patients with heart failure, novel therapeutic strategies are needed. In some animal models of HF, resveratrol (RES) was able to prevent cardiac hypertrophy, contractile dysfunction, and remodeling. Several molecular mechanisms are thought to be involved in its protective effects, such as inhibition of prohypertrophic signaling molecules, improvement of myocardial Ca2+ handling, regulation of autophagy, and the reduction of oxidative stress and inflammation. In our present study, we wished to further examine the effects of RES on prosurvival (Akt-1, GSK-3β) and stress signaling (p38-MAPK, ERK 1/2, and MKP-1) pathways, on oxidative stress (iNOS and COX-2 activity, and ROS formation), and ultimately on left ventricular function, hypertrophy and fibrosis in a murine, isoproterenol- (ISO-) induced postinfarction heart failure model. RES treatment improved left ventricular function and decreased the interstitial fibrosis, cardiac hypertrophy, and elevation of plasma BNP induced by ISO treatment. ISO also increased the activation of p38-MAPK, ERK1/2 (Thr183/Tyr185), COX-2, iNOS, and ROS formation and decreased the phosphorylation of Akt-1, GSK-3β, and MKP-1, all of which were favorably influenced by RES. According to our results, regulation of these pathways may also contribute to the beneficial effects of RES in HF.

  16. FEM simulation of TBC failure in a model system

    Seiler, P; Baeker, M; Roesier, J [Institut fuer Werkstoffe (IfW), Technische Universitaet Braunschweig (Germany); Beck, T; Schweda, M, E-mail: p.seiler@tu-bs.d [Institut fuer Energieforschung/ Werkstoffstruktur und -Eigenschaften (IEF 2), Forschungszentrum Juelich (Germany)

    2010-07-01

    In order to study the complex failure mechanisms of thermal barrier coatings on turbine blades, a simplified model system is used to reduce the number of system parameters. The artificial system consists of a bond-coat material (fast-creeping Fecralloy or slow-creeping MA956) as the substrate, with a Y{sub 2}O{sub 3} partially stabilized, plasma-sprayed zirconium oxide TBC on top and a TGO between the two layers. A 2-dimensional FEM simulation was developed to calculate the growth stress inside the simplified coating system. The simulation permits the study of failure mechanisms by identifying compression and tension areas established by the growth of the oxide layer. This provides an insight into the possible crack paths in the coating and allows conclusions to be drawn for optimizing real thermal barrier coating systems.

  17. A New Material Constitutive Model for Predicting Cladding Failure

    Rashid, Joe; Dunham, Robert [ANATECH Corp., San Diego, CA (United States); Rashid, Mark [University of California Davis, Davis, CA (United States); Machiels, Albert [EPRI, Palo Alto, CA (United States)

    2009-06-15

    An important issue in fuel performance and safety evaluations is the characterization of the effects of hydrides on cladding mechanical response and failure behavior. The hydride structure formed during power operation transforms the cladding into a complex multi-material composite, with a through-thickness concentration profile that causes cladding ductility to vary by more than an order of magnitude between ID and OD. However, the current practice of mechanical property testing treats the cladding as a homogeneous material characterized by a single stress-strain curve, regardless of its hydride morphology. Consequently, as irradiation conditions and hydride evolution change, new material property testing is required, which results in a state of continuous need for valid material property data. A recently developed constitutive model treats the cladding as a multi-material composite in which the metal and the hydride platelets are treated as separate material phases with their own elastic-plastic and fracture properties, interacting at their interfaces with appropriate constraint conditions between them to ensure strain and stress compatibility. An essential feature of the model is a multi-phase damage formulation that models the complex interaction between the hydride phases and the metal matrix and the coupled effect of radial and circumferential hydrides on cladding stress-strain response. This gives the model the capability of directly predicting cladding failure progression during the loading event and, as such, provides a unique tool for constructing failure criteria analytically where none could be developed by conventional material testing. Implementation of the model in a fuel behavior code provides the capability to predict in-reactor operational failures due to PCI or missing pellet surfaces (MPS) without having to rely on failure criteria. An even stronger motivation for use of the model is in the transportation accident analysis of spent fuel

  18. Predictive modelling of fatigue failure in concentrated lubricated contacts.

    Evans, H P; Snidle, R W; Sharif, K J; Bryant, M J

    2012-01-01

    Reducing frictional losses in response to the energy agenda will require the use of less viscous lubricants, causing hydrodynamically lubricated bearings to operate with thinner films and leading to "mixed lubrication" conditions in which a degree of direct interaction occurs between surfaces protected only by boundary tribofilms. The paper considers the consequences of thinner films and mixed lubrication for concentrated contacts such as those occurring between the teeth of power transmission gears and in rolling element bearings. Surface fatigue in gears remains a serious problem in demanding applications, and its solution will become more pressing with the tendency towards thinner oils. The particular form of failure examined here is micropitting, which is identified as a fatigue phenomenon occurring at the scale of the surface roughness asperities. It has emerged recently as a systemic difficulty in the operation of large-scale wind turbines, where it occurs in both power transmission gears and their support bearings. Predictive physical modelling of these contacts requires a transient mixed lubrication analysis for conditions in which the predicted lubricant film thickness is of the same order as, or significantly less than, the height of surface roughness features. Numerical solvers have therefore been developed which are able to deal with situations in which transient solid contacts occur between surface asperity features under realistic engineering conditions. Results of the analysis, which reveal the detailed time-varying behaviour of pressure and film clearance, have been used to predict fatigue and damage accumulation at the scale of surface asperity features with the aim of improving understanding of the micropitting phenomenon. The possible consequences for fatigue of residual stress fields resulting from plastic deformation of surface asperities are also considered.

  19. Mixed Hitting-Time Models

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Levy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with...
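
    To make the construction concrete, the sketch below simulates durations from a simple mixed hitting-time model: each subject draws a threshold from a heterogeneity distribution, and the duration is the first time a Brownian motion with drift (the simplest continuous Levy process) crosses it. The Euler discretization, the gamma heterogeneity distribution, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_hitting_time(mu, sigma, threshold, dt=1e-3, t_max=50.0):
    """First time a Brownian motion with drift mu crosses `threshold` (Euler scheme)."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return t
    return np.inf  # censored: threshold not reached within t_max

# Heterogeneous thresholds: each subject draws its own (gamma choice is illustrative).
thresholds = rng.gamma(shape=2.0, scale=1.0, size=200)
durations = [first_hitting_time(mu=1.0, sigma=1.0, threshold=th) for th in thresholds]
print(f"median duration: {np.median(durations):.2f}")
```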

  20. Evaluation of containment failure and cleanup time for Pu shots on the Z machine.

    Darby, John L.

    2010-02-01

    Between November 30 and December 11, 2009, an evaluation was performed of the probability of containment failure, and of the time for cleanup of contamination of the Z machine given failure, for plutonium (Pu) experiments on the Z machine at Sandia National Laboratories (SNL). Due to the unique nature of the problem, there is little quantitative information available on the likelihood of failure of containment components or on the time to cleanup. Information for the evaluation was obtained from Subject Matter Experts (SMEs) at the Z machine facility. The SMEs provided the State of Knowledge (SOK) for the evaluation. There is significant epistemic (state-of-knowledge) uncertainty associated with the events that comprise both failure of containment and cleanup. To capture epistemic uncertainty and to allow the SMEs to reason at the fidelity of the SOK, we used the belief/plausibility measure of uncertainty for this evaluation. We quantified two variables: the probability that the Pu containment system fails given a shot on the Z machine, and the time to clean up Pu contamination in the Z machine given failure of containment. We identified dominant contributors for both the time to cleanup and the probability of containment failure. These results will be used by SNL management to decide the course of action for conducting the Pu experiments on the Z machine.
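
    As a minimal illustration of the belief/plausibility (Dempster-Shafer) measure used in the evaluation, the sketch below assigns basic probability masses to intervals of failure probability; the focal intervals and mass values are invented for illustration, not elicited from the actual SMEs. Belief sums the mass of focal intervals lying wholly inside a query interval; plausibility sums the mass of all focal intervals overlapping it.

```python
# Interval-valued masses over the containment-failure probability (illustrative values).
masses = {
    (0.00, 0.01): 0.5,
    (0.00, 0.10): 0.3,
    (0.01, 0.10): 0.2,
}

def belief(lo, hi):
    """Mass committed entirely within [lo, hi]."""
    return sum(m for (a, b), m in masses.items() if lo <= a and b <= hi)

def plausibility(lo, hi):
    """Mass not excluded by [lo, hi] (any overlap counts)."""
    return sum(m for (a, b), m in masses.items() if a < hi and b > lo)

A = (0.0, 0.05)  # query: "failure probability is below 5%"
print(belief(*A), plausibility(*A))  # 0.5 and 1.0: a wide epistemic interval
```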

  1. A model for predicting pellet-cladding interaction induced fuel rod failure, based on nonlinear fracture mechanics

    Jernkvist, L.O.

    1993-01-01

    A model for predicting pellet-cladding mechanical interaction induced fuel rod failure, suitable for implementation in finite element fuel-performance codes, is presented. Cladding failure is predicted by explicitly modelling the propagation of radial cracks under varying load conditions. Propagation is assumed to be due to either iodine induced stress corrosion cracking or ductile fracture. Nonlinear fracture mechanics concepts are utilized in modelling these two mechanisms of crack growth. The novelty of this approach is that the development of cracks, which may ultimately lead to fuel rod failure, can be treated as a dynamic and time-dependent process. The influence of cyclic loading, ramp rates and material creep on the failure mechanism can thereby be investigated. Results of numerical calculations, in which the failure model has been used to study the dependence of cladding creep rate on crack propagation velocity, are presented. (author)

  3. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    Andrews, M; Abazeed, M; Woody, N; Stephans, K; Videtic, G; Xia, P; Zhuang, T [The Cleveland Clinic Foundation, Cleveland, OH (United States)

    2016-06-15

    Purpose: To explore possible correlations between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009 and 2013, we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15±7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed, and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R, and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
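
    For readers unfamiliar with the texture feature found significant here, the sketch below computes GLCM entropy for a 2-D image patch using scikit-image; the quantization level, distance, and angle are illustrative choices, and the random patch stands in for a CT slice of the GTV.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy(image_2d, levels=32):
    """Entropy of a normalized gray-level co-occurrence matrix (distance 1, angle 0)."""
    bins = np.linspace(image_2d.min(), image_2d.max(), levels)
    q = (np.digitize(image_2d, bins) - 1).astype(np.uint8)   # quantize to `levels` bins
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[glcm > 0]                                       # drop zeros to avoid log(0)
    return -np.sum(p * np.log2(p))

patch = np.random.default_rng(1).normal(size=(64, 64))       # stand-in for a GTV slice
print(f"GLCM entropy: {glcm_entropy(patch):.2f} bits")
```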

  4. Control of a maintenance system when failure and repair times have phase type distributions

    De Cassia Meneses Rodrigues, Rita

    1990-09-01

    In the machine-repair model discussed, there are M + R identical machines: M operating and R spares. All machines are independent of one another. When an operating machine fails, it is sent to a single-server repair station and immediately replaced by a spare machine, if one is available. The server has two available service types to choose from. There are waiting costs, repair costs, lost-production costs, and switch-over costs. The following decision problem is treated: obtain a stationary policy which determines the service type as a function of the state of the system, so as to minimize the long-run average cost when failure and repair times have second-order Coxian distributions. This control problem is represented by a semi-Markov decision process. The policy-iteration algorithm and the value-iteration algorithm are used to obtain the optimal policy. Numerical results are given for these two optimization methods.
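
    As a simplified stand-in for the value-iteration algorithm mentioned above, the sketch below solves a small discounted MDP (states indexing the number of failed machines, actions indexing the two service types). The true problem is semi-Markov with an average-cost criterion; the transition matrix and costs here are random placeholders used only to show the iteration.

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
cost = rng.uniform(0.0, 10.0, size=(n_actions, n_states))          # c(a, s)

V = np.zeros(n_states)
for _ in range(500):
    Q = cost + gamma * P @ V          # Q[a, s]: cost-to-go for each action/state pair
    V_new = Q.min(axis=0)             # greedy minimization of long-run discounted cost
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmin(axis=0)             # service type to use in each state
print(V, policy)
```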

  5. Factors Influencing the Predictive Power of Models for Predicting Mortality and/or Heart Failure Hospitalization in Patients With Heart Failure

    Ouwerkerk, Wouter; Voors, Adriaan A.; Zwinderman, Aeilko H.

    2014-01-01

    The present paper systematically reviews and compares existing prediction models in order to establish the strongest variables, models, and model characteristics for predicting outcome in patients with heart failure. To improve decision making by accurately predicting mortality and heart-failure...

  6. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. Under field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, due to cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is part of a daisy-chain structure, neighboring other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach, with the Paris model describing fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while the solder joints under test were degraded by fatigue cracking. The test results showed that the RF impedance consistently increased as the solder joints degraded due to the growth of cracks, and particle filtering predicted times to failure for the interconnects close to their actual times to failure, owing to the early sensitivity of RF impedance.
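
    A minimal sketch of the prognostic idea, assuming the RF-impedance measurements have already been mapped to equivalent crack-size estimates: particles carrying crack lengths are propagated through a Paris-law growth model, reweighted against each measurement, resampled, and finally extrapolated to a critical crack size to obtain a remaining-useful-life (RUL) distribution. All material constants, noise levels, and measurement values below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def grow(a, C=4e-9, m=3.0, Y=1.0, dS=80.0, dN=100):
    """Paris-law growth over dN cycles: da/dN = C*(dK)^m, dK = Y*dS*sqrt(pi*a)."""
    dK = Y * dS * np.sqrt(np.pi * a)
    return a + C * dK**m * dN

n_p = 1000
particles = rng.normal(1e-3, 1e-4, n_p).clip(1e-5)           # crack length [m]
for z in (1.05e-3, 1.12e-3, 1.20e-3):                        # impedance-derived size estimates
    particles = grow(particles) + rng.normal(0, 2e-5, n_p)   # propagate + process noise
    w = np.exp(-0.5 * ((z - particles) / 5e-5) ** 2)         # Gaussian measurement likelihood
    particles = particles[rng.choice(n_p, n_p, p=w / w.sum())]  # resample

a_crit, rul = 5e-3, []                                       # extrapolate to failure
for a in particles[:200]:
    n = 0
    while a < a_crit and n < 1_000_000:
        a, n = grow(a), n + 100
    rul.append(n)
print(f"median predicted RUL: {np.median(rul):.0f} cycles")
```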

  7. Modeling freedom from progression for standard-risk medulloblastoma: a mathematical tumor control model with multiple modes of failure

    Brodin, Nils Patrik; Vogelius, Ivan R.; Bjørk-Eriksson, Thomas

    2013-01-01

    As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published...

  8. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
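
    The reward comparison across k-out-of-n configurations can be illustrated with the simplest possible stand-in: if each of the n machines were independently up with probability p, the expected reward would be the probability that at least k are up. The measured VaxCluster reward rates account for error recovery and correlated outages, so the i.i.d. binomial sketch below is only schematic, with p chosen arbitrarily.

```python
from math import comb

def k_out_of_n(k, n, p):
    """P(at least k of n i.i.d. machines are up), each up with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9  # illustrative single-machine availability
for k in range(7, 0, -1):
    print(f"{k}-out-of-7 reward proxy: {k_out_of_n(k, 7, p):.4f}")
```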

  9. Modeling cascading failures in interdependent infrastructures under terrorist attacks

    Wu, Baichao; Tang, Aiping; Wu, Jie

    2016-01-01

    An attack-strength degradation model has been introduced to further capture the interdependencies among infrastructures and to model cascading failures across infrastructures when terrorist attacks occur. A medium-sized energy system comprising an oil network and a power network is selected for exploring the vulnerabilities of independent versus interdependent networks, considering both structural vulnerability and functional vulnerability. Two types of interdependencies among critical infrastructures are involved in this paper, physical and geographical, represented by tunable parameters based on the probabilities of failure of nodes in the networks. A tolerance parameter α is used to evaluate the overloads of substations, based on power-flow redistribution in power transmission systems under attack. The simulation results show that both independent and interdependent networks collapse when only a small fraction of nodes is attacked under the attack-strength degradation model, especially the interdependent networks. The methodology introduced in this paper, which incorporates both physical and geographical interdependencies, can be applied to further analyze the vulnerability of interdependent infrastructures, and provides insights into that vulnerability to guide mitigation actions for critical infrastructure protection. - Highlights: • An attack-strength degradation model based on the specified locations has been introduced. • Both physical and geographical interdependencies have been analyzed. • The structural vulnerability and the functional vulnerability have been considered.

  10. ISSUES ASSOCIATED WITH PROBABILISTIC FAILURE MODELING OF DIGITAL SYSTEMS

    CHU, T.L.; MARTINEZ-GURIDI, G.; LIHNER, J.; OVERLAND, D.

    2004-01-01

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process for instrumentation and control (I and C) systems is based on deterministic requirements, e.g., single-failure criteria, and defense in depth and diversity. Probabilistic considerations can be used as supplements to the deterministic process. The National Research Council has recommended development of methods for estimating failure probabilities of digital systems, including commercial off-the-shelf (COTS) equipment, for use in probabilistic risk assessment (PRA). NRC staff has developed informal qualitative and quantitative requirements for PRA modeling of digital systems. Brookhaven National Laboratory (BNL) has performed a review of the state of the art of the methods and tools that can potentially be used to model digital systems. The objectives of this paper are to summarize the review, discuss the issues associated with probabilistic modeling of digital systems, and identify potential areas of research that would enhance the state of the art toward a satisfactory modeling method that could be integrated with a typical probabilistic risk assessment.

  11. Utility of the Seattle Heart Failure Model in patients with advanced heart failure.

    Kalogeropoulos, Andreas P; Georgiopoulou, Vasiliki V; Giamouzis, Grigorios; Smith, Andrew L; Agha, Syed A; Waheed, Sana; Laskar, Sonjoy; Puskas, John; Dunbar, Sandra; Vega, David; Levy, Wayne C; Butler, Javed

    2009-01-27

    The aim of this study was to validate the Seattle Heart Failure Model (SHFM) in patients with advanced heart failure (HF). The SHFM was developed primarily from clinical trial databases and extrapolated the benefit of interventions from published data. We evaluated the discrimination and calibration of SHFM in 445 advanced HF patients (age 52 +/- 12 years, 68.5% male, 52.4% white, ejection fraction 18 +/- 8%) referred for cardiac transplantation. The primary end point was death (n = 92), urgent transplantation (n = 14), or left ventricular assist device (LVAD) implantation (n = 3); a secondary analysis was performed on mortality alone. Patients were receiving optimal therapy (angiotensin-II modulation 92.8%, beta-blockers 91.5%, aldosterone antagonists 46.3%), and 71.0% had an implantable device (defibrillator 30.4%, biventricular pacemaker 3.4%, combined 37.3%). During a median follow-up of 21 months, 109 patients (24.5%) had an event. Although discrimination was adequate (c-statistic >0.7), the SHFM overall underestimated absolute risk (observed vs. predicted event rate: 11.0% vs. 9.2%, 21.0% vs. 16.6%, and 27.9% vs. 22.8% at 1, 2, and 3 years, respectively). Risk underprediction was more prominent in patients with an implantable device. The SHFM had different calibration properties in white versus black patients, leading to net underestimation of absolute risk in blacks. Race-specific recalibration improved the accuracy of predictions. When analysis was restricted to mortality, the SHFM exhibited better performance. In patients with advanced HF, the SHFM offers adequate discrimination, but absolute risk is underestimated, especially in blacks and in patients with devices. This is more prominent when including transplantation and LVAD implantation as an end point.

  12. Stellite failure on a P91 HP valve - failure investigation and modelling of residual stresses

    Thorborg, Jesper; Hald, John; Hattel, Jesper Henri

    2006-01-01

    ..., and this provides a tool for process modification in order to minimize the risk of future breakdown. Both classical time-independent and time-dependent plasticity models have been used to describe the different material behaviours during the different process steps. The description of the materials is highly...

  13. Failure and reliability prediction by support vector machines regression of time series data

    Chagas Moura, Marcio das; Zio, Enrico; Lins, Isis Didier; Droguett, Enrique

    2011-01-01

    Support Vector Machines (SVMs) are kernel-based learning methods which have been successfully adopted for regression problems. However, their use in reliability applications has not been widely explored. In this paper, a comparative analysis is presented in order to evaluate the effectiveness of SVMs in forecasting the time-to-failure and reliability of engineered components based on time series data. The performance of SVM regression on literature case studies is measured against other advanced learning methods such as the Radial Basis Function, the traditional MultiLayer Perceptron model, the Box-Jenkins autoregressive integrated moving average, and Infinite Impulse Response Locally Recurrent Neural Networks. The comparison shows that, in the analyzed cases, SVM outperforms or is comparable to the other techniques. - Highlights: → Realistic modeling of reliability demands complex mathematical formulations. → SVM is appropriate when the input/output relation is unknown or very costly to obtain. → Results indicate the potential of SVM for reliability time series prediction. → Reliability estimates support the establishment of adequate maintenance strategies.
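
    A minimal sketch of SVM regression for time-to-failure forecasting, assuming scikit-learn: a synthetic cumulative-failure-time series is turned into lagged input vectors, and an RBF-kernel SVR produces one-step-ahead predictions. The data, lag count, and hyperparameters are illustrative, not those of the compared studies.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Cumulative times-to-failure (synthetic, for illustration only).
t = np.cumsum(np.random.default_rng(4).exponential(10, size=60))

lags = 3  # each input vector holds the last 3 observed failure times
X = np.column_stack([t[i:len(t) - lags + i] for i in range(lags)])
y = t[lags:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
model.fit(X[:-10], y[:-10])       # train on all but the last 10 points
print(model.predict(X[-10:]))     # one-step-ahead forecasts on the held-out tail
```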

  14. Model-based failure detection for cylindrical shells from noisy vibration measurements.

    Candy, J V; Fisher, K A; Guidry, B L; Chambers, D H

    2014-12-01

    Model-based processing is a theoretically sound methodology for addressing difficult objectives in complex physical problems involving multi-channel sensor measurement systems. It involves the incorporation of analytical models of both the physical phenomenology (complex vibrating structures, noisy operating environment, etc.) and the measurement processes (sensor networks, including noise) into the processor to extract the desired information. In this paper, a model-based methodology is developed to accomplish the task of online failure monitoring of a vibrating cylindrical shell externally excited by controlled excitations. A model-based processor is formulated to monitor system performance and detect potential failure conditions. The objective of this paper is to develop a real-time, model-based monitoring scheme for online diagnostics in a representative structural vibrational system based on controlled experimental data.

  15. Regression analysis of case K interval-censored failure time data in the presence of informative censoring.

    Wang, Peijie; Zhao, Hui; Sun, Jianguo

    2016-12-01

    Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For the problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and a two-step estimation procedure is presented. In addition, the asymptotic properties of the proposed estimators of the regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.

  16. Failure Behavior and Constitutive Model of Weakly Consolidated Soft Rock

    Wei-ming Wang

    2013-01-01

    Full Text Available Mining areas in western China are mainly located in soft rock strata with poor bearing capacity. To clarify the deformation failure mechanism and strength behavior of the weakly consolidated soft mudstone and coal rock hosted in the Ili No. 4 mine of the Xinjiang area, uniaxial and triaxial compression tests were carried out on rock samples gathered from the study area. Meanwhile, a damage constitutive model that accounts for initial damage was established by introducing a damage variable and a correction coefficient. A linearization method was introduced according to the characteristics of the fitted curves and the experimental data. The results showed that samples under different moisture contents and confining pressures presented completely different failure mechanisms. The given model could accurately describe the elastic and plastic yield characteristics as well as the strain-softening behavior of the collected samples at the post-peak stage. Moreover, the model could precisely reflect the relationship between the elastic modulus and confining pressure at the pre-peak stage.

  17. Mode I Failure of Armor Ceramics: Experiments and Modeling

    Meredith, Christopher; Leavy, Brian

    2017-06-01

    The pre-notched edge-on-impact (EOI) experiment is a technique for benchmarking the damage and fracture of ceramics subjected to projectile impact. A cylindrical projectile impacts the edge of a thin rectangular plate with a pre-notch on the opposite edge. Tension is generated at the notch tip, resulting in the initiation and propagation of a mode I crack back toward the impact edge. The crack can be quantitatively measured using an optical method called Digital Gradient Sensing, which captures the crack-tip deformation by simultaneously quantifying two orthogonal surface slopes, obtained from the small deflections of light rays off a specularly reflective surface around the crack. The deflections in ceramics are small, so the high-speed camera needs a very high pixel count. This work reports results from pre-cracked EOI experiments on SiC and B4C plates. The experimental data are quantitatively compared to impact simulations using an advanced continuum damage model: the Kayenta ceramic model in Alegra will be used to compare fracture propagation speeds, bifurcations, and inhomogeneous initiation of failure. This will provide insight into the driving mechanisms required for the macroscale failure modeling of ceramics.

  19. Discordance between 'actual' and 'scheduled' check-in times at a heart failure clinic.

    Gorodeski, Eiran Z; Joyce, Emer; Gandesbery, Benjamin T; Blackstone, Eugene H; Taylor, David O; Tang, W H Wilson; Starling, Randall C; Hachamovitch, Rory

    2017-01-01

    A 2015 Institute of Medicine statement, "Transforming Health Care Scheduling and Access: Getting to Now", has increased concerns regarding patient wait times. Although waiting times have been widely studied, little attention has been paid to the role of patient arrival times as a component of this phenomenon. To this end, we investigated patterns of patient arrival at scheduled ambulatory heart failure (HF) clinic appointments and studied their predictors. We hypothesized that patients are more likely to arrive later than scheduled, with progressively later arrivals later in the day. Using a business intelligence database, we identified 6,194 unique patients who visited the Cleveland Clinic Main Campus HF clinic between January 2015 and January 2017. This clinic served both as a tertiary referral center and as a community HF clinic. Transplant and left ventricular assist device (LVAD) visits were excluded. Punctuality was defined as the difference between 'actual' and 'scheduled' check-in times, whereby negative values (i.e., early punctuality) indicated patients who checked in early. Contrary to our hypothesis, we found that patients checked in late only a minority of the time (38% of visits). Additionally, examining punctuality by appointment hour slot, we found that patients scheduled after 8 AM had progressively earlier check-in times as the day progressed (P < .001 for trend). In both a Random Forest regression framework and linear regression models, the most important risk-adjusted predictors of early punctuality were: a later appointment hour slot, the patient having previously been to the hospital, age in the early 70s, and white race. Patients attending a mixed-population ambulatory HF clinic check in earlier than scheduled, with progressively discrepant intervals throughout the day. This finding may have significant implications for provider utilization and resource planning in order to maximize clinic efficiency. The impact of elective early arrival on...

  20. Probabilistic physics-of-failure models for component reliabilities using Monte Carlo simulation and Weibull analysis: a parametric study

    Hall, P.L.; Strutt, J.E.

    2003-01-01

    In reliability engineering, component failures are generally classified in one of three ways: (1) early life failures; (2) failures having random onset times; and (3) late life or 'wear out' failures. When the time-distribution of failures of a population of components is analysed in terms of a Weibull distribution, these failure types may be associated with shape parameters β having values β < 1, β ≈ 1, and β > 1, respectively. Early life failures are frequently attributed to poor design (e.g. poor materials selection) or to problems associated with manufacturing or assembly processes. We describe a methodology for the implementation of physics-of-failure models of component lifetimes in the presence of parameter and model uncertainties. This treats uncertain parameters as random variables described by some appropriate statistical distribution, which may be sampled using Monte Carlo methods. The number of simulations required depends upon the desired accuracy of the predicted lifetime. Provided that the number of sampled variables is relatively small, an accuracy of 1-2% can be obtained using typically 1000 simulations. The resulting collection of times-to-failure is then sorted into ascending order and fitted to a Weibull distribution to obtain a shape factor β and a characteristic lifetime η. Examples are given of the results obtained using three different models: (1) the Eyring-Peck (EP) model for corrosion of printed circuit boards; (2) a power-law corrosion growth (PCG) model which represents the progressive deterioration of oil and gas pipelines; and (3) a random shock-loading model of mechanical failure. It is shown that for any specific model the values of the Weibull shape parameters obtained may be strongly dependent on the degree of uncertainty of the underlying input parameters. Both the EP and PCG models can yield a wide range of values of β, from β > 1, characteristic of wear-out behaviour, to β < 1, characteristic of early-life failure, depending on the degree of...
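
    The methodology reduces to a few lines of code: sample the uncertain parameters of a physics-of-failure model, compute a time-to-failure for each draw, and fit a Weibull distribution to the resulting sample. The degradation law and parameter distributions below are invented placeholders standing in for models such as Eyring-Peck or power-law corrosion growth.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Placeholder physics-of-failure law: failure when A * t**n reaches 1,
# so t_f = (1/A)**(1/n); A and n carry the parameter uncertainty.
A = rng.lognormal(mean=np.log(1e-3), sigma=0.4, size=1000)
n = rng.normal(1.5, 0.15, size=1000)
t_f = (1.0 / A) ** (1.0 / n)

beta, loc, eta = stats.weibull_min.fit(t_f, floc=0.0)  # shape, location, scale
print(f"Weibull shape beta = {beta:.2f}, characteristic lifetime eta = {eta:.1f}")
```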

  1. Covariate measurement error correction methods in mediation analysis with failure time data.

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  2. Neurological Disorders in a Murine Model of Chronic Renal Failure

    Jean-Marc Chillon

    2014-01-01

    Full Text Available Cardiovascular disease is highly prevalent in patients with chronic renal failure (CRF). However, data on the impact of CRF on the cerebral circulatory system are scarce, despite the fact that stroke is the third most common cause of cardiovascular death in people with CRF. In the present study, we examined the impact of CRF on behavior (anxiety, recognition) and on ischemic stroke severity in a well-defined murine model of CRF. We did not observe any significant differences between CRF mice and non-CRF mice in terms of anxiety. In contrast, CRF mice showed lower levels of anxiety in some tests. Recognition was not impaired (vs. controls) after 6 weeks of CRF but was impaired after 10 weeks of CRF. Chronic renal failure enhanced the severity of ischemic stroke, as evaluated by the infarct volume in CRF mice after 34 weeks of CRF. Furthermore, neurological test results in non-CRF mice tended to improve in the days following ischemic stroke, whereas the results in CRF mice tended to worsen. In conclusion, we showed that a murine model of CRF is suitable for evaluating uremic toxicity and the associated neurological disorders. Our data confirm the role of uremic toxicity in the genesis of neurological abnormalities (other than anxiety).

  3. [Discussion of Chinese syndrome typing in acute hepatic failure model].

    Zhang, Jin-liang; Zeng, Hui; Wang, Xian-bo

    2011-05-01

    To study the Chinese syndrome typing of an acute hepatic failure (AHF) mouse model by screening effective formulae. Lipopolysaccharide (LPS)/D-galactosamine (D-GalN) was intraperitoneally injected into mice to establish the AHF mouse model. Yinchenhao Decoction, Huanglian Jiedu Decoction, Buzhong Yiqi Decoction, and Xijiao Dihuang Decoction were administered to model mice respectively by gavage. Behavior and survival rate were monitored, and liver function and pathological changes of liver tissues were examined. Of all the tested classic recipes, Xijiao Dihuang Decoction raised the survival rate from 10% to 60%. Five hours after modeling, the serum alanine aminotransferase (ALT) level was (183.95 +/- 52.00) U/L and aspartate aminotransferase (AST) (235.70 +/- 34.03) U/L in the Xijiao Dihuang Decoction group, lower than those of the model control group (ALT: 213.32 +/- 71.93 U/L; AST: 299.48 +/- 70.56 U/L), though the differences were not statistically significant (both P > 0.05). Xijiao Dihuang Decoction markedly alleviated the liver injury. Xijiao Dihuang Decoction was an effective formula for the LPS/D-GalN induced AHF model. According to syndrome typing through formula effect, heat-toxin and blood-stasis syndrome dominated in the LPS/D-GalN induced AHF mouse model.

  4. Predictors of incident heart failure in patients after an acute coronary syndrome: The LIPID heart failure risk-prediction model.

    Driscoll, Andrea; Barnes, Elizabeth H; Blankenberg, Stefan; Colquhoun, David M; Hunt, David; Nestel, Paul J; Stewart, Ralph A; West, Malcolm J; White, Harvey D; Simes, John; Tonkin, Andrew

    2017-12-01

    Coronary heart disease is a major cause of heart failure. Availability of risk-prediction models that include both clinical parameters and biomarkers is limited. We aimed to develop such a model for prediction of incident heart failure. A multivariable risk-factor model was developed for prediction of the first occurrence of heart failure death or hospitalization. A simplified risk score was derived that enabled subjects to be grouped into categories of 5-year risk, the highest exceeding 20%. Among 7101 patients from the LIPID study (84% male), with median age 61 years (interquartile range 55-67 years), 558 (8%) died or were hospitalized because of heart failure. Older age, history of claudication or diabetes mellitus, body mass index >30 kg/m², LDL-cholesterol >2.5 mmol/L, heart rate >70 beats/min, white blood cell count, and the nature of the qualifying acute coronary syndrome (myocardial infarction or unstable angina) were associated with an increase in heart failure events. Coronary revascularization was associated with a lower event rate. Incident heart failure increased with higher concentrations of B-type natriuretic peptide >50 ng/L, cystatin C >0.93 nmol/L, D-dimer >273 nmol/L, high-sensitivity C-reactive protein >4.8 nmol/L, and sensitive troponin I >0.018 μg/L. Addition of biomarkers to the clinical risk model improved the model's C statistic from 0.73 to 0.77. The net reclassification improvement incorporating biomarkers into the clinical model, using categories of 5-year risk, was 23%. Adding a multibiomarker panel to conventional parameters markedly improved discrimination and risk classification for future heart failure events. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
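
    The net reclassification improvement quoted above has a simple computational definition; the sketch below implements the categorical version (events moving up a risk category count positively, non-events moving down count positively) on synthetic risks, with cutoffs chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def net_reclassification_improvement(risk_old, risk_new, event, cutoffs=(0.05, 0.20)):
    """Categorical NRI: net up-classification of events plus net down-classification
    of non-events when moving from risk_old to risk_new (illustrative cutoffs)."""
    cat_old = np.digitize(risk_old, cutoffs)
    cat_new = np.digitize(risk_new, cutoffs)
    up, down = cat_new > cat_old, cat_new < cat_old
    ev = event.astype(bool)
    nri_events = up[ev].mean() - down[ev].mean()
    nri_nonevents = down[~ev].mean() - up[~ev].mean()
    return nri_events + nri_nonevents

rng = np.random.default_rng(7)
risk_old = rng.uniform(0, 0.4, 500)                              # clinical-only risks
risk_new = np.clip(risk_old + rng.normal(0, 0.05, 500), 0, 1)    # + biomarker panel
event = rng.binomial(1, risk_new)                                # simulated outcomes
print(f"NRI: {net_reclassification_improvement(risk_old, risk_new, event):.3f}")
```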

  5. Dynamic failure of dry and fully saturated limestone samples based on incubation time concept

    Yuri V. Petrov

    2017-02-01

    Full Text Available This paper outlines the results of an experimental study of dynamic rock failure, based on a comparison of dry and saturated limestone samples in dynamic compression and split tests. The tests were performed using the Kolsky method and its modifications for dynamic splitting. The mechanical data (e.g. strength, time and energy characteristics) of this material at high strain rates are obtained. It is shown that these characteristics are sensitive to the strain rate. A unified interpretation of these rate effects, based on the structural-temporal approach, is hereby presented. It is demonstrated that the temporal dependence of the dynamic compressive and split tensile strengths of dry and saturated limestone samples can be predicted by the incubation time criterion. Previously discovered possibilities to optimize (minimize) the energy input for the failure process are discussed in connection with industrial rock failure processes. It is shown that the optimal energy input value associated with the critical load, which is required to initiate failure in the rock medium, strongly depends on the incubation time and the impact duration. The optimal load shapes, which minimize the momentum for a single failure impact, are demonstrated. Through this investigation, a possible approach to reducing the specific energy required for rock cutting by means of high-frequency vibrations is also discussed.
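
    The incubation time criterion underlying the structural-temporal approach is usually written as a condition on the stress history averaged over a material-specific incubation time; a commonly cited form (which may differ in detail from the variant used in this paper) is:

```latex
\frac{1}{\tau}\int_{t-\tau}^{t}\sigma(t')\,\mathrm{d}t' \;\geq\; \sigma_c
```

    Here sigma(t) is the local stress history, tau the incubation time, and sigma_c the quasi-static strength; failure initiates at the earliest t satisfying the inequality. Rate sensitivity follows naturally: a short, intense pulse must overshoot sigma_c for its average over tau to reach the threshold.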

  6. A Bayesian network approach for modeling local failure in lung cancer

    Oh, Jung Hun; Craft, Jeffrey; Al Lozi, Rawan; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O; Bradley, Jeffrey D; El Naqa, Issam

    2011-01-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had slightly higher performance compared to the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.

  7. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Hervouet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested on two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. The volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
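
    The failure operator described above can be illustrated in one dimension: sweep the bed profile and, wherever the local slope exceeds the internal friction angle, shift material downslope while conserving mass exactly. This is only a schematic 1-D analogue of the element-rotation algorithm implemented in Telemac-Mascaret; the profile, friction angle, and relaxation factor below are invented.

```python
import numpy as np

def avalanche_step(z, dx, phi_deg=35.0, relax=0.5):
    """One sweep of a simple bank-failure operator on a 1-D bed profile z(x):
    where slope exceeds the friction angle, move material from the high cell to
    the low cell so the slope relaxes toward the angle of repose."""
    tan_phi = np.tan(np.radians(phi_deg))
    for i in range(len(z) - 1):
        slope = (z[i] - z[i + 1]) / dx
        excess = abs(slope) - tan_phi
        if excess > 0:
            dz = relax * 0.5 * excess * dx              # volume shifted this sweep
            hi, lo = (i, i + 1) if slope > 0 else (i + 1, i)
            z[hi] -= dz
            z[lo] += dz                                  # mass conserved exactly
    return z

z = np.array([2.0, 1.9, 1.7, 0.4, 0.3, 0.3])             # steep bank between cells 2 and 3
for _ in range(50):
    z = avalanche_step(z, dx=1.0)
print(z, z.sum())                                        # relaxed profile, total mass unchanged
```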

  8. Proportional and scale change models to project failures of mechanical components with applications to space station

    Taneja, Vidya S.

    1996-01-01

    In this paper we develop the mathematical theory of proportional and scale-change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR) and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components, with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of the thruster valves by treating them as a censored repairable system. The model for each valve takes the form of a nonhomogeneous process whose intensity function is treated either as a proportional hazards model or as a scale-change random-effects hazard model, as sketched below. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For the various models, methods for estimating the model parameters using censored data will be developed. Available field data (from previously flown flights) are from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as thruster valves.
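
    In symbols, with z an unobserved realization from G(z) and lambda_0 a baseline intensity, the two candidate forms for a valve's nonhomogeneous failure intensity would read as follows (a plausible rendering of the abstract's description, not a quotation of the paper's equations):

```latex
\lambda(t \mid z) = z\,\lambda_0(t)      % proportional-hazards form
\lambda(t \mid z) = z\,\lambda_0(z t)    % scale-change (accelerated-time) form
```

    In the proportional form, heterogeneity rescales the intensity level only; in the scale-change form, it also rescales the time axis, so components with large z both fail more often and "age" faster.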

  9. An advanced joint inversion system for CO2 storage modeling with large data sets for characterization and real-time monitoring - enhancing storage performance and reducing failure risks under uncertainties

    Kitanidis, Peter [Stanford Univ., CA (United States)

    2016-04-30

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  10. Modelling of Attentional Dwell Time

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    ... This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two-target dwell-time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects ... and the model has been extended to fit these data.

  11. Real time failure detection in unreinforced cementitious composites with triboluminescent sensor

    Olawale, David O.; Kliewer, Kaitlyn; Okoye, Annuli; Dickens, Tarik J.; Uddin, Mohammed J.; Okoli, Okenwa I.

    2014-01-01

    The in-situ triboluminescent optical fiber (ITOF) sensor has an integrated sensing and transmission component that converts the energy from damage events like impacts and crack propagation into optical signals that are indicative of the magnitude of damage in composite structures like concrete bridges. Utilizing the triboluminescence (TL) property of ZnS:Mn, the ITOF sensor has been successfully integrated into unreinforced cementitious composite beams to create multifunctional smart structures with in-situ failure detection capabilities. The fabricated beams were tested under flexural loading, and real-time failure detection was achieved by monitoring the TL signals generated by the integrated ITOF sensor. Tested beam samples emitted distinctive TL signals at the instant of failure. In addition, we report herein a new and promising approach to damage characterization using TL emission profiles. Analysis of TL emission profiles indicates that the ITOF sensor responds to crack propagation through the beam even when not in contact with the crack. Scanning electron microscopy analysis indicated that fracto-triboluminescence was responsible for the TL signals observed at the instant of beam failure. -- Highlights: • Developed a new approach to triboluminescence (TL)-based sensing with ZnS:Mn. • Damage-induced excitation of ZnS:Mn enabled real-time damage detection in composites. • Based on sensor position, a correlation exists between TL signal and failure stress. • Introduced a new approach to damage characterization with TL profile analysis.

  12. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models are subject to failure or crashes for a variety of reasons. Statistical analysis of the failures can yield useful insights for better understanding and improving the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
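
    A sketch of the classification step, assuming scikit-learn: parameter vectors from the ensemble are labeled by whether the run crashed, and an RBF-kernel SVM with probability calibration yields a failure probability as a function of the 18 parameters. The synthetic failure region below is invented purely to make the example self-contained.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Stand-in for the UQ ensemble: rows = parameter vectors, label = run crashed (1) or not (0).
rng = np.random.default_rng(6)
X = rng.uniform(0.0, 1.0, size=(1000, 18))                 # 18 model parameters
y = ((X[:, 0] > 0.9) & (X[:, 3] < 0.1)).astype(int)        # synthetic failure region

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10.0, probability=True))
clf.fit(X[:800], y[:800])
p_fail = clf.predict_proba(X[800:])[:, 1]                  # failure probability per run
print(f"held-out mean predicted failure rate: {p_fail.mean():.3f}")
```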

  13. Dynamic computed tomography (CT) in the rat kidney and application to acute renal failure models

    Ishikawa, Isao; Saito, Tadashi; Ishii, Hirofumi; Bansho, Junichi; Koyama, Yukinori; Tobita, Akira

    1995-01-01

    Renal dynamic CT scanning is suitable for determining the excretion of contrast medium in the cortex and medulla of the kidney, which is valuable for understanding the pathogenesis of disease processes in various conditions. This form of scanning would be convenient to use if a method of application to the rat kidney were available. Therefore, we developed a method of applying renal dynamic CT to rats and evaluated the cortical and medullary curves, e.g., the corticomedullary junction time, which is correlated with creatinine clearance, in various rat models of acute renal failure. The rat was placed in a 10° oblique position and a bilateral hilar slice was obtained before and 5, 10, 15, 20, 25, 30, 40, 50, 60, 80, 100, 120, 140, 160 and 180 sec after administering 0.5 ml of contrast medium, using a Somatom DR. The slice width was 4 mm and the scan time was 3 sec. The corticomedullary junction time in normal rats was 23.0±10.5 sec, the peak value of the cortical curve was 286.3±76.7 Hounsfield Units (HU), and the peak value of the medullary curve was 390.1±66.2 HU. The corticomedullary junction time after exposure of the kidney was prolonged compared to that of the unexposed kidney. In rats with acute renal failure, the excretion pattern of contrast medium was similar in both the glycerol- and HgCl2-induced acute renal failure models. The peak values of the cortical curve were maintained three hours after a clamp was placed at the hilar region of the kidney for one hour, and the peak values of the medullary curve were maintained during the administration of 10 μg/kg/min of angiotensin II. The dynamic CT curves in the acute renal failure models examined were slightly different from those in human acute renal failure. These results suggest that rats do not provide an ideal model for human acute renal failure. However, the application of dynamic CT to the rat kidney models was valuable for assessing the pathogenesis of various human kidney diseases. (author)

  14. [Predictive factors for extubation failure on two or more occasions among preterm newborns].

    Tapia-Rombo, Carlos Antonio; De León-Gómez, Noé; Ballesteros-Del-Olmo, Julio César; Ruelas-Vargas, Consuelo; Cuevas-Urióstegui, María Luisa; Castillo-Pérez, José Juan

    2010-01-01

    Mechanical ventilatory assistance has prolonged the life of the critically ill preterm newborn (PTNB), and during that period the PTNB often needs to be reintubated two or more times, with subsequent damage that draws the patient into a vicious circle of further injury with each reintubation. The objective of this study was to determine the factors that predict extubation failure on two or more occasions among PTNB of 28 to 36 weeks of gestational age. Extubation failure was defined as the need for reintubation within the first 72 hours after extubation, independent of its cause; the same criterion was applied to the second and subsequent extubations. All PTNB interned in a third-level hospital between September and December 2004 who fulfilled the inclusion criteria (from a previously published study of first extubation failure) were included retrospectively, together with patients of the same hospital from January to October 2006, included retrolectively. Two groups were formed: group A, the cases (who failed extubation two or more times), and group B, the controls (who failed extubation only once). Descriptive statistics and inferential statistics (Student's t test, Mann-Whitney U or Wilcoxon rank-sum test, as appropriate; Chi-square or Fisher's exact test) were used, together with odds ratios (OR) and multivariate analysis, to study predictive factors for extubation failure. Statistical significance was considered at p < 0.05. ... > 2, OR 5.3, 95% CI 1.3-21.4 (P = 0.02). Bronchoscopy revealed anatomical alterations that explained the extubation failure on the second occasion. We conclude that it is important to plan extubation in the PTNB when there has already been a previous failure, and to avoid the known predictive factors for extubation failure as much as possible in the extubation...

  16. Cost-effectiveness analysis of timely dialysis referral after renal transplant failure in Spain.

    Villa, Guillermo; Sánchez-Álvarez, Emilio; Cuervo, Jesús; Fernández-Ortiz, Lucía; Rebollo, Pablo; Ortega, Francisco

    2012-08-16

    A cost-effectiveness analysis of timely dialysis referral after renal transplant failure was undertaken from the perspective of the Public Administration. The current Spanish situation, where all the patients undergoing graft function loss are referred back to dialysis in a late manner, was compared to an ideal scenario where all the patients are timely referred. A Markov model was developed in which six health states were defined: hemodialysis, peritoneal dialysis, kidney transplantation, late referral hemodialysis, late referral peritoneal dialysis and death. The model carried out a simulation of the progression of renal disease for a hypothetical cohort of 1,000 patients aged 40, who were observed in a lifetime temporal horizon of 45 years. In-depth sensitivity analyses were performed in order to ensure the robustness of the results obtained. Considering a discount rate of 3 %, timely referral showed an incremental cost of 211 €, compared to late referral. This cost increase was however a consequence of the incremental survival observed. The incremental effectiveness was 0.0087 quality-adjusted life years (QALY). When comparing both scenarios, an incremental cost-effectiveness ratio of 24,390 €/QALY was obtained, meaning that timely dialysis referral might be an efficient alternative if a willingness-to-pay threshold of 45,000 €/QALY is considered. This result proved to be independent of the proportion of late referral patients observed. The acceptance probability of timely referral was 61.90 %, while late referral was acceptable in 38.10 % of the simulations. If we however restrict the analysis to those situations not involving any loss of effectiveness, the acceptance probability of timely referral was 70.10 %, increasing twofold that of late referral (29.90 %). Timely dialysis referral after graft function loss might be an efficient alternative in Spain, improving both patients' survival rates and health-related quality of life at an…
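
    Since several records in this listing lean on Markov cohort models, a minimal sketch may help make the mechanics concrete. The state set, yearly cycles, discounting, and the ICER computation below mirror the abstract's description; every transition probability, cost, and utility value is an invented placeholder, not data from the paper.

```python
# Minimal Markov cohort cost-effectiveness sketch. States follow the abstract;
# all numbers (transition probabilities, costs, utilities) are hypothetical.
import numpy as np

STATES = ["HD", "PD", "TX", "LR_HD", "LR_PD", "DEAD"]  # hemodialysis, peritoneal
# dialysis, transplant, late-referral HD/PD, death

P = np.array([  # hypothetical yearly transition probabilities (rows sum to 1)
    [0.80, 0.02, 0.08, 0.00, 0.00, 0.10],
    [0.05, 0.77, 0.08, 0.00, 0.00, 0.10],
    [0.03, 0.01, 0.92, 0.00, 0.00, 0.04],
    [0.00, 0.00, 0.06, 0.80, 0.02, 0.12],
    [0.00, 0.00, 0.06, 0.05, 0.77, 0.12],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

cost = np.array([40e3, 35e3, 12e3, 45e3, 38e3, 0.0])      # EUR/year, placeholders
utility = np.array([0.65, 0.68, 0.82, 0.60, 0.63, 0.0])   # QALY weights, placeholders

def run_cohort(start_state, horizon=45, discount=0.03):
    """Propagate a cohort for `horizon` yearly cycles; return (cost, QALYs) per patient."""
    dist = np.zeros(len(STATES))
    dist[start_state] = 1.0
    total_cost = total_qaly = 0.0
    for t in range(horizon):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * dist @ cost
        total_qaly += d * dist @ utility
        dist = dist @ P
    return total_cost, total_qaly

# Timely referral starts patients in HD; late referral in LR_HD (illustrative only).
c_timely, q_timely = run_cohort(STATES.index("HD"))
c_late, q_late = run_cohort(STATES.index("LR_HD"))
icer = (c_timely - c_late) / (q_timely - q_late)
print(f"ICER: {icer:,.0f} EUR/QALY")
```

    With real inputs, a probabilistic sensitivity analysis would redraw these parameters from distributions on each run, which is how acceptance probabilities like those quoted above are typically obtained.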

  17. In the Dark Shadow of the Supercycle: Tailings Failure Risk & Public Liability Reach All Time Highs

    Lindsay Newland Bowker

    2017-10-01

    This is the third in a series of independent research papers attempting to improve the quality of descriptive data and analysis of tailings facility failures globally, focusing on the relative occurrence, severity and root causes of these failures. This paper updates previously published failure data through 2010 with both additional data pre-2010 and additional data 2010–2015. All three papers have explored the connection between high public consequence failure trends and mining economics trends, especially grade, costs to produce and price. This work, the third paper, looks more deeply at that connection through several autopsies of the dysfunctional economics of the period 2000–2010, in which the greatest and longest price increase in recorded history co-occurred across all commodities, a phenomenon sometimes called a supercycle. That high-severity failures reached all-time highs in the same decade as prices rose to highs unprecedented since 1916 challenges many fundamental beliefs and assumptions that have governed modern mining operations, investment decisions, and regulation. It is from waste management in mining, a non-revenue producing, cost-incurring part of every operation, that virtually all severe environmental and community damages arise. These damages are now more frequently at a scale and of a nature that is non-remediable and beyond any possibility of clean-up or reclamation. The authors have jointly undertaken this work in the public interest, without funding from the mining industry, regulators, non-governmental organizations, or any other source.

  18. Modeling combined tension-shear failure of ductile materials

    Partom, Y

    2014-01-01

    Failure of ductile materials is usually expressed in terms of effective plastic strain. Ductile materials can fail by two different failure modes, shear failure and tensile failure. Under dynamic loading shear failure has to do with shear localization and formation of adiabatic shear bands. In these bands plastic strain rate is very high, dissipative heating is extensive, and shear strength is lost. Shear localization starts at a certain value of effective plastic strain, when thermal softening overcomes strain hardening. Shear failure is therefore represented in terms of effective plastic strain. On the other hand, tensile failure comes about by void growth under tension. For voids in a tension field there is a threshold state of the remote field for which voids grow spontaneously (cavitation), and the material there fails. Cavitation depends on the remote field stress components and on the flow stress. In this way failure in tension is related to shear strength and to failure in shear. Here we first evaluate the cavitation threshold for different remote field situations, using 2D numerical simulations with a hydro code. We then use the results to compute examples of rate dependent tension-shear failure of a ductile material.

  19. Models for dependent time series

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational material…

  20. Expert Performance and Time Pressure: Implications for Automation Failures in Aviation

    2016-09-30

    …settled by these two studies. To help resolve the disagreement between the previous research findings, the present work used a computerized chess… communication between the automation and the pilots should also be helpful, but it is doubtful that the system designer or the real-time automation can…

  1. A Dynamic Approach to Modeling Dependence Between Human Failure Events

    Boring, Ronald Laurids [Idaho National Laboratory

    2015-09-01

    In practice, most HRA methods use direct dependence from THERP: the notion that error begets error, and one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. There are three key concepts that play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, the error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs. Rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
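
    The lag/latency idea lends itself to a small illustration. The sketch below assumes a hypothetical multiplier-on-nominal-error-probability formulation with invented constants; it shows one way a shared PSF could couple successive human failure events without one event directly driving the next, and is an illustration of the concept rather than the paper's framework.

```python
# Illustrative sketch (not the paper's model): a PSF with "lag" (delayed onset)
# and "latency" (lingering decay) acting as a multiplier on a nominal human
# error probability (HEP). All constants are hypothetical.
import math

def psf_multiplier(t, t_activate, lag=30.0, latency=300.0, peak=10.0):
    """Multiplier on the nominal HEP at time t (seconds).

    A PSF activated at t_activate has no effect until `lag` seconds pass,
    peaks at `peak`, then decays toward 1.0 with time constant `latency`.
    """
    dt = t - (t_activate + lag)
    if dt < 0:
        return 1.0                                       # not yet felt (lag)
    return 1.0 + (peak - 1.0) * math.exp(-dt / latency)  # lingering effect (latency)

nominal_hep = 1e-3  # nominal human error probability, placeholder
for t in (0, 60, 120, 600, 1200):
    hep = min(1.0, nominal_hep * psf_multiplier(t, t_activate=10.0))
    print(f"t={t:5.0f}s  HEP={hep:.2e}")
```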

  2. Filter design for failure detection and isolation in the presence of modeling errors and disturbances

    Niemann, Hans Henrik; Stoustrup, Jakob

    1996-01-01

    The design problem of filters for robust failure detection and isolation, (FDI) is addressed in this paper. The failure detection problem will be considered with respect to both modeling errors and disturbances. Both an approach based on failure detection observers as well as an approach based...

  4. Numerical investigations of rib fracture failure models in different dynamic loading conditions.

    Wang, Fang; Yang, Jikuang; Miller, Karol; Li, Guibing; Joldes, Grand R; Doyle, Barry; Wittek, Adam

    2016-01-01

    Rib fracture is one of the most common thoracic injuries in vehicle traffic accidents that can result in fatalities associated with seriously injured internal organs. A failure model is critical when modelling rib fracture to predict such injuries. Different rib failure models have been proposed in prediction of thorax injuries. However, the biofidelity of the fracture failure models when varying the loading conditions and the effects of a rib fracture failure model on prediction of thoracic injuries have been studied only to a limited extent. Therefore, this study aimed to investigate the effects of three rib failure models on prediction of thoracic injuries using a previously validated finite element model of the human thorax. The performance and biofidelity of each rib failure model were first evaluated by modelling rib responses to different loading conditions in two experimental configurations: (1) the three-point bending on the specimen taken from rib and (2) the anterior-posterior dynamic loading to an entire bony part of the rib. Furthermore, the simulation of the rib failure behaviour in the frontal impact to an entire thorax was conducted at varying velocities and the effects of the failure models were analysed with respect to the severity of rib cage damages. Simulation results demonstrated that the responses of the thorax model are similar to the general trends of the rib fracture responses reported in the experimental literature. However, they also indicated that the accuracy of the rib fracture prediction using a given failure model varies for different loading conditions.

  5. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development

    Shanee Honig

    2018-06-01

    While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP) that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or the cognitive, psychological, and social determinants that impact the design of mitigation strategies. By providing the stages of human information processing, RF-HIP can be used as a…

  6. Average inactivity time model, associated orderings and reliability properties

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically suited to handling heterogeneity in the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  7. Timing of pregnancy, postpartum risk of virologic failure and loss to follow-up among HIV-positive women.

    Onoya, Dorina; Sineke, Tembeka; Brennan, Alana T; Long, Lawrence; Fox, Matthew P

    2017-07-17

    We assessed the association between the timing of pregnancy and the risk of postpartum virologic failure and loss from HIV care in South Africa. This is a retrospective cohort study of 6306 HIV-positive women aged 15-49 at antiretroviral therapy (ART) initiation, initiated on ART between January 2004 and December 2013 in Johannesburg, South Africa. The incidence of virologic failure (two consecutive viral load measurements of >1000 copies/ml) and loss to follow-up (>3 months late for a visit) during 24 months postpartum were assessed using Cox proportional hazards modelling. The rate of postpartum virologic failure was higher following an incident pregnancy on ART [adjusted hazard ratio 1.8, 95% confidence interval (CI): 1.1-2.7] than among women who initiated ART during pregnancy. This difference was sustained among women with a CD4 cell count less than 350 cells/μl at delivery (adjusted hazard ratio 1.8, 95% CI: 1.1-3.0). Predictors of postpartum virologic failure were being viremic, longer time on ART, being 25 years old or less, a low CD4 cell count and anaemia at delivery, as well as initiating ART on a stavudine-containing or abacavir-containing regimen. There was no difference in postpartum loss-to-follow-up rates between the incident pregnancies group (hazard ratio 0.9, 95% CI: 0.7-1.1) and those who initiated ART in pregnancy. The risk of virologic failure remains high among postpartum women, particularly those who conceive on ART. The results highlight the need to provide adequate support for HIV-positive women with fertility intention after ART initiation and to strengthen monitoring and retention efforts for postpartum women to sustain the benefits of ART.
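
    For readers unfamiliar with the workflow behind hazard ratios like those above, a hedged sketch using the lifelines library on synthetic data follows; the covariate names, effect sizes, and censoring horizon are invented for illustration and are not the study's data.

```python
# Sketch of a Cox proportional hazards analysis of the kind described above,
# using lifelines on synthetic data. All column names and effects are invented.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "conceived_on_art": rng.integers(0, 2, n),   # 1 = incident pregnancy on ART
    "cd4_under_350": rng.integers(0, 2, n),
    "age_under_25": rng.integers(0, 2, n),
})
# Simulate exponential failure times whose rate rises with each risk factor.
rate = 0.02 * np.exp(0.6 * df.conceived_on_art + 0.4 * df.cd4_under_350)
t = rng.exponential(1.0 / rate)
df["months"] = np.minimum(t, 24.0)               # administrative censoring at 24 months
df["virologic_failure"] = (t <= 24.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="virologic_failure")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```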

  8. Patient-Specific Tailored Intervention Improves INR Time in Therapeutic Range and INR Variability in Heart Failure Patients.

    Gotsman, Israel; Ezra, Orly; Hirsh Raccah, Bruria; Admon, Dan; Lotan, Chaim; Dekeyser Ganz, Freda

    2017-08-01

    Many patients with heart failure need anticoagulants, including warfarin. Good control is particularly challenging in heart failure patients, with less time spent in therapeutic range, thereby increasing the risk of complications. This study aimed to evaluate the effect of a patient-specific tailored intervention on anticoagulation control in patients with heart failure. Patients with heart failure taking warfarin therapy (n = 145) were randomized to either standard care or a 1-time intervention assessing potential risk factors for lability of INR, in which they received patient-specific instructions. Time in therapeutic range (TTR) using Rosendaal's linear model was assessed 3 months before and after the intervention. The patient-tailored intervention significantly increased anticoagulation control. The median TTR levels before intervention were suboptimal in the interventional and control groups (53% vs 45%, P = .14). After intervention the median TTR increased significantly in the interventional group compared with the control group (80% [interquartile range, 62%-93%] vs 44% [29%-61%], P <.0001). The intervention resulted in a significant improvement in the interventional group before versus after intervention (53% vs 80%, P <.0001) but not in the control group (45% vs 44%, P = .95). The percentage of patients with a TTR ≥60%, considered therapeutic, was substantially higher in the interventional group: 79% versus 25% (P <.0001). The INR variability (standard deviation of each patient's INR measurements) decreased significantly in the interventional group, from 0.53 to 0.32 (P <.0001) after intervention but not in the control group. Patient-specific tailored intervention significantly improves anticoagulation therapy in patients with heart failure. Copyright © 2017 Elsevier Inc. All rights reserved.
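
    Rosendaal's linear-interpolation method mentioned above is compact enough to show in full. The sketch below assumes INR varies linearly between successive measurements and credits each day to "in range" accordingly; the visit days, INR values, and 2.0-3.0 target range are illustrative.

```python
# Small implementation of Rosendaal's linear interpolation method for time in
# therapeutic range (TTR). Days are offsets from the first visit; data invented.
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Percent of person-time with interpolated INR inside [low, high]."""
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        if i0 == i1:
            in_range += span if low <= i0 <= high else 0.0
            continue
        # Fraction of the interval spent between `low` and `high`, assuming
        # the INR moves linearly from i0 to i1 over the interval.
        lo_t = (low - i0) / (i1 - i0)   # crossing times as fractions of the span
        hi_t = (high - i0) / (i1 - i0)
        a, b = sorted((lo_t, hi_t))
        frac = max(0.0, min(1.0, b) - max(0.0, a))
        in_range += frac * span
    return 100.0 * in_range / total

days = [0, 14, 28, 49, 70]
inrs = [1.8, 2.4, 3.4, 2.6, 2.2]
print(f"TTR = {rosendaal_ttr(days, inrs):.1f}%")
```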

  9. Impedance models in time domain

    Rienstra, S.W.

    2005-01-01

    Necessary conditions for an impedance function are derived. Methods available in the literature are discussed. A format with recipe is proposed for an exact impedance condition in time domain on a time grid, based on the Helmholtz resonator model. An explicit solution is given of a pulse reflecting

  10. Stochastic models for time series

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  11. Effect of therapy switch on time to second-line antiretroviral treatment failure in HIV-infected patients.

    Häggblom, Amanda; Santacatterina, Michele; Neogi, Ujjwal; Gisslen, Magnus; Hejdeman, Bo; Flamholc, Leo; Sönnerborg, Anders

    2017-01-01

    Switch from first-line antiretroviral therapy (ART) to second-line ART is common in clinical practice. However, there is limited knowledge of the extent to which different reasons for therapy switch are associated with differences in long-term consequences and sustainability of the second-line ART. Data from 869 patients with 14,601 clinical visits between 1999-2014 were derived from the national cohort database. Reason for therapy switch and viral load (VL) levels at first-line ART failure were compared with regard to outcome of second-line ART. Using the Laplace regression model we analyzed the median, 10th, 20th, 30th and 40th percentile of time to viral failure (VF). Most patients (n = 495; 57.0%) switched from first-line to second-line ART without VF. Patients switching due to detectable VL with (n = 124; 14.2%) or without drug resistance mutations (DRM) (n = 250; 28.8%) experienced VF on their second-line regimen sooner (median time, years: 3.43 (95% CI 2.90-3.96) and 3.20 (95% CI 2.65-3.75), respectively) compared with those who switched without VF (4.53 years). Furthermore, the level of VL at first-line ART failure had a significant impact on failure of second-line ART starting after 2.5 years of second-line ART. In the context of life-long therapy, a median time on second-line ART of 4.53 years for these patients is short. To prolong time on second-line ART, further studies are needed on the reasons for therapy changes. Additionally, patients with a high VL at first-line VF should be monitored more frequently in the period after the therapy switch.
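
    Laplace regression models percentiles of the time-to-event distribution directly, which is why the abstract reports the median and lower percentiles of time to viral failure. The sketch below illustrates the percentile-regression idea with statsmodels' QuantReg on synthetic, fully observed times; note that true Laplace regression additionally handles censored observations, which QuantReg does not.

```python
# Percentile (quantile) regression of time-to-failure on a switch-reason flag,
# in the spirit of Laplace regression. Data and effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
switched_after_vf = rng.integers(0, 2, n)   # 1 = switched after viral failure
# Hypothetical: failure on 2nd-line ART comes sooner when 1st line failed first.
years = rng.weibull(1.5, n) * np.where(switched_after_vf, 3.3, 4.5)
df = pd.DataFrame({"years": years, "switched_after_vf": switched_after_vf})

for q in (0.1, 0.2, 0.3, 0.4, 0.5):
    fit = smf.quantreg("years ~ switched_after_vf", df).fit(q=q)
    print(f"{int(q*100)}th percentile: "
          f"baseline {fit.params['Intercept']:.2f} y, "
          f"shift {fit.params['switched_after_vf']:+.2f} y")
```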

  13. A Retrospective Study of Success, Failure, and Time Needed to Perform Awake Intubation.

    Joseph, Thomas T; Gal, Jonathan S; DeMaria, Samuel; Lin, Hung-Mo; Levine, Adam I; Hyman, Jaime B

    2016-07-01

    Awake intubation is the standard of care for management of the anticipated difficult airway. The performance of awake intubation may be perceived as complex and time-consuming, potentially leading clinicians to avoid this technique of airway management. This retrospective review of awake intubations at a large academic medical center was performed to determine the average time taken to perform awake intubation, its effects on hemodynamics, and the incidence and characteristics of complications and failure. Anesthetic records from 2007 to 2014 were queried for the performance of an awake intubation. Of the 1,085 awake intubations included for analysis, 1,055 involved the use of a flexible bronchoscope. Each awake intubation case was propensity matched with two controls (1:2 ratio), with similar comorbidities and intubations performed after the induction of anesthesia (n = 2,170). The time from entry into the operating room until intubation was compared between groups. The anesthetic records of all patients undergoing awake intubation were also reviewed for failure and complications. The median time to intubation for patients intubated post induction was 16.0 min (interquartile range: 13 to 22) from entrance into the operating room. The median time to intubation for awake patients was 24.0 min (interquartile range: 19 to 31). The complication rate was 1.6% (17 of 1,085 cases). The most frequent complications observed were mucous plug, endotracheal tube cuff leak, and inadvertent extubation. The failure rate for attempted awake intubation was 1% (n = 10). Awake intubations have a high rate of success and low rate of serious complications and failure. Awake intubations can be performed safely and rapidly.
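
    The 1:2 propensity matching described above can be sketched in a few lines. Below, a logistic model estimates each record's probability of being an awake intubation from invented covariates, and each awake case is paired with its two nearest controls by score (with replacement, for simplicity); this is a rough illustration, not the study's matching protocol.

```python
# Rough sketch of 1:2 propensity-score matching using scikit-learn.
# Covariates, group sizes, and distributions are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n_awake, n_ctrl = 100, 1000
X_awake = rng.normal(0.3, 1.0, (n_awake, 3))   # comorbidity scores, illustrative
X_ctrl = rng.normal(0.0, 1.0, (n_ctrl, 3))

X = np.vstack([X_awake, X_ctrl])
treated = np.r_[np.ones(n_awake), np.zeros(n_ctrl)]
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# For each awake case, take its 2 nearest controls by propensity score
# (matching with replacement, which real protocols often refine).
nn = NearestNeighbors(n_neighbors=2).fit(ps[n_awake:, None])
_, idx = nn.kneighbors(ps[:n_awake, None])
matched_controls = idx + n_awake          # indices back into the pooled arrays
print(matched_controls[:5])
```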

  14. Using Probabilistic Risk Assessment to Model Medication System Failures in Long-Term Care Facilities

    Comden, Sharon C; Marx, David; Murphy-Carley, Margaret; Hale, Misti

    2005-01-01

    … Discussion: The models provide contextual maps of the errors and behaviors that lead to medication delivery system failures, including unanticipated risks associated with regulatory practices and common…

  15. Analysis of lower head failure with simplified models and a finite element code

    Koundy, V. [CEA-IPSN-DPEA-SEAC, Service d'Etudes des Accidents, Fontenay-aux-Roses (France)]; Nicolas, L. [CEA-DEN-DM2S-SEMT, Service d'Etudes Mecaniques et Thermiques, Gif-sur-Yvette (France)]; Combescure, A. [INSA-Lyon, Lab. Mecanique des Solides, Villeurbanne (France)]

    2001-07-01

    The objective of the OLHF (OECD lower head failure) experiments is to characterize the timing, mode and size of lower head failure under high-temperature loading and reactor coolant system pressure due to a postulated core melt scenario. Four tests have been performed at Sandia National Laboratories (USA) within the framework of an OECD project. The experimental results have been used to develop and validate predictive analysis models. Within the framework of this project, several finite element calculations were performed. In parallel, two simplified semi-analytical methods were developed in order to better understand the influence of various parameters, e.g. the behaviour of the lower head material and its geometrical characteristics, on the creep phenomenon and on the timing, mode and location of failure. Three-dimensional modelling of crack opening and crack propagation has also been carried out using the finite element code Castem 2000. The aim of this paper is to present the two simplified semi-analytical approaches and to report the status of the 3D crack propagation calculations. (authors)

  16. Evaluation of mean time between forced outage for reactor protection system using RBD and failure rate

    Lee, D. Y.; Park, J. H.; Hwang, I. K.; Cha, K. H.; Choi, J. K.; Lee, K. Y.; Park, J. K.

    2001-01-01

    The design life of nuclear power plants (NPPs) under recent construction is about fifty to sixty years. However, the equipment of control systems operates without failure for at most five to ten years. Design for diversity and an adequate maintenance strategy are therefore required for the NPP protection system in order to use control equipment whose lifetime is shorter than the design life of the NPP. The Fault Tree Analysis (FTA) technique, which has been applied in Probabilistic Safety Analysis (PSA), has been introduced to quantitatively evaluate the reliability of NPP I&C systems. The FTA, however, cannot properly consider the effect of maintenance. In this work, we have reviewed quantitative reliability evaluation techniques using the reliability block diagram and failure rates, and applied them to the evaluation of the mean time between forced outages for the reactor protection system.
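
    As a small illustration of the reliability-block-diagram side of such an evaluation, the sketch below computes the one-year reliability of a hypothetical 2-out-of-4 channel group in series with an output relay; the failure rates are invented, and the maintenance effects discussed in the abstract are not modeled.

```python
# Minimal reliability-block-diagram sketch: 2-out-of-4 identical channels in
# series with a single relay, constant failure rates. All rates are invented.
from math import comb, exp

def r_exp(lmbda, t):
    """Reliability of one component with constant failure rate lmbda at time t."""
    return exp(-lmbda * t)

def r_k_of_n(k, n, r):
    """Reliability of a k-out-of-n group of identical components."""
    return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

t_hours = 8760.0                     # one year
r_channel = r_exp(2e-5, t_hours)     # hypothetical channel failure rate
r_relay = r_exp(1e-6, t_hours)       # hypothetical relay failure rate

r_system = r_k_of_n(2, 4, r_channel) * r_relay   # series combination
print(f"1-year system reliability: {r_system:.6f}")
```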

  17. A zipper network model of the failure mechanics of extracellular matrices.

    Ritter, Michael C; Jesudason, Rajiv; Majumdar, Arnab; Stamenovic, Dimitrije; Buczek-Thomas, Jo Ann; Stone, Phillip J; Nugent, Matthew A; Suki, Béla

    2009-01-27

    Mechanical failure of soft tissues is characteristic of life-threatening diseases, including capillary stress failure, pulmonary emphysema, and vessel wall aneurysms. Failure occurs when mechanical forces are sufficiently high to rupture the enzymatically weakened extracellular matrix (ECM). Elastin, an important structural ECM protein, is known to stretch beyond 200% strain before failing. However, ECM constructs and native vessel walls composed primarily of elastin and proteoglycans (PGs) have been found to fail at much lower strains. In this study, we hypothesized that PGs significantly contribute to tissue failure. To test this, we developed a zipper network model (ZNM), in which springs representing elastin are organized into long wavy fibers in a zipper-like formation and placed within a network of springs mimicking PGs. Elastin and PG springs possessed distinct mechanical and failure properties. Simulations using the ZNM showed that the failure of PGs alone reduces the global failure strain of the ECM well below that of elastin, and hence, digestion of elastin does not influence the failure strain. Network analysis suggested that whereas PGs drive the failure process and define the failure strain, elastin determines the peak and failure stresses. Predictions of the ZNM were experimentally confirmed by measuring the failure properties of engineered elastin-rich ECM constructs before and after digestion with trypsin, which cleaves the core protein of PGs without affecting elastin. This study reveals a role for PGs in the failure properties of engineered and native ECM with implications for the design of engineered tissues.

  18. A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models

    Troffaes, Matthias C.M.; Walter, Gero; Kelly, Dana

    2014-01-01

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model
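
    The imprecise-Dirichlet update described above has a closed form: with learning parameter s, prior expectation bounds [t_low, t_high] for an alpha-factor, and N observed events of which n involve that multiplicity, the posterior expectation lies in [(s·t_low + n)/(s + N), (s·t_high + n)/(s + N)]. A sketch with invented counts and bounds:

```python
# Lower and upper posterior expectations for each alpha-factor under the
# imprecise Dirichlet model. Counts and prior bounds are invented.
def alpha_factor_bounds(counts, t_low, t_high, s=2.0):
    """Return [(lower, upper)] posterior expectations for each alpha-factor.

    counts[k]: number of events involving exactly k+1 components;
    t_low/t_high: prior lower/upper expectations for each alpha-factor;
    s: learning parameter (the paper finds values of about 1 to 10 reasonable).
    """
    n = sum(counts)
    return [((s * lo + c) / (s + n), (s * hi + c) / (s + n))
            for c, lo, hi in zip(counts, t_low, t_high)]

counts = [90, 7, 2, 1]                 # hypothetical 4-train CCF event data
t_low = [0.80, 0.01, 0.005, 0.001]     # cautious prior bounds, illustrative
t_high = [0.98, 0.10, 0.05, 0.02]

for k, (lo, up) in enumerate(alpha_factor_bounds(counts, t_low, t_high), 1):
    print(f"alpha_{k}: [{lo:.4f}, {up:.4f}]")
```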

  19. Cardiac dysfunction in heart failure: the cardiologist's love affair with time.

    Brutsaert, Dirk L

    2006-01-01

    Translating research into clinical practice has been a challenge throughout medical history. From the present review, it should be clear that this is particularly the case for heart failure. As a consequence, public awareness of this disease has been disillusionedly low, despite its prognosis being worse than that of most cancers and many other chronic diseases. We explore how, over the past 150 years since Ludwig and Marey, concepts about the evaluation of cardiac performance in patients with heart failure have emerged. From this historical-physiologic perspective, we have seen how 3 increasingly reductionist approaches or schools of thought have evolved in parallel, that is, an input-output approach, a hemodynamic pump approach, and a muscular pump approach. Each one of these has provided complementary insights into the pathophysiology of heart failure and has resulted in measurements or derived indices, some of which are still in use in present-day cardiology. From the third, most reductionist muscular pump approach, we have learned that myocardial and ventricular relaxation properties as well as temporal and spatial nonuniformities have been largely overlooked in the 2 other, input-output and hemodynamic pump, approaches. A key message from the present review is that relaxation and nonuniformities can be fully understood only from within the time-space continuum of cardiac pumping. As cyclicity and rhythm are, in some way, the most basic aspects of cardiac function, considerations of time should dominate over any measurement of cardiac performance as a muscular pump. Any measurement that is blind to the arrow of cardiac time should therefore be interpreted with caution. We have seen how the escape from the time domain, as with the calculation of LV ejection fraction, fascinating though it may be, has undoubtedly served to hinder a rational scientific debate on the recent, so-called systolic-diastolic heart failure controversy. Lacking appreciation of early…

  20. Real-time instrument-failure detection in the LOFT pressurizer using functional redundancy

    Tylee, J.L.

    1982-07-01

    The functional redundancy approach to detecting instrument failures in a pressurized water reactor (PWR) pressurizer is described and evaluated. This real-time method uses a bank of Kalman filters (one for each instrument) to generate optimal estimates of the pressurizer state. By performing consistency checks between the output of each filter, failed instruments can be identified. Simulation results and actual pressurizer data are used to demonstrate the capabilities of the technique
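
    A toy version of the functional-redundancy scheme is easy to write down: one Kalman filter per instrument, all estimating the same scalar state, with pairwise consistency checks flagging the filter that drifts away from the others. The random-walk dynamics, noise levels, and injected bias fault below are all invented.

```python
# Toy bank of scalar Kalman filters with pairwise consistency checks.
# Dynamics, noise variances, and the sensor-2 bias fault are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
steps, n_sensors = 200, 3
true = np.cumsum(rng.normal(0, 0.05, steps)) + 150.0   # slowly drifting state

meas = true[:, None] + rng.normal(0, 0.5, (steps, n_sensors))
meas[120:, 2] += 5.0                                   # sensor 2 fails with a bias

q, r = 0.05**2, 0.5**2               # process / measurement noise variances
x = np.full(n_sensors, 150.0)        # one filter state per sensor
p = np.full(n_sensors, 1.0)
est = np.zeros((steps, n_sensors))
for t in range(steps):
    p = p + q                        # predict (random-walk state model)
    k = p / (p + r)                  # Kalman gain
    x = x + k * (meas[t] - x)        # update each filter with its own sensor
    p = (1 - k) * p
    est[t] = x

resid = np.abs(est[:, :, None] - est[:, None, :])   # pairwise disagreement
bad = resid[-1].sum(axis=1).argmax()                # most inconsistent filter
print(f"flagged sensor: {bad}")                     # expect 2
```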

  1. Prolonged warm ischemia time is associated with graft failure and mortality after kidney transplantation.

    Tennankore, Karthik K; Kim, S Joseph; Alwayn, Ian P J; Kiberd, Bryce A

    2016-03-01

    Warm ischemia time is a potentially modifiable insult to transplanted kidneys, but little is known about its effect on long-term outcomes. Here we conducted a study of United States kidney transplant recipients (years 2000-2013) to determine the association between warm ischemia time (the time from organ removal from cold storage to reperfusion with warm blood) and death/graft failure. Times under 10 minutes were potentially attributed to coding error. Therefore, the 10-to-under-20-minute interval was chosen as the reference group. The primary outcome was mortality and graft failure (return to chronic dialysis or preemptive retransplantation) adjusted for recipient, donor, immunologic, and surgical factors. The study included 131,677 patients with 35,901 events. Relative to the reference patients, times of 20 to under 30, 30 to under 40, 40 to under 50, 50 to under 60, and 60 or more minutes were associated with hazard ratios of 1.07 (95% confidence interval, 0.99-1.15), 1.13 (1.06-1.22), 1.17 (1.09-1.26), 1.20 (1.12-1.30), and 1.23 (1.15-1.33) for the composite event, respectively. The association between prolonged warm ischemia time and death/graft failure persisted after stratification by donor type (living vs. deceased donor) and delayed graft function status. Thus, warm ischemia time is associated with adverse long-term patient and graft survival after kidney transplantation. Identifying strategies to reduce warm ischemia time is an important consideration for future study. Copyright © 2015 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.

  2. Antithrombin III in animal models of sepsis and organ failure.

    Dickneite, G

    1998-01-01

    Antithrombin III (AT III) is the physiological inhibitor of thrombin and other serine proteases of the clotting cascade. In the development of sepsis, septic shock and organ failure, the plasma levels of AT III decrease considerably, suggesting the concept of a substitution therapy with the inhibitor. A decrease of AT III plasma levels might also be associated with other pathological disorders like trauma, burns, pancreatitis or preeclampsia. Activation of coagulation and consumption of AT III is the consequence of a generalized inflammation called SIRS (systemic inflammatory response syndrome). The clotting cascade is also frequently activated after organ transplantation, especially if organs are grafted between different species (xenotransplantation). During the past years AT III has been investigated in numerous corresponding disease models in different animal species, which are reviewed here. The bulk of evidence suggests that AT III substitution reduces morbidity and mortality in the diseased animals. With growing experience with AT III, the concept of substitution therapy to maximal baseline plasma levels (100%) appears to be becoming insufficient. Evidence from clinical and preclinical studies now suggests adjusting the AT III plasma levels to about 200%, i.e., doubling the normal value. During the last few years several authors have proposed that AT III might not only be an anti-thrombotic agent, but might in addition have an anti-inflammatory effect.

  3. Intuitionistic fuzzy-based model for failure detection.

    Aikhuele, Daniel O; Turan, Faiz B M

    2016-01-01

    In identifying the product component(s) to be improved, the customer/user requirements that are mainly considered, captured through customer surveys using the quality function deployment (QFD) tool, often fail to guarantee or cover aspects of the product's reliability. Even when they do, there are always many misunderstandings. To improve the product's reliability and quality during the product redesign phase, and to create novel products for the customers, the failure information of the existing product and its component(s) should ordinarily be analyzed and converted to appropriate design knowledge for the design engineer. In this paper, a new intuitionistic fuzzy multi-criteria decision-making method is proposed. The new approach, which is based on an intuitionistic fuzzy TOPSIS model, uses an exponential-related function for the computation of the separation measures of alternatives from the intuitionistic fuzzy positive ideal solution (IFPIS) and the intuitionistic fuzzy negative ideal solution (IFNIS). The proposed method has been applied to two practical case studies, and the results from the different cases have been compared with some similar computational approaches in the literature.

  4. Probabilistic Survivability Versus Time Modeling

    Joyner, James J., Sr.

    2016-01-01

    This presentation documents Kennedy Space Center's Independent Assessment work on three assessments for the Ground Systems Development and Operations (GSDO) Program, completed to assist the Chief Safety and Mission Assurance Officer during key programmatic reviews, and provides the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine if a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.

  5. Incorrect modeling of the failure process of minimally repaired systems under random conditions: The effect on the maintenance costs

    Pulcini, Gianpaolo

    2015-01-01

    This note investigates the effect of incorrectly modeling the failure process of minimally repaired systems that operate under random environmental conditions on the costs of a periodic replacement maintenance policy. The motivation is a recently published paper in which a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems that operate under random conditions has the same functional form as the failure rate of the first failure time. This produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the first failure time are first highlighted. Then, the correct expressions for the expected cost and the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences of incorrectly modeling the failure process can be.
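
    The distinction matters numerically. Under minimal repair the expected number of failures in (0, T] is the integrated intensity Λ(T), so the expected cost per unit time of a periodic replacement policy is C(T) = (c_replace + c_repair·Λ(T))/T; using the first-failure failure rate here is precisely the kind of error the note corrects. A sketch with a power-law NHPP and invented costs:

```python
# Periodic-replacement optimization for a minimally repaired system with a
# power-law NHPP intensity. Cost figures and NHPP parameters are illustrative.
import numpy as np

beta, eta = 2.2, 1000.0          # power-law NHPP: Lambda(t) = (t/eta)**beta
c_replace, c_repair = 5000.0, 800.0

def cost_rate(T):
    expected_failures = (T / eta) ** beta      # E[N(T)] for the NHPP
    return (c_replace + c_repair * expected_failures) / T

T_grid = np.linspace(50, 5000, 2000)
T_star = T_grid[np.argmin(cost_rate(T_grid))]
print(f"optimal replacement period: {T_star:.0f} h, "
      f"cost rate: {cost_rate(T_star):.3f} per h")
```

    For this Λ(T) the minimizer is also available in closed form, T* = η·(c_replace/((β-1)·c_repair))^(1/β), which the grid search above approximates.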

  6. Time-to-failure analysis of 5 nm amorphous Ru(P) as a copper diffusion barrier

    Henderson, Lucas B.; Ekerdt, John G.

    2009-01-01

    Evaluation of a chemical vapor deposited amorphous ruthenium-phosphorus alloy as a copper interconnect diffusion barrier is reported. Approximately 5 nm-thick Ru(P) and TaN films in Cu/Ru(P)/SiO2/p-Si and Cu/TaN/SiO2/p-Si stacks are subjected to bias-temperature stress at electric fields from 2.0 MV/cm to 4.0 MV/cm and temperatures from 200 °C to 300 °C. Time-to-failure measurements suggest that chemical vapor deposited Ru(P) is comparable to physical vapor deposited TaN in preventing Cu diffusion. The activation energy of failure for stacks using Ru(P) as a liner is determined to be 1.83 eV in the absence of an electric field. Multiple models of dielectric failure, including the E and Schottky-type √E models, indicate that Ru(P) is acceptable for use as a diffusion barrier at conditions likely in future technology generations.
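
    To make the extrapolation step explicit: under the thermochemical E-model, TTF ∝ exp(-γE)·exp(Ea/kT), so a stress-to-use acceleration factor follows from evaluating the two conditions. The 1.83 eV activation energy is quoted from the abstract; the field-acceleration parameter, prefactor, and operating conditions below are invented placeholders.

```python
# E-model time-to-failure extrapolation: TTF = A * exp(-g*E) * exp(Ea / kT).
# Ea = 1.83 eV is from the abstract; everything else is hypothetical.
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
EA = 1.83               # eV, zero-field activation energy (from the abstract)
G = 1.1                 # cm/MV, hypothetical field-acceleration parameter

def ttf(e_field_mv_cm, temp_c, prefactor=1e-9):
    t_kelvin = temp_c + 273.15
    return prefactor * math.exp(-G * e_field_mv_cm) * math.exp(EA / (K_B * t_kelvin))

t_stress = ttf(4.0, 300.0)    # accelerated bias-temperature-stress condition
t_use = ttf(1.0, 105.0)       # nominal operating condition, invented
print(f"acceleration factor: {t_use / t_stress:.3e}")
```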

  7. A quasi-static algorithm that includes effects of characteristic time scales for simulating failures in brittle materials

    Liu, Jinxing

    2013-04-24

    When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of T_elem, the characteristic releasing time of the internal forces of the broken elements, and T_lattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with T_load, the characteristic loading period. The load-unload (L-U) method is used for one extreme, T_elem << T_lattice, whereas the force-release (F-R) method is used for the other, T_elem >> T_lattice. For cases between the above two extremes, we develop a new algorithm by combining the L-U and the F-R trial displacement fields to construct the new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of T_elem to T_lattice by adjusting their contributions in the trial displacement field. Therefore, the material dependence of the snap-back instabilities is implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, the effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different γs. Such a same-specimen-based study shows how the lattice behaves along with the changing ratio of the L-U and F-R components. © 2013 The Author(s).

  8. [Surgical model of chronic renal failure: study in rabbits].

    Costa, Andrei Ferreira Nicolau da; Pereira, Lara de Paula Miranda; Ferreira, Manoel Luiz; Silva, Paulo Cesar; Chagar, Vera Lucia Antunes; Schanaider, Alberto

    2009-02-01

    To establish a model of chronic renal failure in rabbits, with a view to its use for therapeutic and reparative interventions. Nineteen adult male New Zealand rabbits were randomly distributed into three groups: Group 1, control (n = 5); Group 2, sham (n = 7); and Group 3, experimental (n = 7). They were anaesthetized with intramuscular ketamine, diazepam and fentanyl, followed by Sevorane through a vaporizer device. In Group 3, a bipolar left nephrectomy was carried out and, after four weeks, a right nephrectomy was also performed. All the samples of renal tissue were weighed. Group 2 was only submitted to both abdominal laparotomies, without nephrectomy. Biochemical evaluations with urea, creatinine, sodium and potassium measurements, abdominal ultrasound scan, scintigraphy and histological analysis were performed in all animals. In Group 3 there was a progressive increase of urea (p = 0.0001), creatinine (p = 0.0001), sodium (p = 0.0002) and potassium (p = 0.0003). The comparison of these results with those of Groups 1 and 2 revealed statistically significant increases (p < 0.05) at all intervals. In Group 3, the ultrasound scan identified an increase in left kidney size after 16 weeks, and at the 4th week scintigraphy confirmed the loss of 75% of the left renal mass. In the same group, the histological evaluation showed subcapsular and interstitial fibrosis and also tubular regeneration. The experimental model of chronic renal failure is feasible, with medium-term survival of the animals, which allows this interval to be used as a therapeutic window for testing different approaches to repair the kidney damage.

  9. Modeling Marrow Failure and MDS for Novel Therapeutics

    2017-03-01

    …Clonal evolution is a potentially life-threatening long-term complication of inherited and… The risk of early progression to myelodysplastic syndrome (MDS) and leukemia is also markedly elevated in patients with inherited marrow failure syndromes compared to age-matched controls. Prognosis of…

  10. Visibility graph analysis of heart rate time series and bio-marker of congestive heart failure

    Bhaduri, Anirban; Bhaduri, Susmita; Ghosh, Dipak

    2017-09-01

    The study of RR interval time series in congestive heart failure has been an active area of research, approached with different methods including non-linear ones. In this article the cardiac dynamics of the heart beat are explored in the light of complex network analysis, viz. the visibility graph method. Heart beat (RR interval) time series data taken from the Physionet database [46, 47], belonging to two groups of subjects, diseased (congestive heart failure; 29 in number) and normal (54 in number), are analyzed with the technique. The overall results show that a quantitative parameter can significantly differentiate between the diseased and the normal subjects, as well as between different stages of the disease. Further, when the data are split into periods of around 1 hour each and analyzed separately, the same consistent differences are observed. This quantitative parameter obtained using visibility graph analysis can thereby be used as a potential bio-marker, as well as in a subsequent alarm generation mechanism for predicting the onset of congestive heart failure.
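
    The natural visibility graph construction behind this analysis is short enough to implement directly: each sample becomes a node, and two samples are connected when every intermediate sample lies strictly below the straight line joining them. The sketch below builds the edge list and a mean degree for an invented RR series; the paper's specific discriminating parameter is not reproduced here.

```python
# Natural visibility graph of a time series: nodes are samples, and (a, b) is
# an edge when all intermediate samples lie strictly below the chord a-b.
import numpy as np

def visibility_edges(x):
    """Return the edge list of the natural visibility graph of series x."""
    n = len(x)
    edges = []
    for a in range(n - 1):
        for b in range(a + 1, n):
            # Visibility: x[c] < x[a] + (x[b]-x[a])*(c-a)/(b-a) for all a<c<b.
            if all(x[c] < x[a] + (x[b] - x[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

rr = np.array([0.82, 0.85, 0.78, 0.91, 0.76, 0.88, 0.80, 0.84])  # seconds, invented
edges = visibility_edges(rr)
degree = np.zeros(len(rr), int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print(edges)
print("mean degree:", degree.mean())
```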

  11. Considerations on assessment of different time depending models adequacy

    Constantinescu, C.

    2015-01-01

    The operating period of nuclear power plants can be prolonged if it can be shown that their safety has remained at a high level; for this, it is necessary to estimate how ageing systems, structures and components (SSCs) influence NPP reliability and safety. To emphasize the ageing aspects, the case study presented in this paper assesses different time-dependent models for the rate of occurrence of failures, with the goal of obtaining the best-fitting model. A sensitivity analysis of the impact of burn-in failures was performed to improve the result of the goodness-of-fit test. Based on the analysis results, a conclusion about the existence or absence of an ageing trend can be drawn. A sensitivity analysis of the reliability parameters was also performed, and the results were used to observe the impact on the time-dependent rate of occurrence of failures. (authors)
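
    One standard way to test for the kind of ageing trend discussed here is the Laplace test on the failure times of a repairable system observed over (0, T]: U = (Σt_i/n - T/2) / (T·√(1/(12n))), with U near zero for a homogeneous (trend-free) process and large positive U indicating deterioration. A sketch with invented failure times that cluster late in the observation window:

```python
# Laplace trend test for a repairable system observed on (0, t_end].
# The failure times are invented and deliberately cluster toward the end.
import math

def laplace_u(failure_times, t_end):
    n = len(failure_times)
    return (sum(failure_times) / n - t_end / 2.0) / (t_end * math.sqrt(1.0 / (12.0 * n)))

times = [400, 900, 1300, 1550, 1700, 1800, 1880, 1950]   # hypothetical hours
u = laplace_u(times, t_end=2000.0)
print(f"Laplace U = {u:.2f}  (|U| > 1.96 rejects 'no trend' at the 5% level)")
```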

  12. Employment status at time of first hospitalization for heart failure is associated with a higher risk of death and rehospitalization for heart failure

    Rørth, Rasmus; Fosbøl, Emil L; Mogensen, Ulrik M

    2018-01-01

    AIMS: Employment status at time of first heart failure (HF) hospitalization may be an indicator of both self-perceived and objective health status. In this study, we examined the association between employment status and the risk of all-cause mortality and recurrent HF hospitalization in a nationwide…

  13. Predictive Simulation of Material Failure Using Peridynamics -- Advanced Constitutive Modeling, Verification and Validation

    2016-03-31

    AFRL-AFOSR-VA-TR-2016-0309: Predictive simulation of material failure using peridynamics - advanced constitutive modeling, verification, and validation. Approved for public release. Author: John T…

  14. Effects of the combined action of axial and transversal loads on the failure time of a wooden beam under fire

    Nubissie, A.; Kingne Talla, E.; Woafo, P.

    2012-01-01

    Highlights: ► A wooden beam submitted to fire and axial and transversal loads is considered. ► The failure time is found to decrease with the intensity of the loads. ► Implications for safety are indicated. -- Abstract: This paper presents the variations of the failure time of a wooden beam (Baillonella toxisperma, also called Moabi) in fire subjected to the combined effect of axial and transversal loads. Using the recommendation of the structural Eurocodes that failure can occur when the deflection attains 1/300 of the length of the beam or when the bending moment attains the resistant moment, the partial differential equation describing the beam dynamics is solved numerically and the failure time calculated. It is found that the failure time decreases when either the axial or the transversal load increases.

  15. [Refractory heart failure. Models of hospital, ambulatory, and home management].

    Oliva, Fabrizio; Alunni, Gianfranco

    2002-08-01

    Chronic heart failure is an enormous and growing public health problem and is reaching epidemic proportions. Its economic impact is dramatic; two thirds of expenses are for hospitalizations, and relatively little is spent on medications and outpatient visits. Most of the hospitalizations, deaths and costs are incurred by a relatively small minority of patients who may be described as having "complex", "advanced", "refractory" or "end-stage" heart failure; in essence, they are patients who have severe symptoms and/or recurrent hospitalizations and/or emergency department visits despite maximal oral therapy. Many of the recommendations regarding the management of these patients are based more on experience than on evidence from controlled trials. This is because such patients require individualized therapy, which limits their inclusion in large trials, and because support is less easily available when testing specific strategies than when testing specific agents. Improving the treatment of this group of patients by optimizing their medical regimen, monitoring them aggressively and providing early intervention to avert heart failure can reduce their morbidity, mortality and costs of care. Refractory heart failure is not a single disease and it is extremely unlikely that all patients should be treated in a similar manner; before selecting the appropriate therapy, the clinician must categorize and profile the patient. The first step should be a re-evaluation of the previous treatment, because many patients are treated suboptimally. It is also important to identify reversible or precipitating factors. For patients with advanced heart failure, the initial goal of therapy is to improve symptoms; the next goal is to maintain the improvement and to prevent later deterioration. The appropriate treatment plan will reflect the presence of comorbidities, the patient's history regarding previous responses to therapy, and their own expectations with regard to daily life. The most…

  16. A multiple shock model for common cause failures using discrete Markov chain

    Chung, Dae Wook; Kang, Chang Soon

    1992-01-01

    The most widely used models in common cause analysis are (single) shock models such as the BFR and the MFR. However, a single shock model cannot treat the individual common causes separately and rests on some unrealistic assumptions. A multiple shock model for common cause failures is developed here using Markov chain theory. This model treats each common cause shock as a separately and sequentially occurring event, in order to capture the change in the failure probability distribution due to each common cause shock. The final failure probability distribution is evaluated and compared with that from the BFR model. The results show that the multiple shock model, which minimizes the assumptions in the BFR model, is more realistic and conservative than the BFR model. Further work for application is the estimation of parameters such as the common cause shock rate and the component failure probability given a shock, p, through data analysis.
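
    A toy version of the multiple-shock construction can clarify the idea: track the distribution over "number of failed components" and push it through one binomial transition per common-cause shock, rather than folding all shocks into a single event. The group size, per-shock failure probability p, and shock count below are invented.

```python
# Distribution over failed-component counts in a 3-component group, updated by
# one Markov transition per common-cause shock. All parameters are hypothetical.
import numpy as np
from math import comb

def shock_matrix(n_comp, p):
    """Transition matrix over states 0..n_comp failed, for one shock."""
    m = np.zeros((n_comp + 1, n_comp + 1))
    for k in range(n_comp + 1):
        for j in range(k, n_comp + 1):   # j-k new failures among n_comp-k survivors
            m[k, j] = comb(n_comp - k, j - k) * p**(j - k) * (1 - p)**(n_comp - j)
    return m

n_comp, p, shocks = 3, 0.3, 4
dist = np.zeros(n_comp + 1)
dist[0] = 1.0                            # start with all components healthy
M = shock_matrix(n_comp, p)
for _ in range(shocks):
    dist = dist @ M
print("P(k components failed) after 4 shocks:", np.round(dist, 4))
```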

  17. Semi-parametric proportional intensity models robustness for right-censored recurrent failure data

    Jiang, S.T. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States); Landers, T.L. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States)]. E-mail: landers@ou.edu; Rhoads, T.R. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States)

    2005-10-01

    This paper reports the robustness of the four proportional intensity (PI) models: Prentice-Williams-Peterson-gap time (PWP-GT), PWP-total time (PWP-TT), Andersen-Gill (AG), and Wei-Lin-Weissfeld (WLW), for right-censored recurrent failure event data. The results are beneficial to practitioners in anticipating the more favorable engineering application domains and selecting appropriate PI models. The PWP-GT and AG prove to be models of choice over ranges of sample sizes, shape parameters, and censoring severity. At the smaller sample size (U = 60), where there are 30 per class for a two-level covariate, the PWP-GT proves to perform well for moderate right-censoring (Pc ≤ 0.8), where 80% of the units have some censoring, and for moderately decreasing, constant, and moderately increasing rates of occurrence of failures (power-law NHPP shape parameter in the range 0.8 ≤ δ ≤ 1.8). For the large sample size (U = 180), the PWP-GT performs well for severe right-censoring (0.8 < Pc ≤ 1.0) and for a wider range of rates of occurrence of failures (power-law NHPP shape parameter in the range 0.8 ≤ δ ≤ 2.0). The AG model proves to outperform the PWP-TT and WLW for stationary processes (HPP) across a wide range of right-censorship (0.0 ≤ Pc ≤ 1.0) and for sample sizes of 60 or more.

  18. Our sun. I. The standard model: Successes and failures

    Sackmann, I.J.; Boothroyd, A.I.; Fowler, W.A.

    1990-01-01

    The results of computing a number of standard solar models are reported. A presolar helium content of Y = 0.278 is obtained, and a Cl-37 capture rate of 7.7 SNUs, consistently several times the observed rate of 2.1 SNUs, is determined. Thus, the solar neutrino problem remains. The solar Z value is determined primarily by the observed Z/X ratio and is affected very little by differences in solar models. Even large changes in the low-temperature molecular opacities have no effect on Y, nor even on conditions at the base of the convective envelope. Large molecular opacities do cause a large increase in the mixing-length parameter alpha but do not cause the convective envelope to reach deeper. The temperature remains too low for lithium burning, and there is no surface lithium depletion; thus, the lithium problem of the standard solar model remains. 103 refs

  19. Enhancement of weld failure and tube ejection model in PENTAP program

    Jung, Jaehoon; An, Sang Mo; Ha, Kwang Soon; Kim, Hwan Yeol

    2014-01-01

    The reactor vessel pressure, the debris mass, the debris temperature, and the material composition can all affect the penetration tube failure modes, and these parameters are interrelated. Representative severe accident codes include MELCOR, MAAP, and the PENTAP program. MELCOR simply declares a penetration tube failed when a failure temperature (e.g., 1273 K) is reached. MAAP considers all penetration failure modes and has the most advanced model for penetration tube failure; however, its validation against experimental data is very limited. The PENTAP program, which evaluates the possible penetration tube failure modes (creep failure, weld failure, tube ejection, and long-term tube failure) under given accident conditions, was developed by KAERI. An experiment on tube ejection is being performed by KAERI, from which the temperature distribution and the ablation rate of both the weld and the lower vessel wall can be obtained. This paper presents updated calculation steps for the weld failure and tube ejection modes of the PENTAP program that apply these experimental results. PENTAP can evaluate the possible penetration tube failure modes, but considerable effort is still required to improve the prediction of failure modes, and additional calculation steps are needed to apply experimental and numerical data within the program. In this study, new calculation steps are added to the PENTAP program to enhance the weld failure and tube ejection models using KAERI's experimental data, namely the ablation rate and temperature distribution of the weld and lower vessel wall.

  20. Physics of Failure Models for Capacitor Degradation in DC-DC Converters

    National Aeronautics and Space Administration — This paper proposes a combined energy-based model with an empirical physics of failure model for degradation analysis and prognosis of electrolytic capacitors in...

  1. Use of non-conjugate prior distributions in compound failure models. Final technical report

    Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.

    1981-12-01

    Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. Methods considered for estimating values for the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariance of the parameter estimators. It is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed, together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report.
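
Approach (1), matching data moments to the prior, is straightforward to sketch for the failure-rate (gamma-Poisson) case. The snippet below is a simplified illustration of the idea, not the report's estimator: the data, the plug-in correction for Poisson sampling noise, and all values are assumptions.

```python
import numpy as np

# failures x_i observed over exposure times T_i for a class of components
# (illustrative data)
x = np.array([2, 0, 5, 1, 3, 0, 4])
T = np.array([1000.0, 800.0, 1500.0, 900.0, 1200.0, 700.0, 1300.0])  # hours

lam_hat = x / T                        # raw per-component rate estimates

# Match the first two moments of a gamma(alpha, beta) prior for lambda,
# correcting the sample variance for Poisson sampling noise E[lambda / T]
# (a rough plug-in; the report's estimators are more careful).
mean = lam_hat.mean()
var = lam_hat.var(ddof=1) - np.mean(lam_hat / T)
var = max(var, 1e-12)                  # guard against a negative corrected variance

alpha = mean**2 / var                  # gamma shape
beta = mean / var                      # gamma rate, so E[lambda] = alpha / beta

# Bayesian update: the posterior for component i is gamma(alpha + x_i, beta + T_i)
post_mean = (alpha + x) / (beta + T)
print("prior alpha, beta:", alpha, beta)
print("posterior means:", np.round(post_mean, 5))
```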

  2. Plan on test to failure of a prestressed concrete containment vessel model

    Takumi, K.; Nonaka, A.; Umeki, K.; Nagata, K.; Soejima, M.; Yamaura, Y.; Costello, J.F.; Riesemann, W.A. von.; Parks, M.B.; Horschel, D.S.

    1992-01-01

    A summary of the plans to test a prestressed concrete containment vessel (PCCV) model to failure is provided in this paper. The test will be conducted as a part of a joint research program between the Nuclear Power Engineering Corporation (NUPEC), the United States Nuclear Regulatory Commission (NRC), and Sandia National Laboratories (SNL). The containment model will be a scaled representation of a PCCV for a pressurized water reactor (PWR). During the test, the model will be slowly pressurized internally until failure of the containment pressure boundary occurs. The objectives of the test are to measure the failure pressure, to observe the mode of failure, and to record the containment structural response up to failure. Pre- and posttest analyses will be conducted to forecast and evaluate the test results. Based on these results, a validated method for evaluating the structural behavior of an actual PWR PCCV will be developed. The concepts to design the PCCV model are also described in the paper

  3. Early Detection of Plant Equipment Failures: A Case Study in Just-in-Time Maintenance

    Parlos, Alexander G.; Kim, Kyusung; Bharadwaj, Raj M.

    2001-01-01

    The development and testing of a model-based fault detection system for electric motors is briefly presented. The fault detection system was developed using only motor nameplate information. The fault detection results presented utilize only motor voltage and current sensor information, minimizing the need for expensive or intrusive sensors. Dynamic recurrent neural networks are used to predict the input-output response of a three-phase induction motor while using an estimate of the motor speed signal. Multiresolution (or wavelet) signal-processing techniques are used in combination with more traditional methods to estimate fault features for use in winding insulation and motor mechanical and electromechanical failure detection

  4. Early Detection of Plant Equipment Failures: A Case Study in Just-in-Time Maintenance

    Parlos, Alexander G.; Kim, Kyusung; Bharadwaj, Raj M.

    2001-06-17

    The development and testing of a model-based fault detection system for electric motors is briefly presented. The fault detection system was developed using only motor nameplate information. The fault detection results presented utilize only motor voltage and current sensor information, minimizing the need for expensive or intrusive sensors. Dynamic recurrent neural networks are used to predict the input-output response of a three-phase induction motor while using an estimate of the motor speed signal. Multiresolution (or wavelet) signal-processing techniques are used in combination with more traditional methods to estimate fault features for use in winding insulation and motor mechanical and electromechanical failure detection.

  5. Time course of recovery following resistance training leading or not to failure.

    Morán-Navarro, Ricardo; Pérez, Carlos E; Mora-Rodríguez, Ricardo; de la Cruz-Sánchez, Ernesto; González-Badillo, Juan José; Sánchez-Medina, Luis; Pallarés, Jesús G

    2017-12-01

    To describe the acute and delayed time course of recovery following resistance training (RT) protocols differing in the number of repetitions (R) performed in each set (S) out of the maximum possible number (P). Ten resistance-trained men undertook three RT protocols [S × R(P)]: (1) 3 × 5(10), (2) 6 × 5(10), and (3) 3 × 10(10) in the bench press (BP) and full squat (SQ) exercises. Selected mechanical and biochemical variables were assessed at seven time points (from −12 h to +72 h post-exercise). Countermovement jump height (CMJ) and movement velocity against the load that elicited a 1 m·s⁻¹ mean propulsive velocity (V1) and 75% 1RM in the BP and SQ were used as mechanical indicators of neuromuscular performance. Training to muscle failure in each set [3 × 10(10)], even when compared to completing the same total exercise volume [6 × 5(10)], resulted in a significantly higher acute decline of CMJ and velocity against the V1 and 75% 1RM loads in both BP and SQ. In contrast, recovery from the 3 × 5(10) and 6 × 5(10) protocols was significantly faster between 24 and 48 h post-exercise compared to 3 × 10(10). Markers of acute (ammonia, growth hormone) and delayed (creatine kinase) fatigue showed a markedly different course of recovery between protocols, suggesting that training to failure slows down recovery up to 24-48 h post-exercise. RT leading to failure considerably increases the time needed for the recovery of neuromuscular function and metabolic and hormonal homeostasis. Avoiding failure would allow athletes to be in a better neuromuscular condition to undertake a new training session or competition in a shorter period of time.

  6. Failure-censored accelerated life test sampling plans for Weibull distribution under expected test time constraint

    Bai, D.S.; Chun, Y.R.; Kim, J.G.

    1995-01-01

    This paper considers the design of life-test sampling plans based on failure-censored accelerated life tests. The lifetime distribution of products is assumed to be Weibull with a scale parameter that is a log-linear function of a (possibly transformed) stress. Two levels of stress higher than the use-condition stress, high and low, are used. Sampling plans with equal expected test times at the high and low test stresses, which satisfy the producer's and consumer's risk requirements and minimize the asymptotic variance of the test statistic used to decide lot acceptability, are obtained. The properties of the proposed life-test sampling plans are investigated.
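
The likelihood machinery behind such plans is compact enough to sketch. The following is a hedged illustration, on synthetic data with assumed parameter names and values, of maximum-likelihood fitting of Weibull lifetimes whose scale parameter is log-linear in stress, with Type II (failure-censored) observations at two stress levels.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def make_level(s, n, r, b0=8.0, b1=-1.5, shape=2.0):
    """Type II censoring at one stress level: run n units, stop at the
    r-th failure; survivors are censored at t_(r) (illustrative values)."""
    eta = np.exp(b0 + b1 * s)                # log-linear Weibull scale in stress
    t = np.sort(eta * rng.weibull(shape, n))
    obs = np.minimum(t, t[r - 1])            # censor survivors at t_(r)
    d = (t <= t[r - 1]).astype(float)        # 1 = failure, 0 = censored
    return obs, d, np.full(n, s)

t1, d1, s1 = make_level(s=1.0, n=20, r=15)   # low test stress
t2, d2, s2 = make_level(s=2.0, n=20, r=15)   # high test stress
t, d, s = map(np.concatenate, ((t1, t2), (d1, d2), (s1, s2)))

def negloglik(p):
    b0, b1, log_shape = p
    m = np.exp(log_shape)                    # Weibull shape
    eta = np.exp(b0 + b1 * s)
    z = t / eta
    # failures contribute log f; censored observations contribute log S = -z**m
    return -np.sum(d * (np.log(m / eta) + (m - 1) * np.log(z)) - z**m)

fit = minimize(negloglik, x0=[7.0, -1.0, 0.5], method="Nelder-Mead")
print("b0, b1, shape:", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```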

  7. Measuring Time to Biochemical Failure in the TROG 96.01 Trial: When Should the Clock Start Ticking?

    Denham, James W.; Steigler, Allison; Kumar, Mahesh; Lamb, David S.; Joseph, David; Spry, Nigel A.; Tai, Keen-Hun; Atkinson, Chris; Turner, Sandra FRANZCR; Greer, Peter B.; Gleeson, Paul S.; D'Este, Catherine

    2009-01-01

    Purpose: We sought to determine whether short-term neoadjuvant androgen deprivation (STAD) duration influences the optimal time point from which time to biochemical failure (TTBF, Phoenix definition) should be measured. Methods and Materials: In the Trans-Tasman Radiation Oncology Group 96.01 trial, men with locally advanced prostate cancer were randomized to 3 or 6 months of STAD before and during prostatic irradiation (XRT) or to XRT alone. The prognostic value of TTBF measured from the end of radiation (ERT) and from randomization were compared using Cox models. Results: Between 1996 and 2000, 802 eligible patients were randomized. In 436 men with Phoenix failure, TTBF measured from randomization was a powerful predictor of prostate cancer-specific survival and marginally more accurate than TTBF measured from ERT in Cox models. Insufficient data were available to confirm that TTBF measured from testosterone recovery may also be a suitable option. Conclusions: TTBF measured from randomization (commencement of therapy) performed well in this trial dataset and will be a convenient option if this finding holds in other datasets that include long-term androgen deprivation data.

  8. Correlation model to analyze dependent failures for probabilistic risk assessment

    Dezfuli, H.

    1985-01-01

    A methodology is formulated to study the dependent (correlated) failures of various abnormal events in nuclear power plants. This methodology uses correlation analysis as a means for predicting and quantifying dependent failures. Appropriate techniques are also developed to incorporate dependent failures in quantifying fault trees and accident sequences. The uncertainty associated with each estimation in all of the developed techniques is addressed and quantified. To identify the relative importance of the degree of dependency (correlation) among events and to incorporate these dependencies in the quantification phase of PRA, the interdependency between a pair of events is expressed with the aid of the correlation coefficient. For the purpose of demonstrating the methodology, the database used in the Accident Sequence Precursor (ASP) study was adopted and simulated to obtain distributions for the correlation coefficients. A computer program entitled Correlation Coefficient Generator (CCG) was developed to generate a distribution for each correlation coefficient. The bootstrap technique was employed in the CCG computer code to determine confidence limits of the estimated correlation coefficients. A second computer program designated CORRELATE was also developed to obtain probability intervals for both fault trees and accident sequences with statistically correlated failure data.
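
The bootstrap step is easy to illustrate. The sketch below (synthetic paired data, not the CCG code itself) resamples event pairs with replacement to produce a distribution, and hence confidence limits, for a correlation coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)

# paired observations for two events across plants (illustrative data)
x = rng.normal(size=40)
y = 0.6 * x + rng.normal(scale=0.8, size=40)

def bootstrap_corr(x, y, n_boot=5000):
    """Bootstrap distribution of the sample correlation coefficient."""
    n = len(x)
    idx = rng.integers(0, n, size=(n_boot, n))   # resample pairs with replacement
    xs, ys = x[idx], y[idx]
    xs = xs - xs.mean(axis=1, keepdims=True)
    ys = ys - ys.mean(axis=1, keepdims=True)
    return (xs * ys).sum(axis=1) / np.sqrt((xs**2).sum(axis=1) * (ys**2).sum(axis=1))

r_boot = bootstrap_corr(x, y)
lo, hi = np.percentile(r_boot, [2.5, 97.5])
print(f"r = {np.corrcoef(x, y)[0, 1]:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```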

  9. Association between prehospital time interval and short-term outcome in acute heart failure patients.

    Takahashi, Masashi; Kohsaka, Shun; Miyata, Hiroaki; Yoshikawa, Tsutomu; Takagi, Atsutoshi; Harada, Kazumasa; Miyamoto, Takamichi; Sakai, Tetsuo; Nagao, Ken; Sato, Naoki; Takayama, Morimasa

    2011-09-01

    Acute heart failure (AHF) is one of the most frequently encountered cardiovascular conditions that can seriously affect the patient's prognosis. However, the importance of early triage and treatment initiation in the setting of AHF has not been recognized. The Tokyo Cardiac Care Unit Network Database prospectively collected information of emergency admissions to acute cardiac care facilities in 2005-2007 from 67 participating hospitals in the Tokyo metropolitan area. We analyzed records of 1,218 AHF patients transported to medical centers via emergency medical services (EMS). AHF was defined as rapid onset or change in the signs and symptoms of heart failure, resulting in the need for urgent therapy. Patients with acute coronary syndrome were excluded from this analysis. Logistic regression analysis was performed to calculate the risk-adjusted in-hospital mortality. A majority of the patients were elderly (76.1 ± 11.5 years old) and male (54.1%). The overall in-hospital mortality rate was 6.0%. The median time interval between symptom onset and EMS arrival (response time) was 64 minutes (interquartile range [IQR] 26-205 minutes), and that between EMS arrival and ER arrival (transportation time) was 27 minutes (IQR 9-78 minutes). The risk-adjusted mortality increased with transportation time, but did not correlate with the response time. Those who took >45 minutes to arrive at the medical centers were at a higher risk for in-hospital mortality (odds ratio 2.24, 95% confidence interval 1.17-4.31; P = .015). Transportation time correlated with risk-adjusted mortality, and steps should be taken to reduce the EMS transfer time to improve the outcome in AHF patients.

  10. Modelling of Diffuse Failure and Fluidization in Geomaterials and Geostructures

    Pastor, M.

    2013-01-01

    Failure of geostructures is caused by changes in effective stresses induced by external loads (earthquakes, for instance), changes in the pore pressures (rain), in the geometry (erosion), or in material properties (chemical attack, degradation, weathering). A landslide can be analysed as the failure of a geostructure, the slope. There exist many alternative classifications of landslides, but we will consider here a simple classification into slides and flows. In the case of slides, the failure consists of the movement of a part of the slope, with deformations concentrated in a narrow zone, the failure surface. This can be idealized as localized failure, and it is typical of overconsolidated or dense materials exhibiting softening. On the other hand, flows are made of fluidized materials flowing in a fluid-like manner. This mechanism of failure is known as diffuse failure, and it has received much less attention from researchers. Modelling of diffuse failure of slopes is complex, because difficulties appear in the mathematical, constitutive, and numerical models, which have to account for a phase transition. This work deals with modelling, and we present here some tools recently developed by the author and the group to which he belongs. (Author)

  11. Implementation of a PETN failure model using ARIA's general chemistry framework

    Hobbs, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-01

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  12. Statistical study on applied stress dependence of failure time in stress corrosion cracking of Zircaloy-4 alloy

    Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi; Tanaka, Akiei.

    1988-01-01

    Effects of applied stress on failure time in stress corrosion cracking of Zircaloy-4 alloy were investigated by the Weibull distribution method. Test pieces in evacuated silica tubes were annealed at 1,073 K for 7.2 × 10³ s and then quenched into ice-water. These specimens, under constant applied stresses of 40-90% of the yield stress, were immersed in a CH₃OH-1 wt% I₂ solution at room temperature. The probability distribution of failure times under an applied stress of 40% of the yield stress was described by a single Weibull distribution with one shape parameter. The probability distributions of failure times under applied stresses above 60% of the yield stress were described by composite and mixed Weibull distributions, with two shape parameters corresponding to the shorter-time and longer-time failure regions. The values of these shape parameters were larger than 1, which corresponds to wear-out failure. The observation of fracture surfaces and the stress dependence of the shape parameters indicated that the shape parameters both for failure times under 40% of the yield stress and for the longer times above 60% corresponded to intergranular cracking, while that for the shorter failure times corresponded to transgranular cracking and dimple fracture. (author)
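
A minimal sketch of the underlying Weibull fit is shown below, using synthetic failure times (the data and parameter values are illustrative, not the paper's measurements). A composite or mixed-Weibull analysis would repeat the fit separately on the short-time and long-time failure regions and compare the estimated shape parameters.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)

# illustrative SCC failure times (hours); a single Weibull population
times = weibull_min.rvs(c=2.5, scale=300.0, size=50, random_state=rng)

shape, loc, scale = weibull_min.fit(times, floc=0)   # fix the location at zero
print(f"shape = {shape:.2f}, scale = {scale:.1f}")
# shape > 1 indicates wear-out behaviour; shape close to 1, random failures;
# shape < 1, early (infant-mortality) failures.
```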

  13. Modeling of container failure and radionuclide release from a geologic nuclear waste repository

    Kim, Chang Lak; Kim, Jhin Wung; Choi, Kwang Sub; Cho, Chan Hee

    1989-02-01

    Generally, two processes are involved in leaching and dissolution: (1) chemical reactions and (2) mass transfer by diffusion. The chemical reaction controls the dissolution rates only during the early stage of exposure to groundwater. The exterior-field mass transfer may control the long-term dissolution rates from the waste solid in a geologic repository. Mass-transfer analyses rely on detailed and careful application of the governing equations that describe the mechanistic processes of transport of material between and within phases. We develop analytical models to predict the radionuclide release rate into the groundwater with five different approaches: a measurement-based model, a diffusion model, a kinetics model, a diffusion-and-kinetics model, and a modified diffusion model. We also collected experimental leaching data for a partial validation of the radionuclide release model based on mass transfer theory. Among various types of corrosion, pitting is the most significant because of its rapid growth. The failure time of the waste container, which can also be interpreted as the containment time, is a milestone of the performance of a repository. We develop analytical models to predict the pit growth rate on the container surface with three different approaches: an experimental method, a statistical method, and a mathematical method based on diffusion theory. (Author)

  14. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was unclear in several models. Only 2 models presented the recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. There are numerous HF risk prediction models with sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution.

  15. A model for the coupling of failure rates in a redundant system

    Kleppmann, W.G.; Wutschig, R.

    1986-01-01

    A model is developed which takes into account the coupling between failure rates of identical components in different redundancies of a safety system, i.e., the fact that the failure rates of identical components subjected to the same operating conditions will scatter less than the failure rates of any two components of the same type. It is shown that with increasing coupling, the expectation value and the variance of the distribution of the failure probability of the redundant system increase. A consistent way to incorporate operating experience in a Bayesian framework is developed and the results are presented. (orig.)

  16. Identification of hidden failures in control systems: a functional modelling approach

    Jalashgar, A.; Modarres, M.

    1996-01-01

    This paper presents a model which encompasses knowledge about a process control system's functionalities in a function-oriented failure analysis task. The technique, called Hybrid MFM-GTST, mainly utilizes two different function-oriented methods (MFM and GTST) to identify all functions of the system components, and hence possible sources of hidden failures in process control systems. Hidden failures are incipient failures within the system that in the long term may lead to loss of major functions. The features of the method are described and demonstrated using an example of a process control system.

  17. Experimental study on the failure mechanism of the Bai Huichang landslide and analysis of the time effect of deformation

    Ronghua, Fu; Baokui, Yao; Yuke, Sun

    1985-01-01

    The Bai Huichang landslide is a large-scale landslide with the character of a level pushing slide and collapse. To study the failure mechanism of the landslide, to analyse the reasons for its failure, and to evaluate and predict the stability of the slope, systematic tests of the physico-mechanical properties of the clay rock on the sliding surface and analyses of the constituents of the substances were made, together with tests on slope models made of photo-elastic material and of blocks. The results show that the landslide is a typical one with a level pushing slide and collapse character, and that the main reason for the landslide is the poor physico-mechanical and water-stability properties of the clay rock, which contains a vast amount of montmorillonite. The deformation of the slope model is very similar to that of the actual slope. Regression analysis of the observed deformation of the slope indicates that the deformation decays at a rate of about 70% per year. This means that the landslide will tend to become stable and that no serious landslide endangering the safety of the Changhangou Colliery will occur. 3 references.

  18. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters is presented, relevant design variables are identified, and an optimal system reliability formulation is presented. An illustrative example is given.

  19. An Enhanced Preventive Maintenance Optimization Model Based on a Three-Stage Failure Process

    Ruifeng Yang

    2015-01-01

    Full Text Available Nuclear power plants are highly complex systems and the issues related to their safety are of primary importance. Probabilistic safety assessment is regarded as the most widespread methodology for studying the safety of nuclear power plants. As maintenance is one of the most important factors affecting reliability and safety, an enhanced preventive maintenance optimization model based on a three-stage failure process is proposed. Preventive maintenance is still a dominant maintenance policy due to its easy implementation. In order to correspond to the three-color scheme commonly used in practice, the lifetime of the system before failure is divided into three stages, namely the normal, minor defective, and severe defective stages. When the minor defective stage is identified, two measures are considered for comparison: in the first, the inspection interval is halved only the first time the minor defective stage is identified; in the second, the subsequent inspection interval is halved every time the minor defective stage is identified. Maintenance is implemented immediately once the severe defective stage is identified. The expected cost per unit time is minimized to optimize the inspection interval. Finally, a numerical example is presented to illustrate the effectiveness of the proposed models.
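
The two interval-halving measures can be compared with a small Monte Carlo sketch of the three-stage process. The snippet below is an illustration only: the exponential stage durations, the cost figures, and the floor placed on the halved interval are all assumptions, not the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(3)

def cycle(T, policy, m1=100.0, m2=30.0, m3=15.0,
          c_insp=1.0, c_pm=20.0, c_fail=200.0):
    """One renewal cycle of the three-stage failure process. Stage durations
    (normal -> minor defective -> severe defective -> failure) are exponential
    with means m1, m2, m3; costs are illustrative.  policy 'once': halve the
    interval only the first time the minor defective stage is identified;
    'every': halve it at every such identification.  Returns (cost, length)."""
    t_minor = rng.exponential(m1)
    t_severe = t_minor + rng.exponential(m2)
    t_fail = t_severe + rng.exponential(m3)
    t, dt, cost, halved = 0.0, T, 0.0, False
    while True:
        t += dt
        if t >= t_fail:                        # failure before the next inspection
            return cost + c_fail, t_fail
        cost += c_insp
        if t >= t_severe:                      # severe stage found: immediate PM
            return cost + c_pm, t
        if t >= t_minor:                       # minor defective stage identified
            if policy == "every":
                dt = max(dt / 2.0, T / 16.0)   # floor keeps the sketch finite (assumption)
            elif not halved:
                dt, halved = T / 2.0, True

def cost_rate(T, policy, n=20000):
    totals = np.sum([cycle(T, policy) for _ in range(n)], axis=0)
    return totals[0] / totals[1]               # expected cost per unit time

for T in (10.0, 20.0, 40.0):
    print(f"T={T:>4}: once={cost_rate(T, 'once'):.3f}, "
          f"every={cost_rate(T, 'every'):.3f}")
```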

  20. Modelling of damage development and ductile failure in welded joints

    Nielsen, Kim Lau

    This thesis focuses on numerical analysis of damage development and ductile failure in welded joints. Two types of welds are investigated. First, a study of the localization of plastic flow and failure in aluminum sheets, welded by the relatively new Friction Stir (FS) Welding method, has been conducted ([P1], [P2], [P7]-[P9]). The focus is on FS-welded 2xxx and 6xxx series aluminum alloys, which are attractive, for example, to the aerospace industry, since the 2024 aluminum in particular is typically classified as un-weldable by conventional fusion welding techniques. Secondly, a study of the damage development in Resistance Spot Welded joints, when subjected to the commonly used static shear-lab or cross-tension testing techniques, has been carried out ([P3]-[P6]). Here the focus is on the Advanced High Strength Steel Dual-Phase 600.

  1. Modeling Freedom From Progression for Standard-Risk Medulloblastoma: A Mathematical Tumor Control Model With Multiple Modes of Failure

    Brodin, N. Patrik; Vogelius, Ivan R.; Björk-Eriksson, Thomas; Munck af Rosenschöld, Per; Bentzen, Søren M.

    2013-01-01

    Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern of failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern of relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used to model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy while maintaining 54 Gy to the posterior fossa was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%) respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimates of tumor control and time-to-progression for standard-risk MB provides a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk-stratification for example based on molecular biomarkers, when the necessary data become available

  2. UAV Swarm Behavior Modeling for Early Exposure of Failure Modes

    2016-09-01

    Modeling the UAV swarm mission in Monterey Phoenix (MP) proved to provide valuable insight into identifying failure modes and failsafe behaviors.

  3. Mechanical modelling of transient-to-failure SFR fuel cladding

    Feria, F.; Herranz, L. E.

    2014-07-01

    The response of Sodium Fast Reactor (SFR) fuel rods to transient accident conditions is an important safety concern. During transients, the cladding strain caused by the stress due to pellet-cladding mechanical interaction (PCMI) can lead to failure. Because SFR fuel rods are commonly clad with strengthened stainless steel (SS), the cladding is usually treated as an elastic-perfectly-plastic material. However, viscoplastic behaviour can contribute to mechanical strain at high temperature (> 1000 K). (Author)

  4. Using recurrent neural network models for early detection of heart failure onset.

    Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng

    2017-03-01

    We explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months.
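
The core of such a model is a GRU over sequences of event codes. Below is a minimal PyTorch sketch of that idea; the vocabulary size, dimensions, and random batch are illustrative assumptions, and the published model additionally handles time stamps, padding, and richer inputs.

```python
import torch
import torch.nn as nn

class HFRiskGRU(nn.Module):
    """Minimal GRU sequence classifier over medical event codes
    (an illustrative sketch, not the published architecture)."""
    def __init__(self, n_codes=5000, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_codes, emb, padding_idx=0)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, codes):                  # codes: (batch, seq_len) int64
        x = self.emb(codes)
        _, h = self.gru(x)                     # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)    # logit of P(incident HF)

model = HFRiskGRU()
batch = torch.randint(1, 5000, (8, 120))       # 8 patients, 120 events each
logits = model(batch)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (8,)).float())
print(logits.shape, float(loss))
```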

  5. A model for quantification of temperature profiles via germination times

    Pipper, Christian Bressen; Adolf, Verena Isabelle; Jacobsen, Sven-Erik

    2013-01-01

    Current methodology to quantify temperature characteristics in germination of seeds is predominantly based on analysis of the time to reach a given germination fraction, that is, the quantiles in the distribution of the germination time of a seed. In practice, interpolation between observed germination fractions at given monitoring times is used to obtain the time to reach a given germination fraction. As a consequence, the obtained value will be highly dependent on the actual monitoring scheme used in the experiment. In this paper, a link between currently used quantile models for the germination time and a specific type of accelerated failure time models is provided. As a consequence, the observed number of germinated seeds at given monitoring times may be analysed directly by a grouped time-to-event model, from which characteristics of the temperature profile may be identified and estimated.

  6. The analysis of failure data in the presence of critical and degraded failures

    Haugen, Knut; Hokstad, Per; Sandtorv, Helge

    1997-01-01

    Reported failures are often classified into severity classes, e.g., as critical or degraded. The critical failures correspond to loss of function(s) and are those of main concern. The rate of critical failures is usually estimated by the number of observed critical failures divided by the exposure time, thus ignoring the observed degraded failures. In the present paper, failure data are analyzed applying an alternative estimate for the critical failure rate that also takes the number of observed degraded failures into account. The model includes two alternative failure mechanisms: one of the shock type, immediately leading to a critical failure; the other resulting in a gradual deterioration, leading to a degraded failure before the critical failure occurs. Failure data on safety valves from the OREDA (Offshore REliability DAta) database are analyzed using this model. The estimate for the critical failure rate is obtained and compared with the standard estimate.

  7. Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

    Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud

    2018-01-01

    Background: Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement of many organizations. For prioritizing failures, the “risk priority number (RPN)” index is used, largely for its ease of use, based on subjective evaluations of the occurrence, severity, and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a combined quantitative and qualitative approach. In the qualitative domain, focus group discussions were used to collect data; a quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined by (1) defining inclusion criteria covering severity of incident (clinical effect, claim consequence, waste of time, and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk), and preventability (degree of preventability and defensive barriers), and then (2) quantifying the risk priority criteria using the RPN index (361 for the highest-rated failure). The ability of the improved RPN scores, reassessed by root cause analysis, showed some variations. Conclusions: We concluded that standard criteria should be developed consistent with clinical language and specialty-specific scientific fields. Therefore, cooperation and partnership between technical and clinical groups are necessary to modify these models. PMID:29441184
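
The conventional index multiplies three 1-10 scores, RPN = severity × occurrence × detectability, while the revision described above scores several sub-criteria per factor before combining them. The sketch below illustrates this with a simple mean as the aggregation rule; the failure modes, sub-scores, and aggregation are all illustrative assumptions, not the study's data or exact formula.

```python
# Each failure mode carries sub-scores (1-10) for severity, occurrence, and
# preventability; sub-scores are aggregated here by a simple mean (assumption)
# before the usual RPN-style multiplication.
failure_modes = {
    # name: (severity sub-scores, occurrence sub-scores, preventability sub-scores)
    "wrong-site marking missed": ((9, 7, 5, 6), (4, 5), (6, 7)),
    "consent form incomplete":   ((4, 8, 3, 2), (6, 7), (3, 4)),
    "patient ID not verified":   ((10, 9, 4, 7), (3, 4), (8, 8)),
}

def mean(xs):
    return sum(xs) / len(xs)

rpn = {name: mean(s) * mean(o) * mean(p)
       for name, (s, o, p) in failure_modes.items()}

for name, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{name:28s} RPN = {score:6.1f}")
```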

  8. Revised risk priority number in failure mode and effects analysis model from the perspective of healthcare system

    Fatemeh Rezaei

    2018-01-01

    Full Text Available Background: Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement of many organizations. For prioritizing failures, the “risk priority number (RPN)” index is used, largely for its ease of use, based on subjective evaluations of the occurrence, severity, and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a combined quantitative and qualitative approach. In the qualitative domain, focus group discussions were used to collect data; a quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined by (1) defining inclusion criteria covering severity of incident (clinical effect, claim consequence, waste of time, and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk), and preventability (degree of preventability and defensive barriers), and then (2) quantifying the risk priority criteria using the RPN index (361 for the highest-rated failure). The ability of the improved RPN scores, reassessed by root cause analysis, showed some variations. Conclusions: We concluded that standard criteria should be developed consistent with clinical language and specialty-specific scientific fields. Therefore, cooperation and partnership between technical and clinical groups are necessary to modify these models.

  9. Comparison of US/FRG accident condition models for HTGR fuel failure and radionuclide release

    Verfondern, K.

    1991-03-01

    The objective was to compare calculation models used in safety analyses in the US and FRG which describe fission product release behavior from TRISO coated fuel particles under core heatup accident conditions. The first step performed is the qualitative comparison of both sides' fuel failure and release models in order to identify differences and similarities in modeling assumptions and inputs. Assumptions about possible particle failure mechanisms under accident conditions (SiC degradation, pressure vessel failure) are principally the same on both sides, though they are used in different modeling approaches. The characterization of a standard (= intact) coated particle as non-releasing (GA) or possibly releasing (KFA/ISF) is one of the major qualitative differences. Similar models are used regarding radionuclide release from exposed particle kernels. In a second step, a quantitative comparison of the calculation models was made by assessing a benchmark problem predicting particle failure and radionuclide release under MHTGR conduction cooldown accident conditions. Calculations with each side's reference method have come to almost the same failure fractions after 250 hours for the core region with maximum core heatup temperature, despite the different modeling approaches of SORS and PANAMA-I. The comparison of the particle failure results obtained with the Integrated Failure and Release Model for Standard Particles and its revision provides a 'verification' of these models in the sense that the codes (SORS and PANAMA-II and -III, respectively), which were independently developed, lead to very good agreement in the predictions. (orig./HP) [de

  10. Applicability of out-of-pile fretting wear tests to in-reactor fretting wear-induced failure time prediction

    Kim, Kyu-Tae

    2013-02-01

    In order to investigate whether grid-to-rod fretting wear-induced fuel failure will occur for newly developed spacer grid spring designs over the fuel lifetime, out-of-pile fretting wear tests with one or two fuel assemblies are performed. In this study, out-of-pile fretting wear tests were performed to compare the potential for wear-induced fuel failure in two newly developed Korean PWR spacer grid designs. Lasting 20 days, the tests simulated maximum grid-to-rod gap conditions and the worst flow-induced vibration effects that might take place over the fuel lifetime. The fuel rod perforation times calculated from the out-of-pile tests are greater than 1933 days for 2 μm oxidized fuel rods with a 100 μm grid-to-rod gap, whereas those estimated from the in-reactor fretting wear failure database may be in the range of 60 to 100 days. This large discrepancy in fuel rod perforation may arise from irradiation-induced changes in the cladding oxide microstructure on the one hand, and a temperature-gradient-induced hydrogen content profile across the cladding metal region on the other, which may accelerate brittleness in the grid-contacting cladding oxide and metal regions during reactor operation. A three-phase grid-to-rod fretting wear model is proposed to simulate in-reactor fretting wear progress into the cladding, considering the microstructure changes of the cladding oxide and the hydrogen content profile across the cladding metal region combined with the temperature gradient. The out-of-pile tests cannot be directly applied to the prediction of in-reactor fretting wear-induced cladding perforations; they can be used only for evaluating the relative wear resistance of one grid design against another.

  11. Comparison of a fuel sheath failure model with published experimental data

    Varty, R.L.; Rosinger, H.E.

    1982-01-01

    A fuel sheath failure model has been compared with the published results of experiments in which a Zircaloy-4 fuel sheath was subjected to a temperature ramp and a differential pressure until failure occurred. The model assumes that the deformation of the sheath is controlled by steady-state creep and that there is a relationship between tangential stress and temperature at the instant of failure. The sheath failure model predictions agree reasonably well with the experimental data. The burst temperature is slightly overpredicted by the model. The burst strain is overpredicted for small experimental burst strains but is underpredicted otherwise. The reasons for these trends are discussed and the extremely wide variation in burst strain reported in the literature is explained using the model

  12. Effect of Remote Back-Up Protection System Failure on the Optimum Routine Test Time Interval of Power System Protection

    Y Damchi

    2013-12-01

    Full Text Available Appropriate operation of the protection system is one of the factors needed for desirable reliability in power systems, and it vitally requires routine testing of the protection system. Precise determination of the optimum routine test time interval (ORTTI) plays a vital role in predicting the maintenance costs of the protection system. In most previous studies, ORTTI has been determined while the remote back-up protection system was considered fully reliable. This assumption is not exactly correct, since the remote back-up protection system may operate incorrectly or fail to operate, just like the primary protection system. Therefore, in order to determine the ORTTI, an extended Markov model is proposed in this paper that considers the failure probability of the remote back-up protection system. In the proposed Markov model of the protection systems, the monitoring facility is taken into account. Moreover, it is assumed that the primary and back-up protection systems are maintained simultaneously. Results show that the effect of remote back-up protection system failures on the reliability indices and optimum routine test intervals of the protection system is considerable.
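
The trade-off underlying routine-test optimization can be seen in a much simpler model than the paper's Markov chain: hidden failures accumulate unavailability at roughly λT/2 over a test interval T, while testing itself contributes about τ/T, giving U(T) ≈ λT/2 + τ/T with optimum T* = √(2τ/λ). The sketch below uses assumed values for λ and τ and ignores back-up protection entirely; it is a stand-in for intuition, not the proposed model.

```python
import numpy as np

lam = 1e-4      # hidden failure rate of the protection system (1/h), assumed
tau = 2.0       # effective test-caused unavailability per test (h), assumed

# Mean unavailability under periodic testing: U(T) ~ lam * T / 2 + tau / T
T = np.linspace(50, 5000, 2000)
U = lam * T / 2 + tau / T

T_star = np.sqrt(2 * tau / lam)               # analytic minimizer of U(T)
print(f"numeric optimum  ~ {T[np.argmin(U)]:.0f} h")
print(f"analytic optimum = {T_star:.0f} h, U(T*) = {np.sqrt(2 * lam * tau):.4f}")
```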

  13. Modeling cascading failures with the crisis of trust in social networks

    Yi, Chengqi; Bao, Yuanyuan; Jiang, Jingchi; Xue, Yibo

    2015-10-01

    In social networks, some friends often post or disseminate malicious information, such as advertising messages, informal overseas purchasing messages, illegal messages, or rumors. Too much malicious information may cause a feeling of intense annoyance. When the feeling exceeds a certain threshold, it will lead social network users to distrust these friends, which we call the crisis of trust. The crisis of trust in social networks has already become a universal concern and an urgent unsolved problem. As a result of the crisis of trust, users will cut off their relationships with some of their untrustworthy friends. Once a few of these relationships are made unavailable, it is likely that other friends will decline in trust, and a large portion of the social network will be influenced. The phenomenon in which the unavailability of a few relationships triggers the failure of successive relationships is known as cascading failure dynamics. To the best of our knowledge, no one has formally proposed cascading failure dynamics with the crisis of trust in social networks. In this paper, we address this potential issue, quantify the trust between two users based on user similarity, and model the minimum tolerance with a nonlinear equation. Furthermore, we construct the process of cascading failure dynamics by considering the unique features of social networks. Based on real social network datasets (Sina Weibo, Facebook and Twitter), we adopt two attack strategies (the highest trust attack (HT) and the lowest trust attack (LT)) to evaluate the proposed dynamics and to further analyze the changes of the topology, connectivity, cascading time and cascade effect under the above attacks. We numerically find that a sparse and inhomogeneous network structure in our cascading model better improves the robustness of social networks than a dense and homogeneous structure. However, a network structure that resembles ripples is more vulnerable than the other two network structures.

  14. Crack phantoms: localized damage correlations and failure in network models of disordered materials

    Zaiser, M; Moretti, P; Lennartz-Sassinek, S

    2015-01-01

    We study the initiation of failure in network models of disordered materials such as random fuse and spring models, which serve as idealized representations of fracture processes in quasi-two-dimensional, disordered material systems. We consider two different geometries, namely rupture of thin sheets and delamination of thin films, and demonstrate that irrespective of geometry and implementation of the disorder (random failure thresholds versus dilution disorder) failure initiation is associated with the emergence of typical localized correlation structures in the damage patterns. These structures (‘crack phantoms’) exhibit well-defined characteristic lengths, which relate to the failure stress by scaling relations that are typical for critical crack nuclei in disorder-free materials. We discuss our findings in view of the fundamental nature of failure processes in materials with random microstructural heterogeneity. (paper)

  15. Physical and theoretical modeling of rock slopes against block-flexure toppling failure

    Mehdi Amini

    2015-12-01

    Full Text Available Block-flexure is the most common mode of toppling failure in natural and excavated rock slopes. In such failure, some rock blocks break due to tensile stresses and some overturn under their own weights and then all of them topple together. In this paper, first, a brief review of previous studies on toppling failures is presented. Then, the physical and mechanical properties of experimental modeling materials are summarized. Next, the physical modeling results of rock slopes with the potential of block-flexural toppling failures are explained and a new analytical solution is proposed for the stability analysis of such slopes. The results of this method are compared with the outcomes of the experiments. The comparative studies show that the proposed analytical approach is appropriate for the stability analysis of rock slopes against block-flexure toppling failure. Finally, a real case study is used for the practical verification of the suggested method.

  16. An Integrated Model to Predict Corporate Failure of Listed Companies in Sri Lanka

    Nisansala Wijekoon

    2015-07-01

    Full Text Available The primary objective of this study is to develop an integrated model to predict corporate failure of listed companies in Sri Lanka. Logistic regression analysis was applied to a data set of 70 matched pairs of failed and non-failed companies listed on the Colombo Stock Exchange (CSE) in Sri Lanka over the period 2002 to 2010. A total of fifteen financial ratios and eight corporate governance variables were used as predictors of corporate failure. The statistical testing results indicated that the model consisting of both corporate governance variables and financial ratios improved the prediction accuracy to 88.57 per cent one year prior to failure. Furthermore, the predictive accuracy of this model in all three years prior to failure is above 80 per cent; hence the model is robust in obtaining accurate results for up to three years prior to failure. It was further found that two financial ratios, working capital to total assets and cash flow from operating activities to total assets, and two corporate governance variables, the outside director ratio and the presence of a company audit committee, have the most explanatory power to predict corporate failure. Therefore, the model developed in this study can assist investors, managers, shareholders, financial institutions, auditors and regulatory agents in Sri Lanka to forecast corporate failure of listed companies.
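
A minimal version of such a prediction model is a logistic regression over financial ratios and governance variables. The sketch below uses synthetic data; the predictors, coefficients, and sample sizes are illustrative assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
n = 140  # 70 matched pairs

# illustrative predictors: two financial ratios and two governance variables
X = np.column_stack([
    rng.normal(0.1, 0.2, n),      # working capital / total assets
    rng.normal(0.05, 0.1, n),     # operating cash flow / total assets
    rng.uniform(0, 1, n),         # outside director ratio
    rng.integers(0, 2, n),        # audit committee present (0/1)
])
# synthetic outcome loosely tied to the predictors (illustration only)
logit = 1.6 - 4 * X[:, 0] - 6 * X[:, 1] - 1.2 * X[:, 2] - 0.8 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = failed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("coefficients:", np.round(clf.coef_[0], 2))
print("hold-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```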

  17. Genetic variants of age at menopause are not related to timing of ovarian failure in breast cancer survivors.

    Homer, Michael V; Charo, Lindsey M; Natarajan, Loki; Haunschild, Carolyn; Chung, Karine; Mao, Jun J; DeMichele, Angela M; Su, H Irene

    2017-06-01

    To determine if interindividual genetic variation in single-nucleotide polymorphisms (SNPs) related to age at natural menopause is associated with risk of ovarian failure in breast cancer survivors. A prospective cohort of 169 premenopausal breast cancer survivors recruited at diagnosis with stages 0 to III disease were followed longitudinally for menstrual pattern via self-reported daily menstrual diaries. Participants were genotyped for 13 SNPs previously found to be associated with age at natural menopause: EXO1, TLK1, HELQ, UIMC1, PRIM1, POLG, TMEM224, BRSK1, and MCM8. A risk variable summed the total number of risk alleles in each participant. The association between individual genotypes, and also the risk variable, and time to ovarian failure (>12 months of amenorrhea) was tested using time-to-event methods. Median age at enrollment was 40.5 years (range 20.6-46.1). The majority of participants were white (69%) and underwent chemotherapy (76%). Thirty-eight participants (22%) experienced ovarian failure. None of the candidate SNPs or the summary risk variable was significantly associated with time to ovarian failure. Sensitivity analysis restricted to whites or only to participants receiving chemotherapy yielded similar findings. Older age, chemotherapy exposure, and lower body mass index were related to shorter time to ovarian failure. Thirteen previously identified genetic variants associated with time to natural menopause were not related to timing of ovarian failure in breast cancer survivors.

  18. Variable selection for mixture and promotion time cure rate models.

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO) based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing.
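
Mixture and promotion-time cure models with LASSO penalties are not available off the shelf, but the shrinkage-based selection idea can be illustrated with an L1-penalized Cox model in lifelines; this is a stand-in for, not an implementation of, the proposed methods, and the penalizer value is an assumption.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                     # standard example survival dataset

# L1 (lasso) penalized Cox fit: coefficients of weak predictors shrink
# toward zero, mimicking shrinkage-based variable selection.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.params_.round(3))           # near-zero entries are 'deselected'
```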

  19. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    Yoshida, Yoshitaka; Ohtani, Masanori; Fujita, Yushi

    2002-01-01

    In nuclear power plants, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment in PSA. However, there has so far been no quantification method suitable for PSA to obtain the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based error. In this work, we developed a new method for quantification of the decision-making failure probability of an emergency organization deciding an AM strategy at a severe accident, using a cognitive analysis model, and applied it to a typical pressurized water reactor (PWR) plant. As a result: (1) the decision-making failure probability adjusted to PSA could be quantified by general analysts, who do not necessarily possess professional human factors knowledge, by choosing suitable values of a basic failure probability and an error factor. (2) The decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and 0.10 to 0.19 using the detailed evaluation method, based on a severe accident analysis of a typical PWR plant; in a sensitivity analysis of the conservative assumption, the failure probability decreased by about 50%. (3) Theoretically, the failure probability from the screening evaluation method exceeded that from the detailed evaluation method with 99% probability, and for the AMs in this study it exceeded it in all cases. From this result, it was shown that the screening evaluation method gives more conservative decision-making failure probabilities than the detailed evaluation method, and the screening evaluation method satisfied

  20. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    Yoshida, Yoshitaka; Ohtani, Masanori [Institute of Nuclear Safety System, Inc., Mihama, Fukui (Japan); Fujita, Yushi [TECNOVA Corp., Tokyo (Japan)

    2002-09-01

    In nuclear power plants, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment in PSA. However, there has so far been no quantification method suitable for PSA to obtain the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based error. In this work, we developed a new method for quantification of the decision-making failure probability of an emergency organization deciding an AM strategy at a severe accident, using a cognitive analysis model, and applied it to a typical pressurized water reactor (PWR) plant. As a result: (1) the decision-making failure probability adjusted to PSA could be quantified by general analysts, who do not necessarily possess professional human factors knowledge, by choosing suitable values of a basic failure probability and an error factor. (2) The decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and 0.10 to 0.19 using the detailed evaluation method, based on a severe accident analysis of a typical PWR plant; in a sensitivity analysis of the conservative assumption, the failure probability decreased by about 50%. (3) Theoretically, the failure probability from the screening evaluation method exceeded that from the detailed evaluation method with 99% probability, and for the AMs in this study it exceeded it in all cases. From this result, it was shown that the screening evaluation method gives more conservative decision-making failure probabilities than the detailed evaluation method, and the screening evaluation method satisfied

  1. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations

    Coolen, F.P.A.

    1997-01-01

    This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, which can also be considered a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach essentially nonparametric. New results for the model are presented for right-censored observations, where estimation based on this model is closely related to the product-limit estimator, an important statistical method for reliability or survival data that include right-censored observations. Like the product-limit estimator, the model aims to use no information other than that provided by the observed data, but it fits into the robust Bayesian framework, which has the advantage that all inferences can be based on probabilities or expectations, or on bounds for them. The model uses a finite partition of the time axis and, as such, is also related to life tables
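
    As a point of reference for the product-limit estimator discussed above, here is a minimal sketch in Python (the toy failure/censoring data are invented; this is the standard Kaplan-Meier computation, not the paper's imprecise-Dirichlet generalization):

        import numpy as np

        def product_limit(times, events):
            """Product-limit (Kaplan-Meier) estimate of the survival function.
            times: failure or censoring times; events: 1 = failure, 0 = right-censored."""
            times, events = np.asarray(times, float), np.asarray(events, int)
            estimate, s = [], 1.0
            for t in np.unique(times[events == 1]):
                at_risk = np.sum(times >= t)                   # still under observation
                deaths = np.sum((times == t) & (events == 1))
                s *= 1.0 - deaths / at_risk                    # product-limit update
                estimate.append((t, s))
            return estimate

        print(product_limit([2, 3, 3, 5, 8, 8, 10], [1, 1, 0, 1, 0, 1, 0]))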

  2. FRESS pin failure model and its application to E-8 TREAT test

    Kalimullah.

    1979-01-01

    FRESS is a cladding rupture prediction model for an irradiated mixed-oxide LMFBR fuel pin during transient heating, based only on the internal pressurization of the cladding by the fission gas released from fuel grains during the transient. The model is applied to the analysis of the hottest PNL-10-53 pin in the 7-pin E-8 TREAT test, which simulates a $3/sec transient overpower. Although the uncertainties of the inputs to the temperature calculation done with the COBRA code have not been included, the uncertain input parameters to FRESS have been varied over their estimated uncertainties. The cladding rupture predictions are a few tens of milliseconds late compared to the most probable failure time detected in the test. However, these calculations seem to indicate that fission gas pressure is a significant mechanism for causing clad rupture in this test

  3. A phenomenological variational multiscale constitutive model for intergranular failure in nanocrystalline materials

    Siddiq, A.; El Sayed, Tamer S.

    2013-01-01

    We present a variational multiscale constitutive model that accounts for intergranular failure in nanocrystalline fcc metals due to void growth and coalescence in the grain boundary region. Following previous work by the authors, a nanocrystalline

  4. Learned Helplessness: A Model to Understand and Overcome a Child's Extreme Reaction to Failure.

    Balk, David

    1983-01-01

    The author reviews literature on children's reactions to perceived failure and offers "learned helplessness" as a model to explain why a child who makes a mistake gives up. Suggestions for preventing these reactions are given. (Author/JMK)

  5. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and the type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into ones and zeros; the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users' understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
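
    To make the binarization and rule-creation steps concrete, the following is a minimal Python sketch of support/confidence rule mining in the Apriori style (the toy failure log and thresholds are invented; the paper's actual Lattice-model implementation in R is not reproduced):

        from itertools import combinations

        # Toy binarized machine-failure log: each row is one failure event,
        # flagging the causes and the failure type that co-occurred.
        records = [
            {"overheat", "bearing_wear", "spindle_fault"},
            {"overheat", "spindle_fault"},
            {"coolant_low", "overheat", "spindle_fault"},
            {"bearing_wear", "belt_slip"},
        ]

        def support(itemset):
            return sum(itemset <= r for r in records) / len(records)

        # Apriori-style passes: frequent single items, then frequent pairs.
        items = {i for r in records for i in r}
        frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= 0.5]
        pairs = [a | b for a, b in combinations(frequent, 2) if support(a | b) >= 0.5]

        # IF-THEN rules with confidence = support(antecedent and consequent) / support(antecedent)
        for p in pairs:
            for ante in p:
                conf = support(p) / support(frozenset([ante]))
                if conf >= 0.8:
                    print(f"IF {ante} THEN {set(p - {ante})} (conf={conf:.2f})")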

  6. Organization-and-technological model of medical care delivered to patients with chronic heart failure

    Kiselev A.R.

    2014-09-01

    An organization-and-technological model of medical care delivered to patients with chronic heart failure, based on the IDEF0 methodology and corresponding to clinical guidelines, is presented.

  7. Enhancement of Physics-of-Failure Prognostic Models with System Level Features

    Kacprzynski, Gregory

    2002-01-01

    .... The novelty in the current prognostic tool development is that predictions are made through the fusion of stochastic physics-of-failure models, relevant system or component level health monitoring...

  8. Flight Control Failure Detection and Control Redistribution Using Multiple Model Adaptive Estimation with Filter Spawning

    Torres, Michael

    2002-01-01

    ...) are used together to identify failures and apply appropriate corrections. This effort explores the performance of the MMAE/FS/CR in different regions of the flight envelope using model and gain scheduling...

  9. ARRA: Reconfiguring Power Systems to Minimize Cascading Failures - Models and Algorithms

    Dobson, Ian [Iowa State University; Hiskens, Ian [Unversity of Michigan; Linderoth, Jeffrey [University of Wisconsin-Madison; Wright, Stephen [University of Wisconsin-Madison

    2013-12-16

    Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines.

  10. Prediction of failure in tube hydrofonning process using a damage model

    Majzoobi, G. H.; Saniee, F. Freshteh; Shirazi, A.

    2007-01-01

    In the tube hydroforming process (THP), two types of loading, internal pressure and axial feeding, and in particular the combination of them, are needed to feed the material into the cavities of the die and form the workpiece into the desired shape. If the variation of pressure versus axial feeding is not determined properly, the workpiece may buckle, wrinkle, or burst during THP. The appropriate variation is normally determined by experiment, which is expensive and time-consuming. In this work, numerical simulations using the Johnson-Cook models for predicting the elasto-plastic response and the failure of the material are employed to obtain the best combination of internal pressure and axial feeding. The numerical simulations are checked against a number of experiments conducted in the present investigation. The results show very close agreement between the numerical simulations and the experiments, suggesting that numerical simulation using the Johnson-Cook material and failure models provides a valuable tool for examining the different parameters involved in THP
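
    For reference, the Johnson-Cook constitutive and failure models named above are usually written in the following textbook forms (a sketch of the standard expressions, not the specific calibration used in the paper; A, B, C, n, m and D_1-D_5 are fitted material constants):

        \sigma = \left(A + B\,\varepsilon_p^{\,n}\right)\left(1 + C\,\ln\dot{\varepsilon}^{*}\right)\left(1 - T^{*\,m}\right),
        \qquad
        \varepsilon_f = \left[D_1 + D_2\,e^{D_3\,\sigma^{*}}\right]\left[1 + D_4\,\ln\dot{\varepsilon}^{*}\right]\left[1 + D_5\,T^{*}\right]

    where \varepsilon_p is the equivalent plastic strain, \dot{\varepsilon}^{*} the dimensionless strain rate, T^{*} the homologous temperature, and \sigma^{*} the stress triaxiality; failure is typically flagged when the accumulated damage D = \sum \Delta\varepsilon_p / \varepsilon_f reaches unity.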

  11. High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture

    2017-11-22

    The views, opinions and/or findings contained in this report are those of the author(s). Report as of 05-Dec-2017. Agreement Number: W911NF-13-1-0238. Organization: Columbia University. Title: High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture.

  12. Using the failure mode and effects analysis model to improve parathyroid hormone and adrenocorticotropic hormone testing

    Magnezi R

    2016-12-01

    Racheli Magnezi,1 Asaf Hemi,1 Rina Hemi2. 1Department of Management, Public Health and Health Systems Management Program, Bar Ilan University, Ramat Gan; 2Endocrine Service Unit, Sheba Medical Center, Tel Aviv, Israel. Background: Risk management in health care systems applies to all hospital employees and directors, as they deal with human life and emergency routines. There is a constant need to decrease risk and increase patient safety in the hospital environment. The purpose of this article is to review the laboratory testing procedures for parathyroid hormone and adrenocorticotropic hormone (which are characterized by short half-lives), to track failure modes and risks, and to offer solutions to prevent them. During a routine quality improvement review at the Endocrine Laboratory in Tel Hashomer Hospital, we discovered these tests are frequently repeated unnecessarily due to multiple failures. The repetition of the tests inconveniences patients and leads to extra work for the laboratory and logistics personnel, as well as for the nurses and doctors who have to perform many tasks with limited resources. Methods: A team of eight staff members, accompanied by the Head of the Endocrine Laboratory, analyzed the laboratory testing procedure using the failure mode and effects analysis (FMEA) model, which was designed to simplify the process steps and to identify and rank possible failures. Results: A total of 23 failure modes were found within the process, 19 of which were ranked by level of severity. The FMEA model prioritizes failures by their risk priority number (RPN). For example, the most serious failure was the delay after the samples were collected from the department (RPN = 226). Conclusion: This model helped us to visualize the process in a simple way. After analyzing the information, solutions were proposed to prevent failures, and a method to completely avoid the top four problems was also developed. Keywords: failure mode
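
    The RPN bookkeeping described above is simple enough to sketch directly (the failure modes and the 1-10 severity/occurrence/detection scores below are invented for illustration):

        # Each failure mode gets severity (S), occurrence (O) and detection (D) scores
        # on 1-10 scales; RPN = S * O * D, and the total RPN sums over all modes.
        failure_modes = [
            ("sample delayed after collection", 8, 7, 4),
            ("tube stored at room temperature", 9, 3, 5),
            ("patient ID mislabeled",           9, 2, 3),
        ]

        ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
        for name, s, o, d in ranked:
            print(f"RPN={s * o * d:4d}  {name}")
        print("total RPN:", sum(s * o * d for _, s, o, d in failure_modes))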

  13. Mathematical modeling of a multi-product EMQ model with an enhanced end items issuing policy and failures in rework.

    Chiu, Yuan-Shyi Peter; Sung, Peng-Cheng; Chiu, Singa Wang; Chou, Chung-Li

    2015-01-01

    This study uses mathematical modeling to examine a multi-product economic manufacturing quantity (EMQ) model with an enhanced end-items issuing policy and rework failures. We assume that a multi-product EMQ model randomly generates nonconforming items. All of the defective items are reworked, but a certain portion fails and becomes scrap. When the rework process ends and the entire lot of each product is quality assured, a cost-reducing n + 1 end-items issuing policy is used to transport the finished items of each product. As a result, a closed-form optimal production cycle time is obtained. A numerical example demonstrates the practical usage of our result and confirms a significant savings in stock holding and overall production costs as compared to that of a prior work (Chiu et al. in J Sci Ind Res India, 72:435-440 2013) in the literature.
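
    For orientation only, the classic single-product economic production quantity (EPQ) cycle time, of which the paper's multi-product closed form with rework and scrap is a generalization, reads

        T^{*} = \sqrt{\dfrac{2K}{h\,\lambda\,\left(1 - \lambda/P\right)}}

    where K is the setup cost per cycle, h the unit holding cost rate, \lambda the demand rate, and P > \lambda the production rate. The paper's actual expression, which also accounts for rework failures and the n + 1 issuing policy, is not reproduced in the abstract.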

  14. Patients with heart failure and their partners with chronic illness: interdependence in multiple dimensions of time

    Nimmon L

    2018-03-01

    Laura Nimmon,1,2 Joanna Bates,1,3 Gil Kimel,4,5 Lorelei Lingard,6 on behalf of the Heart Failure/Palliative Care Teamwork Research Group. 1Centre for Health Education Scholarship, 2Department of Occupational Science and Occupational Therapy, 3Department of Family Practice, Faculty of Medicine, University of British Columbia; 4Palliative Care Program, St Paul's Hospital; 5Department of Medicine, Division of Internal Medicine, University of British Columbia, Vancouver, BC; 6Centre for Education Research and Innovation, Department of Medicine, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada. Background: Informal caregivers play a vital role in supporting patients with heart failure (HF). However, when both the HF patient and their long-term partner suffer from chronic illness, they may equally suffer from diminished quality of life and poor health outcomes. With the focus on this specific couple group as a dimension of the HF health care team, we explored this neglected component of supportive care. Materials and methods: From a large-scale Canadian multisite study, we analyzed the interview data of 13 HF patient-partner couples (26 participants). The sample consisted of patients with advanced HF and their long-term, live-in partners who also suffer from chronic illness. Results: The analysis highlighted the profound enmeshment of the couples. The couples' interdependence was exemplified in the ways they synchronized their experience in shared dimensions of time and adapted their day-to-day routines to accommodate each other's changing health status. Particularly significant was when both individuals were too ill to perform caregiving tasks, which resulted in the couples being in a highly fragile state. Conclusion: We conclude that the salience of this couple group's oscillating health needs and their severe vulnerabilities need to be appreciated when designing and delivering HF team-based care.

  15. The development and application of overheating failure model of FBR steam generator tubes. 3

    Miyake, Osamu; Hamada, Hirotsugu; Tanabe, Hiromi; Wada, Yusaku; Miyakawa, Akira; Okabe, Ayao; Nakai, Ryodai; Hiroi, Hiroshi

    2002-03-01

    A model has been developed for the assessment of overheating tube failure in the event of a sodium-water reaction accident in fast breeder reactor steam generators (SGs). The model has been applied to the Monju SG studies. Major results obtained in the studies are as follows: 1. To evaluate the structural integrity of the tube material, a strength standard for 2.25Cr-1Mo steel was established, taking account of time-dependent effects based on high-temperature (700-1200°C) creep data. This standard has been validated with tube rupture simulation test data. 2. The conditions for overheating by the high-temperature reaction were determined by use of SWAT-3 experimental data. Realistic local heating conditions (reaction zone temperature and related heat transfer conditions) for the sodium-water reaction were proposed as a cosine-shaped temperature profile. 3. For the cooling effects inside target tubes, LWR studies of critical heat flux (CHF) and post-CHF heat transfer correlations were examined and incorporated in the model. 4. The model has been validated with experimental data obtained from SWAT-3 and LLTR. The results were satisfactory, with conservatism. The PFR superheater leak event of 1987 was studied, and the cause of the event and the effectiveness of the improvements made after the leak event could be identified by the analysis. 5. The model has been applied to the Monju SG studies. It is consequently revealed that no tube failure occurs at 100%, 40%, and 10% water flow operating conditions when an initial leak is detected by the cover gas pressure detection system. (author)

  16. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview is given of the Multiple Sequential Failure (MSF) model and its use in probabilistic risk assessments (PRAs), depending on the available data. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows that it accommodates the dependent failure data remarkably well. The model, which provides a unique method for quantification of dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for performing the human reliability aspect of PRAs

  17. Predicting failure response of spot welded joints using recent extensions to the Gurson model

    Nielsen, Kim Lau

    2010-01-01

    The plug failure modes of resistance spot welded shear-lap and cross-tension test specimens are studied using recent extensions to the Gurson model. A comparison of the predicted mechanical response is presented when using either (i) the Gurson-Tvergaard-Needleman model (GTN model) or (ii) … The models are applied to predict failure of specimens containing a fully intact weld nugget as well as a partly removed weld nugget, to address the problems of shrinkage voids or larger weld defects. All analyses are carried out by full 3D finite element modelling.

  18. USGS approach to real-time estimation of earthquake-triggered ground failure - Results of 2015 workshop

    Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.

    2016-03-30

    The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.

  19. Novel risk stratification with time course assessment of in-hospital mortality in patients with acute heart failure.

    Takeshi Yagyu

    Patients with acute heart failure (AHF) show various clinical courses during hospitalization. We aimed to identify time-course predictors of in-hospital mortality and to establish a sequentially assessable risk model. We enrolled 1,035 consecutive AHF patients into derivation (n = 597) and validation (n = 438) cohorts. For risk assessment at admission, we utilized Get With the Guidelines-Heart Failure (GWTG-HF) risk scores. We examined significant predictors of in-hospital mortality from 11 variables obtained during hospitalization and developed a risk stratification model using multiple logistic regression analysis. Across both cohorts, 86 patients (8.3%) died during hospitalization. Using backward stepwise selection, we identified five time-course predictors: catecholamine administration and minimum platelet concentration, maximum blood urea nitrogen, total bilirubin, and C-reactive protein levels; from these we established a time-course risk score that can sequentially assess a patient's risk status. The addition of the time-course risk score improved the discriminative ability of the GWTG-HF risk score (c-statistics in the derivation and validation cohorts: 0.776 to 0.888 [p = 0.002] and 0.806 to 0.902 [p < 0.001], respectively). A calibration plot revealed a good relationship between observed and predicted in-hospital mortality in both cohorts (Hosmer-Lemeshow chi-square statistics: 6.049 [p = 0.642] and 5.993 [p = 0.648], respectively). In each group of initial low-intermediate risk (GWTG-HF risk score < 47) and initial high risk (GWTG-HF risk score ≥ 47), in-hospital mortality was about 6- to 9-fold higher in the high time-course risk score group than in the low-intermediate time-course risk score group (initial low-intermediate risk group: 20.3% versus 2.2% [p < 0.001]; initial high risk group: 57.6% versus 8.5% [p < 0.001]). A time-course assessment related to in-hospital mortality during the hospitalization of AHF patients can clearly categorize a patient's on
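
    A minimal sketch of this kind of two-stage risk scoring with scikit-learn (entirely synthetic data; the GWTG-HF score itself and the study's fitted coefficients are not reproduced):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 1000
        # Hypothetical features: an admission risk score plus in-hospital time-course values.
        admission_score = rng.normal(45, 10, n)          # stand-in for the GWTG-HF score
        time_course = rng.normal(0, 1, (n, 3))           # e.g. BUN, bilirubin, CRP trends
        logit = 0.08 * (admission_score - 45) + time_course @ np.array([0.9, 0.6, 0.4]) - 2.5
        died = rng.random(n) < 1 / (1 + np.exp(-logit))  # synthetic in-hospital deaths

        X_adm = admission_score.reshape(-1, 1)
        X_all = np.hstack([X_adm, time_course])

        for name, X in [("admission only", X_adm), ("admission + time course", X_all)]:
            model = LogisticRegression().fit(X, died)
            auc = roc_auc_score(died, model.predict_proba(X)[:, 1])   # c-statistic
            print(f"{name}: c-statistic = {auc:.3f}")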

  20. Estimation of Continuous Time Models in Economics: an Overview

    Clifford R. Wymer

    2009-01-01

    The dynamics of economic behaviour is often developed in theory as a continuous time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financia...

  1. Elastic deformation and failure in protein filament bundles: Atomistic simulations and coarse-grained modeling.

    Hammond, Nathan A; Kamm, Roger D

    2008-07-01

    The synthetic peptide RAD16-II has shown promise in tissue engineering and drug delivery. It has been studied as a vehicle for cell delivery and controlled release of IGF-1 to repair infarcted cardiac tissue, and as a scaffold to promote capillary formation for an in vitro model of angiogenesis. The structure of RAD16-II is hierarchical, with monomers forming long beta-sheets that pair together to form filaments; filaments form bundles approximately 30-60 nm in diameter; and branching networks of filament bundles form macroscopic gels. We investigate the mechanics of shearing between the two beta-sheets constituting one filament, and between cohered filaments of RAD16-II. This shear loading is found in filament bundle bending or in tensile loading of fibers composed of partial-length filaments. Molecular dynamics simulations show that time to failure is a stochastic function of applied shear stress and that, for a given loading time, behavior is elastic for sufficiently small shear loads. We propose a coarse-grained model based on Langevin dynamics that matches the molecular dynamics results and facilitates extending simulations in space and time. The model treats a filament as an elastic string of particles, each having a potential energy that is a periodic function of its position relative to the neighboring filament. With insight from these simulations, we discuss strategies for strengthening RAD16-II and similar materials.
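
    A minimal sketch of such a coarse-grained model: an overdamped Langevin chain whose particles are elastically coupled to their neighbours and sit in a potential that is periodic in the position relative to the adjacent filament (all parameter values are invented):

        import numpy as np

        rng = np.random.default_rng(1)
        N, steps, dt = 50, 20000, 1e-3
        k_spring, U0, period = 10.0, 1.0, 1.0     # intra-filament stiffness, substrate depth/period
        gamma, kT, shear = 1.0, 0.1, 0.3          # damping, thermal energy, applied shear force

        x = np.arange(N) * period                 # particle positions along the filament
        for _ in range(steps):
            # Elastic coupling to neighbours (natural spacing = period).
            stretch = np.diff(x) - period
            f_el = np.zeros(N)
            f_el[:-1] += k_spring * stretch
            f_el[1:] -= k_spring * stretch
            # Periodic force from the neighbouring filament, plus applied shear.
            f_sub = -U0 * (2 * np.pi / period) * np.sin(2 * np.pi * x / period)
            noise = np.sqrt(2 * kT * gamma / dt) * rng.normal(size=N)
            x += dt / gamma * (f_el + f_sub + shear + noise)   # Euler-Maruyama step

        slip = (x - np.arange(N) * period).mean()  # mean displacement relative to start
        print(f"mean slip after {steps * dt:.0f} time units: {slip:.2f} periods")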

  2. Some Improved Diagnostics for Failure of The Rasch Model.

    Molenaar, Ivo W.

    1983-01-01

    Goodness of fit tests for the Rasch model are typically large-sample, global measures. This paper offers suggestions for small-sample exploratory techniques for examining the fit of item data to the Rasch model. (Author/JKS)

  3. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Biased resistor network model for electromigration failure and related phenomena in metallic lines

    Pennetta, C.; Alfinito, E.; Reggiani, L.; Fantini, F.; Demunari, I.; Scorzoni, A.

    2004-11-01

    Electromigration phenomena in metallic lines are studied using a biased resistor network model. The void formation induced by the electron wind is simulated by a stochastic process of resistor breaking, while the growth of mechanical stress inside the line is described by an antagonistic process of recovery of the broken resistors. The model accounts for the existence of temperature gradients due to current crowding and Joule heating; alloying effects are also accounted for. Monte Carlo simulations allow a variety of relevant electromigration features to be studied within a unified theoretical framework. The predictions of the model are in excellent agreement with experiments, in particular with the degradation towards electrical breakdown of stressed Al-Cu thin metallic lines. Detailed investigations cover the damage pattern, the distribution of the times to failure (TTFs), the generalized Black's law, and the time evolution of the resistance, including the early-stage change due to alloying effects and the electromigration saturation appearing at low current densities or for short line lengths. The dependence of the TTFs on the length and width of the metallic line is also well reproduced. Finally, the model successfully describes the resistance noise properties under steady-state conditions.
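
    A drastically simplified, one-dimensional caricature of the break/recovery competition that produces a time-to-failure distribution (no Kirchhoff network solve, only current crowding across surviving parallel paths; all rates are invented):

        import numpy as np

        rng = np.random.default_rng(2)

        def time_to_failure(n_paths=100, break_rate=1e-3, heal_rate=5e-4):
            """One Monte Carlo history: break/heal parallel paths until the line opens.
            Breaking accelerates with local current density (current crowding)."""
            alive, t = n_paths, 0.0
            while alive > 0:
                j_local = n_paths / alive                      # survivors carry more current
                lam_break = alive * break_rate * j_local ** 2  # Joule-type acceleration
                lam_heal = (n_paths - alive) * heal_rate       # stress-driven recovery
                total = lam_break + lam_heal
                t += rng.exponential(1.0 / total)              # time to the next event
                alive += -1 if rng.random() < lam_break / total else 1
            return t

        ttfs = np.array([time_to_failure() for _ in range(200)])
        print(f"median TTF = {np.median(ttfs):.1f}, relative spread = {ttfs.std() / ttfs.mean():.2f}")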

  5. [Establishment of a D-galactosamine/lipopolysaccharide induced acute-on-chronic liver failure model in rats].

    Liu, Xu-hua; Chen, Yu; Wang, Tai-ling; Lu, Jun; Zhang, Li-jie; Song, Chen-zhao; Zhang, Jing; Duan, Zhong-ping

    2007-10-01

    To establish a practical and reproducible animal model of human acute-on-chronic liver failure for further study of the pathophysiological mechanism of acute-on-chronic liver failure and for drug screening and evaluation in its treatment. Immunological hepatic fibrosis was induced by human serum albumin in Wistar rats. In rats with early-stage cirrhosis (fibrosis stage IV), D-galactosamine and lipopolysaccharide were administered. Mortality and survival time were recorded in 20 rats. Ten rats were sacrificed at 4, 8, and 12 hours. Liver function tests and plasma cytokine levels were measured after D-galactosamine/lipopolysaccharide administration, and liver pathology was studied. Cell apoptosis was detected by the terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling assay. Most of the rats treated with human albumin developed cirrhosis and fibrosis, and 90% of them died from acute liver failure after administration of D-galactosamine/lipopolysaccharide, with a mean survival time of (16.1+/-3.7) hours. Liver histopathology showed massive or submassive necrosis of the regenerated nodules, while fibrosis septa were intact. Liver function tests were compatible with massive necrosis of hepatocytes. The plasma level of TNFalpha increased significantly, in parallel with the degree of hepatocyte apoptosis. Plasma IL-10 levels increased similarly, as seen in patients with acute-on-chronic liver failure. We established an animal model of acute-on-chronic liver failure by treating rats with human serum albumin and later with D-galactosamine and lipopolysaccharide. TNFalpha-mediated liver cell apoptosis plays a very important role in the pathogenesis of acute liver failure.

  6. Real-time adjustment of ventricular restraint therapy in heart failure.

    Ghanta, Ravi K; Lee, Lawrence S; Umakanthan, Ramanan; Laurence, Rita G; Fox, John A; Bolman, Ralph Morton; Cohn, Lawrence H; Chen, Frederick Y

    2008-12-01

    Current ventricular restraint devices do not allow for either the measurement or adjustment of ventricular restraint level. Periodic adjustment of restraint level post-device implantation may improve therapeutic efficacy. We evaluated the feasibility of an adjustable quantitative ventricular restraint (QVR) technique utilizing a fluid-filled polyurethane epicardial balloon to measure and adjust restraint level post-implantation guided by physiologic parameters. QVR balloons were implanted in nine ovine with post-infarction dilated heart failure. Restraint level was defined by the maximum restraint pressure applied by the balloon to the epicardium at end-diastole. An access line connected the balloon lumen to a subcutaneous portacath to allow percutaneous access. Restraint level was adjusted while left ventricular (LV) end-diastolic volume (EDV) and cardiac output was assessed with simultaneous transthoracic echocardiography. All nine ovine successfully underwent QVR balloon implantation. Post-implantation, restraint level could be measured percutaneously in real-time and dynamically adjusted by instillation and withdrawal of fluid from the balloon lumen. Using simultaneous echocardiography, restraint level could be adjusted based on LV EDV and cardiac output. After QVR therapy for 21 days, LV EDV decreased from 133+/-15 ml to 113+/-17 ml (p<0.05). QVR permits real-time measurement and physiologic adjustment of ventricular restraint therapy after device implantation.

  7. Multiple Indicator Stationary Time Series Models.

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  8. Fuel and coolant motions following pin failure: EPIC models and the PBE-5S experiment

    Garner, P.L.; Abramson, P.B.

    1979-01-01

    The EPIC computer code has been used to analyze the post-fuel-pin-failure behavior in the PBE-5S experiment performed at Sandia Laboratories. The effects of modeling uncertainties on the calculation are examined. The calculations indicate that the majority of the piston motion observed in the test is due to the initial pressurization of the coolant channel by fuel vapor at cladding failure. A more definitive analysis requires improvements in calculational capabilities and experiment diagnostics

  9. Effects of physical activity and sedentary time on the risk of heart failure.

    Young, Deborah Rohm; Reynolds, Kristi; Sidell, Margo; Brar, Somjot; Ghai, Nirupa R; Sternfeld, Barbara; Jacobsen, Steven J; Slezak, Jeffrey M; Caan, Bette; Quinn, Virginia P

    2014-01-01

    Although the benefits of physical activity for the risk of coronary heart disease are well established, less is known about its effects on heart failure (HF). The risk conferred by prolonged sedentary behavior on HF is unknown. The study cohort included 82,695 men aged ≥45 years from the California Men's Health Study without prevalent HF, who were followed up for 10 years. Physical activity, sedentary time, and behavioral covariates were obtained from questionnaires, and clinical covariates were determined from electronic medical records. Incident HF was identified through International Classification of Diseases, Ninth Revision codes recorded in electronic records. During a mean follow-up of 7.8 years (646,989 person-years), 3,473 men were diagnosed with HF. Controlling for sedentary time, sociodemographics, hypertension, diabetes mellitus, unfavorable lipid levels, body mass index, smoking, and diet, the hazard ratio of HF in the lowest physical activity category compared with the highest category was 1.52 (95% confidence interval [CI], 1.39-1.68). Those in the medium physical activity category were also at increased risk (hazard ratio, 1.17 [95% CI, 1.06-1.29]). Controlling for the same covariates and physical activity, the hazard ratio of HF in the highest sedentary category compared with the lowest was 1.34 (95% CI, 1.21-1.48). Medium sedentary time also conveyed risk (hazard ratio, 1.13 [95% CI, 1.04-1.24]). Results showed similar trends across white and Hispanic subgroups, body mass index categories, baseline hypertension status, and prevalent coronary heart disease. Both physical activity and sedentary time may be appropriate intervention targets for preventing HF.
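
    A minimal sketch of covariate-adjusted hazard modelling of this kind, assuming the lifelines library (synthetic data; the study's covariates and estimates are not reproduced):

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(3)
        n = 2000
        df = pd.DataFrame({
            "low_activity":   rng.integers(0, 2, n),
            "high_sedentary": rng.integers(0, 2, n),
            "age":            rng.normal(60, 8, n),
        })
        # Synthetic follow-up: hazard raised by low activity and high sedentary time.
        hazard = 0.01 * np.exp(0.4 * df["low_activity"] + 0.3 * df["high_sedentary"]
                               + 0.03 * (df["age"] - 60))
        t_event = rng.exponential(1.0 / hazard.to_numpy())
        df["duration"] = np.minimum(t_event, 10.0)   # administrative censoring at 10 years
        df["event"] = (t_event <= 10.0).astype(int)  # 1 = incident heart failure observed

        cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
        cph.print_summary()                          # hazard ratios with 95% CIs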

  10. Pile group program for full material modeling and progressive failure.

    2008-12-01

    Strain wedge (SW) model formulation has been used, in previous work, to evaluate the response of a single pile or a group of piles (including its : pile cap) in layered soils to lateral loading. The SW model approach provides appropriate prediction f...

  11. Successes and failures of the constituent quark model

    Lipkin, H.J.

    1982-01-01

    Our approach considers the model as a possible bridge between QCD and the experimental data and examines its predictions to see where these succeed and where they fail. We also attempt to improve the model by looking for additional simple assumptions which give better fits to the experimental data. But we avoid complicated models with too many ad hoc assumptions and too many free parameters; these can fit everything but teach us nothing. We define our constituent quark model by analogy with the constituent electron model of the atom and the constituent nucleon model of the nucleus. In the same way that an atom is assumed to consist only of constituent electrons and a central Coulomb field, and a nucleus only of constituent nucleons, hadrons are assumed to consist only of their constituent valence quarks, with no bag, no glue, no ocean, nor other constituents. Although these constituent models are oversimplified and neglect other constituents, we push them as far as we can. Atomic physics has photons and vacuum polarization as well as constituent electrons, but the constituent model is adequate for calculating most features of the spectrum when finer details like the Lamb shift are neglected. 54 references

  12. SWOT analysis of breach models for common dike failure mechanisms

    Peeters, P.; Van Hoestenberghe, T.; Vincke, L.; Visser, P.J.

    2011-01-01

    The use of breach models includes two tasks: predicting breach characteristics and estimating flow through the breach. Strengths and weaknesses as well as opportunities and threats of different simplified and detailed physically-based breach models are listed following theoretical and practical

  13. DCDS: A Real-time Data Capture and Personalized Decision Support System for Heart Failure Patients in Skilled Nursing Facilities.

    Zhu, Wei; Luo, Lingyun; Jain, Tarun; Boxer, Rebecca S; Cui, Licong; Zhang, Guo-Qiang

    2016-01-01

    Heart disease is the leading cause of death in the United States. Heart failure disease management can improve health outcomes for elderly community-dwelling patients with heart failure. This paper describes DCDS, a real-time data capture and personalized decision support system for a Randomized Controlled Trial Investigating the Effect of a Heart Failure Disease Management Program (HF-DMP) in Skilled Nursing Facilities (SNF), a study funded by the NIH National Heart, Lung, and Blood Institute (NHLBI). The HF-DMP involves proactive weekly monitoring, evaluation, and management following national HF guidelines. DCDS collects a wide variety of data, including 7 elements considered standard of care for patients with heart failure: documentation of left ventricular function, tracking of weight and symptoms, medication titration, discharge instructions, a 7-day follow-up appointment post-SNF discharge, and patient education. We present the design and implementation of DCDS and describe our preliminary testing results.

  14. Transit-time flow measurement as a predictor of coronary bypass graft failure at one year angiographic follow-up

    Lehnert, Per; Møller, Christian H; Damgaard, Sune

    2015-01-01

    BACKGROUND: Transit-time flow measurement (TTFM) is a commonly used intraoperative method for evaluation of coronary artery bypass graft (CABG) anastomoses. This study was undertaken to determine whether TTFM can also be used to predict graft patency at one year postsurgery. METHODS: Three hundred forty-five CABG patients with intraoperative graft flow measurements and one year angiographic follow-up were analyzed. Graft failure was defined as more than 50% stenosis, including the "string sign." Logistic regression analysis was used to analyze the risk of graft failure after one year based on graft vessel type, anastomotic configuration, and coronary artery size. RESULTS: Nine hundred eighty-two coronary anastomoses were performed, of which 12% had signs of graft failure at one year angiographic follow-up. In internal mammary arteries (IMAs), analysis showed a 4% decrease in graft failure…

  15. NEESROCK: A Physical and Numerical Modeling Investigation of Seismically Induced Rock-Slope Failure

    Applegate, K. N.; Wartman, J.; Keefer, D. K.; Maclaughlin, M.; Adams, S.; Arnold, L.; Gibson, M.; Smith, S.

    2013-12-01

    Worldwide, seismically induced rock-slope failures have been responsible for approximately 30% of the most significant landslide catastrophes of the past century. They are among the most common, dangerous, and still today, least understood of all seismic hazards. Seismically Induced Rock-Slope Failure: Mechanisms and Prediction (NEESROCK) is a major research initiative that fully integrates physical modeling (geotechnical centrifuge) and advanced numerical simulations (discrete element modeling) to investigate the fundamental mechanisms governing the stability of rock slopes during earthquakes. The research is part of the National Science Foundation-supported Network for Earthquake Engineering Simulation Research (NEES) program. With its focus on fractures and rock materials, the project represents a significant departure from the traditional use of the geotechnical centrifuge for studying soil, and pushes the boundaries of physical modeling in new directions. In addition to advancing the fundamental understanding of the rock-slope failure process under seismic conditions, the project is developing improved rock-slope failure assessment guidelines, analysis procedures, and predictive tools. Here, we provide an overview of the project, present experimental and numerical modeling results, discuss special considerations for the use of synthetic rock materials in physical modeling, and address the suitability of discrete element modeling for simulating the dynamic rock-slope failure process.

  16. Failure analysis of parameter-induced simulation crashes in climate models

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-08-01

    Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
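
    A minimal sketch of the failure-classification step with scikit-learn (synthetic stand-ins for the 18 parameters and for the failure region; the CCSM4/POP2 ensemble data are not reproduced):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        X = rng.uniform(0, 1, size=(1000, 18))        # sampled model-parameter vectors
        # Synthetic failure rule: crashes cluster in a corner of two mixing parameters.
        y = (X[:, 0] + X[:, 1] > 1.6) | (rng.random(1000) < 0.02)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Small committee of SVM classifiers trained on bootstrap resamples.
        probs = np.zeros(len(y_te))
        for seed in range(5):
            idx = np.random.default_rng(seed).integers(0, len(X_tr), len(X_tr))
            clf = make_pipeline(StandardScaler(), SVC(probability=True, C=2.0))
            clf.fit(X_tr[idx], y_tr[idx])
            probs += clf.predict_proba(X_te)[:, 1] / 5

        print(f"committee ROC AUC = {roc_auc_score(y_te, probs):.3f}")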

  17. Time domain series system definition and gear set reliability modeling

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, and time-dependent system configuration. The present paper first defines the time-domain series system, to which the traditional series system reliability model is not adequate. System-specific reliability modeling techniques are then proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, and treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by several numerical examples. - Highlights: • A new type of series system, the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
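
    A minimal sketch of the distinction the paper draws: in a z-tooth gear each tooth only engages every z-th mesh cycle, so a naive series formula that charges every cycle to every tooth overstates the accumulated damage (the Weibull parameters are invented):

        import numpy as np

        def weibull_rel(t, shape=8.0, scale=1e5):
            """Tooth reliability after t load cycles (two-parameter Weibull)."""
            return np.exp(-(t / scale) ** shape)

        z, t = 20, 1e5   # teeth per gear, gear mesh cycles of interest

        # Traditional series model: every component sees every cycle.
        r_naive = weibull_rel(t) ** z
        # Time-domain view: each tooth only engages every z-th mesh cycle.
        r_timedomain = weibull_rel(t / z) ** z

        print(f"naive series reliability:       {r_naive:.4f}")
        print(f"time-domain series reliability: {r_timedomain:.4f}")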

  18. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    Duffy, Stephen [Cleveland State Univ., Cleveland, OH (United States)

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  20. A Costing Analysis for Decision Making Grid Model in Failure-Based Maintenance

    Burhanuddin M. A.

    2011-01-01

    Background. In the current economic downturn, industries have to maintain good control of production costs to preserve their profit margins. The maintenance department, as an imperative unit in industry, should attain all maintenance data, process information instantaneously, and subsequently transform it into useful decisions, then act on the alternatives to reduce production cost. The Decision Making Grid model is used to identify strategies for maintenance decisions. However, the model has a limitation, as it considers only two factors: downtime and frequency of failures. In this study we consider a third factor, cost, for failure-based maintenance. The objective of this paper is to introduce formulae to estimate maintenance cost. Methods. Fishbone analysis conducted with the Ishikawa model and Decision Making Grid methods are used in this study to reveal underlying risk factors that delay failure-based maintenance. The goal of the study is to estimate the risk factor, that is, repair cost, and fit it into the Decision Making Grid model, which otherwise considers only frequency of failure and downtime in the analysis. This paper introduces repair cost as a third variable for the Decision Making Grid model. This approach gives better results in categorizing the machines, reducing cost, and boosting the earnings of the manufacturing plant. Results. We collected data from one of the food processing factories in Malaysia. From our empirical results, Machine C, Machine D, Machine F, and Machine I must be included in the Decision Making Grid model, even though their frequencies of failure and downtimes are less than those of Machine B and Machine N, based on the costing analysis. The case study and experimental results show that the cost analysis in the Decision Making Grid model gives more promising strategies in failure-based maintenance. Conclusions. The improvement of the Decision Making Grid model for decision analysis with costing analysis is our contribution in this paper for
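
    A minimal sketch of a three-factor Decision Making Grid placement (the thresholds and machine data are invented; the paper's costing formulae are not reproduced):

        # Rank machines into a Decision Making Grid using downtime, failure
        # frequency, and (the added third factor) repair cost.
        machines = {
            #            downtime_h  failures  repair_cost
            "Machine B": (120,        14,        3000),
            "Machine C": ( 40,         5,       18000),
            "Machine D": ( 35,         4,       15000),
            "Machine N": (110,        12,        2500),
        }

        def grid_cell(downtime, failures, cost,
                      dt_hi=80, freq_hi=10, cost_hi=10000):
            level = lambda v, hi: "high" if v >= hi else "low"
            return (level(downtime, dt_hi), level(failures, freq_hi), level(cost, cost_hi))

        for name, (dt, fr, c) in machines.items():
            print(name, "->", grid_cell(dt, fr, c))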

  1. A model for predicting embankment slope failures in clay-rich soils; A Louisiana example

    Burns, S. F.

    2015-12-01

    It is well known that smectite-rich soils significantly reduce the stability of slopes. The question is how much smectite in the soil causes slope failures. A study of over 100 sites in north and south Louisiana, USA, compared slopes that failed during a major El Nino winter (heavy rainfall) in 1982-1983 to similar slopes that did not fail. Soils in the slopes were tested for percent clay, liquid limits, plasticity indices, and semi-quantitative clay mineralogy. Slopes with a High Risk of failure (85-90% chance of failure in 8-15 years after construction) contained soils with a liquid limit > 54%, a plasticity index > 29%, and a clay content > 47%. Slopes with an Intermediate Risk (50-55% chance of failure in 8-15 years) contained soils with a liquid limit between 36% and 54%, a plasticity index between 16% and 29%, and a clay content between 32% and 47%. Slopes with a Low Risk of failure contained soils with a liquid limit < 36%, a plasticity index < 16%, and a clay content < 32%. The soil characteristics should therefore be tested before construction. If the soils fall into the Low Risk classification, construct the embankment normally. If the soils fall into the High Risk classification, one will need to use lime stabilization or heat treatments to prevent failures. Soils in the Intermediate Risk class will have to be evaluated on a case-by-case basis.
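
    A minimal sketch of the resulting screening rule (thresholds transcribed from the abstract; the Low Risk bounds are reconstructed as the complements of the Intermediate ranges):

        def embankment_risk(liquid_limit, plasticity_index, clay_pct):
            """Screen a clay-rich embankment soil into the study's risk classes."""
            if liquid_limit > 54 and plasticity_index > 29 and clay_pct > 47:
                return "High Risk (85-90% failure chance in 8-15 years)"
            if liquid_limit >= 36 and plasticity_index >= 16 and clay_pct >= 32:
                return "Intermediate Risk (50-55% failure chance in 8-15 years)"
            return "Low Risk"

        print(embankment_risk(liquid_limit=58, plasticity_index=33, clay_pct=50))
        print(embankment_risk(liquid_limit=40, plasticity_index=18, clay_pct=35))
        print(embankment_risk(liquid_limit=30, plasticity_index=12, clay_pct=25))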

  2. Clinical risk analysis with failure mode and effect analysis (FMEA) model in a dialysis unit.

    Bonfant, Giovanna; Belfanti, Pietro; Paternoster, Giuseppe; Gabrielli, Danila; Gaiter, Alberto M; Manes, Massimo; Molino, Andrea; Pellu, Valentina; Ponzetti, Clemente; Farina, Massimo; Nebiolo, Pier E

    2010-01-01

    The aim of clinical risk management is to improve the quality of care provided by health care organizations and to assure patients' safety. Failure mode and effect analysis (FMEA) is a tool employed for clinical risk reduction. We applied FMEA to chronic hemodialysis outpatients. FMEA steps: (i) process study: we recorded phases and activities. (ii) Hazard analysis: we listed activity-related failure modes and their effects, described control measures, assigned severity, occurrence, and detection scores for each failure mode, and calculated the risk priority numbers (RPNs) by multiplying the three scores; the total RPN is calculated by adding the single failure mode RPNs. (iii) Planning: we prioritized RPNs on a priority matrix taking into account the three scores, analyzed the causes of failure modes, made recommendations, and planned new control measures. (iv) Monitoring: after failure mode elimination or reduction, we compared the resulting RPN with the previous one. Our failure modes with the highest RPNs came from communication and organization problems. Two tools were created to improve information flow: "dialysis agenda" software and nursing datasheets. We scheduled nephrological examinations, and we changed both medical and nursing organization. The total RPN value decreased from 892 to 815 (8.6%) after reorganization. Employing FMEA, we worked on a few critical activities and reduced patients' clinical risk. A priority matrix also takes into account the weight of the control measures: we believe this evaluation is quick, because of simple priority selection, and that it decreases action times.

  3. Transitions of Care Between Acute and Chronic Heart Failure: Critical Steps in the Design of a Multidisciplinary Care Model for the Prevention of Rehospitalization.

    Comín-Colet, Josep; Enjuanes, Cristina; Lupón, Josep; Cainzos-Achirica, Miguel; Badosa, Neus; Verdú, José María

    2016-10-01

    Despite advances in the treatment of heart failure, mortality, the number of readmissions, and their associated health care costs are very high. Heart failure care models inspired by the chronic care model, also known as heart failure programs or heart failure units, have shown clinical benefits in high-risk patients. However, while traditional heart failure units have focused on patients detected in the outpatient phase, the increasing pressure from hospital admissions is shifting the focus of interest toward multidisciplinary programs that concentrate on transitions of care, particularly between the acute phase and the postdischarge phase. These new integrated care models for heart failure revolve around interventions at the time of transitions of care. They are multidisciplinary and patient-centered, designed to ensure continuity of care, and have been demonstrated to reduce potentially avoidable hospital admissions. Key components of these models are early intervention during the inpatient phase, discharge planning, early postdischarge review and structured follow-up, advanced transition planning, and the involvement of physicians and nurses specialized in heart failure. It is hoped that such models will be progressively implemented across the country. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  4. Patients with heart failure and their partners with chronic illness: interdependence in multiple dimensions of time.

    Nimmon, Laura; Bates, Joanna; Kimel, Gil; Lingard, Lorelei

    2018-01-01

    Informal caregivers play a vital role in supporting patients with heart failure (HF). However, when both the HF patient and their long-term partner suffer from chronic illness, they may equally suffer from diminished quality of life and poor health outcomes. With the focus on this specific couple group as a dimension of the HF health care team, we explored this neglected component of supportive care. From a large-scale Canadian multisite study, we analyzed the interview data of 13 HF patient-partner couples (26 participants). The sample consisted of patients with advanced HF and their long-term, live-in partners who also suffer from chronic illness. The analysis highlighted the profound enmeshment of the couples. The couples' interdependence was exemplified in the ways they synchronized their experience in shared dimensions of time and adapted their day-to-day routines to accommodate each other's changing health status. Particularly significant was when both individuals were too ill to perform caregiving tasks, which resulted in the couples being in a highly fragile state. We conclude that the salience of this couple group's oscillating health needs and their severe vulnerabilities need to be appreciated when designing and delivering HF team-based care.

  5. An overview of the recent advances in delay-time-based maintenance modelling

    Wang, Wenbin

    2012-01-01

    Industrial plant maintenance is an area with enormous potential for improvement. It is also an area that has attracted significant attention from mathematical modellers because of the random nature of plant failures. This paper reviews recent advances in delay-time-based maintenance modelling, one of the mathematical techniques for optimising inspection planning and related problems. The delay-time concept divides a plant failure process into two stages: from new until the point of an identifiable defect, and then from this point to failure. The first stage is called the normal working stage and the second the failure delay-time stage. If the distributions of the two stages can be quantified, the relationship between the number of failures and the inspection interval can be readily established, and this can then be used to optimize the inspection interval and other related decision variables. In this review, we pay particular attention to new methodological developments and industrial applications of delay-time-based models over the last few decades. The use of the delay-time concept and modelling techniques in areas other than maintenance is also reviewed, and future research directions are highlighted. - Highlights: ► Reviewed the recent advances in delay-time-based maintenance models and applications. ► Compared the delay-time-based models with other models. ► Focused on methodologies and applications. ► Pointed out future research directions.
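
    A minimal numeric sketch of the core delay-time trade-off: with defects arriving at rate lam and an exponential delay-time distribution with mean mu, the expected number of failures in an inspection cycle of length T is lam * (T - mu * (1 - exp(-T/mu))), which can be balanced against the inspection cost (all rates and costs are invented):

        import numpy as np

        lam, mu = 0.1, 30.0              # defect arrival rate (/day), mean delay-time (days)
        c_insp, c_fail = 200.0, 5000.0   # cost per inspection, cost per in-service failure

        def expected_failures(T):
            """E[failures per cycle] = lam * integral_0^T F(T-u) du, exponential delay-time."""
            return lam * (T - mu * (1.0 - np.exp(-T / mu)))

        T_grid = np.arange(1.0, 120.0)
        cost_rate = (c_insp + c_fail * expected_failures(T_grid)) / T_grid
        best = T_grid[np.argmin(cost_rate)]
        print(f"optimal inspection interval ~ {best:.0f} days, "
              f"cost rate = {cost_rate.min():.1f} per day")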

  6. Quadratic Term Structure Models in Discrete Time

    Marco Realdon

    2006-01-01

    This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...
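
    The recursion the abstract refers to can be sketched in a one-factor special case. The sketch below assumes a short rate that is quadratic in a Gaussian AR(1) factor; it is a minimal illustration under invented parameter values, not the paper's multi-factor model:

    ```python
    # One-factor discrete-time quadratic term structure model:
    # r_t = d0 + d1*x_t + d2*x_t^2, with x_{t+1} = mu + phi*x_t + sigma*eps.
    # Zero-coupon prices take the form P_n(x) = exp(A_n + B_n*x + C_n*x^2);
    # the Gaussian expectation E[exp(a*x' + b*x'^2)] is closed-form whenever
    # 1 - 2*b*sigma^2 > 0, which yields the coefficient recursion below.
    import numpy as np

    d0, d1, d2 = 0.01, 0.0, 1.0       # short-rate coefficients (assumed)
    mu, phi, sigma = 0.0, 0.9, 0.1    # factor dynamics (assumed)

    def bond_coeffs(n_max):
        A, B, C = [0.0], [0.0], [0.0]   # P_0(x) = 1
        s2 = sigma ** 2
        for _ in range(n_max):
            a, b = B[-1], C[-1]
            D = 1.0 - 2.0 * b * s2
            assert D > 0, "expectation does not exist"
            A.append(A[-1] - d0 - 0.5 * np.log(D)
                     + s2 * (a + mu / s2) ** 2 / (2.0 * D)
                     - mu ** 2 / (2.0 * s2))
            B.append(-d1 + phi * (a + mu / s2) / D - mu * phi / s2)
            C.append(-d2 + phi ** 2 * b / D)
        return A, B, C

    A, B, C = bond_coeffs(40)
    x = 0.05
    yields = [-(A[n] + B[n] * x + C[n] * x ** 2) / n for n in range(1, 41)]
    print(yields[0], yields[-1])   # 1-period and 40-period zero yields
    ```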

  7. Model Checking Real-Time Systems

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin with introducing the model, as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint...

  8. Predictors of treatment failure and time to detection and switching in HIV-infected Ethiopian children receiving first line anti-retroviral therapy

    Bacha Tigist

    2012-08-01

    Abstract Background The emergence of resistance to the first line antiretroviral therapy (ART) regimen leads to the need for more expensive and less tolerable second line drugs. Hence, it is essential to identify and address factors associated with an increased probability of first line ART regimen failure. The objective of this article is to report on the predictors of first line ART regimen failure, the detection rate of ART regimen failure, and the delay in switching to second line ART drugs. Methods A retrospective cohort study was conducted from 2005 to 2011. All HIV infected children under the age of 15 who took first line ART for at least six months at the four major hospitals of Addis Ababa, Ethiopia were included. Data were collected, entered and analyzed using Epi Info/ENA version 3.5.1 and SPSS version 16. The Cox proportional-hazards model was used to assess the predictors of first line ART failure. Results Data from 1186 children were analyzed. Five hundred seventy seven (48.8%) were males, with a mean age of 6.22 (SD = 3.10) years. Of the 167 (14.1%) children who had treatment failure, 70 (5.9%) had only clinical failure, 79 (6.7%) had only immunologic failure, and 18 (1.5%) had both clinical and immunologic failure. Patients who had height for age in the third percentile or less at initiation of ART were found to have a higher probability of ART treatment failure [adjusted hazard ratio (AHR) 3.25; 95% CI 1.00-10.58]. Patients who were less than three years old [AHR 1.85; 95% CI 1.24-2.76], had chronic diarrhea after initiation of antiretroviral treatment [AHR 3.44; 95% CI 1.37-8.62], had an ART drug substitution [AHR 1.70; 95% CI 1.05-2.73], or had a baseline CD4 count below 50 cells/mm3 [AHR 2.30; 95% CI 1.28-4.14] were also found to be at higher risk of treatment failure. Of all 167 first line ART failure cases, only 24 (14.4%) were switched to second line ART, with a mean delay of 24 (SD = 11.67) months. The remaining 143 (85.6%) cases were diagnosed
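
    For readers who want to reproduce this kind of analysis, the sketch below fits a Cox proportional-hazards model with the lifelines library on synthetic data; the column names echo the reported predictors, but the data, effect sizes and censoring rule are all invented:

    ```python
    # Cox proportional-hazards sketch on synthetic data (lifelines).
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "age_under3":  rng.integers(0, 2, n),
        "low_height":  rng.integers(0, 2, n),  # height-for-age <= 3rd percentile
        "cd4_below50": rng.integers(0, 2, n),
    })
    # Simulate months to treatment failure, with risk raised by each predictor.
    hazard = 0.01 * np.exp(0.6 * df["age_under3"]
                           + 1.2 * df["low_height"]
                           + 0.8 * df["cd4_below50"])
    df["months"] = rng.exponential(1.0 / hazard)
    df["failed"] = (df["months"] < 72).astype(int)   # administrative censoring
    df["months"] = df["months"].clip(upper=72)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="failed")
    cph.print_summary()   # hazard ratios play the role of the reported AHRs
    ```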

  9. On modelling of lateral buckling failure in flexible pipe tensile armour layers

    Østergaard, Niels Højen; Lyckegaard, Anders; Andreasen, Jens H.

    2012-01-01

    In the present paper, a mathematical model which is capable of representing the physics of lateral buckling failure in the tensile armour layers of flexible pipes is introduced. Flexible pipes are unbonded composite steel–polymer structures, which are known to be prone to lateral wire buckling when exposed to repeated bending cycles and longitudinal compression, which mainly occur during pipe laying in ultra-deep waters. On the basis of multiple single-wire analyses, the mechanical behaviour of both layers of tensile armour wires can be determined. Since failure in one layer destabilises the torsional equilibrium which is usually maintained between the layers, lateral wire buckling is often associated with a severe pipe twist. This behaviour is discussed and modelled, and results are compared to a pipe model in which failure is assumed not to cause twist. The buckling modes of the tensile armour...

  10. ξ common cause failure model and method for defense effectiveness estimation

    Li Zhaohuan

    1991-08-01

    Two issues are dealt with. One is to develop an event-based parametric model called the ξ-CCF model. Its parameters are expressed as fractions of the progressive multiplicities of failure events. With these expressions, the contribution of each multiple failure can be presented more clearly, which can help in selecting defense tactics against common cause failures. The other is to provide a method, based on operational experience and engineering judgement, to estimate the effectiveness of defense tactics. The effectiveness is expressed in terms of a reduction matrix for a given tactic on a specific plant, in event-by-event form. The application to a practical example shows that the model, in cooperation with the method, can simply estimate the effectiveness of defense tactics. It can be easily used by operators and its application may be extended.

  11. Risk stratification in middle-aged patients with congestive heart failure: prospective comparison of the Heart Failure Survival Score (HFSS) and a simplified two-variable model.

    Zugck, C; Krüger, C; Kell, R; Körber, S; Schellberg, D; Kübler, W; Haass, M

    2001-10-01

    The performance of a US-American scoring system (Heart Failure Survival Score, HFSS) was prospectively evaluated in a sample of ambulatory patients with congestive heart failure (CHF). Additionally, it was investigated whether the HFSS might be simplified by assessing the distance ambulated during a 6-min walk test (6'WT) instead of determining peak oxygen uptake (peak VO2). In 208 middle-aged CHF patients (age 54 ± 10 years, 82% male, NYHA class 2.3 ± 0.7; follow-up 28 ± 14 months) the seven variables of the HFSS (CHF aetiology, heart rate, mean arterial pressure, serum sodium concentration, intraventricular conduction time, left ventricular ejection fraction (LVEF), and peak VO2) were determined. Additionally, a 6'WT was performed. The HFSS allowed discrimination between patients at low, medium and high risk, with mortality rates of 16, 39 and 50%, respectively. However, the prognostic power of the HFSS was not superior to a two-variable model consisting only of LVEF and peak VO2. The areas under the receiver operating characteristic curves (AUC) for prediction of 1-year survival were even higher for the two-variable model (0.84 vs. 0.74, P<0.05). Replacing peak VO2 with the 6'WT resulted in a similar AUC (0.83). The HFSS continued to predict survival when applied to this patient sample. However, it was inferior to a two-variable model containing only LVEF and either peak VO2 or the 6'WT. As the 6'WT requires no sophisticated equipment, a simplified two-variable model containing only LVEF and 6'WT may be more widely applicable, and is therefore recommended.
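
    The comparison reported here rests on ROC AUC. A minimal sketch of such a comparison with scikit-learn, on synthetic stand-in data rather than the study's cohort:

    ```python
    # Comparing prognostic models by ROC AUC on simulated LVEF / 6'WT data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 400
    lvef = rng.normal(30, 10, n)      # ejection fraction, % (assumed)
    walk = rng.normal(400, 100, n)    # 6'WT distance, m (assumed)
    logit = -0.08 * (lvef - 30) - 0.01 * (walk - 400)
    died = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([lvef, walk])
    Xtr, Xte, ytr, yte = train_test_split(X, died, random_state=0)
    model = LogisticRegression().fit(Xtr, ytr)
    print("two-variable AUC:", roc_auc_score(yte, model.predict_proba(Xte)[:, 1]))
    ```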

  12. Time lags in biological models

    MacDonald, Norman

    1978-01-01

    In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...
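
    The "extended set of ordinary differential equations" mentioned here is usually called the linear chain trick. The sketch below applies it, under assumed parameters, to logistic growth whose density feedback acts through a gamma-distributed lag:

    ```python
    # Linear chain trick: a gamma-distributed lag is represented by p extra
    # linear ODE stages, turning an integro-differential equation into ODEs.
    # All parameter values are illustrative assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    r, K = 1.0, 1.0   # growth rate, carrying capacity (assumed)
    p, a = 3, 2.0     # chain length and stage rate: lag ~ Gamma(p, mean p/a)

    def rhs(t, y):
        # y[0] is the population N; y[1:] are chain stages z_1..z_p, where
        # z_p approximates the gamma-weighted history of N.
        N, z = y[0], y[1:]
        dN = r * N * (1.0 - z[-1] / K)
        dz = np.empty(p)
        dz[0] = a * (N - z[0])
        for j in range(1, p):
            dz[j] = a * (z[j - 1] - z[j])
        return np.concatenate(([dN], dz))

    sol = solve_ivp(rhs, (0.0, 50.0), [0.1] * (p + 1), max_step=0.1)
    print(sol.y[0, -1])   # approaches K, with damped oscillations for long lags
    ```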

  13. Time to failure and neuromuscular response to intermittent isometric exercise at different levels of vascular occlusion: a randomized crossover study

    Mikhail Santos Cerqueira

    2017-04-01

    Objectives: The purpose of this study was to investigate the effects of different vascular occlusion levels (total occlusion (TO), partial occlusion (PO) or free flow (FF)) during intermittent isometric handgrip exercise (IIHE) on the time to failure (TF) and the recovery of the maximum voluntary isometric force (MVIF), median frequency (EMGFmed) and peak of the EMG signal (EMGpeak) after failure. Methods: Thirteen healthy men (21 ± 1.71 years) carried out an IIHE until failure at 45% of MVIF with TO, PO or FF. Occlusion pressure was determined prior to the exercise. The MVIF, EMGFmed and EMGpeak were measured before and after exercise. Results: TF (in seconds) was significantly different (p < 0.05) among all investigated conditions: TO (150 ± 68), PO (390 ± 210) and FF (510 ± 240). The MVIF was lower immediately after IIHE, remaining lower eleven minutes after failure in all cases (p < 0.05) when compared to pre-exercise. There was a greater force reduction (p < 0.05) one minute after failure in the PO (-45.8%) and FF (-39.9%) conditions, when compared to TO (-28.1%). Only the PO condition caused lower MVIF (p < 0.05) than TO eleven minutes after the task failure. PO caused a greater reduction in EMGFmed compared to TO, and a greater increase in EMGpeak compared to TO and FF (p < 0.05). Conclusions: TO during IIHE led to a shorter time to failure but a faster MVIF recovery, while PO seems to be associated with a slower neuromuscular recovery, when compared to the other conditions.

  14. α-Decomposition for estimating parameters in common cause failure modeling based on causal inference

    Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi

    2013-01-01

    The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (the Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. Firstly, a hybrid Bayesian network is adopted to reveal the relationship between potential causes and failures. Secondly, because potential causes differ in their occurrence frequencies and in their abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability: global α-factors are expressed through explanatory variables (causes' occurrence frequencies) and parameters (decomposed α-factors). Finally, an example is provided to illustrate the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component and system levels. It can parameterize the CCF risk significance of possible causes and can update the probability distributions of global α-factors. In addition, it provides a reliable way to evaluate uncertainty sources and reduce uncertainty in probabilistic risk assessment. It is recommended to build databases that include CCF parameters and the corresponding occurrence frequencies of causes for each targeted system.
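
    As a much-reduced illustration of the Bayesian side of this approach (plain conjugate updating of global α-factors from event counts, not the paper's hybrid Bayesian network or regression step), with invented counts:

    ```python
    # Dirichlet-multinomial updating of global alpha-factors: with a
    # Dirichlet prior over the fractions alpha_1..alpha_m, the posterior
    # given multiplicity counts is again Dirichlet. Counts are invented.
    import numpy as np

    m = 4                            # common-cause group size
    n_k = np.array([120, 6, 2, 1])   # events involving exactly k components
    prior = np.ones(m)               # flat Dirichlet prior (assumption)

    posterior = prior + n_k
    alpha_mean = posterior / posterior.sum()
    samples = np.random.default_rng(0).dirichlet(posterior, size=10000)
    lo, hi = np.percentile(samples, [5, 95], axis=0)
    for k in range(m):
        print(f"alpha_{k+1}: mean {alpha_mean[k]:.3f}, "
              f"90% interval [{lo[k]:.3f}, {hi[k]:.3f}]")
    ```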

  15. Establishment of a rat model of early-stage liver failure and Th17/Treg imbalance

    LI Dong; LU Zhonghua; GAN Jianhe

    2016-01-01

    Objective: To investigate the methods for establishing a rat model of early-stage liver failure and the changes in Th17, Treg, and Th17/Treg after dexamethasone and thymosin interventions. Methods: A total of 64 rats were randomly divided into carbon tetrachloride (CCl4) group and endotoxin [lipopolysaccharide (LPS)]/D-galactosamine (D-GalN) combination group to establish the rat model of early-stage liver failure. The activities of the rats and changes in liver function and whole blood Th17 and ...

  16. On the estimation of failure rates for living PSAs in the presence of model uncertainty

    Arsenis, S.P.

    1994-01-01

    The estimation of failure rates of heterogeneous Poisson components from data on times operated to failure is reviewed. Particular emphasis is given to the lack of knowledge of the form of the mixing distribution, or population variability curve. A new nonparametric empirical Bayes estimator is proposed, which generalizes the estimator of Robbins to different observation times for the components. The behavior of the estimator is discussed by reference to two samples typically drawn from the CEDB, a component event database designed and operated by the Ispra JRC.
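
    In the equal-observation-time special case, Robbins' estimator is a one-liner; the paper's contribution is its generalization to unequal times. A sketch on simulated heterogeneous components:

    ```python
    # Robbins' nonparametric empirical Bayes estimator for Poisson counts:
    # E[lambda | X = x] is estimated by (x + 1) * f(x + 1) / f(x), with f
    # the empirical frequency of each count. Data below are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    true_rates = rng.gamma(2.0, 0.5, size=2000)  # heterogeneous population
    counts = rng.poisson(true_rates)             # failures per component

    freq = np.bincount(counts, minlength=counts.max() + 2)

    def robbins(x):
        if freq[x] == 0:
            return np.nan                        # no data at this count
        return (x + 1) * freq[x + 1] / freq[x]

    for x in range(5):
        print(x, round(robbins(x), 3))
    ```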

  17. A model for the computation of the thermal processes in the reactor cavity during a severe accident in a LWR, at the presence of sump water, from the time of reactor pressure vessel failure to the start time of melt/concrete interaction

    Hirschmann, H.

    1990-04-01

    At present no experimental results are available which analyze that stage of a severe accident in a light water reactor during which the reactor pressure vessel fails by melting, the core debris relocates into the water pool on the floor of the containment building (cavity), and is heated up again. Therefore an analytical model is described, with the help of which the process of material relocation, the heating of the material in the cavity interacting with the pool water, and the production rates of vapour and hydrogen can be estimated. The slumped mass accumulating in the cavity is taken to be the sum of infinitely small mass parts, assumed to slump at different times, which after slumping undergo individual thermal histories. The enthalpy of the slumped mass is the sum of the enthalpies of the single mass parts. The average temperature of the slumped mass is given by the enthalpy computed in this manner. The production rates of the gases are additive superpositions of all partial rates from the mass parts. The gas rates are computed using the balance of enthalpy and mass. (author) 5 refs

  18. Modeling nonstationarity in space and time.

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.
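
    A toy sketch of the dimension-expansion idea: a stationary kernel applied in coordinates augmented with extra dimensions z induces a nonstationary covariance in the original space. Here z is fixed by assumption rather than estimated as in Bornn et al. (2012):

    ```python
    # Dimension expansion in one observed dimension: expanded coordinates
    # [s, z] make an exponential kernel stationary in the larger space.
    import numpy as np

    s = np.linspace(0.0, 1.0, 6)[:, None]  # observed 1-D site locations
    z = (s - 0.5) ** 2                     # hypothetical learned extra dimension
    coords = np.hstack([s, z])             # expanded coordinates [s, z]

    def exp_cov(X, sigma2=1.0, rho=0.3):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        return sigma2 * np.exp(-d / rho)

    C = exp_cov(coords)   # stationary in the expanded space, but
    print(C.round(2))     # nonstationary in the original coordinate s
    ```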

  19. Mathematical modelling of blood-brain barrier failure and edema

    Waters, Sarah; Lang, Georgina; Vella, Dominic; Goriely, Alain

    2015-11-01

    Injuries such as traumatic brain injury and stroke can result in increased blood-brain barrier permeability. This increase may lead to water accumulation in the brain tissue resulting in vasogenic edema. Although the initial injury may be localised, the resulting edema causes mechanical damage and compression of the vasculature beyond the original injury site. We employ a biphasic mixture model to investigate the consequences of blood-brain barrier permeability changes within a region of brain tissue and the onset of vasogenic edema. We find that such localised changes can indeed result in brain tissue swelling and that the type of damage that results (stress damage or strain damage) depends on the ability of the brain to clear edema fluid.

  20. An analytical model for the ductile failure of biaxially loaded type 316 stainless steel subjected to thermal transients

    Dimelfi, R.J.

    1987-01-01

    Failure properties are calculated for the case of biaxially loaded type 316 stainless steel tubes that are heated from 300 K to near melting at various constant rates. The procedure involves combining a steady state plastic-deformation rate law with a strain hardening equation. Integrating under the condition of plastic instability gives the time and plastic strain at which ductile failure occurs for a given load. The result is presented as an analytical expression for equivalent plastic strain as a function of equivalent stress, temperature, heating rate and material constants. At large initial load, ductile fracture is calculated to occur early, at low temperatures, after very little deformation. At very small loads deformation continues for a long time to high temperatures where creep rupture mechanisms limit ductility. In the case of intermediate loads, the plastic strain accumulated before the occurrence of unstable ductile fracture is calculated. Comparison of calculated results is made with existing experimental data from pressurized tubes heated at 5.6 K/s and 111 K/s. When the effect of grain growth on creep ductility is taken into account from recrystallization data, agreement between measured and calculated uniform ductility is excellent. The general reduction in ductility and failure time that is observed at higher heating rate is explained via the model. The model provides an analytical expression for the ductility and failure time during transients for biaxially loaded type 316 stainless steel as a function of the initial temperature and load, as well as the material creep and strain hardening parameters. (orig.)

  1. 20 CFR 404.1267 - Failure to make timely payments-for wages paid prior to 1987.

    2010-04-01

    ... Governments; If A State Fails to Make Timely Payments, for Wages Paid Prior to 1987; § 404.1267 Failure to make... to the State under the other provision of the Social Security Act. [53 FR 32976, Aug. 29, 1988, as...] ...paid prior to 1987. Section 404.1267, Employees' Benefits, SOCIAL SECURITY ADMINISTRATION...

  2. Ergodicity of forward times of the renewal process in a block-based inspection model using the delay time concept

    Wang Wenbin; Banjevic, Dragan

    2012-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practice have been reported in many papers and case studies. For a system subject to a few major failure modes, component-based delay time models have been developed under the assumptions of an age-based inspection policy. An age-based inspection assumes that an inspection is scheduled according to the age of the component, and if there is a failure renewal, the next inspection is always a fixed interval, say τ, from the time of the failure renewal. This applies to certain cases, particularly important plant items where the time since the last renewal or inspection is key to scheduling the next inspection service. In most cases, however, the inspection service is not scheduled according to the needs of a particular component; rather, it is scheduled at fixed calendar times regardless of whether the component being inspected was just renewed or not. This policy is called a block-based inspection, which has the advantage of easy planning and is particularly useful for plant items that are part of a larger system to be inspected. If a block-based inspection policy is used, the time to failure since the last inspection prior to the failure for a particular item is a random variable, called the forward time in this paper. To optimise the inspection interval for block-based inspections, the usual criterion functions such as expected cost or downtime per unit time depend on the distribution of this forward time. We report in this paper the development of a theoretical proof that a limiting distribution for such a forward time exists if certain conditions are met, and we propose a recursive algorithm for determining this limiting distribution. A numerical example is presented to demonstrate the existence of the limiting distribution.
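
    The forward time is easy to visualise by simulation. The sketch below (with an assumed Weibull time-to-failure distribution) records failure times modulo the block interval; the histogram approximates the limiting distribution whose existence the paper proves:

    ```python
    # Forward time in a block-based inspection scheme: inspections at fixed
    # calendar epochs k*T, renewals at failures. The failure-time
    # distribution is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 5.0                                # block inspection interval
    horizon, t, fwd = 200000.0, 0.0, []
    while t < horizon:
        t += 4.0 * rng.weibull(2.0)        # time to next failure renewal
        fwd.append(t % T)                  # time since last inspection epoch

    hist, _ = np.histogram(fwd, bins=10, range=(0.0, T), density=True)
    print(hist.round(3))   # approximates the limiting forward-time density
    ```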

  3. Optimizing Nutrition in Pediatric Heart Failure: The Crisis Is Over and Now It's Time to Feed.

    Lewis, Kylie D; Conway, Jennifer; Cunningham, Chentel; Larsen, Bodil M K

    2017-06-01

    Pediatric heart failure is a complex disease occurring when cardiac output is unable to meet the metabolic demands of the body. With improved surgical interventions and medical therapies, survival rates have improved, and care has shifted from focusing on survival to optimizing quality of life and health outcomes. Based on current literature, this review addresses the nutrition needs of infants and children in heart failure and describes the pathophysiology and metabolic implications of this disease. The prevalence of wasting in pediatric heart failure has been reported to be as high as 86%, highlighting the importance of nutrition assessment through all stages of treatment to provide appropriate intake of energy, protein, and micronutrients. The etiology of malnutrition in pediatric heart failure is multifactorial and involves hypermetabolism, decreased intake, increased nutrient losses, inefficient utilization of nutrients, and malabsorption. Children in heart failure often present with tachypnea, tachycardia, fatigue, nausea, and vomiting and consequently may not be able to meet their nutrition requirements through oral intake alone. Nutrition support, including enteral nutrition and parenteral nutrition, should be considered an essential part of routine care. The involvement of multiple allied health professionals may be needed to create a feeding therapy plan to support patients and their families. With appropriate nutrition interventions, clinical outcomes and quality of life can be significantly improved.

  4. Load-redistribution strategy based on time-varying load against cascading failure of complex network

    Liu Jun; Shi Xin; Wang Kai; Shi Wei-Ren; Xiong Qing-Yu

    2015-01-01

    Cascading failure can cause great damage to complex networks, so it is of great significance to improve network robustness against it. Many existing load-redistribution strategies require global information, which is not suitable for large-scale networks, and some strategies based on local information assume that the load of a node is always its initial load before the network is attacked, redistributing the load of a failed node to its neighbours according to their initial load or initial residual capacity. This paper proposes a new load-redistribution strategy based on local information that takes the ever-changing load into account: it redistributes the load of a failed node to its nearest neighbours according to their current residual capacity, which makes full use of the residual capacity of the network. Experiments are conducted on two typical networks and two real networks, and the results show that the new load-redistribution strategy can reduce the size of cascading failures efficiently. (paper)
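
    A compact sketch of the redistribution rule described above, on a synthetic scale-free network (the capacity model and the attack choice are illustrative assumptions):

    ```python
    # Local, time-varying load redistribution: a failed node's *current*
    # load is shared among functioning neighbours in proportion to their
    # current residual capacity.
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 2, seed=1)
    load = {v: 1.0 + G.degree(v) for v in G}   # assumed initial loads
    cap = {v: 1.5 * load[v] for v in G}        # capacity proportional to load
    alive = set(G)

    def fail(v):
        queue = [v]
        while queue:
            u = queue.pop()
            if u not in alive:
                continue
            alive.remove(u)
            nbrs = [w for w in G.neighbors(u) if w in alive]
            if not nbrs:
                continue                       # nowhere to shed load
            residual = {w: max(cap[w] - load[w], 1e-9) for w in nbrs}
            total = sum(residual.values())
            for w in nbrs:                     # share by current residual capacity
                load[w] += load[u] * residual[w] / total
                if load[w] > cap[w]:
                    queue.append(w)            # overloaded neighbour fails next

    fail(max(G, key=lambda v: load[v]))        # attack the most-loaded node
    print("cascade size:", 200 - len(alive))
    ```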

  5. Economic sustainability in franchising: a model to predict franchisor success or failure

    Calderón Monge, Esther; Pastor Sanz, Ivan .; Huerta Zavala, Pilar Angélica

    2017-01-01

    As a business model, franchising makes a major contribution to gross domestic product (GDP). A model that predicts franchisor success or failure is therefore necessary to ensure economic sustainability. In this study, such a model was developed by applying Lasso regression to a sample of franchises operating between 2002 and 2013. For franchises with the highest likelihood of survival, the franchise fees and the ratio of company-owned to franchised outlets were suited to the age ...
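
    The modelling idea, an L1-penalised regression that selects survival predictors, can be sketched as follows; the features and data are synthetic placeholders, not the study's variables:

    ```python
    # Lasso-type (L1-penalised) logistic model of franchisor survival.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 300
    X = np.column_stack([
        rng.normal(30, 10, n),   # franchise fee, thousands (assumed)
        rng.uniform(0, 1, n),    # ratio of company-owned to franchised outlets
        rng.normal(10, 5, n),    # chain age, years (assumed)
        rng.normal(0, 1, n),     # an irrelevant noise feature
    ])
    survived = (0.05 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 2]
                + rng.normal(0, 1, n)) > 1.0

    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    lasso.fit(X, survived)
    print(lasso.coef_)   # the noise feature's coefficient shrinks toward 0
    ```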

  6. Do telemonitoring projects of heart failure fit the Chronic Care Model?

    Willemse, Evi; Adriaenssens, Jef; Dilles, Tinne; Remmen, Roy

    2014-07-01

    This study describes the characteristics of extramural and transmural telemonitoring projects on chronic heart failure in Belgium, and to what extent these projects coincide with the Chronic Care Model of Wagner. The Chronic Care Model describes essential components of high-quality health care. Telemonitoring can be used to optimise home care for chronic heart failure and offers a potential prospect of changing the current care organisation. This qualitative study describes seven non-invasive home-care telemonitoring projects in patients with heart failure in Belgium. A qualitative design, including interviews and a literature review, was used to describe the correspondence of these home-care telemonitoring projects with the dimensions of the Chronic Care Model. The projects were situated in primary and secondary health care. Their primary goal was to reduce the number of readmissions for chronic heart failure. None of these projects succeeded in a final implementation of telemonitoring in home care after the pilot phase. Not all projects were initiated to accomplish all of the dimensions of the Chronic Care Model, and a central role for the patient was sparse. Limited financial resources hampered continuation after the pilot phase. Cooperation and coordination in telemonitoring appear to be major barriers but are, within primary care as well as between the lines of care, important links in follow-up. This discrepancy can be prohibitive for the deployment of good chronic care. The Chronic Care Model is recommended as a basis for future telemonitoring projects.

  7. Identification of Modeling Approaches To Support Common-Cause Failure Analysis

    Korsah, Kofi; Wood, Richard Thomas

    2015-01-01

    Experience with applying current guidance and practices for common-cause failure (CCF) mitigation to digital instrumentation and control (I&C) systems has proven problematic, and the regulatory environment has been unpredictable. The impact of CCF vulnerability is to inhibit I&C modernization and, thereby, challenge the long-term sustainability of existing plants. For new plants and advanced reactor concepts, the issue of CCF vulnerability for highly integrated digital I&C systems imposes a design burden resulting in higher costs and increased complexity. The regulatory uncertainty regarding which mitigation strategies are acceptable (e.g., what diversity is needed and how much is sufficient) drives designers to adopt complicated, costly solutions devised for existing plants. The conditions that constrain the transition to digital I&C technology by the U.S. nuclear industry require crosscutting research to resolve uncertainty, demonstrate necessary characteristics, and establish an objective basis for qualification of digital technology for usage in Nuclear Power Plant (NPP) I&C applications. To fulfill this research need, Oak Ridge National Laboratory is conducting an investigation into mitigation of CCF vulnerability for nuclear-qualified applications. The outcome of this research is expected to contribute to a fundamentally sound, comprehensive technical basis for establishing the qualification of digital technology for nuclear power applications. This report documents the investigation of modeling approaches for representing failure of I&C systems. Failure models are used when there is a need to analyze how the probability of success (or failure) of a system depends on the success (or failure) of individual elements. If these failure models are extensible to represent CCF, then they can be employed to support analysis of CCF vulnerabilities and mitigation strategies. Specifically, the research findings documented in this report identify modeling approaches that

  8. Left ventricular fluid dynamics in heart failure: echocardiographic measurement and utilities of vortex formation time.

    Poh, Kian Keong; Lee, Li Ching; Shen, Liang; Chong, Eric; Tan, Yee Leng; Chai, Ping; Yeo, Tiong Cheng; Wood, Malissa J

    2012-05-01

    In clinical heart failure (HF), inefficient propagation of blood through the left ventricle (LV) may result from suboptimal vortex formation (VF) ability of the LV during early diastole. We aim to (i) validate echocardiographic-derived vortex formation time (adapted) (VFTa) in control subjects and (ii) examine its utility in both systolic and diastolic HF. Transthoracic echocardiography was performed in 32 normal subjects and in 130 patients who were hospitalized with HF [91 with reduced ejection fraction (rEF) and 39 with preserved ejection fraction (pEF)]. In addition to biplane left ventricular ejection fraction (LVEF) and conventional parameters, the Tei index and tissue Doppler (TD) indices were measured. VFTa was obtained using the formula: 4 × (1 - β)/π × α³ × LVEF, where β is the fraction of total transmitral diastolic stroke volume contributed by atrial contraction (assessed by the time velocity integrals of the mitral E- and A-waves) and α is the biplane end-diastolic volume (EDV)^(1/3) divided by the mitral annular diameter during early diastole. VFTa was correlated with demographic and cardiac parameters, and with a composite clinical endpoint comprising cardiac death and repeat hospitalization for HF. Mean VFTa was 2.67 ± 0.8 in control subjects and was reduced in HF (pEF HF, 2.21 ± 0.8; rEF HF, 1.25 ± 0.6; P < 0.001). VFTa correlated with TD early diastolic myocardial velocities (E', septal, r = 0.46; lateral, r = 0.43) and systolic myocardial velocities (S', septal, r = 0.47; lateral, r = 0.41), and inversely with the Tei index (r = -0.41); all Ps < 0.001. Sixty-two HF patients (49%) met the composite endpoint. VFTa of <1.32 was associated with significantly reduced event-free survival (Kaplan-Meier log rank = 16.3, P = 0.0001) and predicted the endpoint with a sensitivity and specificity of 65 and 72%, respectively. VFTa, a dimensionless index incorporating LV geometry and systolic and diastolic parameters, may be useful in the diagnosis and prognosis of HF.
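
    The quoted formula transcribes directly into code; the sketch below does so with hypothetical echocardiographic inputs (β as the atrial filling fraction, LVEF as a fraction):

    ```python
    # Direct transcription of the VFTa formula quoted above.
    import math

    def vfta(beta, edv_ml, mitral_diam_cm, lvef):
        """4*(1-beta)/pi * alpha^3 * LVEF, alpha = EDV^(1/3) / annulus diameter."""
        alpha = edv_ml ** (1.0 / 3.0) / mitral_diam_cm
        return 4.0 * (1.0 - beta) / math.pi * alpha ** 3 * lvef

    # Hypothetical measurements, in the range of the reported control values.
    print(round(vfta(beta=0.25, edv_ml=120.0, mitral_diam_cm=2.8, lvef=0.60), 2))
    ```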

  9. Application of different failure criteria in fuel pin modelling and consequences for overpower transients in LMFBRs

    Kuczera, B.; Royl, P.

    1975-01-01

    The CAPRI-2 code system for the analysis of hypothetical core disruptive accidents in LMFBRs has recently been coupled with the transient deformation model BREDA-2. The new code system determines thermal and mechanical loads under transient conditions for both fresh and irradiated fuel and cladding, taking into account fuel restructuring as well as effects from fission gas and fuel and clad swelling. The system has been used for the analysis of mild uncontrolled overpower transients in the SNR-300 to predict failure, and to initialize and calculate the subsequent fuel-coolant interaction (FCI). Thirteen channels have been coupled by point kinetics for the whole-core analysis. Three different failure mechanisms and their influence on the accident sequence have been investigated: clad melt-through; clad burst caused by internal pressure build-up; and clad straining due to differential thermal expansion between fuel and clad cylinders. The results of these analyses show that each failure mechanism leads to a rather different failure and accident sequence. There is still a lack of experimental data from which failure thresholds can be derived. To get better predictions from the applied models, an improved understanding of fission gas release and its relation to fuel porosity is needed, and better experimental data on the fluence- and temperature-dependent rupture strains of the cladding material should be available.

  10. Parabens Accelerate Ovarian Dysfunction in a 4-Vinylcyclohexene Diepoxide-Induced Ovarian Failure Model

    Jae-Hwan Lee

    2017-02-01

    Parabens are widely used preservatives in basic necessities such as cosmetic and pharmaceutical products. In previous studies, xenoestrogenic actions of parabens were reported in an immature rat model and a rat pituitary cell line (GH3 cells). The relationship between parabens and ovarian failure has not been described. In the present study, the influence of parabens on ovarian folliculogenesis and steroidogenesis was investigated. A disruptor of ovarian small pre-antral follicles, 4-vinylcyclohexene diepoxide (VCD; 40 mg/kg), was used to induce premature ovarian failure (POF). Methylparaben (MP; 100 mg/kg), propylparaben (PP; 100 mg/kg), and butylparaben (BP; 100 mg/kg) dissolved in corn oil were administered to female 8-week-old Sprague-Dawley rats for 5 weeks. Estrus cycle status was checked daily by vaginal smear test. Ovarian follicle development and steroid synthesis were investigated through real-time PCR and histological analyses. Diestrus phases in the VCD, PP, and BP groups were longer than that in the vehicle group. VCD significantly decreased the mRNA levels of folliculogenesis-related genes (Foxl2, Kitlg and Amh). All parabens significantly increased the Amh mRNA level but left unchanged Foxl2 and Kitlg, which act in primordial follicles. VCD and MP slightly increased Star and Cyp11a1 levels, which are related to an initial step in steroidogenesis. VCD and parabens induced an increase in serum FSH levels and significantly decreased the total number of follicles. Increased FSH implies impairment of ovarian function due to VCD or parabens. These results suggest that VCD may suppress both the formation and development of follicles. In particular, combined administration of VCD and parabens accelerated the inhibition of the follicle-development process through elevated AMH levels in small antral follicles.

  11. The impact of the time interval on in-vitro fertilisation success after failure of the first attempt.

    Bayoglu Tekin, Y; Ceyhan, S T; Kilic, S; Korkmaz, C

    2015-05-01

    The aim of this study was to identify the optimal time interval for in-vitro fertilisation that would increase treatment success after failure of the first attempt. This retrospective study evaluated 454 consecutive cycles of 227 infertile women who had two consecutive attempts within a 6-month period at an IVF centre. Data were collected on duration of stimulation, consumption of gonadotropin, numbers of retrieved oocytes, mature oocytes, fertilised eggs, and good quality embryos on day 3/5 following oocyte retrieval, and on clinical and ongoing pregnancy. There were significant increases in clinical pregnancy rates at 2-, 3- and 4-month intervals. The maximum increase was after two menstrual cycles (p = 0.001). The highest rate of ongoing pregnancy was in women who had the second attempt after the next menstrual cycle following failure of IVF (27.2%). After IVF failure, initiating the next attempt within 2-4 months increases the clinical pregnancy rates.

  12. Bootstrapping a time series model

    Son, M.S.

    1984-01-01

    The bootstrap is a methodology for estimating standard errors. The idea is to use a Monte Carlo simulation experiment based on a nonparametric estimate of the error distribution. The main objective of this dissertation was to demonstrate the use of the bootstrap to attach standard errors to coefficient estimates and multi-period forecasts in a second-order autoregressive model fitted by least squares and maximum likelihood estimation. A secondary objective of this article was to present the bootstrap in the context of two econometric equations describing the unemployment rate and individual income tax in the state of Oklahoma. As it turns out, the conventional asymptotic formulae (both the least squares and maximum likelihood estimates) for estimating standard errors appear to overestimate the true standard errors. But there are two problems: 1) the first two observations y1 and y2 have been fixed, and 2) the residuals have not been inflated. After these two factors are considered in the trial and bootstrap experiment, both the conventional maximum likelihood and bootstrap estimates of the standard errors appear to be performing quite well.
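
    A sketch of the residual bootstrap for an AR(2) model fitted by least squares, along the lines described (first two observations held fixed, residuals recentred rather than inflated); the data are simulated, not the Oklahoma series:

    ```python
    # Residual bootstrap for an AR(2) model fitted by least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    y = np.zeros(200)
    for t in range(2, 200):               # simulate a stationary AR(2)
        y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + rng.normal()

    def fit_ar2(series):
        X = np.column_stack([series[1:-1], series[:-2]])
        coef, *_ = np.linalg.lstsq(X, series[2:], rcond=None)
        return coef, series[2:] - X @ coef

    coef, resid = fit_ar2(y)
    resid = resid - resid.mean()          # recentre (inflation is a variant)
    boot = []
    for _ in range(1000):
        e = rng.choice(resid, size=len(y) - 2, replace=True)
        yb = y.copy()                     # keeps y1, y2 fixed, as discussed
        for t in range(2, len(y)):
            yb[t] = coef[0] * yb[t-1] + coef[1] * yb[t-2] + e[t-2]
        boot.append(fit_ar2(yb)[0])
    print(np.std(boot, axis=0))           # bootstrap standard errors
    ```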

  13. Lag space estimation in time series modelling

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  14. Time-Weighted Balanced Stochastic Model Reduction

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...

  15. Age and admission times as predictive factors for failure of admissions to discharge-stream short-stay units.

    Shetty, Amith L; Shankar Raju, Savitha Banagar; Hermiz, Arsalan; Vaghasiya, Milan; Vukasovic, Matthew

    2015-02-01

    Discharge-stream emergency short-stay units (ESSU) improve ED and hospital efficiency. Age of patients and time of hospital presentation have been shown to correlate with increasing complexity of care. We aim to determine whether an age and time cut-off could be derived to subsequently improve short-stay unit success rates. We conducted a retrospective audit of 6703 patients (5522 included) admitted to our discharge-stream short-stay unit. Patients were classified as appropriate or inappropriate admissions, and deemed successful if discharged out of the unit within 24 h, and failures if they needed inpatient admission into the hospital. We calculated short-stay unit length of stay for patients in each of these groups. A 15% failure rate was deemed an acceptable key performance indicator (KPI) for our unit. There were 197 out of 4621 (4.3%, 95% CI 3.7-4.9%) patients up to the age of 70 who failed admission to ESSU, compared with 67 out of 901 (7.4%, 95% CI 5.9-9.3%) patients over 70 years of age. Patients over 70 years of age have higher rates of failure after admission to discharge-stream ESSU. However, in appropriately selected discharge-stream patients, no age group or time-band of presentation was associated with an increased failure rate beyond the stipulated KPI. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  16. Reducing failures rate within the project documentation using Building Information Modelling, especially Level of Development

    Prušková Kristýna

    2018-01-01

    This paper focuses on the differences between traditional modelling in 2D software and modelling with BIM technology. The research uncovers failures connected to the traditional way of designing and constructing project documentation, and reveals mismatches within the project documentation. A solution based on Building Information Modelling technology is outlined. As a reference, the authors draw on experience with the design of a specific building whose project documentation was constructed in both ways: through traditional modelling and with BIM technology, in particular using Level of Development. The paper points to the benefits of using advanced technology in building design, namely Building Information Modelling and especially Level of Development, which leads to a reduced failure rate within the project documentation.

  17. Conduit Stability and Collapse in Explosive Volcanic Eruptions: Coupling Conduit Flow and Failure Models

    Mullet, B.; Segall, P.

    2017-12-01

    Explosive volcanic eruptions can exhibit abrupt changes in physical behavior. In the most extreme cases, high rates of mass discharge are interspersed with dramatic drops in activity and periods of quiescence. Simple models predict exponential decay in magma chamber pressure, leading to a gradual tapering of eruptive flux; abrupt changes in eruptive flux therefore indicate that relief of chamber pressure cannot be the only control on the evolution of such eruptions. We present a simplified physics-based model of conduit flow during an explosive volcanic eruption that attempts to predict stress-induced conduit collapse linked to co-eruptive pressure loss. The model couples a simple two-phase (gas-melt) 1-D conduit solution of the continuity and momentum equations with a Mohr-Coulomb failure condition for the conduit wall rock. First-order models of volatile exsolution (i.e. phase mass transfer) and fragmentation are incorporated. The interphase interaction force changes dramatically between flow regimes, so smoothing of this force is critical for realistic results. Reductions in the interphase force lead to significant relative phase velocities, highlighting the deficiency of homogeneous flow models. Lateral gas loss through conduit walls is incorporated using a membrane-diffusion model with depth-dependent wall rock permeability. Rapid eruptive flux results in a decrease of chamber and conduit pressure, which leads to a critical deviatoric stress condition at the conduit wall. Analogous stress distributions have been analyzed for wellbores, where much work has been directed at determining conditions that lead to wellbore failure using Mohr-Coulomb failure theory. We extend this framework to cylindrical volcanic conduits, where large deviatoric stresses can develop co-eruptively, leading to multiple distinct failure regimes depending on principal stress orientations. These failure regimes are categorized and possible implications for conduit flow are discussed, including
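
    The wellbore-style stress check mentioned at the end can be sketched with the Lamé/Kirsch wall stresses and a Mohr-Coulomb criterion; all material properties and stress values below are assumptions chosen only to illustrate the mechanism:

    ```python
    # Wall stresses of a cylindrical conduit under an isotropic far-field
    # stress S and internal magma pressure p, checked against Mohr-Coulomb.
    # As p drops co-eruptively, the hoop stress grows and failure can occur.
    import numpy as np

    S = 20e6                         # far-field horizontal stress, Pa (assumed)
    c, phi = 5e6, np.radians(35)     # cohesion and friction angle (assumed)
    q = (1 + np.sin(phi)) / (1 - np.sin(phi))
    C0 = 2 * c * np.sqrt(q)          # unconfined compressive strength

    def wall_fails(p):
        # At r = a (isotropic far field): sigma_rr = p, sigma_tt = 2*S - p.
        s1, s3 = max(2 * S - p, p), min(2 * S - p, p)
        return s1 >= C0 + q * s3     # Mohr-Coulomb in principal stresses

    for p in [18e6, 12e6, 6e6, 2e6]:
        print(f"magma pressure {p/1e6:4.0f} MPa -> wall failure: {wall_fails(p)}")
    ```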

  18. Modeling tumor control probability for spatially inhomogeneous risk of failure based on clinical outcome data

    Lühr, Armin; Löck, Steffen; Jakobi, Annika

    2017-01-01

    PURPOSE: Objectives of this work are (1) to derive a general clinically relevant approach to model tumor control probability (TCP) for spatially variable risk of failure and (2) to demonstrate its applicability by estimating TCP for patients planned for photon and proton irradiation. METHODS AND ...
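
    A generic sketch of a voxel-wise Poisson TCP with spatially variable dose and clonogen density follows; this is the standard product form, not necessarily the specific model derived in the paper, and all values are invented:

    ```python
    # Voxel-wise Poisson TCP: control requires eradication in every voxel.
    import numpy as np

    alpha = 0.3                                  # radiosensitivity, 1/Gy (assumed)
    dose = np.array([68.0, 70.0, 72.0, 66.0])    # voxel doses, Gy (assumed)
    clonogens = np.array([1e6, 1e7, 1e7, 1e5])   # clonogens per voxel (assumed)

    surviving = clonogens * np.exp(-alpha * dose)  # expected survivors per voxel
    tcp_voxel = np.exp(-surviving)                 # Poisson voxel TCP
    tcp = tcp_voxel.prod()                         # failure anywhere loses control
    print(tcp_voxel.round(3), round(float(tcp), 3))
    ```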

  19. Early Treatment Outcome in Failure to Thrive: Predictions from a Transactional Model.

    Drotar, Dennis

    Children diagnosed with environmentally based failure to thrive early during their first year of life were seen at 12 and 18 months for assessment of psychological development (cognition, language, symbolic play, and behavior during testing). Based on a transactional model of outcome, factors reflecting biological vulnerability (wasting and…

  1. Gravity-driven groundwater flow and slope failure potential: 1. Elastic effective-stress model

    Iverson, Richard M.; Reid, Mark E.

    1992-01-01

    Hilly or mountainous topography influences gravity-driven groundwater flow and the consequent distribution of effective stress in shallow subsurface environments. Effective stress, in turn, influences the potential for slope failure. To evaluate these influences, we formulate a two-dimensional, steady state, poroelastic model. The governing equations incorporate groundwater effects as body forces, and they demonstrate that spatially uniform pore pressure changes do not influence effective stresses. We implement the model using two finite element codes. As an illustrative case, we calculate the groundwater flow field, total body force field, and effective stress field in a straight, homogeneous hillslope. The total body force and effective stress fields show that groundwater flow can influence shear stresses as well as effective normal stresses. In most parts of the hillslope, groundwater flow significantly increases the Coulomb failure potential Φ, which we define as the ratio of maximum shear stress to mean effective normal stress. Groundwater flow also shifts the locus of greatest failure potential toward the slope toe. However, the effects of groundwater flow on failure potential are less pronounced than might be anticipated on the basis of a simpler, one-dimensional, limit equilibrium analysis. This is a consequence of continuity, compatibility, and boundary constraints on the two-dimensional flow and stress fields, and it points to important differences between our elastic continuum model and limit equilibrium models commonly used to assess slope stability.
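
    The failure potential Φ defined here transcribes directly into code; the sketch below evaluates it for a 2-D effective stress state with illustrative values, showing how higher pore pressure pushes Φ up:

    ```python
    # Coulomb failure potential: ratio of maximum shear stress to mean
    # effective normal stress (compression positive). Values are illustrative.
    import numpy as np

    def failure_potential(sxx, szz, sxz, pore_pressure):
        sxx_e, szz_e = sxx - pore_pressure, szz - pore_pressure
        tau_max = np.sqrt(((sxx_e - szz_e) / 2.0) ** 2 + sxz ** 2)
        mean_eff = (sxx_e + szz_e) / 2.0
        return tau_max / mean_eff

    # Wetter slope: rising pore pressure raises Phi toward failure.
    for p in [0.0, 20e3, 40e3]:
        print(p, round(failure_potential(120e3, 80e3, 10e3, p), 3))
    ```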

  2. Continuum Damage Mechanics Models for the Analysis of Progressive Failure in Open-Hole Tension Laminates

    Song, Kyonchan; Li, Yingyong; Rose, Cheryl A.

    2011-01-01

    The performance of a state-of-the-art continuum damage mechanics model for intralaminar damage, coupled with a cohesive zone model for delamination, is examined for failure prediction of quasi-isotropic open-hole tension laminates. Limitations of continuum representations of intra-ply damage and the effect of mesh orientation on the analysis predictions are discussed. It is shown that accurate prediction of matrix crack paths and stress redistribution after cracking requires a mesh aligned with the fiber orientation. Based on these results, an aligned mesh is proposed for analysis of the open-hole tension specimens, consisting of different meshes within the individual plies such that the element edges are aligned with the ply fiber direction. The modeling approach is assessed by comparison of analysis predictions with experimental data for specimen configurations in which failure is dominated by complex interactions between matrix cracks and delaminations. It is shown that the different failure mechanisms observed in the tests are well predicted. In addition, the modeling approach is demonstrated to predict the proper trends in the effect of scaling on the strength and failure mechanisms of quasi-isotropic open-hole tension laminates.

  3. Assessment of compressive failure process of cortical bone materials using damage-based model.

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

    The main causes of cortical bone failure are aging or osteoporosis, accidents and high-energy trauma, and physiological activities. However, the mechanism of damage evolution coupled with a yield criterion is considered one of the unclear subjects in failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests were performed on cortical bone specimens made of bovine femur in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test was prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model was validated comparatively, using the predicted and measured structural response as load versus compressive displacement in simulation and experiment. The FE results indicated that compressive damage initiated and propagated at the central region, where the maximum equivalent plastic strain was computed; this coincided with the degradation of structural compressive stiffness, followed by a large amount of strain energy dissipation. The compressive damage rate, a function of the damage parameter and the plastic strain, was examined for different rates. Results show that taking a rate similar to the initial slope of the damage parameter in the experiment gives a better prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Developing a Model for Identifying Students at Risk of Failure in a First Year Accounting Unit

    Smith, Malcolm; Therry, Len; Whale, Jacqui

    2012-01-01

    This paper reports on the process involved in attempting to build a predictive model capable of identifying students at risk of failure in a first year accounting unit in an Australian university. Identifying attributes that contribute to students being at risk can lead to the development of appropriate intervention strategies and support…

  5. Development and validation of multivariable models to predict mortality and hospitalization in patients with heart failure

    Voors, Adriaan A.; Ouwerkerk, Wouter; Zannad, Faiez; van Veldhuisen, Dirk J.; Samani, Nilesh J.; Ponikowski, Piotr; Ng, Leong L.; Metra, Marco; ter Maaten, Jozine M.; Lang, Chim C.; Hillege, Hans L.; van der Harst, Pim; Filippatos, Gerasimos; Dickstein, Kenneth; Cleland, John G.; Anker, Stefan D.; Zwinderman, Aeilko H.

    2017-01-01

    Introduction: From a prospective multicentre multicountry clinical trial, we developed and validated risk models to predict prospective all-cause mortality and hospitalizations because of heart failure (HF) in patients with HF. Methods and results: BIOSTAT-CHF is a research programme designed to

  6. 3D constitutive model of anisotropic damage for unidirectional ply based on physical failure mechanisms

    Qing, Hai; Mishnaevsky, Leon

    2010-01-01

    in a computational finite element framework, which is capable of predicting initial failure, subsequent progressive damage up to final collapse. A crack band model and viscous regularization are applied to alleviate the convergence difficulties associated with strain-softening behaviours. To verify the accuracy...

  7. Modelling the impact of creep on the probability of failure of a solid oxide fuel cell stack

    Greco, Fabio; Frandsen, Henrik Lund; Nakajo, Arata

    2014-01-01

    In solid oxide fuel cell (SOFC) technology, a major challenge lies in balancing thermal stresses from an inevitable thermal field. The cells are known to creep, changing the stress field over time. The main objective of this study was to assess the influence of creep on the failure probability of ...

  8. Failure rate modeling using fault tree analysis and Bayesian network: DEMO pulsed operation turbine study case

    Dongiovanni, Danilo Nicola; Iesmantas, Tomas

    2016-01-01

    Highlights: • RAMI (Reliability, Availability, Maintainability and Inspectability) assessment of the secondary heat transfer loop for a DEMO nuclear fusion plant. • Definition of a fault tree for a nuclear steam turbine operated in pulsed mode. • Turbine failure rate models updated by means of a Bayesian network reflecting the fault tree analysis in the considered scenario. • Sensitivity analysis on system availability performance. - Abstract: Availability will play an important role in the success of the Demonstration Power Plant (DEMO) from an economic and safety perspective. Availability performance is commonly assessed by Reliability Availability Maintainability Inspectability (RAMI) analysis, which relies strongly on the accurate definition of system components' failure modes (FM) and failure rates (FR). Little component experience is available in fusion applications, therefore requiring the adaptation of literature FR to fusion plant operating conditions, which may differ in several aspects. As a possible solution to this problem, a new methodology to extrapolate/estimate component failure rates under different operating conditions is presented. The DEMO balance-of-plant nuclear steam turbine operated in pulsed mode is considered as the study case. The methodology moves from the definition of a fault tree taking into account failure modes possibly enhanced by pulsed operation. The fault tree is then translated into a Bayesian network. A statistical model for the turbine system failure rate in terms of subcomponents' FR is hence obtained, allowing for sensitivity analyses on the structured mixture of literature and unknown FR data, for which plausible value intervals are investigated to assess their impact on the whole turbine system FR. Finally, the impact of the resulting turbine system FR on plant availability is assessed by exploiting a Reliability Block Diagram (RBD) model for a typical secondary cooling system implementing a Rankine cycle. Mean inherent availability
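
    A minimal sketch of the conjugate Bayesian updating that underlies this kind of failure-rate revision (Gamma prior on the rate, Poisson-distributed event counts); the prior and data values are illustrative, not DEMO figures:

    ```python
    # Gamma-Poisson conjugate update of a component failure rate.
    a0, b0 = 2.0, 1e5         # Gamma prior: shape, and exposure-like scale (h)
    failures, hours = 3, 2e5  # observed events and cumulative operating time

    a1, b1 = a0 + failures, b0 + hours
    print(f"prior mean failure rate:     {a0 / b0:.2e} per hour")
    print(f"posterior mean failure rate: {a1 / b1:.2e} per hour")
    ```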

  11. Formal Modeling and Analysis of Timed Systems

    Larsen, Kim Guldstrand; Niebert, Peter

    This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  12. Time to exacerbation of heart failure is longer in Malaysian population on dipeptidyl peptidase-4 inhibitor

    J Hasan

    2017-01-01

    Conclusions: Higher rates of CV events were seen in diabetic patients with known CAD treated with DPP4i between 20 and 30 weeks of therapy, and these events occurred earlier in patients with chronic kidney disease. This is later than reported in published data and raises the need to monitor this group of patients for symptoms of heart failure beyond conventional monitoring.

  13. Sacubitril/Valsartan for Heart Failure: Exciting Times but Are Doctors Informed and Ready?

    Prithwish Banerjee

    2017-01-01

    Full Text Available Sacubitril/Valsartan is now being prescribed by heart failure/cardiology teams across the United Kingdom following the publication of the NICE technology appraisal guidance, but is everyone ready for it? This article discusses the practical aspects of what to do and what not to do in relation to the drug, based on real-world experience from our centre.

  14. Failure? Isn't It Time to Slay the Design-Dragon?

    Winkler, Dietmar R.

    2009-01-01

    Design education forms a closed cycle: it replicates the most common design practice, feeds into a practice that seeks awards based on incremental change supported by professional organizations and trade journals, and then feeds back to education as forms for imitation. This is the educational failure this paper cites. It takes to task the…

  15. Prognostic importance of a short deceleration time in symptomatic congestive heart failure

    Akkan, Dilek; Kjaergaard, Jesper; Møller, Jacob Eifer

    2008-01-01

    AIMS: A restrictive transmitral filling (RF) pattern predicts increased mortality in heart failure (HF) with reduced left ventricular (LV) systolic function. We performed a combined evaluation of LV function and RF for prognosis in patients with HF with and without systolic dysfunction. METHODS...

  16. Information Technology Management System: an Analysis on Computational Model Failures for Fleet Management

    Jayr Figueiredo de Oliveira

    2013-10-01

    Full Text Available This article proposes an information technology model to evaluate fleet management failure. Qualitative research, conducted as a case study within an interstate transport company in São Paulo State, sought to establish a relationship between computer tools and the need for valid, trustworthy information, delivered within an acceptable timeframe, for decision making, reliability, availability and system management. Additionally, the study aimed to provide relevant and precise information, in order to minimize and mitigate failure events that may occur and compromise the functioning of the entire operational organization.

  17. Experiments and modeling of ballistic penetration using an energy failure criterion

    Dolinski M.

    2015-01-01

    Full Text Available One of the most intricate problems in terminal ballistics is the physics underlying penetration and perforation. Several penetration modes are well identified, such as petalling, plugging, spall failure and fragmentation (Sedgwick, 1968). In most cases, the final target failure will combine those modes. Some of the failure modes can be due to brittle material behavior, but penetration of ductile targets by blunt projectiles, involving plugging in particular, is caused by excessive localized plasticity, with emphasis on adiabatic shear banding (ASB). Among the theories regarding the onset of ASB, new evidence was recently brought by Rittel et al. (2006), according to whom shear bands initiate as a result of dynamic recrystallization (DRX), a local softening mechanism driven by the stored energy of cold work. As such, ASB formation results from microstructural transformations, rather than from thermal softening. In our previous work (Dolinski et al., 2010), a failure criterion based on plastic strain energy density was presented and applied to model four different classical examples of dynamic failure involving ASB formation. According to this criterion, a material point starts to fail when the total plastic strain energy density reaches a critical value. Thereafter, the strength of the element decreases gradually to zero to mimic the actual material mechanical behavior. The goal of this paper is to present a new combined experimental-numerical study of ballistic penetration and perforation, using the above-mentioned failure criterion. Careful experiments are carried out using a single combination of AISI 4340 FSP projectiles and 25 mm thick RHA steel plates, while the impact velocity, and hence the imparted damage, are systematically varied. We show that our failure model, which includes only one adjustable parameter in this present work, can faithfully reproduce each of the experiments without any further adjustment. Moreover, it is shown that the

  18. Prognosis and serum creatinine levels in acute renal failure at the time of nephrology consultation: an observational cohort study

    de Irala Jokin

    2007-09-01

    Full Text Available Abstract Background The aim of this study is to evaluate the association between acute serum creatinine changes in acute renal failure (ARF), before specialized treatment begins, and in-hospital mortality, recovery of renal function, and overall mortality at 6 months, at an equal degree of ARF severity, using the RIFLE criteria, and comorbid illnesses. Methods Prospective cohort study of 1008 consecutive patients who had been diagnosed as having ARF and had been admitted to a university-affiliated hospital over 10 years. Demographic and clinical information and outcomes were measured. After that, 646 patients who had presented a sufficient increment in serum creatinine to qualify for the RIFLE criteria were included for subsequent analysis. The population was divided into two groups using the median serum creatinine change (101%) as the cut-off value. Multivariate non-conditional logistic and linear regression models were used. Results An increment of creatinine of ≥ 101% with respect to its baseline before nephrology consultation was associated with a significant increase of in-hospital mortality (35.6% vs. 22.6%, p … Conclusion In this cohort, patients who had presented an increment in serum level of creatinine of ≥ 101% with respect to basal values, at the time of nephrology consultation, had increased mortality rates and were discharged from hospital with a more deteriorated renal function than those with similar Liano scoring and the same RIFLE classes, but with a

  19. Failure detection by adaptive lattice modelling using Kalman filtering methodology : application to NPP

    Ciftcioglu, O.

    1991-03-01

    Detection of failure in the operational status of a NPP is described. The method uses a lattice form of signal modelling established by means of Kalman filtering methodology. In this approach each lattice parameter is considered to be a state, and the minimum variance estimate of the states is performed adaptively by optimal parameter estimation, with fast convergence and favourable statistical properties. In particular, the state covariance is also the covariance of the error committed by that estimate of the state value, and the Mahalanobis distance formed for pattern comparison follows a χ² distribution for normally distributed signals. The failure detection is performed after a decision-making process by probabilistic assessments based on the statistical information provided. The failure detection system is implemented in the multi-channel signal environment of the Borssele NPP and its favourable features are demonstrated. (author). 29 refs.; 7 figs
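
    A minimal sketch of the decision step described above, under simplifying assumptions: the estimated lattice parameters are treated as approximately Gaussian with a covariance supplied by the Kalman filter, so the squared Mahalanobis distance to a reference (no-failure) pattern can be tested against a χ² quantile. All parameter values are hypothetical.

    ```python
    # Mahalanobis-distance failure test against a chi-squared threshold.
    import numpy as np
    from scipy import stats

    def failure_detected(theta_est, theta_ref, cov, alpha=0.01):
        """Flag a failure when the squared Mahalanobis distance exceeds the chi2 quantile."""
        d = theta_est - theta_ref
        d2 = d @ np.linalg.solve(cov, d)          # squared Mahalanobis distance
        return d2 > stats.chi2.ppf(1 - alpha, df=len(d))

    # hypothetical 3-parameter lattice model with filter-supplied covariance
    cov = np.diag([0.01, 0.02, 0.015])
    print(failure_detected(np.array([0.52, -0.11, 0.33]),
                           np.array([0.50, -0.10, 0.30]), cov))
    ```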

  20. Development of a Zircaloy creep and failure model for LOCA conditions

    Raff, S.; Meyder, R.

    1981-01-01

    The present status of the NORA model for Zircaloy-4 creep and failure in the high temperature region (from 600 deg C up to 1200 deg C) is described. Temperature dependence, strain hardening and oxygen content are found to be the most important features of the strain rate creep equation. The failure criterion is based on a modified strain fraction rule. Variables of this criterion are temperature, strain rate (or applied stress, respectively) and oxygen content. When applying the deformation model, deduced from uniaxial tests, to tube deformation calculations, the axial ballooning shape has to be taken into account. Its influence on the tube stress components, and therefore on strain rate, is discussed. A further improvement of the deformation model, concerning yield drop and irregular creep behaviour, aims at enlarging the range of applicability and reducing the error band of the model.

  1. Probabilistic Modeling of Updating Epistemic Uncertainty In Pile Capacity Prediction With a Single Failure Test Result

    Indra Djati Sidi

    2017-12-01

    Full Text Available The model error N has been introduced to denote the discrepancy between measured and predicted capacity of pile foundations. This model error is recognized as epistemic uncertainty in pile capacity prediction. The statistics of N have been evaluated based on data gathered from various sites and may be considered only as a general-error trend in capacity prediction, providing crude estimates of the model error in the absence of more specific data from the site. The result of even a single load test to failure should provide direct evidence of the pile capacity at a given site. Bayes' theorem has been used as a rational basis for combining new data with previous data to revise the assessment of uncertainty and reliability. This study is devoted to the development of procedures for updating the model error N, and subsequently the predicted pile capacity, with the result of a single failure test.
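
    A hedged sketch of the updating idea: if the model error N is treated as lognormal with known test scatter, a single load test to failure yields a conjugate normal-normal update on ln N. The prior statistics and the test outcome below are hypothetical, and the paper's actual formulation may differ.

    ```python
    # Conjugate normal-normal Bayesian update of ln N from one failure test.
    import math

    mu0, sigma0 = math.log(1.0), 0.3      # prior on ln N (general-error trend), hypothetical
    sigma_test = 0.15                     # scatter of a single load test, hypothetical

    n_observed = 1.25                     # measured/predicted capacity from the failure test
    x = math.log(n_observed)

    w = sigma0**2 / (sigma0**2 + sigma_test**2)            # weight on the new observation
    mu1 = (1 - w) * mu0 + w * x                            # posterior mean of ln N
    sigma1 = math.sqrt(1 / (1/sigma0**2 + 1/sigma_test**2))  # posterior std of ln N

    print(f"updated median model error: {math.exp(mu1):.3f} (prior {math.exp(mu0):.3f})")
    ```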

  2. Time series modeling, computation, and inference

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  3. Failure prediction of low-carbon steel pressure vessel and cylindrical models

    Zhang, K.D.; Wang, W.

    1987-01-01

    The failure loads predicted by failure assessment methods (namely the net-section stress criterion; the EPRI engineering approach for elastic-plastic analysis; the CEGB failure assessment route; the modified R6 curve by Milne for strain hardening; and the failure assessment curve based on J estimation by Ainsworth) have been compared with burst test results on externally, axially sharp-notched pressure vessel and open-ended cylinder models made from typical low-carbon steel St45 seamless tube, which has a transverse true stress-strain curve of straight-line and parabola type and a high ratio of ultimate strength to yield. It was concluded from the comparison that whilst the net-section stress criterion and the CEGB route did not give conservative predictions, Milne's modified curve gave a conservative and good prediction; Ainsworth's curve gave a fairly conservative prediction; and the EPRI solutions could also conditionally give a good prediction, but the conditions are still somewhat uncertain. It is suggested that Milne's modified R6 curve be used in failure assessment of low-carbon steel pressure vessels. (author)

  4. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
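
    A toy sketch of the surrogate idea above (not the authors' algorithm): fit a Gaussian process to each component limit state from a small design of experiments, then estimate the series-system failure probability by Monte Carlo on the cheap surrogates. The limit-state functions and sample sizes are invented for illustration.

    ```python
    # Gaussian process surrogates for two component limit states, combined
    # into a series-system failure probability estimate via Monte Carlo.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(1)
    g1 = lambda x: 3.0 - x[:, 0] - x[:, 1]        # hypothetical limit states:
    g2 = lambda x: 2.5 + x[:, 0] - x[:, 1]        # component fails when g(x) < 0

    x_doe = rng.normal(size=(60, 2))              # small training design of experiments
    gps = [GaussianProcessRegressor().fit(x_doe, g(x_doe)) for g in (g1, g2)]

    x_mc = rng.normal(size=(100_000, 2))          # Monte Carlo on the surrogates
    fail = np.minimum(*(gp.predict(x_mc) for gp in gps)) < 0.0   # series system
    print("estimated P(failure) ~", fail.mean())
    ```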

  5. Modeling Dynamic Anisotropic Behaviour and Spall Failure in Commercial Aluminium Alloys AA7010

    Mohd Nor, M. K.; Ma'at, N.; Ho, C. S.

    2018-04-01

    This paper presents a finite strain constitutive model to predict the complex elastoplastic deformation behaviour, involving very high pressures and shockwaves, in orthotropic materials such as aluminium alloys. A previously published constitutive model is used as a reference to start the development in this work. The proposed formulation, which uses a new definition of the Mandel stress tensor to define Hill's yield criterion and a new shock equation of state (EOS) for the generalised orthotropic pressure, is further enhanced with the Grady spall failure model to closely predict shockwave propagation and spall failure in the chosen commercial aluminium alloy. This hyperelastic-plastic constitutive model is implemented as a new material model in UTHM's version of the Lawrence Livermore National Laboratory (LLNL) DYNA3D code, named Material Type 92 (Mat92). The implementation of the new EOS for the generalised orthotropic pressure, including the spall failure, is also discussed in this paper. The capability of the proposed constitutive model to capture the complex behaviour of the selected material is validated against a range of plate impact test data at 234, 450 and 895 m/s impact velocities.

  6. Simulations of stress evolution and the current density scaling of electromigration-induced failure times in pure and alloyed interconnects

    Park, Young-Joon; Andleigh, Vaibhav K.; Thompson, Carl V.

    1999-04-01

    An electromigration model is developed to simulate the reliability of Al and Al-Cu interconnects. A polynomial expression for the free energy of solution by Murray [Int. Met. Rev. 30, 211 (1985)] was used to calculate the chemical potential for Al and Cu while the diffusivities were defined based on a Cu-trapping model by Rosenberg [J. Vac. Sci. Technol. 9, 263 (1972)]. The effects of Cu on stress evolution and lifetime were investigated in all-bamboo and near-bamboo stud-to-stud structures. In addition, the significance of the effect of mechanical stress on the diffusivity of both Al and Cu was determined in all-bamboo and near-bamboo lines. The void nucleation and growth process was simulated in 200 μm, stud-to-stud lines. Current density scaling behavior for void-nucleation-limited failure and void-growth-limited failure modes was simulated in long, stud-to-stud lines. Current density exponents of both n=2 for void nucleation and n=1 for void growth failure modes were found in both pure Al and Al-Cu lines. Limitations of the most widely used current density scaling law (Black's equation) in the analysis of the reliability of stud-to-stud lines are discussed. By modifying the input materials properties used in this model (when they are known), this model can be adapted to predict the reliability of other interconnect materials such as pure Cu and Cu alloys.
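
    The current density scaling law discussed above (Black's equation) can be written explicitly; the simulated exponents correspond to the two failure modes named in the record.

    ```latex
    % Black's equation: median time-to-failure t_f as a function of current
    % density j and absolute temperature T (A: process constant, E_a: activation
    % energy, k_B: Boltzmann constant). The exponents found above correspond to
    % n = 2 (void-nucleation-limited) and n = 1 (void-growth-limited) failure.
    t_f \;=\; A\, j^{-n} \exp\!\left(\frac{E_a}{k_B T}\right)
    ```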

  7. Discounting Models for Outcomes over Continuous Time

    Harvey, Charles M.; Østerdal, Lars Peter

    Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time.
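
    One concrete instance of the representation described above, with an exponential discounting function shown purely as an example:

    ```latex
    % Value of an outcome stream x(t) on [0, T]: an integral of a discounting
    % function \delta(t) times a scale v applied to the outcome at each instant.
    V(x) \;=\; \int_{0}^{T} \delta(t)\, v\bigl(x(t)\bigr)\, \mathrm{d}t,
    \qquad \text{e.g. } \delta(t) = e^{-\rho t}
    ```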

  8. Adjustment and Characterization of an Original Model of Chronic Ischemic Heart Failure in Pig

    Laurent Barandon

    2010-01-01

    Full Text Available We present and characterize an original experimental model to create chronic ischemic heart failure in the pig. Two ameroid constrictors were placed around the LAD and the circumflex artery. Two months after surgery, the pigs presented poor LV function associated with severe mitral valve insufficiency. Echocardiographic analysis showed substantial anomalies in radial and circumferential deformations, on both the anterior and lateral surfaces of the heart. These anomalies in function were coupled with anomalies of perfusion observed by echocardiography after injection of contrast medium. No myocardial infarction was demonstrated by histological analysis. Our findings suggest that we were able to create and stabilize a chronic ischemic heart failure model in the pig. This model represents a useful tool for the development of new medical or surgical treatments in this field.

  9. Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2016-01-01

    The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.

  10. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Chang-bae Moon

    2011-01-01

    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-inhabited by humans. Reliability of localization is highly dependent upon the developer's experience, because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.

  11. Observation Likelihood Model Design and Failure Recovery Scheme Toward Reliable Localization of Mobile Robots

    Chang-bae Moon

    2010-12-01

    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-inhabited by humans. Reliability of localization is highly dependent upon the developer's experience, because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.

  12. Semi-Markov models control of restorable systems with latent failures

    Obzherin, Yuriy E

    2015-01-01

    Featuring previously unpublished results, Semi-Markov Models: Control of Restorable Systems with Latent Failures describes valuable methodology which can be used by readers to build mathematical models of a wide class of systems for various applications. In particular, this information can be applied to build models of reliability, queuing systems, and technical control. Beginning with a brief introduction to the area, the book covers semi-Markov models for different control strategies in one-component systems, defining their stationary characteristics of reliability and efficiency, and uti

  13. Estimation of mean time to failure of a near surface radioactive waste repository for PWR power stations

    Aguiar, Lais A. de; Frutuoso e Melo, P.F.; Alvim, Antonio C.M.

    2007-01-01

    This work aims at estimating the mean time to failure (MTTF) of each barrier of a near-surface radioactive waste repository. It is assumed that surface water infiltrates through the barriers, reaching the matrix where radionuclides are contained and releasing them to the environment. The radioactive wastes considered in this work are low- and medium-level wastes (produced during operation of a PWR nuclear power station) fixed in cement. The repository consists of 6 saturated porous media barriers (top cover, upper layer, packages, basis, repository walls and geosphere). It has been verified that the MTTF of each barrier increases for radionuclides having a higher retardation factor (Fr), and also that the MTTF for concrete is largest for nickel, while for the geosphere, plutonium gives the largest MTTF. (author)

  14. Cap plasticity models and compactive and dilatant pre-failure deformation

    Fossum, Arlo F.; Fredrich, Joanne T.

    2000-01-01

    At low mean stresses, porous geomaterials fail by shear localization, and at higher mean stresses, they undergo strain-hardening behavior. Cap plasticity models attempt to model this behavior using a pressure-dependent shear yield and/or shear limit-state envelope with a hardening or hardening/softening elliptical end cap to define pore collapse. While these traditional models describe compactive yield and ultimate shear failure, difficulties arise when the behavior involves a transition from compactive to dilatant deformation that occurs before the shear failure or limit-state shear stress is reached. In this work, a continuous surface cap plasticity model is used to predict compactive and dilatant pre-failure deformation. During loading the stress point can pass freely through the critical state point separating compactive from dilatant deformation. The predicted volumetric strain goes from compactive to dilatant without the use of a non-associated flow rule. The new model is stable in that Drucker's stability postulates are satisfied. The study has applications to several geosystems of current engineering interest (oil and gas reservoirs, nuclear waste repositories, buried targets, and depleted reservoirs for possible use for subsurface sequestration of greenhouse gases)

  15. Promoting success or preventing failure: cultural differences in motivation by positive and negative role models.

    Lockwood, Penelope; Marshall, Tara C; Sadler, Pamela

    2005-03-01

    In two studies, cross-cultural differences in reactions to positive and negative role models were examined. The authors predicted that individuals from collectivistic cultures, who have a stronger prevention orientation, would be most motivated by negative role models, who highlight a strategy of avoiding failure; individuals from individualistic cultures, who have a stronger promotion focus, would be most motivated by positive role models, who highlight a strategy of pursuing success. In Study 1, the authors examined participants' reported preferences for positive and negative role models. Asian Canadian participants reported finding negative models more motivating than did European Canadians; self-construals and regulatory focus mediated cultural differences in reactions to role models. In Study 2, the authors examined the impact of role models on the academic motivation of Asian Canadian and European Canadian participants. Asian Canadians were motivated only by a negative model, and European Canadians were motivated only by a positive model.

  16. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Anggraeni, Novia Antika

    2015-04-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing the increase of volcanic activity, such as seismicity degree, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM). FFM is a predictive method to determine the time of volcanic eruption which was introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 - 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, with the FFM graphical technique approach using simple linear regression. The data quality control used to increase the time precision employs the correlation coefficient of the inverse seismic energy rate versus time. From the results of the graph analysis, the precision of the predicted time relative to the real time of eruption varies from -2.86 to 5.49 days.

  17. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Anggraeni, Novia Antika

    2015-01-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing the increase of volcanic activity, such as seismicity degree, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM). FFM is a predictive method to determine the time of volcanic eruption which was introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 – 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, with the FFM graphical technique approach using simple linear regression. The data quality control used to increase the time precision employs the correlation coefficient of the inverse seismic energy rate versus time. From the results of the graph analysis, the precision of the predicted time relative to the real time of eruption varies from −2.86 to 5.49 days.

  18. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Anggraeni, Novia Antika, E-mail: novia.antika.a@gmail.com [Geophysics Sub-department, Physics Department, Faculty of Mathematic and Natural Science, Universitas Gadjah Mada. BLS 21 Yogyakarta 55281 (Indonesia)

    2015-04-24

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing the increase of volcanic activity, such as seismicity degree, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM). FFM is a predictive method to determine the time of volcanic eruption which was introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 – 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, with the FFM graphical technique approach using simple linear regression. The data quality control used to increase the time precision employs the correlation coefficient of the inverse seismic energy rate versus time. From the results of the graph analysis, the precision of the predicted time relative to the real time of eruption varies from −2.86 to 5.49 days.
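
    The graphical FFM technique described in the three records above reduces to fitting a straight line to the inverse activity rate versus time and reading off its zero crossing. A minimal sketch with synthetic numbers (the study itself used Merapi seismic energy rates):

    ```python
    # FFM graphical technique: linear fit of 1/rate vs. time; the predicted
    # failure (eruption) time is where the fitted inverse rate reaches zero.
    import numpy as np

    t = np.array([0., 5., 10., 15., 20., 25.])        # observation times [days]
    rate = np.array([0.8, 1.0, 1.4, 2.1, 3.4, 6.5])   # seismic energy rate (arb. units)

    inv = 1.0 / rate
    slope, intercept = np.polyfit(t, inv, 1)          # simple linear regression
    r = np.corrcoef(t, inv)[0, 1]                     # quality control, as in the study

    t_failure = -intercept / slope                    # zero crossing of the fitted line
    print(f"predicted eruption at t = {t_failure:.1f} days (r = {r:.3f})")
    ```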

  19. On discrete models of space-time

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method, we come to the conclusion that the results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  20. A phenomenological variational multiscale constitutive model for intergranular failure in nanocrystalline materials

    Siddiq, A.

    2013-09-01

    We present a variational multiscale constitutive model that accounts for intergranular failure in nanocrystalline fcc metals due to void growth and coalescence in the grain boundary region. Following previous work by the authors, a nanocrystalline material is modeled as a two-phase material consisting of a grain interior phase and a grain boundary affected zone (GBAZ). A crystal plasticity model that accounts for the transition from partial dislocation to full dislocation mediated plasticity is used for the grain interior. An isotropic porous plasticity model, extended to account for failure due to void coalescence, was used for the GBAZ. The extended model contains all the deformation phases, i.e. elastic deformation, plastic deformation including deviatoric and volumetric plasticity (void growth), followed by damage initiation and evolution due to void coalescence. Parametric studies have been performed to assess the model's dependence on the different input parameters. The model is then validated against uniaxial loading experiments for different materials. Lastly we show the model's ability to predict the damage and fracture of a dog-bone shaped specimen as observed experimentally. © 2013 Elsevier B.V.

  1. Modeling seasonality in bimonthly time series

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    A recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and

  2. Development of simplified 1D and 2D models for studying a PWR lower head failure under severe accident conditions

    Koundy, V.; Dupas, J.; Bonneville, H.; Cormeau, I.

    2005-01-01

    In the study of severe accidents of nuclear pressurized water reactors, the scenarios that describe the relocation of significant quantities of liquid corium at the bottom of the lower head are investigated from the mechanical point of view. In these scenarios, the risk of a breach and the possibility of a large quantity of corium being released from the lower head exist. This may lead to direct heating of the containment or outer vessel steam explosion. These issues are important due to their early containment failure potential. Since the TMI-2 accident, many theoretical and experimental investigations, relating to lower head mechanical behaviour under severe thermo-mechanical loading in the event of a core meltdown accident have been performed. IRSN participated actively in the one-fifth scale USNRC/SNL LHF and OECD LHF (OLHF) programs. Within the framework of these programs, two simplified models were developed by IRSN: the first is a simplified 1D approach based on the theory of pressurized spherical shells and the second is a simplified 2D model based on the theory of shells of revolution under symmetric loading. The mathematical formulation of both models and the creep constitutive equations used are presented in detail in this paper. The corresponding models were used to interpret some of the OLHF program experiments and the calculation results were quite consistent with the experimental data. The two simplified models have been used to simulate the thermo-mechanical behaviour of a 900 MWe pressurized water reactor lower head under severe accident conditions leading to failure. The average transient heat flux produced by the corium relocated at the bottom of the lower head has been determined using the IRSN HARAR code. Two different methods, both taking into account the ablation of the internal surface, are used to determine the temperature profiles across the lower head wall and their effect on the time to failure is discussed. Using these simplified models

  3. Remaining useful life estimation for deteriorating systems with time-varying operational conditions and condition-specific failure zones

    Li Qi

    2016-06-01

    Full Text Available Dynamic time-varying operational conditions pose a great challenge to the estimation of system remaining useful life (RUL) for deteriorating systems. This paper presents a method based on probabilistic and stochastic approaches to estimate system RUL for periodically monitored degradation processes with dynamic time-varying operational conditions and condition-specific failure zones. The method assumes that the degradation rate is influenced by the specific operational condition and, moreover, that the transition between different operational conditions plays the most important role in affecting the degradation process. These operational conditions are assumed to evolve as a discrete-time Markov chain (DTMC). The failure thresholds are also determined by specific operational conditions and described as different failure zones. The 2008 PHM Conference Challenge Data is utilized to illustrate our method; it contains mass sensory signals related to the degradation process of a commercial turbofan engine. The RUL estimation method using the measurements of a single sensor was first developed, and then multiple vital sensors were selected through a particular optimization procedure in order to increase the prediction accuracy. The effectiveness and advantages of the proposed method are presented in a comparison with existing methods for the same dataset.
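
    A Monte Carlo sketch of the setting described above: operational conditions evolve as a discrete-time Markov chain, and each condition carries its own degradation rate and failure threshold ("condition-specific failure zones"). All transition probabilities, rates and thresholds below are invented for illustration.

    ```python
    # RUL estimation by simulating a DTMC over operational conditions with
    # condition-specific degradation rates and failure thresholds.
    import numpy as np

    P = np.array([[0.90, 0.10],          # DTMC transition matrix, two conditions
                  [0.20, 0.80]])
    rate = np.array([0.8, 2.0])          # mean degradation increment per step, per condition
    threshold = np.array([100.0, 85.0])  # condition-specific failure thresholds

    rng = np.random.default_rng(2)

    def sample_rul(x0=40.0, c0=0, max_steps=10_000):
        x, c = x0, c0
        for k in range(1, max_steps):
            c = rng.choice(2, p=P[c])                      # next operational condition
            x += rng.gamma(shape=4.0, scale=rate[c]/4.0)   # positive increment, mean rate[c]
            if x >= threshold[c]:                          # failure zone of condition c
                return k
        return max_steps

    ruls = [sample_rul() for _ in range(2000)]
    print("RUL estimate (mean, 10th pct):", np.mean(ruls), np.percentile(ruls, 10))
    ```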

  4. Survey of time preference, delay discounting models

    John R. Doyle

    2013-03-01

    Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson; Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade
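
    For concreteness, two of the one-parameter families surveyed above, written as discount functions mapping a delay t to a weight in (0, 1]; parameter values are illustrative only.

    ```python
    # Exponential vs. (Mazur-type) hyperbolic delay discounting.
    import numpy as np

    def exponential(t, k):
        return np.exp(-k * t)          # declines at a constant proportional rate

    def hyperbolic(t, k):
        return 1.0 / (1.0 + k * t)     # declines faster early, slower late

    t = np.array([0.0, 1.0, 5.0, 10.0])
    print(exponential(t, k=0.1))
    print(hyperbolic(t, k=0.1))
    ```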

  5. Modelling Dynamic Behaviour and Spall Failure of Aluminium Alloy AA7010

    Ma'at, N.; Nor, M. K. Mohd; Ismail, A. E.; Kamarudin, K. A.; Jamian, S.; Ibrahim, M. N.; Awang, M. K.

    2017-10-01

    A finite strain constitutive model to predict the dynamic deformation behaviour of Aluminium Alloy 7010, including shockwaves and spall failure, is developed in this work. The important feature of this new hyperelastic-plastic constitutive formulation is a new Mandel stress tensor formulated using a new generalized orthotropic pressure. This tensor is combined with a shock equation of state (EOS) and the Grady spall failure model. Hill's yield criterion is adopted to characterize plastic orthotropy by means of evolving structural tensors defined in the isoclinic configuration. The material model is decomposed into elastic and plastic parts. The elastic anisotropy is taken into account through the new stress tensor decomposition of a generalized orthotropic pressure. Plastic anisotropy is considered through the yield surface and an isotropic hardening defined in a unique alignment of the deviatoric plane within the stress space. To test its ability to describe shockwave propagation and spall failure, the new material model was implemented into UTHM's version of the LLNL-DYNA3D code. The capability of this new constitutive model was compared against published experimental data from plate impact tests at 234 m/s, 450 m/s and 895 m/s impact velocities. A good agreement is obtained between experiment and simulation in each test.

  6. Computer-assisted imaging algorithms facilitate histomorphometric quantification of kidney damage in rodent renal failure models

    Marcin Klapczynski

    2012-01-01

    Full Text Available Introduction: Surgical 5/6 nephrectomy and adenine-induced kidney failure in rats are frequently used models of progressive renal failure. In both models, rats develop significant morphological changes in the kidneys, and quantification of these changes can be used to measure the efficacy of prophylactic or therapeutic approaches. In this study, the Aperio Genie Pattern Recognition technology, along with the Positive Pixel Count, Nuclear and Rare Event algorithms, was used to quantify histological changes in both rat renal failure models. Methods: Analysis was performed on digitized slides of whole kidney sagittal sections stained with either hematoxylin and eosin or immunohistochemistry with an anti-nestin antibody to identify glomeruli, regenerating tubular epithelium, and tubulointerstitial myofibroblasts. An anti-polymorphonuclear neutrophil (PMN) antibody was also used to investigate neutrophil tissue infiltration. Results: Image analysis allowed for rapid and accurate quantification of relevant histopathologic changes such as increased cellularity and expansion of glomeruli, renal tubular dilatation and degeneration, tissue inflammation, and mineral aggregation. The algorithms provided reliable and consistent results in both control and experimental groups and presented a quantifiable degree of damage associated with each model. Conclusion: These algorithms represent useful tools for the uniform and reproducible characterization of common histomorphologic features of renal injury in rats.

  7. Slope failures and timing of turbidity flows north of Puerto Rico

    ten Brink, Uri S.; Chaytor, Jason D.

    2014-01-01

    The submerged carbonate platform north of Puerto Rico terminates in a high (3,000–4,000 m) and in places steep (>45°) slope characterized by numerous landslide scarps, including two 30–50 km-wide amphitheater-shaped features. The origin of the steep platform edge and the amphitheaters has been attributed to: (1) catastrophic failure, or (2) localized failures and progressive erosion. Determining which of the two mechanisms has shaped the platform edge is critically important in understanding landslide-generated tsunami hazards in the region. Multibeam bathymetry, seismic reflection profiles, and a suite of sediment cores from the Puerto Rico Trench and the slope between the trench and the platform edge were used to test these two hypotheses. Deposits within the trench axis and at the base of the slope are predominantly composed of sandy carbonate turbidites and pelagic sediment with inter-fingering of chaotic debris units. Regionally correlated turbidites within the upper 10 m of the trench sediments were dated to between ∼25 and 22 kyr and ∼18–19 kyr for the penultimate and most recent events, respectively. Deposits on the slope are laterally discontinuous and vary from thin layers of fragmented carbonate platform material to thick pelagic layers. Large debris blocks or lobes are absent within the near-surface deposits at the trench axis and the base-of-slope basins. Progressive small-scale scalloping and self-erosion of the carbonate platform and underlying stratigraphy appear to be the most likely mechanism for recent development of the amphitheaters. These smaller-scale failures may lead to the generation of tsunamis with local, rather than regional, impact.

  8. Forecasting with nonlinear time series models

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...

  9. Change over time in the effect of grade and ER on risk of distant failure in patients treated with breast-conserving therapy

    Gelman, Rebecca; Nixon, Asa J.; O'Neill, Anne; Harris, Jay R.

    1996-01-01

    Purpose: Most analyses of the effect of patient and tumor characteristics on long-term outcome in breast cancer use the Cox proportional hazards (prohaz) model, which assumes that the hazard rates for any two subsets are proportional (i.e., hazard ratios are constant) over time. We examined whether this assumption is correct for predicting time to distant failure in breast cancer patients treated with breast-conserving therapy and speculated on the biologic implications of these findings. Materials and Methods: Between 1968 and 1986, 1081 patients treated for clinical stage I or II invasive breast cancer with a complete gross excision and ≥60 Gy to the tumor bed had pure infiltrating ductal carcinoma on central pathologic review. 37 patients (3%) were lost to follow-up after 7-181 months. Median follow-up for 694 survivors was 12 years (8-23 yrs.). Time to distant failure was defined as time to regional nodal failure or distant metastases and was not censored for local recurrence, contralateral breast cancer, or death from other causes. We evaluated the following characteristics: histologic grade (modified Bloom-Richardson; 219 grade I, 482 grade II, 380 grade III), estrogen receptor (252 ER neg, 386 ER pos, 443 ER unk), positive axillary nodes (0, 1-3, ≥4; no axillary dissection in 214), adjuvant chemotherapy (in 291 patients), T stage, lymphatic vessel invasion, mononuclear cell response, clinical size in mm, age at diagnosis, and necrosis. Results: A stepwise prohaz model found all the above characteristics except the last three to be significant (all p …). In the early years of follow-up the log hazard ratio for grade III vs. grade II is > 0 (i.e., grade III has a larger risk), but in the following years the log hazard ratio is < 0 (i.e., grade II has a larger risk; see Figure for the estimated log hazard ratio and 95% CI). The tests for non-proportionality of grade II vs. grade I (p=0.08) and ER positive vs. negative (p=0.06) were suggestive, but their log hazard ratios never cross 0 (i.e., no reversal of the direction of risk). Conclusions: Tumor grade clearly
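
    The non-proportionality examined above amounts to letting the Cox log hazard ratio vary with time; a sign change in the coefficient reverses which grade carries the higher risk.

    ```latex
    % Cox model with a time-varying coefficient: under proportional hazards,
    % \beta(t) is a constant \beta; the crossing of the log hazard ratio through
    % zero reported above is exactly a failure of that assumption.
    h(t \mid Z) \;=\; h_0(t)\, \exp\bigl(\beta(t)\, Z\bigr)
    ```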

  10. Bayesian Model Selection under Time Constraints

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between bias of a model and its complexity. However, in practice, the runtime of models is another weight relevant factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue with the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.

  11. Application of mobile blood purification system in the treatment of acute renal failure dog model in the field environment

    Zhi-min ZHANG

    2014-01-01

    Full Text Available Objective: To evaluate the stability, safety and efficacy of a mobile blood purification system in the treatment of an acute renal failure dog model in a field environment. Methods: The acute renal failure model was established in 4 dogs by bilateral nephrectomy, and was thereafter treated with the mobile blood purification system. The evaluation of the functional indices of the mobile blood purification system was performed after a short-time (2 hours) and a conventional (4 hours) dialysis treatment. Results: The mobile blood purification system ran stably in the field environment at a blood flow of 150-180 ml/min, a dialysate velocity of 2000 ml/h, a replacement fluid velocity of 2000 ml/h, and an ultrafiltration rate of 100-200 ml/h. All the functions of the alarm system worked well, including the static upper limit alarm of ultrafiltration pressure (>100 mmHg), the upper limit alarm of ambulatory arterial pressure (>400 mmHg), the upper limit alarm of ambulatory venous pressure (>400 mmHg), the bubble alarm of vascular access, the bubble alarm during the infusion of solutions, the pressure alarm at the substitution pump segment and the blood leak alarm. The vital signs of the 4 dogs with acute renal failure remained stable during the treatment. After the treatment, a remarkable decrease was seen in the levels of serum urea nitrogen, creatinine and serum potassium (P<0.05). Conclusions: The mobile blood purification system runs normally even in a field environment. It is a flexible and portable device with a great performance in safety and stability in the treatment of acute renal failure. DOI: 10.11855/j.issn.0577-7402.2013.12.15

  12. Constitutive modeling of void-growth-based tensile ductile failures with stress triaxiality effects

    Mora Cordova, Angel

    2014-07-01

    In most metals and alloys, the evolution of voids has been generally recognized as the basic failure mechanism. Furthermore, stress triaxiality has been found to influence void growth dramatically. Besides strain intensity, it is understood to be the most important factor that controls the initiation of ductile fracture. We include sensitivity of stress triaxiality in a variational porous plasticity model, which was originally derived from hydrostatic expansion. Under loading conditions rather than hydrostatic deformation, we allow the critical pressure for voids to be exceeded so that the growth due to plasticity becomes dependent on the stress triaxiality. The limitations of the spherical void growth assumption are investigated. Our improved constitutive model is validated through good agreements with experimental data. Its capacity for reproducing realistic failure patterns is also indicated by a numerical simulation of a compact tensile (CT) test. © 2013 Elsevier Inc.

  13. A Novel Load Capacity Model with a Tunable Proportion of Load Redistribution against Cascading Failures

    Zhen-Hao Zhang

    2018-01-01

    Full Text Available Defence against cascading failures is of great theoretical and practical significance. A novel load capacity model with a tunable proportion is proposed. We take degree and clustering coefficient into account to redistribute the loads of broken nodes. The redistribution is local: the loads of broken nodes are allocated to their nearest neighbours. Our model has been applied to artificial networks as well as two real networks. Simulation results show that networks become more vulnerable and more sensitive to intentional attacks as the average degree decreases. In addition, the critical threshold from the collapsed to the intact state is affected by the tunable parameter. We can adjust the tunable parameter to obtain the optimal critical threshold and make the systems more robust against cascading failures.
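
    A sketch of a local load-redistribution rule of the kind described above: a broken node's load is shared among its surviving nearest neighbours, with weights built from degree and clustering coefficient and mixed by a tunable parameter theta. The exact weighting in the paper may differ; this only illustrates the mechanism.

    ```python
    # Local, weighted load redistribution after a node failure.
    import networkx as nx

    def redistribute(G, load, broken, theta=0.5):
        nbrs = [n for n in G.neighbors(broken) if n not in G.graph["failed"]]
        if not nbrs:
            return
        deg = {n: G.degree(n) for n in nbrs}
        clu = nx.clustering(G, nbrs)
        # tunable mix of degree and clustering coefficient (hypothetical form)
        w = {n: theta * deg[n] + (1 - theta) * (clu[n] + 1e-9) for n in nbrs}
        total = sum(w.values())
        for n in nbrs:
            load[n] += load[broken] * w[n] / total   # local redistribution

    G = nx.barabasi_albert_graph(200, 3)
    G.graph["failed"] = {0}
    load = {n: 1.0 + G.degree(n) for n in G}         # initial load tied to degree
    redistribute(G, load, broken=0, theta=0.7)
    ```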

  14. Non-integrated electricity suppliers: the failure of an organisational model

    Boroumand, R.H.

    2009-01-01

    In the reference model of market liberalization, the reference business model is the pure electricity retailer. But bankruptcy, merger or vertical integration are indicative of the failure of this organizational model and its incapacity to manage efficiently the combination of sourcing and market risks in a setting of fierce price competition. Because of the structural dimension of electricity's volume risk, a supplier's level of risk exposure is unknown ex ante and will only be revealed ex post when consumption is known. Sourcing and selling portfolios of hedging contracts are incomplete risk management tools. Consequently, physical hedging is an essential complement to portfolios of contracts to overcome the pure supplier's curse. (author)

  15. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    EI-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. The presented paper deals with the analysis of some statistical distributions used in reliability, in order to reach the best-fitting distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best fit for modeling the failure rate.
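
    An illustrative version of the distribution-fitting step: fit Weibull and exponential models to a set of failure times and compare log-likelihoods. The data below are synthetic; the study itself used circuit parameters from the Relex 2009 program.

    ```python
    # Fit Weibull and exponential distributions to times-to-failure and
    # compare their log-likelihoods as a simple goodness-of-fit indicator.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    ttf = rng.weibull(1.8, size=200) * 1000.0                  # synthetic failure times [h]

    shape, loc, scale = stats.weibull_min.fit(ttf, floc=0.0)   # 2-parameter Weibull MLE
    rate = 1.0 / ttf.mean()                                    # exponential MLE

    ll_weib = stats.weibull_min.logpdf(ttf, shape, loc, scale).sum()
    ll_expo = stats.expon.logpdf(ttf, scale=1.0/rate).sum()
    print(f"Weibull shape {shape:.2f}, scale {scale:.0f}; "
          f"logL Weibull {ll_weib:.1f} vs exponential {ll_expo:.1f}")
    ```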

  16. The modelling and control of failure in bi-material ceramic laminates

    Phillipps, A.J.; Howard, S.J.; Clegg, W.J.; Clyne, T.W.

    1993-01-01

    Recent experimental and theoretical work on simple, single phase, laminated systems has indicated that failure resistant ceramics can be produced using an elegant method that avoids many of the problems and limitations of comparable fibrous ceramic composites. Theoretical work on these laminated systems has shown good agreement with experiment and simulated the effects of material properties and laminate structure on the composite performance. This work has provided guidelines for optimised laminate performance. In the current study, theoretical work has been simply extended to predict the behaviour of bi-material laminates with alternating layers of weak and strong material with different stiffnesses. Expressions for the strain energy release rates of internal advancing cracks are derived and combined with existing criteria to predict the failure behaviour of these laminates during bending. The modelling indicates three modes of failure dictated by the relative proportions, thicknesses and interfacial properties of the weak and strong phases. A critical percentage of strong phase is necessary to improve failure behaviour, in an identical argument to that for fibre composites. Incorporation of compliant layers is also investigated and implications for laminate design discussed. (orig.)

  17. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating, from failure-on-demand data, the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, the simple prior matching moments method is often superior (e.g., smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best.
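
    For method (1), matching the moments of the beta prior to the empirical moments of the observed demand-failure fractions gives closed-form estimates. A minimal sketch of a simple unweighted variant (the report's exact weighting conventions are not given in the abstract):

```python
import numpy as np

def beta_matching_moments(failures, demands):
    """Estimate beta(a, b) parameters by equating the sample mean and
    variance of the observed failure fractions k_i/n_i to the mean and
    variance of a beta distribution."""
    p_hat = np.asarray(failures) / np.asarray(demands)
    m, v = p_hat.mean(), p_hat.var(ddof=1)
    if v <= 0 or v >= m * (1 - m):
        raise ValueError("sample moments incompatible with a beta prior")
    common = m * (1 - m) / v - 1.0       # equals a + b
    return m * common, (1 - m) * common  # (a, b)

# Example: failures observed over varying numbers of demands.
a, b = beta_matching_moments([0, 1, 0, 2, 1], [50, 60, 45, 80, 55])
print(f"a = {a:.3f}, b = {b:.3f}, prior mean = {a/(a+b):.4f}")
```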

  18. Conceptual Modeling of Time-Varying Information

    Gregersen, Heidi; Jensen, Christian S.

    2004-01-01

    A wide range of database applications manage information that varies over time, and many of the underlying database schemas were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are important but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  19. Modeling of Volatility with Non-linear Time Series Model

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear model classes are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.
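
    A minimal simulation of a two-regime TAR model with an asymmetric ARCH error term, just to make the combined structure concrete; all coefficients below are invented for illustration and are not the authors' estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tar_aarch(n=1000, threshold=0.0):
    """Two-regime TAR(1) mean equation with an asymmetric ARCH(1) error:
    negative past shocks raise the conditional variance more strongly."""
    y = np.zeros(n)
    eps_prev = 0.0
    for t in range(1, n):
        # Threshold autoregression: different AR(1) coefficient per regime.
        phi = 0.6 if y[t - 1] <= threshold else -0.3
        # Asymmetric ARCH(1): larger slope after negative shocks.
        asym = 0.25 if eps_prev < 0 else 0.10
        sigma2 = 0.05 + asym * eps_prev ** 2
        eps = rng.normal(0.0, np.sqrt(sigma2))
        y[t] = phi * y[t - 1] + eps
        eps_prev = eps
    return y

series = simulate_tar_aarch()
print(series[:5])
```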

  20. Corrosion induced failure analysis of subsea pipelines

    Yang, Yongsheng; Khan, Faisal; Thodi, Premkumar; Abbassi, Rouzbeh

    2017-01-01

    Pipeline corrosion is one of the main causes of subsea pipeline failure. It is necessary to monitor and analyze pipeline condition to effectively predict likely failure. This paper presents an approach to analyze the observed abnormal events to assess the condition of subsea pipelines. First, it focuses on establishing a systematic corrosion failure model by Bow-Tie (BT) analysis, and subsequently the BT model is mapped into a Bayesian Network (BN) model. The BN model facilitates the modelling of interdependency of identified corrosion causes, as well as the updating of failure probabilities depending on the arrival of new information. Furthermore, an Object-Oriented Bayesian Network (OOBN) has been developed to better structure the network and to provide an efficient updating algorithm. Based on this OOBN model, probability updating and probability adaptation are performed at regular intervals to estimate the failure probabilities due to corrosion and potential consequences. This results in an interval-based condition assessment of subsea pipeline subjected to corrosion. The estimated failure probabilities would help prioritize action to prevent and control failures. Practical application of the developed model is demonstrated using a case study. - Highlights: • A Bow-Tie (BT) based corrosion failure model linking causation with the potential losses. • A novel Object-Oriented Bayesian Network (OOBN) based corrosion failure risk model. • Probability of failure updating and adaptation with respect to time using OOBN model. • Application of the proposed model to develop and test strategies to minimize failure risk.
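
    The paper's OOBN is built in a dedicated tool; as a toy illustration of the kind of probability updating involved, a two-cause discrete Bayesian network for corrosion failure can be updated by brute-force enumeration (the structure and all numbers below are invented):

```python
import itertools

# Hypothetical two-cause corrosion network: CO2 corrosion (C) and microbial
# corrosion (M) both feed pipeline failure (F). Priors and CPT are illustrative.
p_c, p_m = 0.10, 0.05
p_f = {(True, True): 0.60, (True, False): 0.30,
       (False, True): 0.20, (False, False): 0.01}

def posterior_cause_given_failure():
    """Compute P(C | F) and P(M | F) by enumerating the joint distribution."""
    joint_f = num_c = num_m = 0.0
    for c, m in itertools.product([True, False], repeat=2):
        p = (p_c if c else 1 - p_c) * (p_m if m else 1 - p_m) * p_f[(c, m)]
        joint_f += p
        num_c += p if c else 0.0
        num_m += p if m else 0.0
    return num_c / joint_f, num_m / joint_f

pc_f, pm_f = posterior_cause_given_failure()
print(f"P(CO2 corrosion | failure) = {pc_f:.3f}")
print(f"P(microbial    | failure) = {pm_f:.3f}")
```

    In the paper's interval-based scheme, posteriors of this kind would be recomputed at regular intervals as inspection evidence arrives, which is what the OOBN's updating algorithm makes efficient.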

  1. Probabilistic Failure Analysis of Bone Using a Finite Element Model of Mineral-Collagen Composites

    Dong, X. Neil; Guda, Teja; Millwater, Harry R.; Wang, Xiaodu

    2008-01-01

    Microdamage accumulation is a major pathway for energy dissipation during the post-yield deformation of bone. In this study, a two-dimensional probabilistic finite element model of a mineral-collagen composite was developed to investigate the influence of the tissue and ultrastructural properties of bone on the evolution of microdamage from an initial defect in tension. The probabilistic failure analyses indicated that the microdamage progression would be along the plane of the initial defect...

  2. An investigation into failure of Internet firms: Towards development of a conceptual model

    Jiwat Ram

    2018-02-01

    The last two decades have witnessed exponential growth in internet- and social-media-based commerce in China, and a number of foreign Internet firms have launched businesses there to capitalize on the market opportunities. Surprisingly, despite being successful globally, these firms were not able to remain competitive in China, with the majority of them suffering losses from their failed ventures and ceasing their operations. Despite this ongoing problem, little or no research exists that might explain what is causing these failures. Addressing this gap in knowledge, we build literature-based insights and, through our analysis, we (1) provide a structured understanding of some of the major issues causing failures, (2) identify and categorize factors/sources of failures as internally versus externally driven, and (3) grounded in theory and supplemented by literature evidence, develop hypotheses and a corresponding conceptual model explaining the relationships among these factors/sources and the failure of foreign Internet firms. The proposed model serves as a means by which Information Systems managers, Chief Information Officers, and technology and business managers can understand the sources of failures and conduct an introspective exercise within the firm to plug gaps before launching a business in a foreign country. Academically, the study develops a theoretically grounded, comprehensive model to advance knowledge in a scantly researched area, the challenges faced by foreign Internet firms, and to help in the development of strategies to mitigate these problems. The proposed model also adds to current knowledge on information systems socio-technical theory and the comparative theory of competitive advantage.

  3. Constitutive model with time-dependent deformations

    Krogsbøll, Anette

    1998-01-01

    ...are common in time as well as size. This problem is addressed by means of a new constitutive model for soils, which is able to describe the behavior of soils at different deformation rates. The model defines time-dependent and stress-related deformations separately; they are related to each other and they occur... Of particular interest was the difference in time scale between the geological process of deposition (millions of years) and the laboratory measurements of mechanical properties (minutes or hours). In addition, the time scale relevant to the production history of the oil field (days or years) was also of interest.

  4. The Process-Oriented Simulation (POS) model for common cause failures: recent progress

    Berg, H.P.; Goertz, R.; Schimetschka, E.; Kesten, J.

    2006-01-01

    A common-cause failure (CCF) model based on stochastic simulation has been developed to complement the established approaches and to overcome some of their shortcomings. Reflecting the model's proximity to the CCF process, it was called the Process Oriented Simulation (POS) model. In recent years, progress has been made in rendering the POS model fit for practical application, comprising the development of parameter estimates and a number of test applications in areas where results were already available (especially from CCF benchmarks), so that comparison can provide insight into the strong and weak points of the different approaches. In this paper, a detailed description of the POS model is provided, together with the approach to parameter estimation and representative test applications. It is concluded that the POS model has a number of strengths, especially its ability to provide reasonable extrapolation to CCF groups with high degrees of redundancy, and thus considerable potential to complement the insights obtained from existing modeling. (orig.)
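
    The POS model's internals are not spelled out in the abstract; the following is only a generic stochastic simulation of common-cause shocks in a redundant group (a binomial shock model, not the POS formulation), to illustrate what simulation-based CCF estimation looks like for higher degrees of redundancy:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ccf(n_components=4, lam_ind=1e-4, lam_shock=1e-4,
                 p_shock=0.5, t_mission=1000.0, n_runs=100_000):
    """Monte Carlo estimate of the probability that all components of a
    redundant group fail within a mission, combining independent failures
    with common-cause shocks that kill each component with prob. p_shock."""
    failures_all = 0
    for _ in range(n_runs):
        failed = rng.random(n_components) < 1 - np.exp(-lam_ind * t_mission)
        n_shocks = rng.poisson(lam_shock * t_mission)
        for _ in range(n_shocks):
            failed |= rng.random(n_components) < p_shock
        failures_all += failed.all()
    return failures_all / n_runs

print(f"P(group failure) ≈ {simulate_ccf():.2e}")
```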

  5. Building Chaotic Model From Incomplete Time Series

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from an observed time series, and predictions are made by a global model or by adaptive local models based on the dynamical neighbours found in the reconstructed phase space. In general, the quality of any data-driven model depends on the completeness and quality of the data itself, but complete data availability cannot always be guaranteed, since measurement or data transmission may intermittently fail. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. The imputing methods use interpolation (weighted sums of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) to estimate the missing values; after imputation, phase space reconstruction and chaotic model prediction proceed as standard. The non-imputing methods reconstruct the time-delayed phase space directly from the observed time series with missing values, which yields non-continuous trajectories, but local model predictions can still be made from the dynamical neighbours reconstructed from the non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port, for which an hourly surge time series is available for 1990-1996. To measure the performance of the proposed methods, a synthetic time series with missing values, generated by applying a particular random variable to the original (complete) time series, is utilized. Two main performance measures are used in this work: (1) error measures between the actual...
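
    A compact sketch of the core step, time-delay phase space reconstruction, with a simple gap-filling option; linear interpolation stands in here for the paper's richer imputation methods, and the embedding parameters are arbitrary:

```python
import numpy as np

def embed(series, dim=3, tau=2, impute=True):
    """Reconstruct a time-delayed phase space from a 1-D series that may
    contain NaNs. If impute, fill gaps by linear interpolation first;
    otherwise drop embedding vectors containing missing values."""
    x = np.asarray(series, dtype=float)
    if impute:
        idx = np.arange(len(x))
        ok = ~np.isnan(x)
        x = np.interp(idx, idx[ok], x[ok])
    n = len(x) - (dim - 1) * tau
    vectors = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    if not impute:
        vectors = vectors[~np.isnan(vectors).any(axis=1)]
    return vectors

signal = np.sin(np.linspace(0, 20, 200))
signal[50:60] = np.nan                      # simulate a transmission outage
print(embed(signal).shape, embed(signal, impute=False).shape)
```

    The non-imputing variant simply loses the embedding vectors that touch a gap, which mirrors the paper's non-continuous trajectories: prediction can still proceed from the dynamical neighbours that remain.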

  6. Modeling of Failure Prediction Bayesian Network with Divide-and-Conquer Principle

    Zhiqiang Cai

    2014-01-01

    For system failure prediction, automatic modeling from historical failure datasets is one of the challenges in practical engineering fields. In this paper, an effective algorithm is proposed to build a failure prediction Bayesian network (FPBN) model with data mining technology. First, the concept of the FPBN is introduced to describe the states of components and the system and the cause-effect relationships among them; the types of network nodes, the directions of network edges, and the conditional probability distributions (CPDs) of nodes in the FPBN are discussed in detail. According to the characteristics of nodes and edges in the FPBN, a divide-and-conquer algorithm (FPBN-DC) is introduced to build the best network structures for the different types of nodes separately. The CPDs of the nodes are then calculated by maximum likelihood estimation based on the built network. Finally, a simulation study of a helicopter convertor model is carried out to demonstrate the application of FPBN-DC. According to the simulation results, the FPBN-DC algorithm achieves better fitness values with fewer iterations, verifying its effectiveness and efficiency compared with the traditional algorithm.
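
    The maximum likelihood step for discrete CPDs reduces to conditional frequency counting; a minimal sketch for one node given its parents (the add-one smoothing option and all variable names are assumptions of ours, not the paper's):

```python
from collections import Counter
from itertools import product

def mle_cpd(records, child, parents, states, alpha=0.0):
    """Estimate P(child | parents) from a list of dict records by
    conditional relative frequencies; alpha > 0 adds Laplace smoothing."""
    counts = Counter((tuple(r[p] for p in parents), r[child]) for r in records)
    cpd = {}
    for pa in product(*(states[p] for p in parents)):
        total = sum(counts[(pa, c)] + alpha for c in states[child])
        cpd[pa] = {c: (counts[(pa, c)] + alpha) / total for c in states[child]}
    return cpd

records = [{"pump": "ok", "valve": "ok", "system": "up"},
           {"pump": "failed", "valve": "ok", "system": "down"},
           {"pump": "ok", "valve": "failed", "system": "down"},
           {"pump": "ok", "valve": "ok", "system": "up"}]
states = {"pump": ["ok", "failed"], "valve": ["ok", "failed"],
          "system": ["up", "down"]}
print(mle_cpd(records, "system", ["pump", "valve"], states, alpha=1.0))
```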

  7. Matrix Failure Modes and Effects Analysis as a Knowledge Base for a Real Time Automated Diagnosis Expert System

    Herrin, Stephanie; Iverson, David; Spukovska, Lilly; Souza, Kenneth A. (Technical Monitor)

    1994-01-01

    Failure Modes and Effects Analyses contain a wealth of information that can be used to create the knowledge base required for building automated diagnostic expert systems. A real-time monitoring and diagnosis expert system based on an actual NASA project's matrix failure modes and effects analysis was developed at NASA Ames Research Center. The system was first used as a case study to monitor the Research Animal Holding Facility (RAHF), a Space Shuttle payload used to house and monitor animals in orbit so that the effects of space flight and microgravity can be studied. The techniques developed for the RAHF monitoring and diagnosis expert system are general enough to be used for monitoring and diagnosis of a variety of other systems that undergo a matrix FMEA. This automated diagnosis system was successfully used on-line and validated on Space Shuttle flight STS-58, mission SLS-2, in October 1993.

  8. Compiling models into real-time systems

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, together with model-compiling techniques. This has been applied to build a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe the artificial intelligence techniques used in building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of these techniques were borrowed from the literature, but we had to modify or invent others which simply did not exist. We also discuss two important problems which are often underestimated in the artificial intelligence literature: size and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, whereas model-based approaches in general present serious drawbacks on this point.

  10. Performance deterioration modeling and optimal preventive maintenance strategy under scheduled servicing subject to mission time

    Li Dawei

    2014-08-01

    Servicing is applied periodically in practice with the aim of restoring the system state and prolonging the lifetime. It is generally seen as an imperfect maintenance action with a major influence on the maintenance strategy. In order to model the maintenance effect of servicing, this study analyzes the deterioration characteristics of a system under scheduled servicing; a deterioration model is then established from the failure mechanism using a compound Poisson process. On the basis of the system damage value and failure mechanism, a failure rate refresh factor is proposed to describe the maintenance effect of servicing. A maintenance strategy is developed which combines the benefits of scheduled servicing and preventive maintenance, and an optimization model is given to determine the optimal servicing period and preventive maintenance time, with the objective of minimizing the expected life-cycle cost per unit time under a constraint on system survival probability for the duration of the mission time. The mission-time constraint controls the ability to accomplish the mission at any time, ensuring high dependability. An example of a water pump rotor subject to scheduled servicing is introduced to illustrate the failure rate refresh factor and the proposed maintenance strategy. Compared with traditional methods, the numerical results show that the failure rate refresh factor describes the maintenance effect of servicing more intuitively and objectively, and demonstrate that this maintenance strategy can prolong the lifetime, reduce the total lifetime maintenance cost and guarantee the dependability of the system.
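
    The paper's cost model is not reproduced in the abstract; the sketch below merely illustrates the shape of such an optimization, minimizing an assumed expected cost rate over the servicing period T_s and preventive maintenance time T_p under a Weibull lifetime with a hazard "refresh" per service and a survival constraint. All functional forms and numbers are invented:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative Weibull lifetime; each service scales accumulated hazard by
# rho, a crude stand-in for the paper's failure rate refresh factor.
beta, eta, rho = 2.5, 1000.0, 0.8
c_service, c_pm, c_failure = 50.0, 500.0, 5000.0
t_mission, s_required = 200.0, 0.95

def survival(t, ts):
    """Survival to time t with services every ts hours (assumed model)."""
    n = int(t // ts)
    hazard = sum(rho ** k * (ts / eta) ** beta for k in range(n))
    hazard += rho ** n * ((t - n * ts) / eta) ** beta
    return np.exp(-hazard)

def cost_rate(x):
    ts, tp = x
    n_services = tp // ts
    expected_failures = 1 - survival(tp, ts)
    total = c_service * n_services + c_pm + c_failure * expected_failures
    return total / tp

cons = {"type": "ineq", "fun": lambda x: survival(t_mission, x[0]) - s_required}
res = minimize(cost_rate, x0=[100.0, 800.0], bounds=[(10, 500), (100, 3000)],
               constraints=cons)
print("optimal (T_s, T_p):", res.x, " cost rate:", res.fun)
```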

  11. A 'cost-effective' probabilistic model to select the dominant factors affecting the variation of the component failure rate

    Kirchsteiger, C.

    1992-11-01

    Within the framework of a Probabilistic Safety Assessment (PSA), the component failure rate λ is a key parameter, in the sense that the study of its behavior gives the essential information for estimating current values as well as trends in the failure probabilities of interest. Since there is an infinite variety of possible underlying factors which might cause changes in λ (e.g. operating time, maintenance practices, component environment, etc.), an 'importance ranking' process for these factors is most desirable in order to prioritize research efforts. To be 'cost-effective', the modeling effort must be small, i.e. it should essentially involve no estimation of additional parameters other than λ. In this paper, such a 'cost-effective' screening process has been developed using a multivariate data analysis technique and various statistical measures. Dominant factors affecting the failure rate of any component of interest can easily be identified, and the appropriateness of current research plans (e.g. on the necessity of performing aging studies) can be validated. (author)
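
    The specific multivariate technique is not named in the abstract; a simple stand-in for this kind of screening is to regress log failure rates on candidate factors and rank the standardized coefficients (the data and factor names below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: log failure rate driven mostly by operating time.
n = 200
factors = {"operating_time": rng.normal(size=n),
           "maintenance_freq": rng.normal(size=n),
           "ambient_temp": rng.normal(size=n)}
X = np.column_stack(list(factors.values()))
log_lambda = (0.8 * factors["operating_time"]
              - 0.1 * factors["ambient_temp"]
              + rng.normal(scale=0.3, size=n))

# Standardize, then rank factors by absolute least-squares coefficient.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
coef, *_ = np.linalg.lstsq(Xs, log_lambda - log_lambda.mean(), rcond=None)
for name, c in sorted(zip(factors, coef), key=lambda t: -abs(t[1])):
    print(f"{name:18s} standardized effect = {c:+.3f}")
```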

  12. An overview of animal models for investigating the pathogenesis and therapeutic strategies in acute hepatic failure

    María Jesús Tuñón; Marcelino Alvarez; Jesús M Culebras; Javier González-Gallego

    2009-01-01

    Acute hepatic failure (AHF) is a severe liver injury accompanied by hepatic encephalopathy which causes multiorgan failure with an extremely high mortality rate, even if intensive care is provided. Management of severe AHF continues to be one of the most challenging problems in clinical medicine. Liver transplantation has been shown to be the most effective therapy, but the procedure is limited by the shortage of donor organs. Although a number of clinical trials testing different liver assist devices are under way, these systems alone have no significant effect on patient survival and are only regarded as a useful approach to bridge patients with AHF to liver transplantation. As a result, reproducible experimental animal models resembling the clinical conditions are still needed. The three main approaches used to create an animal model for AHF are surgical procedures, toxic liver injury and infective procedures. The most common models are based on surgical techniques (total/partial hepatectomy, complete/transient devascularization) or the use of hepatotoxic drugs (acetaminophen, galactosamine, thioacetamide, and others), and very few satisfactory viral models are available. We have recently developed a viral model of AHF by means of the inoculation of rabbits with the virus of rabbit hemorrhagic disease. This model displays biochemical and histological characteristics, and clinical features, that resemble those of human AHF. In the present article an overview is given of the most widely used animal models of AHF, and their main advantages and disadvantages are reviewed.

  13. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high-temperature structural materials, with the long-term goal of maximizing the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the ongoing development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.
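
    As a schematic of the Monte Carlo step described, material damage parameters can be sampled from assumed distributions and propagated through a (here deliberately trivial) stress-versus-strength failure criterion; a real analysis would wrap each sample around a finite element solve, and every number below is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(3)

def failure_probability(n_samples=100_000):
    """Sample strength- and toughness-like parameters from assumed
    lognormal scatter and count samples violating a combined
    stress-based / fracture-mechanics-based criterion (both invented)."""
    applied_stress = 45.0                      # MPa, assumed service load
    stress_intensity = 0.9                     # MPa*sqrt(m), assumed K at a flaw
    strength = rng.lognormal(mean=np.log(60.0), sigma=0.15, size=n_samples)
    toughness = rng.lognormal(mean=np.log(1.2), sigma=0.20, size=n_samples)
    failed = (applied_stress > strength) | (stress_intensity > toughness)
    return failed.mean()

print(f"P(failure) ≈ {failure_probability():.4f}")
```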

  14. Modeling the recurrent failure to thrive in less than two-year children: recurrent events survival analysis.

    Saki Malehi, Amal; Hajizadeh, Ebrahim; Ahmadi, Kambiz; Kholdi, Nahid

    2014-01-01

    This study aims to evaluate recurrent failure to thrive (FTT) events over time. This longitudinal study was conducted from February 2007 to July 2009. The primary outcome was growth failure. The analysis was based on recurrent events methods applied to 1283 children who had experienced FTT several times. Fifty-nine percent of the children had experienced FTT at least once, and 5.3% had experienced it up to four times. The Prentice-Williams-Peterson (PWP) model revealed significant relationships between diarrhea (HR=1.26), respiratory infections (HR=1.25), urinary tract infections (HR=1.51), discontinuation of breast-feeding (HR=1.96), teething (HR=1.18), and initiation age of complementary feeding (HR=1.11) and the hazard rate of the first FTT event. The recurrent nature of FTT is a central issue; taking it into account increases the accuracy of the analysis of the FTT event process and can help identify different risk factors for each FTT recurrence.
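
    A PWP (conditional) model can be approximated with standard Cox software by stratifying on the event number in a counting-process data layout. A hedged sketch using the lifelines library; the toy data frame and all column names are invented, not the study's variables:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Counting-process layout: one row per child per at-risk interval, with
# 'event_number' giving the FTT recurrence stratum.
df = pd.DataFrame({
    "gap_time":     [4.0, 3.0, 5.0, 2.0, 6.0, 4.0, 3.0, 5.0],
    "event":        [1,   1,   0,   1,   0,   1,   1,   0],
    "event_number": [1,   2,   2,   1,   1,   2,   1,   2],
    "diarrhea":     [1,   1,   0,   0,   1,   0,   0,   1],
    "teething":     [0,   1,   0,   1,   1,   0,   1,   0],
})

# Stratifying the baseline hazard by event number yields a PWP gap-time model.
cph = CoxPHFitter()
cph.fit(df, duration_col="gap_time", event_col="event", strata=["event_number"])
cph.print_summary()
```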

  15. Electricity price modeling with stochastic time change

    Borovkova, Svetlana; Schmeck, Maren Diane

    2017-01-01

    In this paper, we develop a novel approach to electricity price modeling based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time-varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features into the price process's volatility and jump component. We specify the base process as a mean-reverting jump diffusion and the time change as an absolutely continuous stochastic process with a seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use temperature as a proxy for demand, and hence as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulation, showing that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes, into the model. • We use temperature as a proxy for demand and hence as the driving factor of the stochastic time change. • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to historical EEX power prices and apply it to generating realistic price paths.
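
    A stripped-down simulation of a mean-reverting jump diffusion run on a seasonal operational clock, just to make the time-change idea concrete; the parameters and the cosine activity rate are invented, and the authors' temperature-driven calibration to EEX data is far richer:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_price(days=365, kappa=5.0, mu=40.0, sigma=8.0,
                   jump_rate=10.0, jump_scale=15.0):
    """Mean-reverting jump diffusion evolved on operational time tau(t),
    where the activity rate nu(t) is seasonal (a proxy for demand)."""
    dt = 1.0 / 365.0
    t = np.arange(days) * dt
    nu = 1.0 + 0.5 * np.cos(2 * np.pi * t)       # activity: high in winter
    x = np.empty(days)
    x[0] = mu
    for i in range(1, days):
        dtau = nu[i] * dt                         # operational time increment
        n_jumps = rng.poisson(jump_rate * dtau)
        jump_size = rng.exponential(jump_scale, n_jumps).sum()
        x[i] = (x[i - 1] + kappa * (mu - x[i - 1]) * dtau
                + sigma * np.sqrt(dtau) * rng.normal() + jump_size)
    return x

prices = simulate_price()
print(prices.min().round(2), prices.max().round(2))
```

    Because both the diffusion term and the jump intensity ride on the same activity rate, high-demand seasons automatically show both higher volatility and more frequent spikes, which is the economic appeal of the time-change construction.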

  16. The development and application of overheating failure model of FBR steam generator tubes. 2

    Miyake, Osamu; Hamada, Hirotsugu; Tanabe, Hiromi

    2001-11-01

    The JNC technical report 'The Development and Application of Overheating Failure Model of FBR Steam Generator Tubes' summarized the assessment method, and its application, for overheating tube failure in a sodium-water reaction accident in a fast breeder reactor's steam generators (SGs). The present report describes the following items, studied after publication of that report. 1. On the basis of the SWAT-3 experimental data, realistic local heating conditions (reaction zone temperature and related heat transfer conditions) for the sodium-water reaction are proposed: cosine-shaped temperature profiles with a 1,170°C maximum for the 100% and 40% Monju operating conditions, and a 1,110°C maximum for the 10% condition. 2. For the cooling effects inside the target tubes, LWR studies of critical heat flux (CHF) and post-CHF heat transfer correlations have been examined and incorporated in the assessment; the revised assessment adopts Katto's correlation for CHF and the Condie-Bengston IV correlation for post-CHF. 3. Additional examination covers the treatment of the overall heating effect (beyond the local reaction zone) due to the sodium-water reaction, and the temperature-dependent thermal properties of the heat transfer tube material (2.25Cr-1Mo steel). The revised overheating tube failure assessment method has been applied to the Monju SG studies, revealing that no tube failure occurs in the 100%, 40% and 10% operating conditions when an initial leak is detected by the cover gas pressure detection system. The assessment for the SG system improved with respect to the detection and blowdown systems shows even better safety margins against overheating tube failure. (author)

  17. Salt-induced changes in cardiac phosphoproteome in a rat model of chronic renal failure.

    Zhengxiu Su

    Heart damage is widely present in patients with chronic kidney disease, and salt intake is the most important environmental factor affecting the development of chronic renal failure and cardiovascular disease. The proteins involved in chronic kidney disease-induced heart damage, and especially their post-translational modifications, remain largely unknown. Sprague-Dawley rats underwent 5/6 nephrectomy (chronic renal failure model) or sham operation and were treated for 2 weeks with a normal-salt (0.4% NaCl) or high-salt (4% NaCl) diet. We employed a TiO2 enrichment, iTRAQ labeling and liquid chromatography-tandem mass spectrometry strategy for phosphoproteomic profiling of the left ventricular free walls in these animals. A total of 1724 unique phosphopeptides representing 2551 non-redundant phosphorylation sites corresponding to 763 phosphoproteins were identified. Under normal-salt feeding, 89 (54%) phosphopeptides were upregulated and 76 (46%) downregulated in chronic renal failure rats relative to sham rats. In chronic renal failure rats, high salt intake induced upregulation of 84 (49%) phosphopeptides and downregulation of 88 (51%). Database searches revealed that most of the identified phosphoproteins were important signaling molecules such as protein kinases, receptors and phosphatases, involved in energy metabolism, cell communication, cell differentiation, cell death and other biological processes. Analysis with the Search Tool for the Retrieval of Interacting Genes (STRING) revealed functional links among 15 significantly regulated phosphoproteins in chronic renal failure rats compared to the sham group, and among 23 altered phosphoproteins induced by high salt intake. The altered phosphorylation levels of two proteins involved in heart damage, lamin A and phospholamban, were validated, and expression of the downstream genes of these two proteins, desmin and SERCA2a, was also analyzed.

  18. Time series modeling in traffic safety research.

    Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue

    2018-08-01

    The use of statistical models for analyzing traffic safety (crash) data has been well established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection along with limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially for measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study, yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and to provide a framework for understanding safety trends that can influence policy-making.

  19. Failure of CMIP5 climate models in simulating post-1950 decreasing trend of Indian monsoon

    Saha, Anamitra; Ghosh, Subimal; Sahana, A. S.; Rao, E. P.

    2014-10-01

    Impacts of climate change on Indian Summer Monsoon Rainfall (ISMR) and the growing population pose a major threat to water and food security in India. Adapting to such changes requires reliable projections of ISMR by general circulation models. Here we find that the majority of new-generation climate models from the Coupled Model Intercomparison Project phase 5 (CMIP5) fail to simulate the post-1950 decreasing trend of ISMR. The weakening of the monsoon is associated with the warming of the southern Indian Ocean and a strengthening of cyclonic formation in the tropical western Pacific Ocean. We also find that these large-scale changes are not captured by CMIP5 models, with few exceptions, which is the reason for this failure. Proper representation of these geophysical processes in next-generation models may improve the reliability of ISMR projections. Our results also alert water resource planners to evaluate CMIP5 models before using them for adaptation strategies.

  20. Development of an invasively monitored porcine model of acetaminophen-induced acute liver failure

    Howie Forbes

    2010-03-01

    Background: The development of effective therapies for acute liver failure (ALF) is limited by our knowledge of the pathophysiology of this condition and by the lack of suitable large animal models of acetaminophen toxicity. Our aim was to develop a reproducible, invasively monitored porcine model of acetaminophen-induced ALF. Method: 35 kg pigs were maintained under general anaesthesia and invasively monitored. Control pigs received a saline infusion, whereas ALF pigs received acetaminophen intravenously for 12 hours to maintain blood concentrations between 200-300 mg/l. Animals surviving 28 hours were euthanased. Results: Cytochrome P450 levels in phenobarbital pre-treated animals were significantly higher than in non-pre-treated animals (300 vs 100 pmol/mg protein). Control pigs (n = 4) survived 28-hour anaesthesia without incident. Of nine pigs that received acetaminophen, four survived 20 hours and two survived 28 hours. Injured animals developed hypotension (mean arterial pressure 40.8 +/- 5.9 vs 59 +/- 2.0 mmHg), increased cardiac output (7.26 +/- 1.86 vs 3.30 +/- 0.40 l/min) and decreased systemic vascular resistance (8.48 +/- 2.75 vs 16.2 +/- 1.76 mPa/s/m3). Dyspnoea developed as liver injury progressed, and the increased pulmonary vascular resistance (636 +/- 95 vs 301 +/- 26.9 mPa/s/m3) observed may reflect the development of respiratory distress syndrome. Liver damage was confirmed by deterioration in pH (7.23 +/- 0.05 vs 7.45 +/- 0.02) and prothrombin time (36 +/- 2 vs 8.9 +/- 0.3 seconds) compared with controls. Factor V and VII levels were reduced to 9.3% and 15.5% of starting values in injured animals. A marked increase in serum AST (471.5 +/- 210 vs 42 +/- 8.14) coincided with a marked reduction in serum albumin (11.5 +/- 1.71 vs 25 +/- 1 g/dL) in injured animals. Animals displayed evidence of renal impairment (mean creatinine levels 280.2 +/- 36.5 vs 131.6 +/- 9.33 μmol/l). Liver histology revealed evidence of severe centrilobular necrosis...

  1. Discrete-time rewards model-checked

    Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.

    2003-01-01

    This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This makes it possible to formulate complex measures, involving expected as well as accumulated rewards, in a precise and...

  2. Modeling vector nonlinear time series using POLYMARS

    de Gooijer, J.G.; Ray, B.K.

    2003-01-01

    A modified multivariate adaptive regression splines method for modeling vector nonlinear time series is investigated. The method results in models that can capture certain types of vector self-exciting threshold autoregressive behavior, as well as provide good predictions for more general vector...

  3. Forecasting with periodic autoregressive time series models

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1999-01-01

    This paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption...

  4. Time versus frequency domain measurements: layered model ...

    ...their high-frequency content, while among TEM data sets with low-frequency content, the averaging times for the FEM ellipticity were shorter than those for the TEM. Keywords: ellipticity, frequency domain, frequency electromagnetic method, model parameter, orientation error, time domain, transient electromagnetic method

  5. Modeling nonhomogeneous Markov processes via time transformation.

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
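
    A toy version of the idea: if a process is homogeneous on an operational time scale tau = g(t) (here a hypothetical power law g(t) = t**A, not the form estimated in the paper), a nonhomogeneous two-state Markov process on calendar time can be simulated by drawing exponential sojourns in operational time and mapping back through the inverse transformation:

```python
import numpy as np

rng = np.random.default_rng(5)

A, Q01, Q10 = 1.5, 0.3, 0.2   # power-law exponent; homogeneous rates on tau

def g_inv(tau):
    """Inverse of the assumed operational-time map g(t) = t**A."""
    return tau ** (1.0 / A)

def simulate_path(t_max=50.0):
    """Simulate state transitions on calendar time via exponential
    sojourns in operational time, where the process is homogeneous."""
    t, state, path = 0.0, 0, [(0.0, 0)]
    while t < t_max:
        rate = Q01 if state == 0 else Q10
        tau_next = t ** A + rng.exponential(1.0 / rate)
        t = g_inv(tau_next)
        state = 1 - state
        if t < t_max:
            path.append((t, state))
    return path

print(simulate_path()[:5])
```

    With A > 1 the transitions accelerate in calendar time, which is exactly the kind of temporal trend in incidence and recovery the authors report detecting in the delirium cohort.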

  6. Reliability prediction system based on the failure rate model for electronic components

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their results can be subjective under a particular set of circumstances, so reliability is not easy to quantify. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on a physics-of-failure model. In this study, we developed a system that predicts the reliability of electronic components using the statistical analysis method, the approach most easily put into practice. The failure rate models applied are MIL-HDBK-217F Notice 2, PRISM, and Telcordia (Bellcore), and these were compared with a general-purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system we have developed is expected to contribute to enhancing the reliability of electronic components.
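
    Handbook predictions of the MIL-HDBK-217F kind are, at their simplest, a parts-count sum of base failure rates scaled by environment and quality factors. A schematic of that calculation; the pi-factor values below are placeholders, not handbook numbers:

```python
# Parts-count reliability prediction: lambda_system is the sum of part
# rates, each base rate scaled by quality and environment pi-factors.
parts = [
    # (name, base failure rate per 1e6 h, pi_quality, pi_environment)
    ("resistor",   0.002, 1.0, 2.0),
    ("capacitor",  0.010, 1.5, 2.0),
    ("transistor", 0.050, 1.0, 4.0),
    ("ic",         0.120, 0.5, 4.0),
]

lambda_system = sum(base * pq * pe for _, base, pq, pe in parts)
mtbf_hours = 1e6 / lambda_system
print(f"lambda = {lambda_system:.4f} failures / 1e6 h, MTBF ≈ {mtbf_hours:,.0f} h")
```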

  7. Discrete-time modelling of musical instruments

    Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed
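
    As a taste of the digital waveguide family the article surveys, the classic Karplus-Strong plucked-string algorithm simulates a vibrating string with a noise-filled delay line and an averaging loss filter; this is a standard textbook example, not code from the article:

```python
import numpy as np

def karplus_strong(freq=220.0, fs=44100, duration=1.0, decay=0.996):
    """Plucked-string synthesis: a delay line initialised with noise (the
    pluck) whose output is fed back through a two-point averaging lowpass
    that models the string losses."""
    n_delay = int(fs / freq)
    buf = np.random.default_rng(6).uniform(-1, 1, n_delay)
    out = np.empty(int(fs * duration))
    for i in range(len(out)):
        out[i] = buf[i % n_delay]
        nxt = decay * 0.5 * (buf[i % n_delay] + buf[(i + 1) % n_delay])
        buf[i % n_delay] = nxt
    return out

samples = karplus_strong()
print(samples[:5])
```

    The delay-line length sets the pitch and the averaging filter sets the decay of the upper partials, which is the essence of the waveguide view of a vibrating string.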

  8. Applying the Seattle Heart Failure Model in the Office Setting in the Era of Electronic Medical Records.

    Williams, Brent A; Agarwal, Shikhar

    2018-02-23

    Prediction models such as the Seattle Heart Failure Model (SHFM) can help guide management of heart failure (HF) patients, but the SHFM has not been validated in the office environment. This retrospective cohort study assessed the predictive performance of the SHFM among patients with new or pre-existing HF in the context of an office visit. Methods and Results: SHFM elements were ascertained through electronic medical records at an office visit. The primary outcome was all-cause mortality. A "warranty period" for the baseline SHFM risk estimate was sought by examining predictive performance over time through a series of landmark analyses. Discrimination and calibration were estimated according to the proposed warranty period, and low- and high-risk thresholds were proposed based on the distribution of SHFM estimates. Among 26,851 HF patients, 14,380 (54%) died over a mean 4.7-year follow-up period. The SHFM lost predictive performance over time, with C=0.69 within 3 months of baseline and C<0.65 beyond 12 months; the diminishing predictive value was attributed to the modifiable SHFM elements. Discrimination (C=0.66) and calibration for 12-month mortality were acceptable. A low-risk threshold of ~5% mortality risk within 12 months reflects the 10% of HF patients in the office setting with the lowest risk. The SHFM has utility in the office environment.

  9. Multi-state reliability for coolant pump based on dependent competitive failure model

    Shang Yanlong; Cai Qi; Zhao Xinwen; Chen Ling

    2013-01-01

    By taking into account the effect of degradation due to internal vibration and external shocks, and based on the service environment and degradation mechanism of the nuclear power plant coolant pump, a multi-state reliability model of the coolant pump was proposed for a system that involves a competitive failure process between shocks and degradation. Using this model, degradation state probabilities and system reliability were obtained for the degraded coolant pump under consideration of internal vibration and external shocks. This provides an effective method for the reliability analysis of coolant pumps in nuclear power plants based on the operating environment, and the results can provide a decision-making basis for design changes and maintenance optimization. (authors)
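
    A minimal Monte Carlo rendering of such a competing degradation/shock failure process, using a linear degradation path plus Poisson-arriving shock damage; the threshold, rates and damage distribution are invented and much simpler than the paper's multi-state formulation:

```python
import numpy as np

rng = np.random.default_rng(7)

def reliability(t, n_runs=50_000, drift=0.01, vol=0.02,
                shock_rate=0.05, shock_size=0.3, threshold=1.0):
    """P(system survives to t) when failure occurs once cumulative damage
    (linear drift + diffusion + Poisson shock damage) crosses a threshold."""
    survived = 0
    for _ in range(n_runs):
        damage = drift * t + vol * np.sqrt(t) * rng.normal()
        damage += rng.exponential(shock_size, rng.poisson(shock_rate * t)).sum()
        survived += damage < threshold
    return survived / n_runs

for t in (10, 50, 100):
    print(f"R({t}) ≈ {reliability(t):.3f}")
```

    Binning the damage variable into ranges instead of a single threshold would give the multi-state degradation probabilities the abstract refers to.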

  10. A review of typical thermal fatigue failure models for solder joints of electronic components

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

    For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch in the coefficients of thermal expansion between an electronic component and its substrate produces different thermal strains in the two, leading to stress concentration; under repeated cycling, cracks initiate and gradually extend [1]. In this paper, typical thermal fatigue failure models for the solder joints of electronic components are classified, and the methods of obtaining the parameters in each model are summarized based on domestic and foreign literature research.
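
    The best-known member of this model family is the Coffin-Manson low-cycle fatigue law (a representative example of the class the paper surveys, not necessarily its notation), relating cycles to failure to the cyclic plastic strain range:

```latex
N_f = C \, (\Delta \varepsilon_p)^{-n}
```

    where \(N_f\) is the number of cycles to failure, \(\Delta \varepsilon_p\) the plastic strain range per thermal cycle, and \(C\), \(n\) material constants fitted from thermal cycling tests.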

  11. Numerical models for the prediction of failure for multilayer fusion Al-alloy sheets

    Gorji, Maysam; Berisha, Bekim; Hora, Pavel; Timm, Jürgen

    2013-01-01

    The initiation and propagation of cracks in monolithic and multi-layer aluminum alloys, called "Fusion", is investigated. 2D plane-strain finite element simulations are performed to model deformation due to bending and to predict failure. For this purpose, fracture strains are measured from microscopic pictures of Nakajima specimens. In addition, the microstructure of the material is taken into account by introducing a random grain distribution over the sheet thickness as well as a random distribution of the measured yield curve. It is shown that the experiments performed and the FE model introduced are appropriate methods for highlighting the advantages of the Fusion material, especially for bending processes.

  12. Adjustable, physiological ventricular restraint improves left ventricular mechanics and reduces dilatation in an ovine model of chronic heart failure.

    Ghanta, Ravi K; Rangaraj, Aravind; Umakanthan, Ramanan; Lee, Lawrence; Laurence, Rita G; Fox, John A; Bolman, R Morton; Cohn, Lawrence H; Chen, Frederick Y

    2007-03-13

    Ventricular restraint is a nontransplantation surgical treatment for heart failure. The effect of varying restraint level on left ventricular (LV) mechanics and remodeling is not known. We hypothesized that restraint level may affect therapy efficacy. We studied the immediate effect of varying restraint levels in an ovine heart failure model. We then studied the long-term effect of restraint applied over a 2-month period. Restraint level was quantified by use of fluid-filled epicardial balloons placed around the ventricles and measurement of balloon luminal pressure at end diastole. At 4 different restraint levels (0, 3, 5, and 8 mm Hg), transmural myocardial pressure (P(tm)) and indices of myocardial oxygen consumption (MVO2) were determined in control (n=5) and ovine heart failure (n=5). Ventricular restraint therapy decreased P(tm) and MVO2, and improved mechanical efficiency. An optimal physiological restraint level of 3 mm Hg was identified to maximize improvement without an adverse affect on systemic hemodynamics. At this optimal level, end-diastolic P(tm) and MVO2 indices decreased by 27% and 20%, respectively. The serial longitudinal effects of optimized ventricular restraint were then evaluated in ovine heart failure with (n=3) and without (n=3) restraint over 2 months. Optimized ventricular restraint prevented and reversed pathological LV dilatation (130+/-22 mL to 91+/-18 mL) and improved LV ejection fraction (27+/-3% to 43+/-5%). Measured restraint level decreased over time as the LV became smaller, and reverse remodeling slowed. Ventricular restraint level affects the degree of decrease in P(tm), the degree of decrease in MVO2, and the rate of LV reverse remodeling. Periodic physiological adjustments of restraint level may be required for optimal restraint therapy efficacy.

  13. Immunomodulatory and antioxidant function of albumin stabilises the endothelium and improves survival in a rodent model of chronic liver failure.

    Garcia-Martinez, Rita; Andreola, Fausto; Mehta, Gautam; Poulton, Katie; Oria, Marc; Jover, Maria; Soeda, Junpei; Macnaughtan, Jane; De Chiara, Francesco; Habtesion, Abeba; Mookerjee, Rajeshwar P; Davies, Nathan; Jalan, Rajiv

    2015-04-01

    Liver failure is characterized by endothelial dysfunction, which results in hemodynamic disturbances leading to renal failure. Albumin infusion improves hemodynamics and prevents renal dysfunction in advanced liver failure, effects that are only partly explained by the oncotic properties of albumin. This study was designed to test the hypothesis that albumin exerts its beneficial effects by stabilising endothelial function. In vivo, systemic hemodynamics, renal function, markers of endothelial dysfunction (ADMA) and inflammation were studied in analbuminaemic and Sprague-Dawley rats 6 weeks after sham or bile duct ligation surgery. In vitro, human umbilical vein endothelial cells were stimulated with LPS, with or without albumin, and protein expression, gene expression of adhesion molecules, intracellular reactive oxygen species, and cell stress markers were studied. Compared to controls, analbuminaemic rats had significantly greater hemodynamic deterioration after bile duct ligation, resulting in worse renal function and shorter survival, which was associated with significantly greater plasma renin activity, worse endothelial function, and a disturbed inflammatory response. In vitro studies showed that albumin was actively taken up by endothelial cells, and incubation of albumin-pretreated endothelial cells with LPS was associated with significantly less activation than in untreated cells, decreased intracellular reactive oxygen species, and decreased markers of cell stress. These results show, for the first time, that the absence of albumin is characterised by worse systemic hemodynamics and renal function and higher mortality in a rodent model of chronic liver failure, and they illustrate the important non-oncotic properties of albumin in protecting against endothelial dysfunction.

  14. Development and validation of a dynamic outcome prediction model for paracetamol-induced acute liver failure

    Bernal, William; Wang, Yanzhong; Maggs, James

    2016-01-01

    BACKGROUND: Early, accurate prediction of survival is central to the management of patients with paracetamol-induced acute liver failure, in order to identify those needing emergency liver transplantation. Current prognostic tools are confounded by recent improvements in outcome independent of emergency liver transplantation, and constrained by static binary outcome prediction. We aimed to develop a simple prognostic tool to reflect current outcomes and generate a dynamic, updated estimation of the risk of death. METHODS: Patients with paracetamol-induced acute liver failure managed at intensive care units in the UK... The models developed here show very good discrimination and calibration, confirmed in independent datasets, and suggest that many patients undergoing transplantation based on existing criteria might have survived with medical management alone. The role and indications for emergency liver transplantation...

  15. Comparing Two Different Approaches to the Modeling of the Common Cause Failures in Fault Trees

    Vukovic, I.; Mikulicic, V.; Vrbanic, I.

    2002-01-01

    The potential for common cause failures in systems that perform critical functions has been recognized as a very important contributor to the risk associated with the operation of nuclear power plants. Consequently, modeling of common cause failures (CCF) in fault trees has become one of the essential elements of any probabilistic safety assessment (PSA). Detailed and realistic representation of CCF potential in the fault tree structure is sometimes a very challenging task, especially in cases where a common cause group involves more than two components. During the last ten years the difficulties associated with this kind of modeling have been overcome to some degree by the development of integral PSA tools with high capabilities, some of which allow the definition of CCF groups and their automated expansion in the process of Boolean resolution and generation of minimal cutsets. On the other hand, in PSA models developed and run with more traditional tools, the CCF potential had to be modeled in the fault trees explicitly. With explicit CCF modeling, fault trees can grow very large, especially when they involve CCF groups with three or more members, which can become an issue for the management of fault trees and basic events in traditional non-integral PSA models; for these reasons various simplifications had to be made. In terms of the overall PSA model, there are also other issues that need to be considered, such as maintainability and accessibility of the model. In this paper a comparison is made between the two approaches to CCF modeling. The analysis is based on a full-scope Level 1 PSA model for internal initiating events that had originally been developed with a traditional PSA tool and was later transferred to a new-generation PSA tool with automated CCF modeling capabilities. Related aspects and issues mentioned above are discussed in the paper. (author)

  16. Comparative Effectiveness of Low-Volume Time-Efficient Resistance Training Versus Endurance Training in Patients With Heart Failure

    Munch, Gregers Winding; Birgitte Rosenmeier, Jaya; Petersen, Morten

    2018-01-01

    PURPOSE: Cardiorespiratory fitness is positively related to heart failure (HF) prognosis, but lack of time and low energy are barriers to adherence to exercise. We therefore compared the effect of low-volume time-based resistance exercise training (TRE) with aerobic moderate-intensity cycling (AMC) on maximal and submaximal exercise capacity, health-related quality of life, and vascular function. METHODS: Twenty-eight HF patients (New York Heart Association class I-II) performed AMC (n = 14) or TRE (n = 14). Maximal and submaximal exercise capacity, health-related quality of life... Both training modalities improved health-related quality of life in lower New York Heart Association-stage HF patients, despite the less time required and the lower energy expenditure during TRE than during AMC. TRE might therefore represent a time-efficient exercise modality for improving adherence to exercise in patients with class I-II HF.

  17. A time-dependent event tree technique for modelling recovery operations

    Kohut, P.; Fitzpatrick, R.

    1991-01-01

    The development of a simplified time-dependent event tree methodology is presented. The technique is especially applicable to describing recovery operations in nuclear reactor accident scenarios initiated by support system failures. The event tree logic is constructed using time-dependent top events combined with a damage function that contains information about the final-state time behavior of the reactor core. Both the failure and the success states may be utilized in the analysis. The method is illustrated by modeling the loss of the service water function, with special emphasis on the RCP [reactor coolant pump] seal LOCA [loss of coolant accident] scenario. 5 refs., 2 figs., 2 tabs

  18. Transit time homogenization in ischemic stroke - A novel biomarker of penumbral microvascular failure?

    Engedal, Thorbjørn S; Hjort, Niels; Hougaard, Kristina D

    2017-01-01

    Cerebral ischemia causes widespread capillary no-flow in animal studies, but the extent of microvascular impairment in human stroke is unclear. We examined how acute intra-voxel transit time characteristics and subsequent recanalization affect tissue outcome on follow-up MRI in a historical cohort of 126 acute ischemic stroke patients. Based on perfusion-weighted MRI data, we characterized voxel-wise transit times in terms of their mean transit time (MTT), standard deviation (capillary transit time heterogeneity, CTH), and the CTH:MTT ratio (relative transit time heterogeneity), which... In tissue with prolonged mean transit time (>5 seconds), very low cerebral blood flow (≤6 mL/100 mL/min) was associated with a high risk of infarction, largely independent of recanalization status. In the remaining mismatch region, low relative transit time heterogeneity predicted subsequent infarction...

  19. Choosing an optimal model for failure data analysis by graphical approach

    Zhang, Tieling; Dwight, Richard

    2013-01-01

    Many models involving combinations of multiple Weibull distributions, modifications of the Weibull distribution or extensions of its modified forms have been developed to model a given set of failure data. The application of these models can be based on plotting the data on Weibull probability paper (WPP): two or more models may be appropriate for one typical shape of the fitted plot, whereas a specific model may fit several different plot shapes. Hence a problem arises: how to choose an optimal model for a given data set, and how to model the data. The motivation of this paper is to address this issue. The paper summarizes the characteristics of Weibull-related models with more than three parameters, including sectional models involving two or three Weibull distributions, the competing risk model and the mixed Weibull model. The models discussed here are appropriate for data whose plots on WPP are concave, convex, S-shaped or inversely S-shaped. A method for model selection is then proposed, based on the shapes of the fitted plots, and the main procedure for parameter estimation of the models is described accordingly. In addition, the applicable range of data plots on WPP is clearly highlighted from a practical point of view. This is important to note, since mathematical analysis of a model that neglects the applicable range of the model plot will incur discrepancies or large errors in model selection and parameter estimation.

  20. Modelling software failures of digital I and C in probabilistic safety analyses based on the TELEPERM® XS operating experience

    Jockenhoevel-Barttfeld, Mariana; Taurines Andre; Baeckstroem, Ola; Holmberg, Jan-Erik; Porthin, Markus; Tyrvaeinen, Tero

    2015-01-01

    Digital instrumentation and control (I and C) systems appear as upgrades in existing nuclear power plants (NPPs) and in new plant designs. In order to assess the impact of digital system failures, quantifiable reliability models are needed, along with data for digital systems, that are compatible with existing probabilistic safety assessments (PSA). The paper focuses on the modelling of software failures of digital I and C systems in probabilistic assessments. An analysis of software faults, failures and effects is presented to derive relevant failure modes of system and application software for the PSA. The estimations of software failure probabilities are based on an analysis of the operating experience of TELEPERM® XS (TXS). For the assessment of application software failures, the analysis combines the use of the TXS operating experience at the application-function level with conservative engineering judgments. Failure probabilities to actuate on demand and of spurious actuation of a typical reactor protection application are estimated. Moreover, the paper gives guidelines for the modelling of software failures in the PSA. The strategy presented in this paper is generic and can be applied to different software platforms and their applications.
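
    A minimal sketch of estimating a failure-on-demand probability from operating experience is given below; the demand count, the zero-failure record and the Jeffreys prior are invented assumptions, not the TXS figures or the paper's exact method.

```python
from scipy import stats

# Sketch: estimating a software failure-on-demand probability from operating
# experience, in the spirit described above (numbers are invented). With k
# observed failures in n demands and a Jeffreys Beta(0.5, 0.5) prior, the
# posterior is Beta(k + 0.5, n - k + 0.5).

k, n = 0, 120_000                 # hypothetical: no failures in 120k demands
post = stats.beta(k + 0.5, n - k + 0.5)
print(f"posterior mean  = {post.mean():.2e}")
print(f"95% upper bound = {post.ppf(0.95):.2e}")
```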

  1. Day vs night : Does time of presentation matter in acute heart failure? A secondary analysis from the RELAX-AHF trial

    Pang, Peter S.; Teerlink, John R.; Boer-Martins, Leandro; Gimpelewicz, Claudio; Davison, Beth A.; Wang, Yi; Voors, Adriaan A.; Severin, Thomas; Ponikowski, Piotr; Hua, Tsushung A.; Greenberg, Barry H.; Filippatos, Gerasimos; Felker, G. Michael; Cotter, Gad; Metra, Marco

    Background: Signs and symptoms of heart failure can occur at any time. Differences between acute heart failure (AHF) patients who present at nighttime vs daytime and their outcomes have not been well studied. Our objective was to determine if there are differences in baseline characteristics and...

  2. Modeling biological pathway dynamics with timed automata.

    Schivo, Stefano; Scholma, Jetse; Wanders, Brend; Urquidi Camacho, Ricardo A; van der Vet, Paul E; Karperien, Marcel; Langerak, Rom; van de Pol, Jaco; Post, Janine N

    2014-05-01

    Living cells are constantly subjected to a plethora of environmental stimuli that require integration into an appropriate cellular response. This integration takes place through signal transduction events that form tightly interconnected networks. The understanding of these networks requires capturing their dynamics through computational support and models. ANIMO (Analysis of Networks with Interactive Modeling) is a tool that enables the construction and exploration of executable models of biological networks, helping to derive hypotheses and to plan wet-lab experiments. The tool is based on the formalism of Timed Automata, which can be analyzed via the UPPAAL model checker. Thanks to Timed Automata, we can provide a formal semantics for the domain-specific language used to represent signaling networks. This enforces precision and uniformity in the definition of signaling pathways, contributing to the integration of isolated signaling events into complex network models. We propose an approach to discretization of reaction kinetics that allows us to efficiently use UPPAAL as the computational engine to explore the dynamic behavior of the network of interest. A user-friendly interface hides the use of Timed Automata from the user, while keeping the expressive power intact. Abstraction to single-parameter kinetics speeds up construction of models that remain faithful enough to provide meaningful insight. The resulting dynamic behavior of the network components is displayed graphically, allowing for an intuitive and interactive modeling experience.
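
    The discretization idea can be illustrated with a toy event-driven simulation in which each interaction fires after a delay inversely proportional to its single kinetic parameter; this is a hypothetical re-implementation for intuition only, not ANIMO's code or its Timed Automata semantics.

```python
import heapq

# Toy sketch of discretized single-parameter kinetics: each node holds a
# discrete activity level, and each interaction fires after a delay inversely
# proportional to its rate, nudging its target up or down one level.

MAX_LEVEL = 10
levels = {"A": 10, "B": 0, "C": 0}
# (source, target, effect, rate): A activates B, B activates C, C inhibits B
interactions = [("A", "B", +1, 2.0), ("B", "C", +1, 1.0), ("C", "B", -1, 0.5)]

events = []
for i, (src, tgt, eff, rate) in enumerate(interactions):
    heapq.heappush(events, (1.0 / rate, i))      # first firing time

t_end, now = 20.0, 0.0
while events:
    now, i = heapq.heappop(events)
    if now > t_end:
        break
    src, tgt, eff, rate = interactions[i]
    if levels[src] > 0:                          # active only if source is
        levels[tgt] = min(MAX_LEVEL, max(0, levels[tgt] + eff))
    heapq.heappush(events, (now + 1.0 / rate, i))  # reschedule

print(f"t = {now:.1f}, levels = {levels}")
```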

  3. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
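
    A minimal sketch of the copula-AR construction follows, with an empirical cdf standing in for the paper's nonparametric Bayesian marginal and a plain AR(1) for the internal dynamics; data and parameter choices are illustrative.

```python
import numpy as np
from scipy import stats

# Sketch of the copula-AR idea described above: push the series through its
# (here, empirical) cdf to normal scores, fit normal-theory AR(1) dynamics,
# then map simulated paths back through the inverse cdf.

rng = np.random.default_rng(1)
x = rng.gamma(2.0, 1.0, size=500)             # synthetic non-Gaussian series
x = 0.7 * np.roll(x, 1) + 0.3 * x             # crude serial dependence

# cdf -> inverse-normal-cdf transform (normal scores)
ranks = stats.rankdata(x) / (len(x) + 1.0)
z = stats.norm.ppf(ranks)

# fit AR(1) on the Gaussian scale
phi = np.corrcoef(z[:-1], z[1:])[0, 1]

# simulate on the Gaussian scale and map back via the empirical quantiles
z_sim = np.zeros(500)
for t in range(1, 500):
    z_sim[t] = phi * z_sim[t - 1] + rng.normal(0.0, np.sqrt(1 - phi**2))
x_sim = np.quantile(x, stats.norm.cdf(z_sim))
print(f"fitted phi = {phi:.2f}; simulated marginal skew = {stats.skew(x_sim):.2f}")
```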

  4. Modeling of Electrical Cable Failure in a Dynamic Assessment of Fire Risk

    Bucknor, Matthew D.

    Fires at a nuclear power plant are a safety concern because of their potential to defeat the redundant safety features that provide a high level of assurance of the ability to safely shut down the plant. One of the added complexities of providing protection against fires is the need to determine the likelihood of electrical cable failure, which can lead to loss of the ability to control, or to spurious actuation of, equipment that is required for safe shutdown. A number of plants are now transitioning from their deterministic fire protection programs to a risk-informed, performance-based fire protection program according to the requirements of National Fire Protection Association (NFPA) 805. Within a risk-informed framework, credit can be taken for the analysis of fire progression within a fire zone that was not permissible within the deterministic framework of a 10 CFR 50.48 Appendix R safe-shutdown analysis. To perform the analyses required for the transition, plants need to be able to demonstrate with some level of assurance that cables related to safe-shutdown equipment will not be compromised during postulated fire scenarios. This research develops new cable failure models that have the potential to more accurately predict electrical cable failure in common cable bundle configurations. Methods to determine the thermal properties of the new models from empirical data are presented, along with comparisons between the new models and existing techniques used in the nuclear industry today. A Dynamic Event Tree (DET) methodology is also presented, which allows for the proper treatment of uncertainties associated with fire brigade intervention and its effects on cable failure analysis. Finally, a shielding analysis is performed to determine the effects on the temperature response of a cable bundle that is shielded from a fire source by an intervening object such as another cable tray. The results from the analyses demonstrate that models of similar...
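
    As a rough illustration of the kind of thermal reasoning involved, the sketch below uses a single-node lumped-capacitance cable model with an assumed fire exposure and damage threshold; it is a toy stand-in, not one of the cable failure models developed in this research.

```python
import numpy as np

# Toy lumped-capacitance sketch of thermally induced cable failure: the
# cable temperature relaxes toward the fire-driven gas temperature, and
# failure is declared when an assumed damage threshold is exceeded. The
# exposure curve, time constant and threshold are illustrative assumptions.

def gas_temperature(t):
    # Assumed fire exposure: ramps from 20 C to 450 C over 10 minutes.
    return 20.0 + 430.0 * min(t / 600.0, 1.0)

tau = 180.0            # assumed cable thermal time constant (s)
T_fail = 205.0         # assumed thermoplastic-cable damage threshold (C)

dt, T_cable, t = 1.0, 20.0, 0.0
while T_cable < T_fail and t < 3600.0:
    T_cable += dt / tau * (gas_temperature(t) - T_cable)
    t += dt

print(f"cable reaches {T_fail} C at t = {t/60:.1f} min" if T_cable >= T_fail
      else "cable survives the hour")
```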

  5. New finite element-based modeling of reactor core support plate failure

    Pandazis, Peter; Lovasz, Liviusz [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH, Garching (Germany). Forschungszentrum; Babcsany, Boglarka [Budapest Univ. of Technology and Economics, Budapest (Hungary). Inst. of Nuclear Techniques; Hajas, Tamas

    2017-12-15

    ATHLET-CD is the severe accident module of the code system AC² that is designed to simulate core degradation phenomena, including fission product release and transport in the reactor circuit, as well as the late-phase processes in the lower plenum. In a severe accident, degradation of the reactor core occurs and the fuel assemblies start to melt. The evolution of such processes is usually accompanied by the failure of the core support plate and relocation of the molten core to the lower plenum. Currently, the criterion for the failure of the support plate applied by ATHLET-CD is a user-defined signal, which can be a specific time or a process variable like mass, temperature, etc. A new method, based on an FEM approach, was developed that could lead in the future to a more realistic criterion for the failure of the core support plate. This paper presents the basic idea and theory of this new method, as well as preliminary verification calculations and an outlook on the planned future development.

  6. Stimulation of ganglionated plexus attenuates cardiac neural remodeling and heart failure progression in a canine model of acute heart failure post-myocardial infarction.

    Luo, Da; Hu, Huihui; Qin, Zhiliang; Liu, Shan; Yu, Xiaomei; Ma, Ruisong; He, Wenbo; Xie, Jing; Lu, Zhibing; He, Bo; Jiang, Hong

    2017-12-01

    Heart failure (HF) is associated with autonomic dysfunction. Vagus nerve stimulation has been shown to improve cardiac function both in HF patients and in animal models of HF. The purpose of the present study is to investigate the effects of ganglionated plexus stimulation (GPS) on HF progression and autonomic remodeling in a canine model of acute HF post-myocardial infarction. Eighteen adult mongrel male dogs were randomized into the control (n=8) and GPS (n=10) groups. All dogs underwent left anterior descending artery ligation followed by 6-hour high-rate (180-220 bpm) ventricular pacing to induce acute HF. Transthoracic 2-dimensional echocardiography was performed at different time points. The plasma levels of norepinephrine, B-type natriuretic peptide (BNP) and Ang-II were measured using ELISA kits. C-fos and nerve growth factor (NGF) proteins expressed in the left stellate ganglion, as well as GAP43 and TH proteins expressed in the peri-infarct zone, were measured using western blot. After 6 h of GPS, the left ventricular end-diastolic volume, end-systolic volume and ejection fraction showed no significant differences between the 2 groups, but the interventricular septal thickness at end-systole in the GPS group was significantly higher than that in the control group. The plasma levels of norepinephrine, BNP and Ang-II were increased 1 h after myocardial infarction, while the increase was attenuated by GPS. The expression of c-fos and NGF proteins in the left stellate ganglion, as well as of GAP43 and TH proteins in the cardiac peri-infarct zone, was significantly lower in the GPS group than in the control group. GPS inhibits cardiac sympathetic remodeling and attenuates HF progression in canines with acute HF induced by myocardial infarction and ventricular pacing.

  7. Micromechanics-based damage model for failure prediction in cold forming

    Lu, X.Z.; Chan, L.C., E-mail: lc.chan@polyu.edu.hk

    2017-04-06

    The purpose of this study was to develop a micromechanics-based damage (micro-damage) model that was concerned with the evolution of micro-voids for failure prediction in cold forming. Typical stainless steel SS316L was selected as the specimen material, and the nonlinear isotropic hardening rule was extended to describe the large deformation of the specimen undergoing cold forming. A micro-focus high-resolution X-ray computed tomography (CT) system was employed to trace and measure the micro-voids inside the specimen directly. Three-dimensional (3D) representative volume element (RVE) models with different sizes and spatial locations were reconstructed from the processed CT images of the specimen, and the average size and volume fraction of micro-voids (VFMV) for the specimen were determined via statistical analysis. Subsequently, the micro-damage model was compiled as a user-defined material subroutine into the finite element (FE) package ABAQUS. The stress-strain responses and damage evolutions of SS316L specimens under tensile and compressive deformations at different strain rates were predicted and further verified experimentally. It was concluded that the proposed micro-damage model is convincing for failure prediction in cold forming of the SS316L material.

  8. Prediction of the fuel failure following a large LOCA using modified gap heat transfer model

    Lee, K.M.; Lee, N.H.; Huh, J.Y.; Seo, S.K.; Choi, J.H.

    1995-01-01

    The modified Ross and Stoute gap heat transfer model in the ELOCA.Mk5 code for CANDU safety analysis is based on a simplified thermal deformation model. A review of a series of recent experiments reveals that fuel pellets crack, relocate, and are eccentrically positioned within the sheath, rather than being solid concentric cylinders. In this study, a more realistic offset gap conductance model is implemented in the code to estimate the fuel failure thresholds using the transient conditions of a 100% Reactor Outlet Header (ROH) break LOCA. Based on the offset gap conductance model, the total release of I-131 from the failed fuel elements in the core is reduced from 3876 TBq to 3283 TBq, increasing the margin to the dose limit. (author)
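
    The effect of an offset (eccentric) pellet on gap conductance can be sketched as follows; the gas conductivity, nominal gap and the simple k/d conduction term are illustrative assumptions, not the ELOCA.Mk5 implementation of the Ross and Stoute model.

```python
import numpy as np

# Sketch contrasting concentric and offset (eccentric) gap-conductance
# models. For a pellet offset by a fraction e of the nominal radial gap d0,
# the local gap is d(theta) ~= d0*(1 + e*cos(theta)) and the conduction
# term is k_gas/d, averaged around the circumference.

k_gas = 0.3e-3   # assumed gas conductivity, W/mm-K
d0 = 0.05        # assumed nominal radial gap, mm

def h_gap(e, n=3600):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(k_gas / (d0 * (1.0 + e * np.cos(theta))))

print(f"concentric     h = {h_gap(0.0) * 1e6:.0f} W/m^2-K")
print(f"offset (e=0.9) h = {h_gap(0.9) * 1e6:.0f} W/m^2-K")
# The offset pellet conducts heat better on average (analytically, h scales
# as 1/sqrt(1 - e^2)), lowering fuel temperatures and hence the predicted
# fission-product release.
```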

  9. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  10. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    textabstractMost agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  11. Failure time series prediction in industrial maintenance using neural networks; Previsao de series temporais de falhas em manutencao industrial usando redes neurais

    Torres Junior, Rubiao G.; Machado, Maria Augusta S. [Instituto Brasileiro de Mercado de Capitais (IBMEC), Rio de Janeiro, RJ (Brazil); Souza, Reinaldo C. [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2005-07-01

    The objective of this work is the application of two failure-prediction models in industrial maintenance using Artificial Neural Networks (ANN). A characteristic of the modern industrial environment is strong competition, which leads companies to search for cost-minimization methods. Gathering and treating maintenance data therefore becomes extremely important in this scenario, since it reveals the real repair needs of the equipment and plant systems. The objective becomes keeping the system fully active, in a continuous manner, over the required period, without problems in its component parts. A daily time series is modeled based on maintenance intervention stoppage data from a five-year period, drawn from several production systems in the finishing areas of PETROFLEX Ind. and Com. S.A. The purpose is thus to introduce models based on neural networks and verify their capacity to predict system stoppages, so as to intervene with adequate timing before the system fails, extend the operational period and consequently increase availability. The results obtained demonstrate the usefulness of neural networks for predicting stoppages in the maintenance of PETROFLEX's industrial area. The ANN's prediction capacity on a data set with a strong non-linear component, where other statistical techniques have proven of little use, has also been confirmed. Finding neural models to predict failure time series has enabled a breakthrough in this research field, especially given market demand. It is undoubtedly a technique that will evolve in the industrial maintenance area, supporting important management decisions. Prediction techniques such as the ones illustrated in this study work side by side with maintenance planning and, if carefully implemented and followed up, can in the medium run deliver a substantial increase in available operational hours. (author)
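
    A minimal sketch of the delayed-input neural network idea is shown below on synthetic stoppage counts; the lag depth, network size and data are assumptions for illustration, not the study's PETROFLEX series or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: predict today's maintenance-stoppage count from the previous p
# days using a small MLP on lagged inputs (synthetic data).

rng = np.random.default_rng(2)
n, p = 400, 7
series = rng.poisson(3.0 + 2.0 * np.sin(np.arange(n) * 2 * np.pi / 30))

X = np.array([series[i:i + p] for i in range(n - p)])   # lagged inputs
y = series[p:]                                          # next-day target
split = int(0.8 * len(y))

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"one-step-ahead MAE on held-out days: {mae:.2f} stoppages")
```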

  12. Modeling of the time sharing for lecturers

    E. Yu. Shakhova

    2017-01-01

    In the context of the modernization of the Russian system of higher education, it is necessary to analyze the working time of university lecturers, taking into account both the basic job functions of the university lecturer and other duties. The mathematical problem of optimal working-time planning for university lecturers is presented. A review of the relevant documents and of native and foreign works on the subject is given. Simulation conditions, based on an analysis of the subject area, are defined. Models of the optimal working-time sharing of university lecturers («the second half of the day») are developed and implemented in the MathCAD system, and optimal solutions have been obtained. Three problems have been solved: (1) to find the optimal time sharing of «the second half of the day» for a given position of university lecturer; (2) to find the optimal time sharing of «the second half of the day» for all positions of university lecturers in view of the established model of academic-load differentiation; (3) to find the volume of the non-standardized part of work time in the department for the academic year, taking into account the established model of academic-load differentiation, the distribution of faculty numbers across positions, and the optimal time sharing of «the second half of the day» for the university lecturers of the department. Examples of the analysis results are given. Practical application of the research: the developed models can be used when planning the working time of an individual lecturer during the preparation of the department's work plan for the academic year, as well as to conduct a comprehensive analysis of administrative decisions in the development of local university regulations.
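
    The flavor of such an allocation problem can be sketched as a small linear program; the activities, hourly values and bounds below are invented for illustration, and the paper's actual models were built in MathCAD.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linear-programming version of the time-sharing problem: allocate the
# weekly "second half of the day" hours across activities to maximize an
# assumed linear value, subject to per-activity bounds.

activities = ["methodical work", "research", "student advising", "admin"]
value = np.array([0.9, 1.0, 0.7, 0.3])   # assumed value per hour
total_hours = 18.0                        # assumed weekly budget

res = linprog(
    c=-value,                             # maximize value -> minimize -value
    A_ub=[np.ones(4)], b_ub=[total_hours],
    bounds=[(2, 8), (2, 10), (1, 6), (1, 4)],  # assumed per-activity limits
    method="highs",
)
for name, hours in zip(activities, res.x):
    print(f"{name:18s} {hours:4.1f} h/week")
```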

  13. Chiller Design Model - Impact of chiller failure on the short-term temperature variation in the incubation of salmonids

    National Oceanic and Atmospheric Administration, Department of Commerce — In salmon recovery programs it is commonly necessary to chill incubation and early rearing temperatures to match wild development times. The most common failure mode...

  14. Estimating High-Dimensional Time Series Models

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
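
    A minimal sketch of the adaptive LASSO's two-stage recipe is given below on synthetic data: ridge estimates supply the adaptive weights, and a reweighted lasso performs the selection; penalty levels and data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Sketch of the adaptive LASSO in a sparse regression with many candidate
# variables. First stage: ridge estimates give the adaptive weights.
# Second stage: lasso on the reweighted design, mapped back afterwards.

rng = np.random.default_rng(3)
n, p = 300, 40
X = rng.normal(size=(n, p))              # stand-in for lagged covariates
beta = np.zeros(p)
beta[[0, 3, 7]] = [1.0, -0.8, 0.5]       # only three relevant variables
y = X @ beta + rng.normal(scale=0.5, size=n)

b_init = Ridge(alpha=1.0).fit(X, y).coef_
w = np.abs(b_init) + 1e-4                # adaptive weights (gamma = 1)
lasso = Lasso(alpha=0.05).fit(X * w, y)  # reweighted design
b_ada = lasso.coef_ * w                  # back to the original scale

print("selected variables:", np.flatnonzero(b_ada != 0.0))
```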

  15. Real-time modeling of heat distributions

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
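
    The interpolation step reduces to a one-line linear blend once the two boundary distributions are available; the sketch below uses synthetic ceiling and floor maps, and the grid sizes are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch of the interpolation described above: given ceiling and
# floor temperature maps on the same horizontal grid, linearly interpolate
# in height to fill a 3-D temperature model (synthetic maps here).

nx, ny, nz = 8, 6, 5                       # grid cells: x, y, and height
rng = np.random.default_rng(4)
ceiling = 27.0 + rng.normal(0.0, 0.5, size=(nx, ny))
floor = 19.0 + rng.normal(0.0, 0.5, size=(nx, ny))

heights = np.linspace(0.0, 1.0, nz)        # 0 = floor, 1 = ceiling
model3d = floor[..., None] * (1.0 - heights) + ceiling[..., None] * heights

print(model3d.shape)                       # (8, 6, 5)
print(f"mid-height mean temperature: {model3d[:, :, nz // 2].mean():.1f} C")
```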

  16. Noise effects on the health status in a dynamic failure model for living organisms

    Kang, H.; Jo, J.; Choi, M. Y.; Choi, J.; Yoon, B.-G.

    2007-03-01

    We study internal and external noise effects on the healthy-unhealthy transition and related phenomena in a dynamic failure model for living organisms. It is found that internal noise makes the system weaker, leading to breakdown under smaller stress. The discontinuous healthy-unhealthy transition in a system with global load sharing below a critical point is naturally explained in terms of the bistability of the health status. External noise present in constant stress gives similar results; furthermore, it induces resonance in response to periodic stress, regardless of the load-transfer rule. In the case of local load sharing, such periodic stress proves more hazardous than constant stress.
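
    A toy global-load-sharing ("fiber bundle") simulation conveys the mechanism; the threshold distribution, stress level and noise amplitude below are illustrative assumptions, not the authors' model parameters.

```python
import numpy as np

# Toy global-load-sharing failure simulation: internal noise jitters the
# element strengths each step, and the system "breaks down" if the
# surviving fraction collapses under the applied stress.

rng = np.random.default_rng(6)
N, sigma, noise = 10_000, 0.20, 0.05
thresholds = rng.uniform(0.0, 1.0, size=N)       # element strengths
alive = np.ones(N, dtype=bool)

for step in range(200):
    load = sigma * N / max(alive.sum(), 1)       # equal sharing of total load
    jitter = rng.normal(0.0, noise, size=N)      # internal noise on strength
    newly_dead = alive & (thresholds + jitter < load)
    if not newly_dead.any():
        break
    alive[newly_dead] = False

frac = alive.mean()
print(f"surviving fraction = {frac:.2f} -> "
      f"{'healthy' if frac > 0 else 'breakdown'}")
# Without noise this stress level admits a stable surviving fraction;
# raising the noise pushes the same stress toward full breakdown.
```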

  17. Inflammation, Self-Regulation, and Health: An Immunologic Model of Self-Regulatory Failure.

    Shields, Grant S; Moons, Wesley G; Slavich, George M

    2017-07-01

    Self-regulation is a fundamental human process that refers to multiple complex methods by which individuals pursue goals in the face of distractions. Whereas superior self-regulation predicts better academic achievement, relationship quality, financial and career success, and lifespan health, poor self-regulation increases a person's risk for negative outcomes in each of these domains and can ultimately presage early mortality. Given its centrality to understanding the human condition, a large body of research has examined cognitive, emotional, and behavioral aspects of self-regulation. In contrast, relatively little attention has been paid to specific biologic processes that may underlie self-regulation. We address this latter issue in the present review by examining the growing body of research showing that components of the immune system involved in inflammation can alter neural, cognitive, and motivational processes that lead to impaired self-regulation and poor health. Based on these findings, we propose an integrated, multilevel model that describes how inflammation may cause widespread biobehavioral alterations that promote self-regulatory failure. This immunologic model of self-regulatory failure has implications for understanding how biological and behavioral factors interact to influence self-regulation. The model also suggests new ways of reducing disease risk and enhancing human potential by targeting inflammatory processes that affect self-regulation.

  18. Space-time modeling of timber prices

    Mo Zhou; Joseph Buongriorno

    2006-01-01

    A space-time econometric model was developed for pine sawtimber prices of 21 geographically contiguous regions in the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...
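
    A minimal sketch of a space-time autoregression of this kind is given below, with a row-normalized contiguity matrix coupling neighboring regions; the chain geometry, coefficients and least-squares fit are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Sketch: each region's price depends on its own lag and on the lagged
# average of its neighbors (synthetic data; W is a row-normalized
# contiguity matrix for a simple chain of regions).

rng = np.random.default_rng(7)
R, T = 5, 200
W = np.eye(R, k=1) + np.eye(R, k=-1)             # chain of contiguous regions
W = W / W.sum(axis=1, keepdims=True)

a_true, b_true = 0.55, 0.30                      # own-lag and neighbor-lag effects
P = np.zeros((T, R))
for t in range(1, T):
    P[t] = a_true * P[t - 1] + b_true * W @ P[t - 1] + rng.normal(0, 0.1, R)

# stack (own lag, neighbor lag) and estimate by least squares
X = np.column_stack([P[:-1].ravel(), (P[:-1] @ W.T).ravel()])
y = P[1:].ravel()
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"estimated own-lag = {a_hat:.2f}, neighbor-lag = {b_hat:.2f}")
```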

  19. On modeling panels of time series

    Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    textabstractThis paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a

  20. Time Series Modelling using Proc Varmax

    Milhøj, Anders

    2007-01-01

    In this paper it will be demonstrated how various time series problems can be addressed using Proc Varmax. The procedure is rather new, and hence new features like cointegration and testing for Granger causality are included, but it also means that more traditional ARIMA modelling as outlined by Box...