WorldWideScience

Sample records for time-variant failure probability

  1. Estimation of failure probabilities of linear dynamic systems by ...

    Indian Academy of Sciences (India)

    An iterative method for estimating the failure probability for certain time-variant reliability problems has been developed. In the paper, the focus is on the displacement response of a linear oscillator driven by white noise. Failure is then assumed to occur when the displacement response exceeds a critical threshold.
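
    A crude Monte Carlo baseline makes this setting concrete: simulate the white-noise-driven linear oscillator and count the sample paths whose displacement crosses the threshold within the time horizon. This is a minimal sketch, not the paper's iterative method, and all parameter values below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # SDOF oscillator x'' + 2*zeta*w0*x' + w0**2*x = w(t), w(t) white noise with PSD S0.
    w0, zeta, S0 = 2.0 * np.pi, 0.05, 1.0
    sigma_x = np.sqrt(np.pi * S0 / (2.0 * zeta * w0**3))   # stationary response std
    b = 3.0 * sigma_x                                      # critical displacement threshold
    T, dt, n_paths = 10.0, 1e-3, 10_000

    x = np.zeros(n_paths)
    v = np.zeros(n_paths)
    failed = np.zeros(n_paths, dtype=bool)
    for _ in range(int(T / dt)):
        w = rng.normal(0.0, np.sqrt(2.0 * np.pi * S0 / dt), n_paths)  # discretised white noise
        x, v = x + v * dt, v + (w - 2.0 * zeta * w0 * v - w0**2 * x) * dt
        failed |= np.abs(x) > b                            # first passage of the barrier

    print(f"P(failure in [0, {T}] s) ~ {failed.mean():.3f}")
    ```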

  2. Time-variant reliability assessment through equivalent stochastic process transformation

    International Nuclear Information System (INIS)

    Wang, Zequn; Chen, Wei

    2016-01-01

    Time-variant reliability measures the probability that an engineering system successfully performs intended functions over a certain period of time under various sources of uncertainty. In practice, it is computationally prohibitive to propagate uncertainty in time-variant reliability assessment based on expensive or complex numerical models. This paper presents an equivalent stochastic process transformation approach for cost-effective prediction of reliability deterioration over the life cycle of an engineering system. To reduce the high dimensionality, a time-independent reliability model is developed by translating random processes and time parameters into random parameters in order to equivalently cover all potential failures that may occur during the time interval of interest. With the time-independent reliability model, an instantaneous failure surface is attained by using a Kriging-based surrogate model to identify all potential failure events. To enhance the efficacy of failure surface identification, a maximum confidence enhancement method is utilized to update the Kriging model sequentially. Then, the time-variant reliability is approximated using Monte Carlo simulations of the Kriging model, where system failures over a time interval are predicted by the instantaneous failure surface. The results of two case studies demonstrate that the proposed approach is able to accurately predict the time evolution of system reliability while requiring much less computational effort than the existing analytical approach. - Highlights: • Developed a new approach for time-variant reliability analysis. • Proposed a novel stochastic process transformation procedure to reduce the dimensionality. • Employed Kriging models with a confidence-based adaptive sampling scheme to enhance computational efficiency. • The approach is effective for handling random processes in time-variant reliability analysis. • Two case studies are used to demonstrate the efficacy.

  3. Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods

    Science.gov (United States)

    Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed

    2018-04-01

    This study evaluated failure probabilities of jack-up units within the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. The surface elevation for each sea state was represented by the Karhunen-Loeve expansion method, using the eigenfunctions of prolate spheroidal wave functions, in order to obtain the wave load. The stochastic wave load was propagated through a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the probability of failure by excessive deck displacement, within the framework of time-dependent reliability analysis, was performed with Matlab codes on a personal computer. Results from the study indicate that the failure probability increases with the severity of the sea state, i.e. with a longer return period. Although these results agree with those of a study of a similar jack-up model using a time-independent method at higher values of the maximum allowable deck displacement, they are in contrast at lower values of the criterion, where that study reported that failure probability decreases with increasing severity of the sea state.

  4. Failure probability analysis of optical grid

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are widely applied to data processing, the application failure probability has become an important indicator of application quality and an important consideration for operators. This paper presents a task-based method for analysing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the reduction in application failure probability achieved by different backup strategies can be compared, so that the different requirements of different clients can be satisfied. When an application based on a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm guarantees the required failure probability while improving network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
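
    The task-based view of application failure probability can be sketched in a few lines: if every task of the DAG must succeed and task faults are treated as independent, the application failure probability is one minus the product of task success probabilities, and an independent backup replica squares a task's failure probability. The probabilities and the backup assignment below are invented; the paper's MDSA algorithm additionally trades this off against completion time and resource use.

    ```python
    from math import prod

    # Hypothetical per-task failure probabilities (resource faults folded in).
    p_task = {"A": 0.01, "B": 0.02, "C": 0.005, "D": 0.01}
    backup = {"B", "D"}   # tasks given a backup replica under some strategy

    def task_failure(p: float, has_backup: bool) -> float:
        # With an independent backup, both replicas must fail for the task to fail.
        return p * p if has_backup else p

    # The DAG application fails if any of its tasks fails (no recovery across tasks).
    p_app = 1.0 - prod(1.0 - task_failure(p, t in backup) for t, p in p_task.items())
    print(f"application failure probability ~ {p_app:.5f}")
    ```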

  5. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

    Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out-crossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval and thus for the first-passage probability.

  6. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
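
    The effect described here is easy to reproduce for the normal (location-scale) case: estimate a 1% exceedance threshold from a small sample, then measure how often a fresh realization exceeds it. A minimal sketch, with the sample size and nominal level chosen arbitrarily; the realized frequency comes out noticeably above the nominal level, which is the inflation the article quantifies.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, eps, trials = 20, 0.01, 200_000
    z = stats.norm.ppf(1.0 - eps)

    # True risk factor: standard normal (a location-scale family member).
    data = rng.normal(0.0, 1.0, size=(trials, n))
    mu_hat = data.mean(axis=1)
    sigma_hat = data.std(axis=1, ddof=1)
    threshold = mu_hat + z * sigma_hat          # plug-in control threshold

    # Frequency with which a fresh realization exceeds the estimated threshold.
    x_new = rng.normal(0.0, 1.0, size=trials)
    print(f"nominal eps = {eps}, realized failure frequency = {(x_new > threshold).mean():.4f}")
    ```

    For the normal family this realized frequency is exactly a Student-t tail probability, independent of the true mean and standard deviation, which is the parameter-independence the abstract refers to.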

  7. Evaluation of nuclear power plant component failure probability and core damage probability using simplified PSA model

    International Nuclear Information System (INIS)

    Shimada, Yoshio

    2000-01-01

    It is anticipated that changes in the frequency of surveillance tests, preventive maintenance or parts replacement of safety-related components may change component failure probabilities and, in turn, the core damage probability, and that the effect differs depending on the initiating event frequency and the component type. This study assessed the change of core damage probability using a simplified PSA model, developed by the US NRC to process accident sequence precursors, that is capable of calculating core damage probability in a short time, while varying individual component failure probabilities between 0 and 1 and using either Japanese or American initiating event frequency data. The analysis showed the following. (1) The frequency of surveillance tests, preventive maintenance or parts replacement of motor-driven pumps (high pressure injection pumps, residual heat removal pumps, auxiliary feedwater pumps) should be changed carefully, since the change in core damage probability is large when the base failure probability increases. (2) The core damage probability is insensitive to changes in surveillance test frequency for motor-operated valves and the turbine-driven auxiliary feedwater pump, since its change is small even when their failure probabilities change by about one order of magnitude. (3) When Japanese failure probability data are applied to the emergency diesel generator, the change in core damage probability is small even if the failure probability changes by one order of magnitude from the base value; when American failure probability data are applied, however, the increase in core damage probability is large when the failure probability increases. Therefore, with Japanese failure probability data, the core damage probability is insensitive to changes in surveillance test frequency and similar measures. (author)

  8. 14 CFR 417.224 - Probability of failure analysis.

    Science.gov (United States)

    2010-01-01

    Title 14 — Aeronautics and Space; DEPARTMENT OF TRANSPORTATION; LICENSING; LAUNCH SAFETY; Flight Safety Analysis; § 417.224 Probability of failure analysis. A flight safety analysis must account for launch vehicle failure probability in a consistent manner.

  9. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)
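
    The arithmetic behind the DFP idea fits in a few lines: let the component failure probability vary over hypothetical 'environments', and compare the probability that two like components both fail against the naive independent product. The weights and probabilities below are invented; the inflation factor E[p²]/(E[p])² is exactly how environment-driven variability produces dependent failures.

    ```python
    # Hypothetical 'environments' with weights and per-demand failure probabilities.
    envs = [(0.70, 1e-4), (0.25, 1e-3), (0.05, 1e-2)]   # (weight, p) -- assumed values

    p_mean = sum(w * p for w, p in envs)            # single-component failure probability
    p_pair_indep = p_mean ** 2                      # naive independent model
    p_pair_dfp = sum(w * p * p for w, p in envs)    # both fail in the shared environment

    print(f"single: {p_mean:.2e}  independent pair: {p_pair_indep:.2e}  DFP pair: {p_pair_dfp:.2e}")
    # The DFP pair probability exceeds the independent product because E[p^2] > (E[p])^2.
    ```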

  10. Hydra-Ring: a computational framework to combine failure probabilities

    Science.gov (United States)

    Diermanse, Ferdinand; Roscoe, Kathryn; IJmker, Janneke; Mens, Marjolein; Bouwer, Laurens

    2013-04-01

    This presentation discusses the development of a new computational framework for the safety assessment of flood defence systems: Hydra-Ring. Hydra-Ring computes the failure probability of a flood defence system, which is composed of a number of elements (e.g., dike segments, dune segments or hydraulic structures), taking all relevant uncertainties explicitly into account. This is a major step forward compared with current Dutch practice, in which the safety assessment is done separately for each individual flood defence section. The main advantage of the new approach is that it results in a more balanced prioritization of required mitigating measures ('more value for money'). Failure of the flood defence system occurs if any element within the system fails. Hydra-Ring therefore computes and combines failure probabilities over the following elements: - Failure mechanisms: a flood defence system can fail due to different failure mechanisms. - Time periods: failure probabilities are first computed for relatively small time scales and subsequently combined over longer periods. Besides the safety assessment of flood defence systems, Hydra-Ring can also be used to derive fragility curves, to assess the efficiency of flood mitigating measures, and to quantify the impact of climate change and land subsidence on flood risk. Hydra-Ring is being developed in the context of the Dutch situation. However, the computational concept is generic, and the model is set up in such a way that it can be applied to other areas as well. The presentation focuses on the model concept and probabilistic computation techniques.
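
    For the series-system combination step, a minimal sketch under invented annual failure probabilities per section and mechanism: under independence the system failure probability is one minus the product of survival probabilities, bracketed by the classical fully-dependent lower bound and the union upper bound. Hydra-Ring itself goes further by modelling the dependence induced by shared hydraulic loads.

    ```python
    from math import prod

    # Hypothetical annual failure probabilities per dike section and failure mechanism.
    p = {
        ("section_1", "overtopping"):     1e-4,
        ("section_1", "piping"):          5e-5,
        ("section_2", "overtopping"):     2e-4,
        ("section_2", "macro_stability"): 1e-5,
    }

    # Series system: the flood defence fails if any section fails by any mechanism.
    p_indep = 1.0 - prod(1.0 - q for q in p.values())   # independence assumption
    p_upper = min(1.0, sum(p.values()))                 # union (rare-event) upper bound
    p_lower = max(p.values())                           # fully-dependent lower bound
    print(f"{p_lower:.2e} <= P(system failure) <= {p_upper:.2e} (independent: {p_indep:.2e})")
    ```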

  11. Probability of Loss of Assured Safety in Systems with Multiple Time-Dependent Failure Modes: Incorporation of Delayed Link Failure in the Presence of Aleatory Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States)]; Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Sallaberry, Cedric Jean-Marie [Engineering Mechanics Corp. of Columbus, OH (United States)]

    2018-02-01

    Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
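
    One PLOAS variant (all strong links fail before any weak link fails, with an aleatory delay between precursor occurrence and actual link failure) can be approximated by direct Monte Carlo. The Weibull precursor distributions and uniform delays below are assumptions for illustration, not the report's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500_000

    # Hypothetical precursor-occurrence-time distributions (arbitrary time units).
    wl_precursor = rng.weibull(2.0, size=(n, 2)) * 1.0    # two weak links
    sl_precursor = rng.weibull(2.0, size=(n, 2)) * 1.5    # two strong links

    # Aleatory delay between precursor occurrence and actual link failure.
    wl_fail = wl_precursor + rng.uniform(0.0, 0.2, size=(n, 2))
    sl_fail = sl_precursor + rng.uniform(0.0, 0.2, size=(n, 2))

    # Loss of assured safety: all strong links fail before any weak link fails.
    ploas = (sl_fail.max(axis=1) < wl_fail.min(axis=1)).mean()
    print(f"PLOAS ~ {ploas:.4f}")
    ```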

  12. Statistical analysis on failure-to-open/close probability of motor-operated valve in sodium system

    International Nuclear Information System (INIS)

    Kurisaka, Kenichi

    1998-08-01

    The objective of this work is to develop basic data for examining the efficiency of preventive maintenance and actuation tests from the standpoint of failure probability. The work consists of a statistical trend analysis of valve failure probability in the failure-to-open/close mode, as a function of time since installation and time since the last open/close action, based on field data of operating and failure experience. Both time-dependent and time-independent terms were considered in the failure probability. The linear ageing model was modified and applied to the former; in this model there are two terms, with failure rates proportional to time since installation and to time since the last open/close demand. Because of the sufficient statistical population, motor-operated valves (MOVs) in sodium systems were selected for analysis from the CORDS database, which contains operating data and failure data of components in fast reactors and sodium test facilities. From these data, the functional parameters were statistically estimated to quantify the valve failure probability in the failure-to-open/close mode, with consideration of uncertainty. (J.P.N.)

  13. A method for estimating failure rates for low probability events arising in PSA

    International Nuclear Information System (INIS)

    Thorne, M.C.; Williams, M.M.R.

    1995-01-01

    The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz. 1/N and 1/T, where N is the number of failure-free demands during the test period and T is the failure-free test period. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary.
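
    The 1/T rule of thumb drops out of the Bayesian calculation directly: with a flat prior on a Poisson failure rate and zero failures observed over time T, the posterior is exponential with mean 1/T. A minimal sketch (the value of T is assumed):

    ```python
    from scipy import stats

    T = 5000.0                      # failure-free test time (hours) -- assumed value
    # Flat prior on the rate; after zero failures in time T the posterior is Gamma(1, T),
    # whose mean recovers the 1/T rule of thumb cited in the abstract.
    post = stats.gamma(a=1.0, scale=1.0 / T)
    print(f"posterior mean rate      : {post.mean():.2e} /h   (1/T = {1 / T:.2e})")
    print(f"95% upper credible bound : {post.ppf(0.95):.2e} /h (= -ln(0.05)/T)")
    ```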

  14. Genetic variants of age at menopause are not related to timing of ovarian failure in breast cancer survivors.

    Science.gov (United States)

    Homer, Michael V; Charo, Lindsey M; Natarajan, Loki; Haunschild, Carolyn; Chung, Karine; Mao, Jun J; DeMichele, Angela M; Su, H Irene

    2017-06-01

    To determine if interindividual genetic variation in single-nucleotide polymorphisms (SNPs) related to age at natural menopause is associated with risk of ovarian failure in breast cancer survivors. A prospective cohort of 169 premenopausal breast cancer survivors recruited at diagnosis with stages 0 to III disease were followed longitudinally for menstrual pattern via self-reported daily menstrual diaries. Participants were genotyped for 13 SNPs previously found to be associated with age at natural menopause: EXO1, TLK1, HELQ, UIMC1, PRIM1, POLG, TMEM224, BRSK1, and MCM8. A risk variable summed the total number of risk alleles in each participant. The association between individual genotypes, and also the risk variable, and time to ovarian failure (>12 months of amenorrhea) was tested using time-to-event methods. Median age at enrollment was 40.5 years (range 20.6-46.1). The majority of participants were white (69%) and underwent chemotherapy (76%). Thirty-eight participants (22%) experienced ovarian failure. None of the candidate SNPs or the summary risk variable was significantly associated with time to ovarian failure. Sensitivity analysis restricted to whites or only to participants receiving chemotherapy yielded similar findings. Older age, chemotherapy exposure, and lower body mass index were related to shorter time to ovarian failure. Thirteen previously identified genetic variants associated with time to natural menopause were not related to timing of ovarian failure in breast cancer survivors.

  15. Relevance of control theory to design and maintenance problems in time-variant reliability: The case of stochastic viability

    International Nuclear Information System (INIS)

    Rougé, Charles; Mathias, Jean-Denis; Deffuant, Guillaume

    2014-01-01

    The goal of this paper is twofold: (1) to show that time-variant reliability and a branch of control theory called stochastic viability address similar problems from different points of view, and (2) to demonstrate the relevance of concepts and methods from stochastic viability in reliability problems. On the one hand, reliability aims at evaluating the probability of failure of a system subjected to uncertainty and stochasticity. On the other hand, viability aims at maintaining a controlled dynamical system within a survival set. When the dynamical system is stochastic, this work shows that a viability problem belongs to a specific class of design and maintenance problems in time-variant reliability. Dynamic programming, which is used for solving Markovian stochastic viability problems, then yields the set of design states for which there exists a maintenance strategy that guarantees reliability with a confidence level β for a given period of time T. Besides, it leads to a straightforward computation of the date of the first outcrossing, indicating when the system is most likely to fail. We illustrate this approach with a simple example of population dynamics, including a case where load increases with time. - Highlights: • Time-variant reliability tools cannot devise complex maintenance strategies. • Stochastic viability is a control theory that computes a probability of failure. • Some design and maintenance problems are stochastic viability problems. • Used in viability, dynamic programming can find reliable maintenance actions. • Confronting reliability and control theories such as viability is promising.

  16. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small-probability event; such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to become more and more time-demanding as its complexity is increased to improve accuracy and to capture particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered a viable route to assessing the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for Active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the importance sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a very few mechanical model computations. The efficiency of the method is first proved on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
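
    The importance-sampling half of AK-IS can be sketched without the Kriging part: centre the sampling density at the FORM design point and reweight by the ratio of standard normal densities. In AK-IS the limit-state evaluations would be delegated to an adaptively trained Kriging surrogate; the linear limit state and target index below are assumptions chosen so that the exact answer is known.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    beta_t = 4.0                      # reliability index of the toy problem

    def g(u):
        # Linear limit state in standard normal space; failure when g(u) <= 0.
        return beta_t - u[:, 0]       # exact Pf = Phi(-beta_t)

    u_star = np.array([beta_t, 0.0])  # FORM design point (known in closed form here)
    n = 10_000
    u = rng.normal(0.0, 1.0, size=(n, 2)) + u_star   # sample around the design point

    # IS weights: standard normal density over the shifted sampling density.
    w = np.exp(-0.5 * ((u**2).sum(1) - ((u - u_star)**2).sum(1)))
    pf = np.mean((g(u) <= 0) * w)
    print(f"IS estimate: {pf:.2e}   exact: {stats.norm.cdf(-beta_t):.2e}")
    ```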

  17. Failure frequencies and probabilities applicable to BWR and PWR piping

    International Nuclear Information System (INIS)

    Bush, S.H.; Chockie, A.D.

    1996-03-01

    This report deals with failure probabilities and failure frequencies of nuclear plant piping and the failure frequencies of flanges and bellows. Piping failure probabilities are derived from Piping Reliability Analysis Including Seismic Events (PRAISE) computer code calculations, based on fatigue and intergranular stress corrosion as failure mechanisms. Values for both failure probabilities and failure frequencies are cited from several sources to yield a better evaluation of the spread in mean and median values as well as the widths of the uncertainty bands. A general conclusion is that the numbers from WASH-1400 often used in PRAs are unduly conservative. Failure frequencies for both leaks and large breaks tend to be higher than would be calculated using the failure probabilities, primarily because the frequencies are based on a relatively small number of operating years; failure probabilities are also substantially lower because of the probability distributions used in the PRAISE calculations. A further conclusion is that large-LOCA probability values calculated using PRAISE will be quite small, less than 1E-8 per year. The values in this report should be recognized as having inherent limitations and should be considered as estimates and not absolute values. 24 refs

  18. Uncertainties and quantification of common cause failure rates and probabilities for system analyses

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2005-01-01

    Simultaneous failures of multiple components due to common causes at random times are modelled by constant multiple-failure rates. A procedure is described for quantification of common cause failure (CCF) basic event probabilities for system models using plant-specific and multiple-plant failure-event data. Methodology is presented for estimating CCF-rates from event data contaminated with assessment uncertainties. Generalised impact vectors determine the moments for the rates of individual systems or plants. These moments determine the effective numbers of events and observation times to be input to a Bayesian formalism to obtain plant-specific posterior CCF-rates. The rates are used to determine plant-specific common cause event probabilities for the basic events of explicit fault tree models depending on test intervals, test schedules and repair policies. Three methods are presented to determine these probabilities such that the correct time-average system unavailability can be obtained with single fault tree quantification. Recommended numerical values are given and examples illustrate different aspects of the methodology

  19. Eco-reliable path finding in time-variant and stochastic networks

    International Nuclear Information System (INIS)

    Li, Wenjie; Yang, Lixing; Wang, Li; Zhou, Xuesong; Liu, Ronghui; Gao, Ziyou

    2017-01-01

    This paper addresses a route guidance problem for finding the most eco-reliable path in time-variant and stochastic networks such that travelers can arrive at the destination with the maximum on-time probability while meeting vehicle emission standards imposed by government regulators. To characterize the dynamics and randomness of transportation networks, the link travel times and emissions are assumed to be time-variant random variables correlated over the entire network. A 0–1 integer mathematical programming model is formulated to minimize the probability of late arrival by simultaneously considering the least expected emission constraint. Using the Lagrangian relaxation approach, the primal model is relaxed into a dualized model which is further decomposed into two simple sub-problems. A sub-gradient method is developed to reduce gaps between upper and lower bounds. Three sets of numerical experiments are tested to demonstrate the efficiency and performance of our proposed model and algorithm. - Highlights: • The most eco-reliable path is defined in time-variant and stochastic networks. • The model is developed with on-time arrival probability and emission constraints. • The sub-gradient and label correcting algorithm are integrated to solve the model. • Numerical experiments demonstrate the effectiveness of developed approaches.

  20. Incorporation of various uncertainties in dependent failure-probability estimation

    International Nuclear Information System (INIS)

    Samanta, P.K.; Mitra, S.P.

    1982-01-01

    This paper describes an approach that allows the incorporation of various types of uncertainties in the estimation of dependent failure (common mode failure) probability. The types of uncertainties considered are attributable to data, modeling and coupling. The method developed is applied to a class of dependent failures, i.e., multiple human failures during testing, maintenance and calibration. Estimation of these failures is critical as they have been shown to be significant contributors to core melt probability in pressurized water reactors

  1. Impact of proof test interval and coverage on probability of failure of safety instrumented function

    International Nuclear Information System (INIS)

    Jin, Jianghong; Pang, Lei; Hu, Bin; Wang, Xiaodong

    2016-01-01

    Highlights: • Introducing proof test coverage makes the calculation of the probability of failure for a SIF more accurate. • The probability of failure undetected by the proof test is defined independently as P_TIF and calculated. • P_TIF is quantified using reliability block diagrams and the simplified PFD_avg formula. • Improving proof test coverage and adopting a reasonable test period can reduce the probability of failure for a SIF. - Abstract: Imperfection of the proof test can result in failure of the safety function of a safety instrumented system (SIS) at any time in its life period. IEC 61508 and other references have ignored, or only superficially analysed, the imperfection of proof tests. In order to further study the impact of proof test imperfection on the probability of failure of a safety instrumented function (SIF), the necessity of proof testing and the influence of its imperfection on system performance are first analysed theoretically. The probability of failure of the SIF resulting from proof test imperfection is defined as the probability of test independent failures (P_TIF), and P_TIF is calculated separately by introducing proof test coverage and adopting reliability block diagrams, with reference to the simplified calculation formula for the average probability of failure on demand (PFD_avg). The results show that a shorter proof test period and higher proof test coverage give a smaller probability of failure for the SIF, and that the probability of failure calculated with proof test coverage included is more accurate.
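
    A common simplified approximation conveys the mechanism: with proof test coverage c, the covered fraction of dangerous undetected failures accrues over the proof test interval T1, while the residual test-independent fraction accrues over the full mission time T2. The rate and intervals below are assumed values, and the two-term formula is the usual textbook PFD_avg approximation rather than the paper's full model.

    ```python
    lam_du = 2e-7      # dangerous undetected failure rate [1/h] -- assumed
    T1 = 8760.0        # proof test interval: 1 year
    T2 = 87600.0       # mission time between full overhauls: 10 years

    for c in (1.0, 0.9, 0.6):
        # Faults caught by the proof test accrue over T1; the residual (1 - c)
        # fraction -- the test independent failures -- accrues over the full T2.
        pfd = c * lam_du * T1 / 2 + (1 - c) * lam_du * T2 / 2
        print(f"coverage {c:.0%}: PFD_avg ~ {pfd:.2e}")
    ```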

  2. Integrated failure probability estimation based on structural integrity analysis and failure data: Natural gas pipeline case

    International Nuclear Information System (INIS)

    Dundulis, Gintautas; Žutautaitė, Inga; Janulionis, Remigijus; Ušpuras, Eugenijus; Rimkevičius, Sigitas; Eid, Mohamed

    2016-01-01

    In this paper, the authors present an approach as an overall framework for the estimation of the failure probability of pipelines based on: the results of the deterministic-probabilistic structural integrity analysis (taking into account loads, material properties, geometry, boundary conditions, crack size, and defected zone thickness), the corrosion rate, the number of defects, and failure data (incorporated into the model via the Bayesian method). The proposed approach is applied to estimate the failure probability of a selected part of the Lithuanian natural gas transmission network. The presented approach for the estimation of integrated failure probability is a combination of several different analyses allowing us to obtain: the critical crack length and depth, the failure probability of the defected zone thickness, and the dependency of the failure probability on the age of the natural gas transmission pipeline. A model uncertainty analysis and an uncertainty propagation analysis are performed as well. - Highlights: • Degradation mechanisms of natural gas transmission pipelines. • Fracture mechanics analysis of the pipe with a crack. • Stress evaluation of the pipe with a critical crack. • Deterministic-probabilistic structural integrity analysis of a gas pipeline. • Integrated estimation of pipeline failure probability by the Bayesian method.

  3. Determination of the failure probability in the weld region of ap-600 vessel for transient condition

    International Nuclear Information System (INIS)

    Wahyono, I.P.

    1997-01-01

    The failure probability in the weld region of the AP-600 vessel was determined for a transient scenario. The transient considered is an increase of heat removal from the primary cooling system due to sudden opening of safety valves or steam relief valves on the secondary cooling system or the steam generator. Temperature and pressure in the vessel were taken as the basis of the deterministic calculation of the stress intensity factor. The film coefficient of convective heat transfer is calculated as a function of transient time and water parameters. Pressure, material temperature, flaw depth and transient time are the variables for the stress intensity factor. The failure probability was then evaluated using this information together with the flaw and probability distributions of OCTAVIA II and Marshall. The failure probability was calculated by probabilistic fracture mechanics simulation applied to the weld region. Failure of the vessel is assumed to be failure of the weld material containing one crack for which the applied stress intensity factor exceeds the critical stress intensity factor. The VISA II code (Vessel Integrity Simulation Analysis II) was used for the deterministic calculation and the simulation. The failure probability of the material is 1E-5 for the OCTAVIA II distribution and 4E-6 for the Marshall distribution for each postulated transient event. The failure occurred 1.7 minutes into the transient, at a pressure of 12.53 ksi.

  4. Pipe failure probability - the Thomas paper revisited

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    2000-01-01

    Almost twenty years ago, in Volume 2 of Reliability Engineering (the predecessor of Reliability Engineering and System Safety), a paper by H. M. Thomas of Rolls Royce and Associates Ltd. presented a generalized approach to the estimation of piping and vessel failure probability. The 'Thomas approach' used insights from actual failure statistics to calculate the probability of leakage and the conditional probability of rupture given leakage. It was intended for practitioners without access to data on the service experience with piping and piping system components. This article revisits the Thomas paper by drawing on insights from the development of a new database on piping failures in commercial nuclear power plants worldwide (SKI-PIPE). Partially sponsored by the Swedish Nuclear Power Inspectorate (SKI), the R&D leading up to this note was performed during 1994-1999. Motivated by the data requirements of reliability analysis and probabilistic safety assessment (PSA), the new database supports statistical analysis of piping failure data. Against the background of this database development program, the article reviews the applicability of the 'Thomas approach' in applied risk and reliability analysis. It addresses the question of whether a new and expanded database on the service experience with piping systems would alter the original piping reliability correlation as suggested by H. M. Thomas.

  5. Cladding failure probability modeling for risk evaluations of fast reactors

    International Nuclear Information System (INIS)

    Mueller, C.J.; Kramer, J.M.

    1987-01-01

    This paper develops the methodology to incorporate cladding failure data and associated modeling into risk evaluations of liquid metal-cooled fast reactors (LMRs). Current US innovative designs for metal-fueled pool-type LMRs take advantage of inherent reactivity feedback mechanisms to limit reactor temperature increases in response to classic anticipated-transient-without-scram (ATWS) initiators. Final shutdown without reliance on engineered safety features can then be accomplished if sufficient time is available for operator intervention to terminate fission power production and/or provide auxiliary cooling prior to significant core disruption. Coherent cladding failure under the sustained elevated temperatures of ATWS events serves as one indicator of core disruption. In this paper we combine uncertainties in cladding failure data with uncertainties in calculations of ATWS cladding temperature conditions to calculate probabilities of cladding failure as a function of the time for accident recovery

  6. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' <= t). The limitations on the applicability of the method are also discussed. It is concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  7. Cladding failure probability modeling for risk evaluations of fast reactors

    International Nuclear Information System (INIS)

    Mueller, C.J.; Kramer, J.M.

    1987-01-01

    This paper develops the methodology to incorporate cladding failure data and associated modeling into risk evaluations of liquid metal-cooled fast reactors (LMRs). Current U.S. innovative designs for metal-fueled pool-type LMRs take advantage of inherent reactivity feedback mechanisms to limit reactor temperature increases in response to classic anticipated-transient-without-scram (ATWS) initiators. Final shutdown without reliance on engineered safety features can then be accomplished if sufficient time is available for operator intervention to terminate fission power production and/or provide auxiliary cooling prior to significant core disruption. Coherent cladding failure under the sustained elevated temperatures of ATWS events serves as one indicator of core disruption. In this paper we combine uncertainties in cladding failure data with uncertainties in calculations of ATWS cladding temperature conditions to calculate probabilities of cladding failure as a function of the time for accident recovery. (orig.)

  8. Finding upper bounds for software failure probabilities - experiments and results

    International Nuclear Information System (INIS)

    Kristiansen, Monica; Winther, Rune

    2005-09-01

    This report looks into some aspects of using Bayesian hypothesis testing to find upper bounds for software failure probabilities. In the first part, the report evaluates the Bayesian hypothesis testing approach for finding upper bounds for failure probabilities of single software components. The report shows how different choices of prior probability distribution for a software component's failure probability influence the number of tests required to obtain adequate confidence in the component. In the evaluation, both the effect of the shape of the prior distribution and the effect of one's prior confidence in the software component were investigated. In addition, different choices of prior probability distribution are discussed based on their relevance in a software context. In the second part, ideas are given on how the Bayesian hypothesis testing approach can be extended to assess systems consisting of multiple software components. One of the main challenges when assessing such systems is to include dependency aspects in the software reliability models; different types of failure dependency between software components must be modelled differently. Identifying the different types of failure dependency is therefore an important condition for choosing a prior probability distribution that correctly reflects one's prior belief in the probability of software components failing dependently. In this report, software components include both general in-house software components and pre-developed software components (e.g. COTS, SOUP, etc.). (Author)
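
    The single-component calculation is short enough to sketch: with a Beta prior on the failure probability per demand, n failure-free tests give a Beta posterior, and one searches for the smallest n at which the posterior probability of being below the claimed bound reaches the required confidence. The bound, confidence level and the two priors below are illustrative; the point is how strongly the prior shape moves the required number of tests.

    ```python
    from scipy import stats

    p0, conf = 1e-3, 0.99        # claimed failure probability bound, required confidence
    priors = {"uniform Beta(1,1)": (1.0, 1.0), "Jeffreys Beta(0.5,0.5)": (0.5, 0.5)}

    for name, (a, b) in priors.items():
        n = 0
        # Posterior after n failure-free tests is Beta(a, b + n).
        while stats.beta(a, b + n).cdf(p0) < conf:
            n += 1
        print(f"{name}: {n} failure-free tests for P(p < {p0}) >= {conf}")
    ```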

  9. Quantifying evolutionary dynamics from variant-frequency time series

    Science.gov (United States)

    Khatri, Bhavin S.

    2016-09-01

    From Kimura's neutral theory of protein evolution to Hubbell's neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher's angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series.
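
    The variance-stabilizing property that makes Fisher's angular transformation useful here is easy to check by simulating one generation of Wright-Fisher drift: the standard deviation of the raw frequency change depends on the current frequency, while that of arcsin √f is roughly 1/√(4N) regardless. The population size and starting frequencies below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N = 1000                       # haploid population size (drift only, no selection)

    for f0 in (0.1, 0.3, 0.5):
        f = np.full(200_000, f0)
        f1 = rng.binomial(N, f) / N                   # one generation of binomial drift
        d_raw = np.std(f1 - f)                        # scale varies: sqrt(f0*(1-f0)/N)
        d_ang = np.std(np.arcsin(np.sqrt(f1)) - np.arcsin(np.sqrt(f)))
        print(f"f0={f0}: sd(raw)={d_raw:.4f}  sd(angular)={d_ang:.4f}"
              f"  1/sqrt(4N)={0.5 / np.sqrt(N):.4f}")
    ```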

  10. Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel

    Science.gov (United States)

    Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung

    2017-04-01

    The purposes of this study include: (1) designing an automatic monitoring system for river dikes; and (2) developing a framework which enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-year return period discharge, which means the probability of dike failure is much higher than that of overtopping. We installed the dike monitoring system on the Chiu-She Dike, located on the middle reach of the Dajia River, Taiwan. The system includes: (1) vertically distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of the dike; (3) a wireless floating device to measure the scouring depth at the toe of the dike; and (4) a water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level inside the dike occurred 15 hours after the peak discharge. We developed a framework which accounts for the uncertainties in return period discharge, Manning's n, scouring depth, soil cohesion, and friction angle, and enables the determination of dike failure probabilities for various failure modes such as overtopping, surface erosion, mass failure, toe sliding and overturning. The framework was applied to the Chiu-She, Feng-Chou, and Ke-Chuang Dikes on the Dajia River. The results indicate that toe sliding or overturning has a higher probability than the other failure modes. Furthermore, the overall failure probability (integrating the different failure modes) reaches 50% under the 10-year return period flood, which agrees with the historical failure data for the study reaches.

  11. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, the combined method of response surface and importance sampling was applied to the calculation of the parameter failure probability of a thermodynamic system. A mathematical model was presented for parameter failure of the physical process in the thermodynamic system, from which the combined response surface/importance sampling algorithm was established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system were also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics: it achieves satisfactory precision with less computing time than the direct sampling method, while avoiding the drawbacks of the response surface method. (authors)

  12. Human error recovery failure probability when using soft controls in computerized control rooms

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Wondea

    2014-01-01

    Much of the literature categorizes the recovery process into three phases: detection of the problem situation, explanation of the problem causes or countermeasures against the problem, and end of recovery. Although the focus of recovery research has been on categorizing recovery phases and modeling the recovery process, research on human recovery failure probabilities has not been performed actively, and only a few studies have addressed recovery failure probabilities empirically. In short, the research performed so far has several problems in terms of its use in human reliability analysis (HRA). By adopting new human-system interfaces based on computer technologies, the operating environment of main control rooms (MCRs) in NPPs has changed from conventional MCRs to advanced MCRs. Because of the different interfaces between conventional and advanced MCRs, different recovery failure probabilities should be considered in HRA for advanced MCRs. Therefore, this study carries out an empirical analysis of human error recovery probabilities in an advanced MCR mockup called the compact nuclear simulator (CNS). The aim of this work is not only to compile a recovery failure probability database using the simulator for advanced MCRs but also to collect recovery failure probabilities for defined human error modes, in order to compare which human error mode has the highest recovery failure probability. The results show that the recovery failure probability for wrong screen selection was the lowest among the human error modes, which means that most human errors related to wrong screen selection can be recovered. On the other hand, the recovery failure probabilities for operation selection omission and delayed operation were 1.0. These results imply that once subjects omitted a task in the procedure, they had difficulty finding and recovering their errors without a supervisor's assistance. Wrong screen selection also had an effect on delayed operation.

  13. Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow-banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.

  14. A statistical analysis on failure-to open/close probability of pneumatic valve in sodium cooling systems

    International Nuclear Information System (INIS)

    Kurisaka, Kenichi

    1999-11-01

    The objective of this study is to develop fundamental data for examining the efficiency of preventive maintenance and surveillance tests from the standpoint of failure probability. As a major standby component, the pneumatic valve in sodium cooling systems was selected. A statistical analysis was made of the trend of the valve failure-to-open/close (FTOC) probability depending on the number of demands ('n'), time since installation ('t') and standby time since the last open/close action ('T'). The analysis is based on field data of operating and failure experiences stored in the Component Reliability Database and Statistical Analysis System for LMFBRs (CORDS). In the analysis, the FTOC probability ('P') was expressed as follows: P = 1 - exp{-C - En - F/n - λT - aT(t - T/2) - AT²/2}. The functional parameters 'C', 'E', 'F', 'λ', 'a' and 'A' were estimated with the maximum likelihood estimation method. As a result, the FTOC probability is well approximated by the failure probability derived from the failure rate under the Poisson assumption only when the valve cycle (i.e. the open-close-open cycle) exceeds about 100 days. When the valve cycle is shorter than about 100 days, the FTOC probability can be adequately estimated with the parametric model proposed in this study. The results obtained from this study may make it possible to derive an adequate frequency of surveillance tests for a given target FTOC probability. (author)
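
    The abstract's parametric model is explicit enough to code directly; a sketch with placeholder parameter values (the CORDS-estimated values are not given in the abstract):

    ```python
    import numpy as np

    def ftoc_probability(n, t, T, C=1e-3, E=1e-5, F=5e-3, lam=1e-4, a=1e-7, A=1e-8):
        """Failure-to-open/close probability from the abstract's model:
            P = 1 - exp{-C - E*n - F/n - lam*T - a*T*(t - T/2) - A*T**2/2}
        n: demands since installation, t: time since installation [days],
        T: standby time since the last open/close action [days].
        Parameter values here are placeholders, not the CORDS estimates."""
        expo = C + E * n + F / n + lam * T + a * T * (t - T / 2) + A * T**2 / 2
        return 1.0 - np.exp(-expo)

    print(f"P(FTOC) ~ {ftoc_probability(n=50, t=3650.0, T=30.0):.2e}")
    ```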

  15. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    Science.gov (United States)

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
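
    The model-selection step can be illustrated with ordinary maximum likelihood on uncertainty-free, closed inter-event times: fit candidate renewal-time distributions and compare AIC, with the exponential corresponding to the Poisson-process conclusion. The inter-event times below are invented, and the study's actual methods additionally propagate age-dating uncertainty and handle the open intervals before the first and after the last event.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical inter-event times (kyr) between dated mass-transport deposits.
    dt = np.array([12.0, 8.5, 22.0, 15.0, 5.0, 18.0, 9.5])

    models = {
        "exponential (Poisson process)": stats.expon,
        "lognormal (quasi-periodic)":    stats.lognorm,
    }
    for name, dist in models.items():
        params = dist.fit(dt, floc=0.0)          # MLE with location pinned at zero
        ll = dist.logpdf(dt, *params).sum()
        k = len(params) - 1                      # floc=0 is not an estimated parameter
        print(f"{name}: AIC = {2 * k - 2 * ll:.1f}")
    ```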

  16. Estimation of component failure probability from masked binomial system testing data

    International Nuclear Information System (INIS)

    Tan Zhibin

    2005-01-01

    The component failure probability estimates obtained from analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in the field, which is similar to the test environment. In practice, this type of analysis is often confounded by the problem of data masking: the status of tested components is unknown. Methods that account for this type of uncertainty are usually computationally intensive and impractical for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage. The component coverage of the test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model is capable of analyzing systems in series, parallel and any other user-defined structures. A case study illustrates an application in test case prioritization.

  17. Sensitivity analysis on the effect of software-induced common cause failure probability in the computer-based reactor trip system unavailability

    International Nuclear Information System (INIS)

    Kamyab, Shahabeddin; Nematollahi, Mohammadreza; Shafiee, Golnoush

    2013-01-01

    Highlights: ► Importance and sensitivity analyses have been performed for a digitized reactor trip system. ► The results show acceptable trip unavailability for software failure probabilities below 1E-4. ► However, the Fussell-Vesely value indicates that software common cause failure is still risk significant. ► Diversity and effective testing are found beneficial in reducing the software contribution. - Abstract: The reactor trip system has been digitized in advanced nuclear power plants, since the programmable nature of computer-based systems has a number of advantages over non-programmable systems. However, software is still vulnerable to common cause failure (CCF). Residual software faults represent a CCF concern which threatens the achieved improvements. This study attempts to assess the effectiveness of so-called defensive strategies against software CCF with respect to reliability. Sensitivity analysis has been performed by re-quantifying the models with varied software failure probabilities. Importance measures were then estimated in order to reveal the specific contribution of software CCF to the trip failure probability. The results reveal the importance and effectiveness of signal and software diversity as applicable strategies to mitigate inefficiencies due to software CCF in the reactor trip system (RTS). No significant change is observed in the RTS failure probability for basic software CCF probabilities greater than 1E-4; however, the related Fussell-Vesely importance is greater than 0.005 for lower values. The study concludes that the risk associated with software-based systems is a multi-variate function which requires compromises to be examined in more precise and comprehensive studies.

  18. Main factors for fatigue failure probability of pipes subjected to fluid thermal fluctuation

    International Nuclear Information System (INIS)

    Machida, Hideo; Suzuki, Masaaki; Kasahara, Naoto

    2015-01-01

    It is very important to grasp the failure probability and failure mode appropriately in order to carry out risk reduction measures at nuclear power plants. To clarify the important factors for the failure probability and failure mode of pipes subjected to fluid thermal fluctuation, failure probability analyses were performed while varying the values of the stress range, stress ratio, stress components and threshold of the stress intensity factor range. The important factors for the failure probability are the stress range, the stress ratio (mean stress condition) and the threshold of the stress intensity factor range. The important factor for the failure mode is the circumferential angle range of the fluid thermal fluctuation. When a large fluid thermal fluctuation acts on the entire circumferential surface of the pipe, the probability of pipe breakage increases, calling for measures to prevent such a failure and reduce the risk to the plant. When the circumferential angle subjected to fluid thermal fluctuation is small, the failure mode of the piping is leakage, and corrective maintenance may be applicable from the viewpoint of plant risk. (author)

  19. Failure probability analysis on mercury target vessel

    International Nuclear Information System (INIS)

    Ishikura, Syuichi; Futakawa, Masatoshi; Kogawa, Hiroyuki; Sato, Hiroshi; Haga, Katsuhiro; Ikeda, Yujiro

    2005-03-01

    Failure probability analysis was carried out to estimate the lifetime of the mercury target which will be installed into the JSNS (Japan Spallation Neutron Source) in J-PARC (Japan Proton Accelerator Research Complex). The lifetime was estimated taking the loading conditions and material degradation into account. The loads considered on the target vessel were the static stresses due to thermal expansion and static pre-pressurization of the He gas and mercury, and the dynamic stresses due to the thermal-shock pressure waves generated repeatedly at 25 Hz. Materials used in the target vessel will be degraded by fatigue, neutron and proton irradiation, mercury immersion, pitting damage, etc. The imposed stresses were evaluated through static and dynamic structural analyses. The material degradation was deduced from published experimental data. As a result, it was quantitatively confirmed that the failure probability of the safety hull over the design lifetime is very low, about 10⁻¹¹, meaning that it will hardly ever fail during the design lifetime. On the other hand, the beam window of the mercury vessel, which suffers high-pressure waves, exhibits a failure probability of 12%. It was concluded, therefore, that mercury leaking from a failure at the beam window would be adequately retained in the space between the safety hull and the mercury vessel, monitored by mercury-leakage sensors. (author)

  20. Sensitivity of the probability of failure to probability of detection curve regions

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2016-01-01

    Non-destructive inspection (NDI) techniques have been shown to play a vital role in fracture control plans, structural health monitoring, and ensuring availability and reliability of piping, pressure vessels, mechanical and aerospace equipment. Probabilistic fatigue simulations are often used in order to determine the efficacy of an inspection procedure with the NDI method modeled as a probability of detection (POD) curve. These simulations can be used to determine the most advantageous NDI method for a given application. As an aid to this process, a first order sensitivity method of the probability-of-failure (POF) with respect to regions of the POD curve (lower tail, middle region, right tail) is developed and presented here. The sensitivity method computes the partial derivative of the POF with respect to a change in each region of a POD or multiple POD curves. The sensitivities are computed at no cost by reusing the samples from an existing Monte Carlo (MC) analysis. A numerical example is presented considering single and multiple inspections. - Highlights: • Sensitivities of probability-of-failure to a region of probability-of-detection curve. • The sensitivities are computed with negligible cost. • Sensitivities identify the important region of a POD curve. • Sensitivities can be used as a guide to selecting the optimal POD curve.
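
    A minimal version of the idea, under a toy one-inspection fatigue model and an assumed log-logistic POD curve: the POF is estimated by Monte Carlo as the mean of (failure indicator) x (probability the crack is missed at inspection), and the derivative of the POF with respect to an additive perturbation of the POD over each region falls out of the same samples with no extra model evaluations.

```python
import numpy as np

# Toy model (not the paper's): deterministic crack growth from a random
# initial size, one inspection at mid-life, failure if the final size
# exceeds a critical value. All parameters are assumed for illustration.
rng = np.random.default_rng(0)
n = 200_000
a0 = rng.lognormal(np.log(0.5), 0.4, size=n)   # initial crack size [mm]
a_insp, a_end = 1.5 * a0, 2.25 * a0            # sizes at inspection / end of life
a_crit = 2.0                                   # failure threshold [mm]

def pod(a):                                    # assumed log-logistic POD curve
    return 1.0 / (1.0 + np.exp(-(np.log(a) - np.log(1.0)) / 0.25))

fails = a_end > a_crit
pof = np.mean(fails * (1.0 - pod(a_insp)))     # missed crack -> failure
print(f"POF = {pof:.3e}")

regions = {"lower tail": a_insp < 0.8,
           "middle":     (a_insp >= 0.8) & (a_insp < 1.3),
           "upper tail": a_insp >= 1.3}
for name, mask in regions.items():
    # d(POF)/d(eps) for POD -> POD + eps on this region, from the same samples
    print(f"  dPOF/deps ({name:>10s}): {-np.mean(fails & mask):+.3e}")
```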

  1. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation based on Markov chain Monte Carlo was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper. The probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
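
    The mechanics of the method can be shown in a few lines on a toy limit state with a known answer; this sketch uses standard-normal inputs and a plain random-walk Metropolis step, and is not the AP1000 model analyzed in the paper.

```python
import math
import numpy as np

# Subset simulation sketch: P(g(X) >= b) for X ~ N(0, I). The intermediate
# thresholds are set so each conditional level has probability p0.
rng = np.random.default_rng(3)
d, b = 2, 5.5                            # dimension and failure threshold
g = lambda v: float(np.sum(v))           # toy response surrogate
p0, n = 0.1, 3000                        # level probability, samples per level

x = rng.standard_normal((n, d))
y = x.sum(axis=1)
prob = 1.0
for _ in range(20):
    thresh = np.quantile(y, 1.0 - p0)
    if thresh >= b:
        prob *= np.mean(y >= b)          # final level: direct estimate
        break
    prob *= p0                           # P(next level | current level)
    seeds = x[y >= thresh]
    xs, ys = [], []
    for s in seeds:                      # MCMC conditional on {g >= thresh}
        cur, gcur = s.copy(), g(s)
        for _ in range(n // len(seeds)):
            cand = cur + 0.8 * rng.standard_normal(d)
            accept = rng.random() < min(1.0, math.exp(0.5 * (cur @ cur - cand @ cand)))
            if accept and g(cand) >= thresh:
                cur, gcur = cand, g(cand)
            xs.append(cur.copy())
            ys.append(gcur)
    x, y = np.array(xs), np.array(ys)

exact = 0.5 * math.erfc(b / 2.0)         # P(N(0, 2) >= b)
print(f"subset simulation P_f ~ {prob:.2e} (exact {exact:.2e})")
```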

  2. Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid

    2012-01-01

    This paper discusses the estimation of the failure probability of wind turbines required by design codes of practice. The Standard Monte Carlo (SMC) simulations may conceptually be used for this purpose as an alternative to the popular Peaks-Over-Threshold (POT) method. However......, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy...... is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated...

  3. Failure probability analyses for PWSCC in Ni-based alloy welds

    International Nuclear Information System (INIS)

    Udagawa, Makoto; Katsuyama, Jinya; Onizawa, Kunio; Li, Yinsheng

    2015-01-01

    A number of cracks due to primary water stress corrosion cracking (PWSCC) in pressurized water reactors and Ni-based alloy stress corrosion cracking (NiSCC) in boiling water reactors have been detected around Ni-based alloy welds. The causes of crack initiation and growth due to stress corrosion cracking include weld residual stress, operating stress, the materials, and the environment. We have developed the analysis code PASCAL-NP for calculating the failure probability and assessing the structural integrity of cracked components on the basis of probabilistic fracture mechanics (PFM), considering PWSCC and NiSCC. This PFM analysis code has functions for calculating the incubation time of PWSCC and NiSCC crack initiation, evaluating crack growth behavior considering certain crack location and orientation patterns, and evaluating failure behavior near Ni-based alloy welds due to PWSCC and NiSCC in a probabilistic manner. Herein, actual plants affected by PWSCC have been analyzed using PASCAL-NP. Failure probabilities calculated by PASCAL-NP are in reasonable agreement with the detection data. Furthermore, useful knowledge related to leakage due to PWSCC was obtained through parametric studies using this code.

  4. Failure probability of PWR reactor coolant loop piping

    International Nuclear Information System (INIS)

    Lo, T.; Woo, H.H.; Holman, G.S.; Chou, C.K.

    1984-02-01

    This paper describes the results of assessments performed on the PWR coolant loop piping of Westinghouse and Combustion Engineering plants. For direct double-ended guillotine break (DEGB), consideration was given to crack existence probability, initial crack size distribution, hydrostatic proof testing, preservice inspection, leak detection probability, crack growth characteristics, and failure criteria based on net section stress failure and the tearing modulus stability concept. For indirect DEGB, fragilities of major component supports were estimated. The system-level fragility was then calculated based on the Boolean expression involving these fragilities. The probability of indirect DEGB due to seismic effects was calculated by convolving the system-level fragility and the seismic hazard curve. The results indicate that the probability of occurrence of both direct and indirect DEGB is extremely small; thus, postulation of DEGB in design should be eliminated and replaced by more realistic criteria.

  5. Heterozygous RTEL1 variants in bone marrow failure and myeloid neoplasms.

    Science.gov (United States)

    Marsh, Judith C W; Gutierrez-Rodrigues, Fernanda; Cooper, James; Jiang, Jie; Gandhi, Shreyans; Kajigaya, Sachiko; Feng, Xingmin; Ibanez, Maria Del Pilar F; Donaires, Flávia S; Lopes da Silva, João P; Li, Zejuan; Das, Soma; Ibanez, Maria; Smith, Alexander E; Lea, Nicholas; Best, Steven; Ireland, Robin; Kulasekararaj, Austin G; McLornan, Donal P; Pagliuca, Anthony; Callebaut, Isabelle; Young, Neal S; Calado, Rodrigo T; Townsley, Danielle M; Mufti, Ghulam J

    2018-01-09

    Biallelic germline mutations in RTEL1 (regulator of telomere elongation helicase 1) result in pathologic telomere erosion and cause dyskeratosis congenita. However, the role of RTEL1 mutations in other bone marrow failure (BMF) syndromes and myeloid neoplasms, and the contribution of monoallelic RTEL1 mutations to disease development are not well defined. We screened 516 patients for germline mutations in telomere-associated genes by next-generation sequencing in 2 independent cohorts; one constituting unselected patients with idiopathic BMF, unexplained cytopenia, or myeloid neoplasms (n = 457) and a second cohort comprising selected patients on the basis of the suspicion of constitutional/familial BMF (n = 59). Twenty-three RTEL1 variants were identified in 27 unrelated patients from both cohorts: 7 variants were likely pathogenic, 13 were of uncertain significance, and 3 were likely benign. Likely pathogenic RTEL1 variants were identified in 9 unrelated patients (7 heterozygous and 2 biallelic). Most patients were suspected to have constitutional BMF, which included aplastic anemia (AA), unexplained cytopenia, hypoplastic myelodysplastic syndrome, and macrocytosis with hypocellular bone marrow. In the other 18 patients, RTEL1 variants were likely benign or of uncertain significance. Telomeres were short in 21 patients (78%), and 3' telomeric overhangs were significantly eroded in 4. In summary, heterozygous RTEL1 variants were associated with marrow failure, and telomere length measurement alone may not identify patients with telomere dysfunction carrying RTEL1 variants. Pathogenicity assessment of heterozygous RTEL1 variants relied on a combination of clinical, computational, and functional data required to avoid misinterpretation of common variants.

  6. A probability model for the failure of pressure containing parts

    International Nuclear Information System (INIS)

    Thomas, H.M.

    1978-01-01

    The model provides a method of estimating the order of magnitude of the leakage failure probability of pressure containing parts. It is a fatigue-based model which makes use of the statistics available for both specimens and vessels. Some novel concepts are introduced, but essentially the model simply quantifies the obvious, i.e., that failure probability increases with increases in stress levels, number of cycles, volume of material and volume of weld metal. A further model based on fracture mechanics estimates the catastrophic fraction of leakage failures. (author)

  7. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma priors are used for failure rate estimation, and binomial data with beta priors are used for estimation of the failure probability per demand. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well as, if not better than, the alternative more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
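
    A stripped-down flavor of the beta-binomial branch (hypothetical multi-plant data; the actual PREB estimator is more elaborate, e.g., in its treatment of assessment uncertainty): match a beta prior to the between-plant moments after removing the average binomial sampling variance, then shrink each plant's estimate toward the prior mean.

```python
import numpy as np

# Hypothetical failure-on-demand records from several plants.
k = np.array([0, 2, 0, 10, 1])                 # failures
n = np.array([100, 200, 150, 500, 120])        # demands

p_hat = k / n
m = p_hat.mean()
# Prior (between-plant) variance ~ observed variance minus the average
# binomial sampling variance; floored so degenerate data still give a prior.
v = max(p_hat.var(ddof=1) - np.mean(p_hat * (1 - p_hat) / n), 1e-8)

c = m * (1 - m) / v - 1.0                      # moment matching for Beta(a, b)
a, b = m * c, (1 - m) * c
print(f"matched prior: Beta({a:.2f}, {b:.1f})")
for ki, ni in zip(k, n):
    post_mean = (a + ki) / (a + b + ni)        # Bayes update per plant
    print(f"  k={ki:2d}, n={ni:3d}: posterior mean = {post_mean:.2e}")
```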

  8. Probabilistic analysis of Millstone Unit 3 ultimate containment failure probability given high pressure: Chapter 14

    International Nuclear Information System (INIS)

    Bickel, J.H.

    1983-01-01

    The quantification of the containment event trees in the Millstone Unit 3 Probabilistic Safety Study utilizes a conditional probability of failure given high pressure which is based on a new approach. The generation of this conditional probability was based on a weakest-link failure mode model which considered contributions from a number of overlapping failure modes. This overlap effect was due to a number of failure modes whose mean failure pressures were clustered within a 5 psi range and whose uncertainties, due to variance in material strengths and analytical uncertainty, were between 9 and 15 psi. Based on a review of possible probability laws to describe the failure probability of individual structural failure modes, it was determined that a Weibull probability law most adequately described the randomness in the physical process of interest. The resultant conditional probability of failure is found to have a median failure pressure of 132.4 psia. The corresponding 5-95 percentile values are 112 psia and 146.7 psia, respectively. The skewed nature of the conditional probability of failure vs. pressure results in a lower overall containment failure probability for an appreciable number of the severe accident sequences of interest, but also in probabilities which are more rigorously traceable from first principles.
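
    The weakest-link overlap effect is easy to reproduce numerically. With a handful of failure modes whose mean failure pressures cluster within ~5 psi and whose standard deviations lie between 9 and 15 psi (values invented to echo the abstract, not the Millstone 3 data), the combined fragility curve shifts left of every individual mode, and its median and 5-95 percentile pressures can be recovered by bisection:

```python
from statistics import NormalDist

# Hypothetical overlapping failure modes: (mean failure pressure, sd) in psia.
modes = [(130.0, 9.0), (132.0, 12.0), (134.0, 15.0), (135.0, 10.0)]

def p_fail(pressure):
    # weakest link: the containment fails if any single mode fails
    survive = 1.0
    for mu, sd in modes:
        survive *= 1.0 - NormalDist(mu, sd).cdf(pressure)
    return 1.0 - survive

def pressure_at(prob, lo=50.0, hi=250.0):
    # bisection: pressure at which the combined fragility reaches `prob`
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p_fail(mid) < prob else (lo, mid)
    return 0.5 * (lo + hi)

for q in (0.05, 0.50, 0.95):
    print(f"{q:4.0%} fragility pressure: {pressure_at(q):6.1f} psia")
```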

  9. The Use of Conditional Probability Integral Transformation Method for Testing Accelerated Failure Time Models

    Directory of Open Access Journals (Sweden)

    Abdalla Ahmed Abdel-Ghaly

    2016-06-01

    This paper suggests the use of the conditional probability integral transformation (CPIT) method as a goodness-of-fit (GOF) technique in the field of accelerated life testing (ALT), specifically for validating the underlying distributional assumption in the accelerated failure time (AFT) model. The method is based on transforming the data into independent and identically distributed (i.i.d.) Uniform(0, 1) random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributional assumptions in the AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that this method performs well in the cases of the exponential and lognormal distributions. Finally, a real-life example is provided to illustrate the application of the proposed procedure.
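
    A hedged sketch of the flavor of such a test for the exponential case (the paper's exact transformation and modified statistic may differ): for i.i.d. exponential data, the scaled partial sums S_j/S_n, j = 1, ..., n-1, are distributed as the order statistics of n-1 independent Uniform(0, 1) variables, and their uniformity can be checked with Watson's U^2.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=3.0, size=60)   # simulated failure times

s = np.cumsum(x)
u = s[:-1] / s[-1]                        # conditional PIT; already ascending
m = len(u)

# Watson's U^2, a rotation-invariant Cramer-von Mises variant.
i = np.arange(1, m + 1)
w2 = np.sum((u - (2 * i - 1) / (2 * m)) ** 2) + 1.0 / (12 * m)
u2 = w2 - m * (u.mean() - 0.5) ** 2
print(f"Watson U^2 = {u2:.4f}  (asymptotic 5% critical value ~ 0.187)")
```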

  10. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Yoshitaka; Ohtani, Masanori [Institute of Nuclear Safety System, Inc., Mihama, Fukui (Japan); Fujita, Yushi [TECNOVA Corp., Tokyo (Japan)

    2002-09-01

    In nuclear power plants, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) measures are prepared. It is necessary to evaluate the effectiveness of AM in PSA using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment. However, there has been no suitable quantification method for PSA so far for the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based errors. In this work, we developed a new method for quantifying the decision-making failure probability of an emergency organization deciding an AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) The method can quantify the decision-making failure probability for PSA by general analysts, who do not necessarily possess professional human-factors knowledge, through suitable choices of a basic failure probability and an error factor. (2) The decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and in the range of 0.10 to 0.19 using the detailed evaluation method, as the result of a trial evaluation based on severe accident analysis of a typical PWR plant; as a result of a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) Theoretically, the failure probability from the screening evaluation method exceeds that from the detailed evaluation method with 99% probability, and for the AMs in this study it did so in 100% of cases. From this result, it was shown that the screening evaluation method was more conservative than the detailed evaluation method, and the screening evaluation method satisfied

  11. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka; Ohtani, Masanori; Fujita, Yushi

    2002-01-01

    In nuclear power plants, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) measures are prepared. It is necessary to evaluate the effectiveness of AM in PSA using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment. However, there has been no suitable quantification method for PSA so far for the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based errors. In this work, we developed a new method for quantifying the decision-making failure probability of an emergency organization deciding an AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) The method can quantify the decision-making failure probability for PSA by general analysts, who do not necessarily possess professional human-factors knowledge, through suitable choices of a basic failure probability and an error factor. (2) The decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and in the range of 0.10 to 0.19 using the detailed evaluation method, as the result of a trial evaluation based on severe accident analysis of a typical PWR plant; as a result of a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) Theoretically, the failure probability from the screening evaluation method exceeds that from the detailed evaluation method with 99% probability, and for the AMs in this study it did so in 100% of cases. From this result, it was shown that the screening evaluation method was more conservative than the detailed evaluation method, and the screening evaluation method satisfied

  12. Estimation of probability of failure for damage-tolerant aerospace structures

    Science.gov (United States)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem is collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools is lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This

  13. Apolipoprotein L1 gene variants in deceased organ donors are associated with renal allograft failure.

    Science.gov (United States)

    Freedman, B I; Julian, B A; Pastan, S O; Israni, A K; Schladt, D; Gautreaux, M D; Hauptfeld, V; Bray, R A; Gebel, H M; Kirk, A D; Gaston, R S; Rogers, J; Farney, A C; Orlando, G; Stratta, R J; Mohan, S; Ma, L; Langefeld, C D; Hicks, P J; Palmer, N D; Adams, P L; Palanisamy, A; Reeves-Daniel, A M; Divers, J

    2015-06-01

    Apolipoprotein L1 gene (APOL1) nephropathy variants in African American deceased kidney donors were associated with shorter renal allograft survival in a prior single-center report. APOL1 G1 and G2 variants were genotyped in newly accrued DNA samples from African American deceased donors of kidneys recovered and/or transplanted in Alabama and North Carolina. APOL1 genotypes and allograft outcomes in subsequent transplants from 55 U.S. centers were linked, adjusting for age, sex and race/ethnicity of recipients, HLA match, cold ischemia time, panel reactive antibody levels, and donor type. For 221 transplantations from kidneys recovered in Alabama, there was a statistical trend toward shorter allograft survival in recipients of two-APOL1-nephropathy-variant kidneys (hazard ratio [HR] 2.71; p = 0.06). For all 675 kidneys transplanted from donors at both centers, APOL1 genotype (HR 2.26; p = 0.001) and African American recipient race/ethnicity (HR 1.60; p = 0.03) were associated with allograft failure. Kidneys from African American deceased donors with two APOL1 nephropathy variants reproducibly associate with higher risk for allograft failure after transplantation. These findings warrant consideration of rapidly genotyping deceased African American kidney donors for APOL1 risk variants at organ recovery and incorporation of results into allocation and informed-consent processes. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.

  14. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    Science.gov (United States)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

    The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies involve failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under an independent failure model and a shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
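
    The LIFPMS details are specific to the paper, but the underlying intuition can be sketched generically: under an independent-failure model, weight each link's failure probability by its importance, taken here, as an assumption, to be the fraction of multicast destinations disconnected if that link fails.

```python
# Generic sketch (not the LIFPMS itself): a light-tree rooted at "A" with
# hypothetical per-link failure probabilities; first-order expected loss.
parent = {"B": "A", "C": "A", "D": "B", "E": "B", "F": "C"}
destinations = {"D", "E", "F"}
p_fail = {("A", "B"): 1e-3, ("A", "C"): 2e-3,
          ("B", "D"): 5e-4, ("B", "E"): 5e-4, ("C", "F"): 1e-3}

def lost_destinations(child):
    # destinations whose path to the root passes through edge (parent, child)
    lost = set()
    for dest in destinations:
        node = dest
        while node in parent:
            if node == child:
                lost.add(dest)
                break
            node = parent[node]
    return lost

expected_loss = 0.0
for (u, v), p in p_fail.items():
    importance = len(lost_destinations(v)) / len(destinations)
    expected_loss += p * importance       # first-order (rare-failure) term
    print(f"link {u}-{v}: importance = {importance:.2f}")
print(f"expected fraction of destinations lost ~ {expected_loss:.2e}")
```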

  15. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    Science.gov (United States)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

    Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
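
    A one-dimensional toy version of the pipeline (models and numbers invented, standing in for the jet simulations): each low-fidelity model is used only to build a biasing density; a modest number of high-fidelity evaluations then yields an unbiased importance-sampling estimate; and, treating the competing estimators as independent, inverse-variance weighting fuses them into a single minimal-variance estimate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
width_hi = lambda x: 1.0 + 0.25 * x           # "expensive" jet-width model
width_lo = [lambda x: 1.02 + 0.24 * x,        # two cheap approximations
            lambda x: 0.98 + 0.26 * x]
thresh = 0.2                                  # failure: width below threshold

estimates, variances = [], []
for lo in width_lo:
    # cheap exploration of the failure region with the low-fidelity model
    xs = rng.standard_normal(200_000)
    tail = xs[lo(xs) < thresh]
    q_mu, q_sd = tail.mean(), tail.std() + 0.1   # biasing density N(mu, sd)
    # unbiased estimate from a modest number of high-fidelity evaluations
    z = rng.normal(q_mu, q_sd, size=2_000)
    w = norm.pdf(z) / norm.pdf(z, q_mu, q_sd)    # importance weights
    vals = (width_hi(z) < thresh) * w
    estimates.append(vals.mean())
    variances.append(vals.var(ddof=1) / len(z))

wts = 1.0 / np.array(variances)                  # inverse-variance fusion
p_fused = float(np.sum(wts * np.array(estimates)) / wts.sum())
print(f"fused P_f = {p_fused:.3e} (exact {norm.cdf((thresh - 1.0) / 0.25):.3e})")
```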

  16. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    Energy Technology Data Exchange (ETDEWEB)

    Garza, J. [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States); Millwater, H., E-mail: harry.millwater@utsa.edu [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States)

    2012-04-15

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  17. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2012-01-01

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  18. An analysis of the annual probability of failure of the waste hoist brake system at the Waste Isolation Pilot Plant (WIPP)

    Energy Technology Data Exchange (ETDEWEB)

    Greenfield, M.A. [Univ. of California, Los Angeles, CA (United States); Sargent, T.J.

    1995-11-01

    The Environmental Evaluation Group (EEG) previously analyzed the probability of a catastrophic accident in the waste hoist of the Waste Isolation Pilot Plant (WIPP) and published the results in Greenfield (1990; EEG-44) and Greenfield and Sargent (1993; EEG-53). The most significant safety element in the waste hoist is the hydraulic brake system, whose possible failure was identified in these studies as the most important contributor in accident scenarios. Westinghouse Electric Corporation, Waste Isolation Division has calculated the probability of an accident involving the brake system based on studies utilizing extensive fault tree analyses. This analysis, conducted for the U.S. Department of Energy (DOE), used point estimates to describe the probability of failure and includes failure rates for the various components comprising the brake system. An additional controlling factor in the DOE calculations is the mode of operation of the brake system. This factor enters for the following reason. The basic failure rate per annum of any individual element is called the Event Probability (EP), and is expressed as the probability of failure per annum. The EP in turn is the product of two factors. One is the "reported" failure rate, usually expressed as the probability of failure per hour, and the other is the expected number of hours that the element is in use, called the "mission time". In many instances the "mission time" will be the number of operating hours of the brake system per annum. However, since the operation of the waste hoist system includes regular "reoperational check" tests, the "mission time" for standby components is reduced in accordance with the specifics of the operational timetable.

  19. Failure-probability driven dose painting

    International Nuclear Information System (INIS)

    Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.; Aznar, Marianne C.; Kristensen, Claus A.; Rasmussen, Jacob; Specht, Lena; Berthelsen, Anne K.; Bentzen, Søren M.

    2013-01-01

    Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed. Methods: Patients treated at our center have five tumor subvolumes delineated, from the center of the tumor (PET-positive volume) outwards. The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). The total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose-response is extracted from the literature and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose-response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy. Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%-91%) as compared to the observed TCP of 70%. Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function. The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity.

  20. Calculating failure probabilities for TRISO-coated fuel particles using an integral formulation

    International Nuclear Information System (INIS)

    Miller, Gregory K.; Maki, John T.; Knudson, Darrell L.; Petti, David A.

    2010-01-01

    The fundamental design for a gas-cooled reactor relies on the safe behavior of the coated particle fuel. The coating layers surrounding the fuel kernels in these spherical particles, termed the TRISO coating, act as a pressure vessel that retains fission products. The quality of the fuel is reflected in the number of particle failures that occur during reactor operation, where failed particles become a source for fission products that can then diffuse through the fuel element. The failure probability for any batch of particles, which has traditionally been calculated using the Monte Carlo method, depends on statistical variations in design parameters and on variations in the strengths of coating layers among particles in the batch. An alternative approach to calculating failure probabilities is developed herein that uses direct numerical integration of a failure probability integral. Because this is a multiple integral where the statistically varying parameters become integration variables, a fast numerical integration approach is also developed. In sample cases analyzed involving multiple failure mechanisms, results from the integration methods agree closely with Monte Carlo results. Additionally, the fast integration approach, particularly, is shown to significantly improve efficiency of failure probability calculations. These integration methods have been implemented in the PARFUME fuel performance code along with the Monte Carlo method, where each serves to verify accuracy of the others.
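
    In the simplest stress-strength setting the integral formulation collapses to a one-dimensional quadrature that can be checked directly against Monte Carlo. The sketch below uses a generic Weibull-strength/normal-stress model with invented parameters; it illustrates the agreement of the two routes rather than PARFUME's multi-mechanism implementation.

```python
import numpy as np
from scipy import integrate, stats

stress = stats.norm(loc=180.0, scale=25.0)     # applied stress [MPa], assumed
m, s0 = 6.0, 400.0                             # Weibull modulus and scale [MPa]
strength_cdf = lambda x: 1.0 - np.exp(-(max(x, 0.0) / s0) ** m)

# direct numerical integration: P_f = integral of F_strength(x) * f_stress(x)
pf_int, err = integrate.quad(lambda x: strength_cdf(x) * stress.pdf(x), 0.0, 400.0)

# Monte Carlo cross-check: failure when sampled strength < sampled stress
rng = np.random.default_rng(1)
n = 1_000_000
pf_mc = np.mean(s0 * rng.weibull(m, size=n) < rng.normal(180.0, 25.0, size=n))

print(f"integration: P_f = {pf_int:.4e} (quad error ~ {err:.1e})")
print(f"Monte Carlo: P_f = {pf_mc:.4e}")
```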

  1. Application of a few orthogonal polynomials to the assessment of the fracture failure probability of a spherical tank

    International Nuclear Information System (INIS)

    Cao Tianjie; Zhou Zegong

    1993-01-01

    This paper presents some methods to assess the fracture failure probability of a spherical tank. These methods convert the assessment of the fracture failure probability into the calculation of the moments of cracks and a one-dimensional integral. In the paper, we first derive series formulae to calculate the moments of cracks under fatigue crack growth and the moments of crack opening displacements according to the JWES-2805 code. We then use the first n moments of the crack opening displacements and a few orthogonal polynomials to compose the probability density function of the crack opening displacement. Lastly, the fracture failure probability is obtained according to interference theory. An example shows that these methods are simpler, quicker and more accurate. At the same time, these methods avoid the disadvantages of Edgeworth's series method. (author)

  2. Most Probable Failures in LHC Magnets and Time Constants of their Effects on the Beam.

    CERN Document Server

    Gomez Alonso, Andres

    2006-01-01

    During LHC operation, energies up to 360 MJ will be stored in each proton beam and over 10 GJ in the main electrical circuits. With such high energies, beam losses can quickly lead to serious equipment damage. The Machine Protection Systems have been designed to provide reliable protection of the LHC through detection of the failures leading to beam losses and fast dumping of the beams. In order to determine the protection strategies, it is important to know the time constants of the failure effects on the beam. In this report, we give an estimate of the time constants of the effects of quenches and powering failures in LHC magnets. The most critical failures are powering failures in certain normal-conducting circuits, leading to relevant effects on the beam in ~1 ms. The superconducting magnet failures leading to the fastest losses are quenches. In this case, the effects on the beam can be significant ~10 ms after the quench occurs.

  3. Input-profile-based software failure probability quantification for safety signal generation systems

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Lim, Ho Gon; Lee, Ho Jung; Kim, Man Cheol; Jang, Seung Cheol

    2009-01-01

    The approaches for software failure probability estimation are mainly based on the results of testing. Test cases represent the inputs which are encountered in actual use. The test inputs for a safety-critical application such as the reactor protection system (RPS) of a nuclear power plant are the inputs which cause the activation of a protective action such as a reactor trip. A digital system treats inputs from instrumentation sensors as discrete digital values by using an analog-to-digital converter. The input profile must be determined in consideration of these characteristics for effective software failure probability quantification. Another important characteristic of software testing is that we do not have to repeat the test for the same input value, since the software response is deterministic for each specific digital input. With these considerations, we propose an effective software testing method for quantifying the failure probability. As an example application, the input profile of the digital RPS is developed based on typical plant data. The proposed method is expected to provide a simple but realistic means to quantify the software failure probability based on the input profile and system dynamics.
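
    The essence of the approach can be shown on a toy discretized input channel; the demand profile, setpoint logic, and seeded fault below are all hypothetical. Because the software responds deterministically to each discrete input, every input needs to be tested only once, and the failure probability per demand is the profile-weighted fraction of inputs answered incorrectly.

```python
import numpy as np

levels = np.arange(1024)                       # 10-bit digitized sensor input
profile = np.exp(-0.5 * ((levels - 600) / 40.0) ** 2)   # assumed demand profile
profile /= profile.sum()

def trip_required(level):                      # the specification
    return level >= 650

def trip_software(level):                      # toy implementation, seeded fault
    if 700 <= level <= 702:                    # fails to trip on a narrow band
        return False
    return level >= 650

wrong = np.array([trip_software(l) != trip_required(l) for l in levels])
p_fail = profile[wrong].sum()
print(f"profile-weighted failure probability per demand: {p_fail:.3e}")
```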

  4. Balancing burn-in and mission times in environments with catastrophic and repairable failures

    International Nuclear Information System (INIS)

    Bebbington, Mark; Lai, C.-D.; Zitikis, Ricardas

    2009-01-01

    In a system subject to both repairable and catastrophic (i.e., nonrepairable) failures, 'mission success' can be defined as operating for a specified time without a catastrophic failure. We examine the effect of a burn-in process of duration τ on the mission time x, and also on the probability of mission success, by introducing several functions and surfaces on the (τ,x)-plane whose extrema represent suitable choices for the best burn-in time, and the best burn-in time for a desired mission time. The corresponding curvature functions and surfaces provide information about probabilities and expectations related to these burn-in and mission times. Theoretical considerations are illustrated with both parametric and, separating the failures by failure mode, nonparametric analyses of a data set, and graphical visualization of results.
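
    The trade-off is easy to visualize with a mixture population whose early hazard is dominated by a weak subpopulation (all parameters invented): given survival of the burn-in, the probability of mission success is R(tau + x)/R(tau), and scanning tau exposes the best burn-in time for a desired mission time x.

```python
import numpy as np

def reliability(t):
    # hypothetical mixed population: 10% weak units (decreasing hazard) plus
    # 90% strong units; catastrophic failures only
    return 0.1 * np.exp(-(t / 50.0) ** 0.8) + 0.9 * np.exp(-(t / 5000.0) ** 1.5)

mission = 1000.0                                # desired mission time x [h]
taus = np.linspace(0.0, 1000.0, 1001)           # candidate burn-in durations
p_success = reliability(taus + mission) / reliability(taus)

best = taus[np.argmax(p_success)]
print(f"best burn-in ~ {best:.0f} h: P(mission success) = {p_success.max():.4f} "
      f"vs {p_success[0]:.4f} with no burn-in")
```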

  5. Evaluations of Structural Failure Probabilities and Candidate Inservice Inspection Programs

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.; Simonen, Fredric A.

    2009-05-01

    The work described in this report applies probabilistic structural mechanics models to predict the reliability of nuclear pressure boundary components. These same models are then applied to evaluate the effectiveness of alternative programs for inservice inspection in reducing these failure probabilities. Results of the calculations support the development and implementation of risk-informed inservice inspection of piping and vessels. Studies have specifically addressed the potential benefits of ultrasonic inspections in reducing failure probabilities associated with fatigue crack growth and stress-corrosion cracking. Parametric calculations were performed with the computer code pc-PRAISE to generate an extensive set of plots covering a wide range of pipe wall thicknesses, cyclic operating stresses, and inspection strategies. The studies have also addressed critical inputs to the fracture mechanics calculations, such as the parameters that characterize the number and sizes of fabrication flaws in piping welds. Other calculations quantify the uncertainties associated with the inputs to the calculations, the uncertainties in the fracture mechanics models, and the uncertainties in the resulting calculated failure probabilities. A final set of calculations addresses the effects of flaw sizing errors on the effectiveness of inservice inspection programs.

  6. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.

    2013-01-01

    Estimation of the extreme response and failure probability of structures subjected to ultimate design loads is essential for the structural design of wind turbines according to the new standard IEC61400-1. This task is focused on in the present paper by virtue of the probability density evolution method (PDEM......), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-megawatt wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels is investigated through the scheme of extreme value...... distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits

  7. The influence of frequency and reliability of in-service inspection on reactor pressure vessel disruptive failure probability

    International Nuclear Information System (INIS)

    Jordan, G.M.

    1977-01-01

    A simple probabilistic methodology is used to investigate the benefit, in terms of reduction of disruptive failure probability, which comes from the application of periodic In Service Inspection (ISI) to nuclear pressure vessels. The analysis indicates the strong interaction between inspection benefit and the intrinsic quality of the structure. In order to quantify the inspection benefit, assumptions are made which allow the quality to be characterized in terms of the parameters governing a Log Normal distribution of time-to-failure. Using these assumptions, it is shown that the overall benefit of ISI is unlikely to exceed an order of magnitude in terms of reduction of disruptive failure probability. The method is extended to evaluate the effect of the periodicity and reliability of the inspection process itself. (author)

  8. The influence of frequency and reliability of in-service inspection on reactor pressure vessel disruptive failure probability

    International Nuclear Information System (INIS)

    Jordan, G.M.

    1978-01-01

    A simple probabilistic methodology is used to investigate the benefit, in terms of reduction of disruptive failure probability, which comes from the application of periodic in-service inspection to nuclear pressure vessels. The analysis indicates the strong interaction between inspection benefit and the intrinsic quality of the structure. In order to quantify the inspection benefit, assumptions are made which allow the quality to be characterised in terms of the parameters governing a log normal distribution of time-to-failure. Using these assumptions, it is shown that the overall benefit of in-service inspection is unlikely to exceed an order of magnitude in terms of reduction of disruptive failure probability. The method is extended to evaluate the effect of the periodicity and reliability of the inspection process itself. (author)

  9. Probability of failure of the watershed algorithm for peak detection in comprehensive two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.; Janssen, H.-G.

    2010-01-01

    The watershed algorithm is the most common method used for peak detection and integration in two-dimensional chromatography. However, the retention time variability in the second dimension may cause the algorithm to fail. A study calculating the probabilities of failure of the watershed algorithm was carried out.

  10. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information on the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)

  11. Reactor materials program process water component failure probability

    International Nuclear Information System (INIS)

    Daugherty, W. L.

    1988-01-01

    The maximum rate loss-of-coolant accident (LOCA) for the Savannah River Production Reactors is presently specified as the abrupt double-ended guillotine break (DEGB) of a large process water pipe. This accident is not considered credible in light of the low applied stresses and the inherent ductility of the piping materials. The Reactor Materials Program was initiated to provide the technical basis for an alternate, credible maximum rate LOCA. The major thrust of this program is to develop an alternate worst-case accident scenario by deterministic means. In addition, the probability of a DEGB is also being determined, to show that in addition to being mechanistically incredible, it is also highly improbable. The probability of a DEGB of the process water piping is evaluated in two parts: failure by direct means, and indirectly induced failure. These two areas have been discussed in other reports. In addition, the frequency of a large break (equivalent to a DEGB) in other process water system components is assessed. This report reviews the large break frequency for each component as well as the overall large break frequency for the reactor system.

  12. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of the limit state. The failure probability was then calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.

  13. Evaluation of containment failure and cleanup time for Pu shots on the Z machine.

    Energy Technology Data Exchange (ETDEWEB)

    Darby, John L.

    2010-02-01

    Between November 30 and December 11, 2009 an evaluation was performed of the probability of containment failure and the time for cleanup of contamination of the Z machine given failure, for plutonium (Pu) experiments on the Z machine at Sandia National Laboratories (SNL). Due to the unique nature of the problem, there is little quantitative information available for the likelihood of failure of containment components or for the time to cleanup. Information for the evaluation was obtained from Subject Matter Experts (SMEs) at the Z machine facility. The SMEs provided the State of Knowledge (SOK) for the evaluation. There is significant epistemic (state-of-knowledge) uncertainty associated with the events that comprise both failure of containment and cleanup. To capture epistemic uncertainty and to allow the SMEs to reason at the fidelity of the SOK, we used the belief/plausibility measure of uncertainty for this evaluation. We quantified two variables: the probability that the Pu containment system fails given a shot on the Z machine, and the time to clean up Pu contamination in the Z machine given failure of containment. We identified dominant contributors for both the time to cleanup and the probability of containment failure. These results will be used by SNL management to decide the course of action for conducting the Pu experiments on the Z machine.

  14. Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles

    Science.gov (United States)

    Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey

    2013-09-01

    Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELV), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self- consistency, consistency with risk analysis practices, use of available information, and ease of applicability. The independent reviews confirmed the

  15. Failure detection system risk reduction assessment

    Science.gov (United States)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
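
    The core quantification can be phrased as a race between the failure mode reaching its limit and the mitigation completing; the sketch below uses invented Weibull and lognormal time models, and the patent's actual models and terminology may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
t_limit = 10.0 * rng.weibull(2.0, n)             # time to failure limit [s]
t_mitigate = rng.lognormal(np.log(3.0), 0.5, n)  # detect + respond time [s]

p_mitigated = np.mean(t_mitigate < t_limit)      # mitigation wins the race
print(f"P(mitigation in time)        = {p_mitigated:.3f}")
print(f"residual failure probability = {1.0 - p_mitigated:.3f}")
```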

  16. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, Stephen [Cleveland State Univ., Cleveland, OH (United States)

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  17. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    International Nuclear Information System (INIS)

    Duffy, Stephen

    2013-01-01

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  18. Determination of bounds on failure probability in the presence of ...

    Indian Academy of Sciences (India)

    In particular, fuzzy set theory provides a more rational framework for ..... indicating that the random variations in T and O2 do not affect failure probability significantly. ... The upper-bound for PF shown in figure 6 can be used in decision-making.

  19. Approximative determination of failure probabilities in probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    Riesch-Oppermann, H.; Brueckner, A.

    1987-01-01

    The possibility of using FORM in probabilistic fracture mechanics (PFM) is investigated. After a short review of the method and a description of some specific problems occurring in PFM applications, results obtained with FORM for the failure probabilities in a typical PFM problem (fatigue crack growth) are compared with those determined by a Monte Carlo simulation. (orig./HP)

  20. Integrating Preventive Maintenance Scheduling As Probability Machine Failure And Batch Production Scheduling

    Directory of Open Access Journals (Sweden)

    Zahedi Zahedi

    2016-06-01

    Full Text Available This paper discusses an integrated model of batch production scheduling and machine maintenance scheduling. The batch production schedule minimizes the total actual flow time, while the machine maintenance schedule uses the probability of machine failure based on a Weibull distribution. The model assumes no nonconforming parts within the planning horizon. The model shows that an increase in the number of batches (length of the production run), up to a certain limit, minimizes the total actual flow time, while an increase in the length of the production run implies an increase in the number of preventive maintenance (PM) actions. An example is given to show how the model and algorithm work.
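
    The Weibull element of this model is easy to illustrate. The sketch below (a minimal illustration, not the authors' integrated model; the shape, scale and run-length values are hypothetical) computes the probability that a machine fails within one production run, and the conditional probability of failure in the next run given survival so far:

      import math

      def weibull_cdf(t, beta, eta):
          """P(failure by time t) for a Weibull(beta, eta) life distribution."""
          return 1.0 - math.exp(-((t / eta) ** beta))

      beta, eta = 2.0, 500.0   # hypothetical shape and scale (hours)
      run = 100.0              # length of one production run (hours)

      p_first = weibull_cdf(run, beta, eta)
      # P(fail in (t, t + run] | survived to t): increases with age when beta > 1,
      # which is what drives more PM as the production run lengthens
      t = 400.0
      p_next = (weibull_cdf(t + run, beta, eta) - weibull_cdf(t, beta, eta)) / \
               (1.0 - weibull_cdf(t, beta, eta))
      print(f"P(failure in first run)        = {p_first:.4f}")
      print(f"P(failure in run at age 400 h) = {p_next:.4f}")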

  1. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    Science.gov (United States)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS and the interface, and walks through the stepwise process the interface uses by means of an example.

  2. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    Science.gov (United States)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS and the interface, and walks through the stepwise process the interface uses by means of an example.

  3. Reactor Materials Program probability of indirectly--induced failure of L and P reactor process water piping

    International Nuclear Information System (INIS)

    Daugherty, W.L.

    1988-01-01

    The design basis accident for the Savannah River Production Reactors is the abrupt double-ended guillotine break (DEGB) of a large process water pipe. This accident is not considered credible in light of the low applied stresses and the inherent ductility of the piping material. The Reactor Materials Program was initiated to provide the technical basis for an alternate credible design basis accident. One aspect of this work is to determine the probability of the DEGB; to show that in addition to being incredible, it is also highly improbable. The probability of a DEGB is broken into two parts: failure by direct means, and indirectly-induced failure. Failure of the piping by direct means can only be postulated to occur if an undetected crack grows to the point of instability, causing a large pipe break. While this accident is not as severe as a DEGB, it provides a conservative upper bound on the probability of a direct DEGB of the piping. The second part of this evaluation calculates the probability of piping failure by indirect causes. Indirect failure of the piping can be triggered by an earthquake which causes other reactor components or the reactor building to fall on the piping or pull it from its supports. Since indirectly-induced failure of the piping will not always produce consequences as severe as a DEGB, this gives a conservative estimate of the probability of an indirectly-induced DEGB. This second part, indirectly-induced pipe failure, is the subject of this report. Failure by seismic loads in the piping itself will be covered in a separate report on failure by direct causes. This report provides a detailed evaluation of L reactor. A walkdown of P reactor and an analysis of the P reactor building provide the basis for extending the L reactor results to P reactor.

  4. Modelling the impact of creep on the probability of failure of a solid oxidefuel cell stack

    DEFF Research Database (Denmark)

    Greco, Fabio; Frandsen, Henrik Lund; Nakajo, Arata

    2014-01-01

    In solid oxide fuel cell (SOFC) technology, a major challenge lies in balancing the thermal stresses arising from an unavoidable thermal field. The cells are known to creep, changing the stress field over time. The main objective of this study was to assess the influence of creep on the failure probability of ...

  5. An analysis of the annual probability of failure of the waste hoist brake system at the Waste Isolation Pilot Plant (WIPP)

    International Nuclear Information System (INIS)

    Greenfield, M.A.; Sargent, T.J.

    1995-11-01

    The Environmental Evaluation Group (EEG) previously analyzed the probability of a catastrophic accident in the waste hoist of the Waste Isolation Pilot Plant (WIPP) and published the results in Greenfield (1990; EEG-44) and Greenfield and Sargent (1993; EEG-53). The most significant safety element in the waste hoist is the hydraulic brake system, whose possible failure was identified in these studies as the most important contributor in accident scenarios. Westinghouse Electric Corporation, Waste Isolation Division has calculated the probability of an accident involving the brake system based on studies utilizing extensive fault tree analyses. This analysis conducted for the U.S. Department of Energy (DOE) used point estimates to describe the probability of failure and includes failure rates for the various components comprising the brake system. An additional controlling factor in the DOE calculations is the mode of operation of the brake system. This factor enters for the following reason. The basic failure rate per annum of any individual element is called the Event Probability (EP), and is expressed as the probability of failure per annum. The EP in turn is the product of two factors. One is the "reported" failure rate, usually expressed as the probability of failure per hour, and the other is the expected number of hours that the element is in use, called the "mission time". In many instances the "mission time" will be the number of operating hours of the brake system per annum. However, since the operation of the waste hoist system includes regular "reoperational check" tests, the "mission time" for standby components is reduced in accordance with the specifics of the operational timetable.
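
    The EP arithmetic described above is a simple product, which a few lines make concrete (all rates and hours below are hypothetical, not the WIPP values):

      failure_rate_per_hour = 1e-6    # hypothetical "reported" failure rate
      hours_continuous = 8760.0       # mission time for a continuously used element
      hours_standby = 400.0           # reduced mission time for a standby component

      ep_continuous = failure_rate_per_hour * hours_continuous
      ep_standby = failure_rate_per_hour * hours_standby
      print(f"EP (continuous use) = {ep_continuous:.2e} per annum")
      print(f"EP (standby)        = {ep_standby:.2e} per annum")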

  6. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    Science.gov (United States)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project
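
    The PDA workflow described here (random input variables, a physics-based response model, Monte Carlo sampling, then a statistical failure-probability estimate) can be sketched in a few lines. The limit state and distributions below are hypothetical placeholders, not the Ares I models:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 1_000_000

      # hypothetical driving parameters, each a random variable with a distribution
      load = rng.normal(100.0, 15.0, n)                  # demand on the system
      capacity = rng.lognormal(np.log(150.0), 0.10, n)   # capacity of the system

      # physics-based response reduced to its simplest form:
      # failure occurs when demand exceeds capacity
      p_f = (load > capacity).mean()
      se = np.sqrt(p_f * (1.0 - p_f) / n)   # standard error of the MC estimator
      print(f"estimated P(failure) = {p_f:.2e} +/- {1.96 * se:.1e} (95% CI)")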

  7. The probability of containment failure by direct containment heating in Zion

    International Nuclear Information System (INIS)

    Pilch, M.M.; Yan, H.; Theofanous, T.G.

    1994-12-01

    This report is the first step in the resolution of the Direct Containment Heating (DCH) issue for the Zion Nuclear Power Plant using the Risk Oriented Accident Analysis Methodology (ROAAM). This report includes the definition of a probabilistic framework that decomposes the DCH problem into three probability density functions that reflect the most uncertain initial conditions (UO2 mass, zirconium oxidation fraction, and steel mass). Uncertainties in the initial conditions are significant, but our quantification approach is based on establishing reasonable bounds that are not unnecessarily conservative. To this end, we also make use of the ROAAM ideas of enveloping scenarios and "splintering." Two causal relations (CRs) are used in this framework: CR1 is a model that calculates the peak pressure in the containment as a function of the initial conditions, and CR2 is a model that returns the frequency of containment failure as a function of pressure within the containment. Uncertainty in CR1 is accounted for by the use of two independently developed phenomenological models, the Convection Limited Containment Heating (CLCH) model and the Two-Cell Equilibrium (TCE) model, and by probabilistically distributing the key parameter in both, which is the ratio of the melt entrainment time to the system blowdown time constant. The two phenomenological models have been compared with an extensive database including recent integral simulations at two different physical scales. The containment load distributions do not intersect the containment strength (fragility) curve in any significant way, resulting in containment failure probabilities less than 10^-3 for all scenarios considered. Sensitivity analyses did not show any areas of large sensitivity.

  8. Use of probabilistic methods for estimating failure probabilities and directing ISI-efforts

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, F; Brickstad, B [University of Uppsala (Sweden)]

    1988-12-31

    Some general aspects of the role of Non-Destructive Testing (NDT) efforts on the resulting probability of core damage are discussed. A simple model for the estimation of the pipe break probability due to IGSCC is discussed. It is partly based on analytical procedures, partly on service experience from the Swedish BWR program. Estimates of the break probabilities indicate that further studies are urgently needed. It is found that the uncertainties about the initial crack configuration are large contributors to the total uncertainty. Some effects of the inservice inspection are studied and it is found that the detection probabilities influence the failure probabilities. (authors).

  9. Long-Term Fatigue and Its Probability of Failure Applied to Dental Implants

    Directory of Open Access Journals (Sweden)

    María Prados-Privado

    2016-01-01

    Full Text Available It is well known that dental implants have a high success rate, but even so there are many factors that can cause dental implant failure. Fatigue is very sensitive to many of the variables involved in this phenomenon. This paper takes a close look at fatigue analysis and explains a new method to study fatigue from a probabilistic point of view, based on a cumulative damage model and probabilistic finite elements, with the goal of obtaining the expected life and the probability of failure. Two different dental implants were analysed. The model simulated a load of 178 N applied at angles of 0°, 15°, and 20°, and a force of 489 N at the same angles. The von Mises stress distribution was evaluated and, once the methodology proposed here was applied, the statistics of the fatigue life and the cumulative probability function were obtained. This function relates each cycle life with its probability of failure. The cylindrical implant behaves worse under the same loading force than the conical implant analysed here. The methodology employed in the present study provides very accurate results because all possible uncertainties have been taken into account from the beginning.

  10. VISA-2, Reactor Vessel Failure Probability Under Thermal Shock

    International Nuclear Information System (INIS)

    Simonen, F.; Johnson, K.

    1992-01-01

    1 - Description of program or function: VISA2 (Vessel Integrity Simulation Analysis) was developed to estimate the failure probability of nuclear reactor pressure vessels under pressurized thermal shock conditions. The deterministic portion of the code performs heat transfer, stress, and fracture mechanics calculations for a vessel subjected to a user-specified temperature and pressure transient. The probabilistic analysis performs a Monte Carlo simulation to estimate the probability of vessel failure. Parameters such as initial crack size and position, copper and nickel content, fluence, and the fracture toughness values for crack initiation and arrest are treated as random variables. Linear elastic fracture mechanics methods are used to model crack initiation and growth. This includes cladding effects in the heat transfer, stress, and fracture mechanics calculations. The simulation procedure treats an entire vessel and recognizes that more than one flaw can exist in a given vessel. The flaw model allows random positioning of the flaw within the vessel wall thickness, and the user can specify either flaw length or length-to-depth aspect ratio for crack initiation and arrest predictions. The flaw size distribution can be adjusted on the basis of different inservice inspection techniques and inspection conditions. The toughness simulation model includes a menu of alternative equations for predicting the shift in the reference temperature of the nil-ductility transition. 2 - Method of solution: The solution method uses closed form equations for temperatures, stresses, and stress intensity factors. A polynomial fitting procedure approximates the specified pressure and temperature transient. Failure probabilities are calculated by a Monte Carlo simulation. 3 - Restrictions on the complexity of the problem: Maximum of 30 welds. VISA2 models only the belt-line (cylindrical) region of a reactor vessel. The stresses are a function of the radial (through-wall) coordinate only.

  11. A delay time model with imperfect and failure-inducing inspections

    International Nuclear Information System (INIS)

    Flage, Roger

    2014-01-01

    This paper presents an inspection-based maintenance optimisation model where the inspections are imperfect and potentially failure-inducing. The model is based on the basic delay-time model in which a system has three states: perfectly functioning, defective and failed. The system is deteriorating through these states and to reveal defective systems, inspections are performed periodically using a procedure by which the system fails with a fixed state-dependent probability; otherwise, an inspection identifies a functioning system as defective (false positive) with a fixed probability and a defective system as functioning (false negative) with a fixed probability. The system is correctively replaced upon failure or preventively replaced either at the N'th inspection time or when an inspection reveals the system as defective, whichever occurs first. Replacement durations are assumed to be negligible and costs are associated with inspections, replacements and failures. The problem is to determine the optimal inspection interval T and preventive age replacement limit N that jointly minimise the long run expected cost per unit of time. The system may also be thought of as a passive two-state system subject to random demands; the three states of the model are then functioning, undetected failed and detected failed; and to ensure the renewal property of replacement cycles the demand process generating the ‘delay time’ is then restricted to the Poisson process. The inspiration for the presented model has been passive safety critical valves as used in (offshore) oil and gas production and transportation systems. In light of this the passive system interpretation is highlighted, as well as the possibility that inspection-induced failures are associated with accidents. Two numerical examples are included, and some potential extensions of the model are indicated
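
    The policy in this record lends itself to a renewal-reward Monte Carlo sketch. The snippet below is a minimal illustration under simplifying assumptions the record does not fix (exponential defect arrival and delay; all rates, error probabilities and costs are hypothetical); it estimates the long-run expected cost per unit of time and searches a small grid for (T, N):

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_cycle(T, N, lam_u=1/50, lam_h=1/10, alpha=0.05, beta=0.10,
                         p_ind_good=0.005, p_ind_def=0.02,
                         c_i=1.0, c_p=10.0, c_f=100.0):
          """One renewal cycle. A defect arrives after an Exp(lam_u) time and the
          system fails after a further Exp(lam_h) delay. Inspections at T, 2T, ...
          are imperfect (alpha = false positive, beta = false negative) and
          failure-inducing with a state-dependent probability. Returns
          (cycle cost, cycle length)."""
          u = rng.exponential(1 / lam_u)             # defect arrival time
          t_fail = u + rng.exponential(1 / lam_h)    # failure time if undisturbed
          cost = 0.0
          for k in range(1, N + 1):
              t = k * T
              if t_fail <= t:                        # failed before this inspection
                  return cost + c_f, t_fail
              cost += c_i
              defective = u <= t
              if rng.random() < (p_ind_def if defective else p_ind_good):
                  return cost + c_f, t               # inspection induced a failure
              positive = rng.random() < ((1 - beta) if defective else alpha)
              if positive or k == N:                 # preventive replacement
                  return cost + c_p, t

      def cost_rate(T, N, cycles=20_000):
          c, l = zip(*(simulate_cycle(T, N) for _ in range(cycles)))
          return sum(c) / sum(l)                     # renewal-reward cost per unit time

      best = min((cost_rate(T, N), T, N) for T in (5, 10, 15, 20) for N in (2, 4, 6))
      print("min cost rate %.3f at T=%s, N=%s" % best)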

  12. Personnel reliability impact on petrochemical facilities monitoring system's failure skipping probability

    Science.gov (United States)

    Kostyukov, V. N.; Naumenko, A. P.

    2017-08-01

    The paper dwells upon the urgent issue of evaluating the impact that the actions of complex technological system operators have on safe operation, considering the application of condition monitoring systems to elements and sub-systems of petrochemical production facilities. The main task of the research is to distinguish factors and criteria describing monitoring system properties that allow the impact of personnel errors on the operation of real-time condition monitoring and diagnostic systems for petrochemical machinery to be evaluated, and to find an objective criterion for the monitoring system class that accounts for the human factor. On the basis of the real-time condition monitoring concepts of sudden failure skipping risk and of the static and dynamic errors of monitoring systems, one may evaluate the impact that personnel qualification has on monitoring system operation, in terms of errors in personnel or operators' actions while receiving information from monitoring systems and operating a technological system. The operator is considered part of the technological system. Personnel behavior is usually described as a combination of the following stages: input signal (information perception), reaction (decision making), and response (decision implementation). Based on several studies of the behavior of nuclear power station operators in the USA, Italy and other countries, as well as on research conducted by Russian scientists, the required operator reliability data were selected for the analysis of operator behavior with diagnostics and monitoring systems at technological facilities. The calculations revealed that, for the monitoring system selected as an example, the failure skipping risk for the set values of static (less than 0.01) and dynamic (less than 0.001) errors, considering all related factors of data on reliability of information perception, decision-making, and reaction, is 0.037, in the case when all the facilities and error probability are under

  13. Evolution of thermal stress and failure probability during reduction and re-oxidation of solid oxide fuel cell

    Science.gov (United States)

    Wang, Yu; Jiang, Wenchun; Luo, Yun; Zhang, Yucai; Tu, Shan-Tung

    2017-12-01

    The reduction and re-oxidation of the anode have significant effects on the integrity of a solid oxide fuel cell (SOFC) sealed by glass-ceramic (GC). Mechanical failure is mainly controlled by the stress distribution. Therefore, a three-dimensional model of the SOFC is established in this paper to investigate the stress evolution during reduction and re-oxidation by the finite element method (FEM), and the failure probability is calculated using the Weibull method. The results demonstrate that the reduction of the anode can decrease the thermal stresses and reduce the failure probability, owing to the volumetric contraction and the increase in porosity. Re-oxidation can result in a remarkable increase of the thermal stresses, and the failure probabilities of the anode, cathode, electrolyte and GC all increase to 1, which is mainly due to the large linear strain rather than the decrease in porosity. The cathode and electrolyte fail as soon as the linear strains reach about 0.03% and 0.07%, respectively. Therefore, re-oxidation should be controlled to ensure integrity, and a lower re-oxidation temperature can decrease the stress and the failure probability.
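
    The Weibull failure-probability evaluation mentioned in the record follows the standard weakest-link form; a minimal sketch (with a hypothetical Weibull modulus, characteristic strength and stress values, not the paper's FEM results) is:

      import math

      def weibull_failure_probability(sigma, sigma_0, m, v_ratio=1.0):
          """Weakest-link estimate: P_f = 1 - exp(-(V/V0) * (sigma/sigma_0)**m)."""
          return 1.0 - math.exp(-v_ratio * (sigma / sigma_0) ** m)

      m, sigma_0 = 7.0, 120.0            # hypothetical modulus and char. strength (MPa)
      for sigma in (40.0, 80.0, 110.0):  # stresses taken from, e.g., an FEM field
          print(f"sigma = {sigma:5.1f} MPa -> P_f = "
                f"{weibull_failure_probability(sigma, sigma_0, m):.3e}")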

  14. The probability of containment failure by direct containment heating in zion

    International Nuclear Information System (INIS)

    Pilch, M.M.; Yan, H.; Theofanous, T.G.

    1994-01-01

    This report is the first step in the resolution of the Direct Containment Heating (DCH) issue for the Zion Nuclear Power Plant using the Risk Oriented Accident Analysis Methodology (ROAAM). This report includes the definition of a probabilistic framework that decomposes the DCH problem into three probability density functions that reflect the most uncertain initial conditions (UO2 mass, zirconium oxidation fraction, and steel mass). Uncertainties in the initial conditions are significant, but the quantification approach is based on establishing reasonable bounds that are not unnecessarily conservative. To this end, the authors also make use of the ROAAM ideas of enveloping scenarios and "splintering". Two causal relations (CRs) are used in this framework: CR1 is a model that calculates the peak pressure in the containment as a function of the initial conditions, and CR2 is a model that returns the frequency of containment failure as a function of pressure within the containment. Uncertainty in CR1 is accounted for by the use of two independently developed phenomenological models, the Convection Limited Containment Heating (CLCH) model and the Two-Cell Equilibrium (TCE) model, and by probabilistically distributing the key parameter in both, which is the ratio of the melt entrainment time to the system blowdown time constant. The two phenomenological models have been compared with an extensive database including recent integral simulations at two different physical scales (1/10th scale in the Surtsey facility at Sandia National Laboratories and 1/40th scale in the COREXIT facility at Argonne National Laboratory). The loads predicted by these models were significantly lower than those from previous parametric calculations. The containment load distributions do not intersect the containment strength curve in any significant way, resulting in containment failure probabilities less than 10^-3 for all scenarios considered.

  15. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
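
    The Line Sampling estimator itself is compact: in standard normal space each sample is projected onto the hyperplane orthogonal to the important direction, a one-dimensional root search locates the failure boundary along that direction, and each line contributes Phi(-c_i) to the estimate. The sketch below uses a cheap hypothetical limit-state function as a stand-in for the long-running T-H code, and assumes the important direction is already known:

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import brentq

      rng = np.random.default_rng(1)

      def g(x):  # hypothetical limit state in standard normal space; g < 0 = failure
          return 3.0 - x[0] - 0.2 * x[1] ** 2

      alpha = np.array([1.0, 0.0])   # assumed unit important direction
      n_lines = 200
      p_partial = []
      for _ in range(n_lines):
          z = rng.standard_normal(2)
          z_perp = z - (z @ alpha) * alpha   # project out the alpha component
          c = brentq(lambda s: g(z_perp + s * alpha), -10.0, 10.0)
          p_partial.append(norm.cdf(-c))     # exact 1-D failure probability on this line
      p_f = np.mean(p_partial)
      se = np.std(p_partial) / np.sqrt(n_lines)
      print(f"line sampling estimate: {p_f:.3e} (standard error {se:.1e})")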

  16. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.

  17. Assessing changes in failure probability of dams in a changing climate

    Science.gov (United States)

    Mallakpour, I.; AghaKouchak, A.; Moftakhari, H.; Ragno, E.

    2017-12-01

    Dams are crucial infrastructures and provide resilience against hydrometeorological extremes (e.g., droughts and floods). In 2017, California experienced series of flooding events terminating a 5-year drought, and leading to incidents such as structural failure of Oroville Dam's spillway. Because of large socioeconomic repercussions of such incidents, it is of paramount importance to evaluate dam failure risks associated with projected shifts in the streamflow regime. This becomes even more important as the current procedures for design of hydraulic structures (e.g., dams, bridges, spillways) are based on the so-called stationary assumption. Yet, changes in climate are anticipated to result in changes in statistics of river flow (e.g., more extreme floods) and possibly increasing the failure probability of already aging dams. Here, we examine changes in discharge under two representative concentration pathways (RCPs): RCP4.5 and RCP8.5. In this study, we used routed daily streamflow data from ten global climate models (GCMs) in order to investigate possible climate-induced changes in streamflow in northern California. Our results show that while the average flow does not show a significant change, extreme floods are projected to increase in the future. Using the extreme value theory, we estimate changes in the return periods of 50-year and 100-year floods in the current and future climates. Finally, we use the historical and future return periods to quantify changes in failure probability of dams in a warming climate.
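
    The extreme-value step at the end of the record can be sketched with standard tools: fit a GEV distribution to annual maximum flows and read off the 50- and 100-year return levels, or the exceedance probability of a fixed design flood. The synthetic record below stands in for the routed GCM streamflow:

      import numpy as np
      from scipy.stats import genextreme

      # synthetic annual-maximum discharges (m^3/s); real data would be routed GCM flows
      annual_max = genextreme.rvs(c=-0.1, loc=800, scale=250, size=60,
                                  random_state=np.random.default_rng(7))

      c, loc, scale = genextreme.fit(annual_max)
      for T in (50, 100):
          level = genextreme.ppf(1 - 1 / T, c, loc, scale)   # T-year return level
          print(f"{T}-year flood: {level:,.0f} m^3/s")

      design = 2000.0   # hypothetical design flood of an existing spillway
      print("P(annual max > design) =", 1 - genextreme.cdf(design, c, loc, scale))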

  18. Total time on test processes and applications to failure data analysis

    International Nuclear Information System (INIS)

    Barlow, R.E.; Campo, R.

    1975-01-01

    This paper describes a new method for analyzing data. The method applies to non-negative observations such as times to failure of devices and survival times of biological organisms and involves a plot of the data. These plots are useful in choosing a probabilistic model to represent the failure behavior of the data. They also furnish information about the failure rate function and aid in its estimation. An important feature of these data plots is that incomplete data can be analyzed. The underlying random variables are, however, assumed to be independent and identically distributed. The plots have a theoretical basis, and converge to a transform of the underlying probability distribution as the sample size increases
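
    The scaled total time on test (TTT) plot for a complete sample is easy to construct; a sketch with hypothetical failure times, plotted against the diagonal that corresponds to the exponential (constant failure rate) case:

      import numpy as np
      import matplotlib.pyplot as plt

      t = np.sort([12., 31., 48., 95., 140., 210., 290., 410., 560., 800.])
      n = len(t)

      # total time on test at the i-th failure: sum of the first i times + (n - i)*t_i
      ttt = np.cumsum(t) + (n - np.arange(1, n + 1)) * t
      phi = ttt / ttt[-1]                # scaled TTT transform
      u = np.arange(1, n + 1) / n

      plt.plot(u, phi, marker="o")
      plt.plot([0, 1], [0, 1], "--")     # exponential reference line
      plt.xlabel("i/n"); plt.ylabel("scaled TTT")
      plt.title("Concave above the diagonal suggests an increasing failure rate")
      plt.show()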

  19. Bounds on survival probability given mean probability of failure per demand; and the paradoxical advantages of uncertainty

    International Nuclear Information System (INIS)

    Strigini, Lorenzo; Wright, David

    2014-01-01

    When deciding whether to accept into service a new safety-critical system, or choosing between alternative systems, uncertainty about the parameters that affect future failure probability may be a major problem. This uncertainty can be extreme if there is the possibility of unknown design errors (e.g. in software), or wide variation between nominally equivalent components. We study the effect of parameter uncertainty on future reliability (survival probability), for systems required to have low risk of even only one failure or accident over the long term (e.g. their whole operational lifetime) and characterised by a single reliability parameter (e.g. probability of failure per demand – pfd). A complete mathematical treatment requires stating a probability distribution for any parameter with uncertain value. This is hard, so calculations are often performed using point estimates, like the expected value. We investigate conditions under which such simplified descriptions yield reliability values that are sure to be pessimistic (or optimistic) bounds for a prediction based on the true distribution. Two important observations are (i) using the expected value of the reliability parameter as its true value guarantees a pessimistic estimate of reliability, a useful property in most safety-related decisions; (ii) with a given expected pfd, broader distributions (in a formally defined meaning of “broader”), that is, systems that are a priori “less predictable”, lower the risk of failures or accidents. Result (i) justifies the simplification of using a mean in reliability modelling; we discuss within which scope this justification applies, and explore related scenarios, e.g. how things improve if we can test the system before operation. Result (ii) not only offers more flexible ways of bounding reliability predictions, but also has important, often counter-intuitive implications for decision making in various areas, like selection of components, project management
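
    Both observations can be checked numerically under the simplest reading of the model: the probability of surviving n independent demands is E[(1 - pfd)^n] when the pfd is uncertain, and because (1 - p)^n is convex in p, Jensen's inequality makes the point-estimate calculation a pessimistic bound. A small check with hypothetical numbers:

      n = 10_000        # demands over the operational lifetime
      mean_pfd = 1e-4

      # point estimate: treat the mean pfd as if it were the true value
      surv_point = (1 - mean_pfd) ** n

      # "broad" two-point distribution with the same mean:
      # pfd = 0 with probability 0.9, pfd = 1e-3 with probability 0.1
      surv_broad = 0.9 * 1.0 + 0.1 * (1 - 1e-3) ** n

      print(f"survival using mean pfd:      {surv_point:.4f}")   # pessimistic bound
      print(f"survival, broad distribution: {surv_broad:.4f}")   # higher, same mean pfd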

  20. Advanced RESTART method for the estimation of the probability of failure of highly reliable hybrid dynamic systems

    International Nuclear Information System (INIS)

    Turati, Pietro; Pedroni, Nicola; Zio, Enrico

    2016-01-01

    The efficient estimation of system reliability characteristics is of paramount importance for many engineering applications. Real world system reliability modeling calls for the capability of treating systems that are: i) dynamic, ii) complex, iii) hybrid and iv) highly reliable. Advanced Monte Carlo (MC) methods offer a way to solve these types of problems, which would otherwise be infeasible owing to the potentially high computational costs. In this paper, the REpetitive Simulation Trials After Reaching Thresholds (RESTART) method is employed, extending it to hybrid systems for the first time (to the authors' knowledge). The estimation accuracy and precision of RESTART highly depend on the choice of the Importance Function (IF) indicating how close the system is to failure: in this respect, proper IFs are here originally proposed to improve the performance of RESTART for the analysis of hybrid systems. The resulting overall simulation approach is applied to estimate the probability of failure of the control system of a liquid hold-up tank and of a pump-valve subsystem subject to degradation induced by fatigue. The results are compared to those obtained by standard MC simulation and by RESTART with classical IFs available in the literature. The comparison shows the improvement in performance obtained by our approach. - Highlights: • We consider the issue of estimating small failure probabilities in dynamic systems. • We employ the RESTART method to estimate the failure probabilities. • New Importance Functions (IFs) are introduced to increase the method performance. • We adopt two dynamic, hybrid, highly reliable systems as case studies. • A comparison with literature IFs proves the effectiveness of the new IFs.

  1. Research on Probability for Failures in VW Cars During Warranty and Post-Warranty Periods

    Directory of Open Access Journals (Sweden)

    Dainius Luneckas

    2014-12-01

    Full Text Available The present paper examines the distribution of failures in Volkswagen cars during the warranty and post-warranty periods. A statistical mathematical model has been developed from the collected data on the distribution of car failures. Considering mileage, the probabilities of a failure in individual systems, including suspension and transmission, cooling, electrical, etc., have been determined for the warranty and post-warranty periods, and the results obtained have been compared. The conclusions reached have been formulated and summarized.

  2. Probability of failure prediction for step-stress fatigue under sine or random stress

    Science.gov (United States)

    Lambert, R. G.

    1979-01-01

    A previously proposed cumulative fatigue damage law is extended to predict the probability of failure or fatigue life for structural materials with S-N fatigue curves represented as a scatterband of failure points. The proposed law applies to structures subjected to sinusoidal or random stresses and includes the effect of initial crack (i.e., flaw) sizes. The corrected cycle ratio damage function is shown to have physical significance.
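
    For context, the classical Palmgren-Miner rule is the baseline that such cumulative damage laws extend; a minimal sketch (hypothetical S-N constants and load blocks, and without the record's corrected cycle-ratio function or initial-flaw effect):

      # hypothetical S-N curve: N(S) = A * S**(-b) cycles to failure at amplitude S (MPa)
      A, b = 1e12, 3.0
      blocks = [(120.0, 2e4), (150.0, 5e3), (180.0, 1e3)]   # (amplitude, applied cycles)

      damage = sum(n / (A * S ** -b) for S, n in blocks)    # Miner's linear damage sum
      print(f"cumulative damage D = {damage:.3f} (failure expected when D >= 1)")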

  3. Fishnet model for failure probability tail of nacre-like imbricated lamellar materials

    Science.gov (United States)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Nacre, the iridescent material of the shells of pearl oysters and abalone, consists mostly of aragonite (a form of CaCO3), a brittle constituent of relatively low strength (≈10 MPa). Yet it has astonishing mean tensile strength (≈150 MPa) and fracture energy (≈350 to 1,240 J/m2). The reasons have recently become well understood: (i) the nanoscale thickness (≈300 nm) of nacre's building blocks, the aragonite lamellae (or platelets), and (ii) the imbricated, or staggered, arrangement of these lamellae, bound by biopolymer layers only ≈25 nm thick, occupying ... In engineering applications, however, a failure probability of ≤10^-6 is generally required. To guarantee it, the type of probability density function (pdf) of strength, including its tail, must be determined. This objective, not pursued previously, is hardly achievable by experiments alone, since >10^8 tests of specimens would be needed. Here we outline a statistical model of strength that resembles a fishnet pulled diagonally, captures the tail of the pdf of strength and, importantly, allows analytical safety assessments of nacreous materials. The analysis shows that, in terms of safety, the imbricated lamellar structure provides a major additional advantage: about 10% strength increase at a tail failure probability of 10^-6 and a 1 to 2 orders of magnitude tail probability decrease at fixed stress. Another advantage is that a high scatter of microstructure properties diminishes the strength difference between the mean and the probability tail, compared with the weakest-link model. These advantages of nacre-like materials are here justified analytically and supported by millions of Monte Carlo simulations.

  4. Next generation sequencing identifies abnormal Y chromosome and candidate causal variants in premature ovarian failure patients.

    Science.gov (United States)

    Lee, Yujung; Kim, Changshin; Park, YoungJoon; Pyun, Jung-A; Kwack, KyuBum

    2016-12-01

    Premature ovarian failure (POF) is characterized by heterogeneous genetic causes such as chromosomal abnormalities and variants in causal genes. Recently, developments in technique have made it possible for next generation sequencing (NGS) to detect genome-wide variants, including chromosomal abnormalities. Among 37 Korean POF patients, an XY karyotype with distal deletions of the Y chromosome, Yp11.32-31 and the Yp12 end part, was observed in two patients through NGS. Six deleterious variants in POF genes were also detected, which might explain the pathogenesis of POF with abnormalities in the sex chromosomes. Additionally, the two POF patients had no mutation in SRY, but three non-synonymous variants were detected in genes related to sex reversal. These findings suggest candidate causes of POF and sex reversal and show the suitability of NGS for approaching the heterogeneous pathogenesis of POF. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Statistical study on applied stress dependence of failure time in stress corrosion cracking of Zircaloy-4 alloy

    International Nuclear Information System (INIS)

    Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi; Tanaka, Akiei.

    1988-01-01

    Effects of applied stress on the failure time in stress corrosion cracking of Zircaloy-4 alloy were investigated by the Weibull distribution method. Test pieces sealed in evacuated silica tubes were annealed at 1,073 K for 7.2 x 10^3 s, and then quenched into ice-water. These specimens, under constant applied stresses of 40∼90 % of the yield stress, were immersed in a CH3OH-1 w% I2 solution at room temperature. The probability distribution of failure times under an applied stress of 40 % of the yield stress was described by a single Weibull distribution, which had one shape parameter. The probability distributions of failure times under applied stresses above 60 % of the yield stress were described by composite and mixed Weibull distributions, which had two shape parameters, one for the shorter-time and one for the longer-time region of failure. The values of these shape parameters were larger than 1, the value corresponding to wear-out failure. The observation of fracture surfaces and the stress dependence of the shape parameters indicated that the shape parameters both for the failure times under 40 % of the yield stress and for the longer failure times above 60 % corresponded to intergranular cracking, and that for the shorter failure times corresponded to transgranular cracking and dimple fracture. (author)
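
    Estimating a Weibull shape parameter from failure-time data, as done in this record, is straightforward with standard tools; a sketch with hypothetical failure times (a real analysis would also separate the mixed short- and long-time populations noted above):

      import numpy as np
      from scipy.stats import weibull_min

      # hypothetical SCC failure times (s); location fixed at 0 for a 2-parameter fit
      times = np.array([3.1e3, 4.4e3, 5.2e3, 6.0e3, 7.5e3, 8.1e3, 9.9e3, 1.3e4])
      shape, loc, scale = weibull_min.fit(times, floc=0)
      print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.3g} s")
      # beta > 1 indicates wear-out behaviour, as found for the intergranular mode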

  6. Probability analysis of WWER-1000 fuel elements behavior under steady-state, transient and accident conditions of reactor operation

    International Nuclear Information System (INIS)

    Tutnov, A.; Alexeev, E.

    2001-01-01

    The 'PULSAR-2' and 'PULSAR+' codes make it possible to simulate the thermo-mechanical and thermo-physical parameters of WWER fuel elements. A probabilistic approach is used instead of the traditional deterministic one to carry out a sensitivity study of fuel element behavior under steady-state operation. The initial fuel element parameters are given as probability density distributions, and calculations are provided for all possible combinations of the initial data: fuel-cladding gap, fuel density and gas pressure. The final calculation variants are obtained by dividing the values of these parameters into intervals: the permissible fuel-cladding gap range is divided into 10 equal parts, and fuel density and gas pressure into 5 parts each. The probability of realizing each variant is determined by multiplying the probabilities of the separate parameters, because the tolerances of these parameters are distributed independently. Simulation results are presented as probabilistic bar charts showing the probability distributions of the change in fuel outer diameter, hoop stress kinetics and fuel temperature versus irradiation time. A normative safety factor is introduced to monitor each criterion and to determine the margin to criterion failure. A probabilistic analysis of fuel element behavior under a Reactivity Initiated Accident (RIA) is also performed, and the probability of fuel element depressurization under a hypothetical RIA is presented.
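
    The variant-enumeration scheme described above (independent tolerance intervals whose probabilities multiply) can be sketched directly; the per-interval probabilities below are hypothetical uniform placeholders:

      from itertools import product

      # hypothetical per-interval probabilities for each independent parameter
      # (10 gap intervals, 5 density, 5 pressure -> 250 variants; each list sums to 1)
      p_gap = [0.1] * 10
      p_density = [0.2] * 5
      p_pressure = [0.2] * 5

      variants = [(g, d, p, p_gap[g] * p_density[d] * p_pressure[p])
                  for g, d, p in product(range(10), range(5), range(5))]
      print(len(variants), "variants; total probability =",
            sum(v[-1] for v in variants))   # = 1.0 by construction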

  7. Failure probability assessment of wall-thinned nuclear pipes using probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    Lee, Sang-Min; Chang, Yoon-Suk; Choi, Jae-Boong; Kim, Young-Jin

    2006-01-01

    The integrity of a nuclear piping system has to be maintained during operation. In order to maintain the integrity, reliable assessment procedures including fracture mechanics analysis, etc., are required. Up to now, this has been performed using conventional deterministic approaches, even though there are many uncertainties that hinder a rational evaluation. In this respect, probabilistic approaches are considered an appropriate method for piping system evaluation. The objectives of this paper are to estimate the failure probabilities of wall-thinned pipes in nuclear secondary systems and to propose limiting operating conditions under different types of loading. To do this, a probabilistic assessment program using reliability-index and simulation techniques was developed and applied to evaluate the failure probabilities of wall-thinned pipes subjected to internal pressure, bending moment and their combined loading. The sensitivity analysis results, as well as the prototypal integrity assessment results, showed the promising applicability of the probabilistic assessment program and the necessity of practical evaluations reflecting combined loading conditions and of operation within the limiting conditions.
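
    In the first-order (FORM) setting that underlies such reliability-index assessments, the failure probability follows from Pf = Phi(-beta), and for a linear limit state with independent normal variables beta has a closed form. A sketch with hypothetical capacity/load numbers, cross-checked by simulation:

      import numpy as np
      from scipy.stats import norm

      # hypothetical linear limit state g = R - S (capacity minus load effect)
      mu_R, sd_R = 300.0, 30.0   # e.g. burst capacity of the thinned section (MPa)
      mu_S, sd_S = 180.0, 25.0   # e.g. stress from pressure plus bending (MPa)

      beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)   # reliability index
      print(f"beta = {beta:.2f}, FORM Pf = {norm.cdf(-beta):.2e}")

      rng = np.random.default_rng(3)
      n = 2_000_000
      g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
      print(f"Monte Carlo Pf = {(g < 0).mean():.2e}")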

  8. An empirical study on the human error recovery failure probability when using soft controls in NPP advanced MCRs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2014-01-01

    Highlights: • Many researchers have tried to understand human recovery process or step. • Modeling human recovery process is not sufficient to be applied to HRA. • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • Recovery failure probability in a soft control operation environment is investigated. • Recovery failure probability here would be important evidence for expert judgment. - Abstract: It is well known that probabilistic safety assessments (PSAs) today consider not just hardware failures and environmental events that can impact upon risk, but also human error contributions. Consequently, the focus on reliability and performance management has been on the prevention of human errors and failures rather than the recovery of human errors. However, the recovery of human errors is as important as the prevention of human errors and failures for the safe operation of nuclear power plants (NPPs). For this reason, many researchers have tried to find a human recovery process or step. However, modeling the human recovery process is not sufficient enough to be applied to human reliability analysis (HRA), which requires human error and recovery probabilities. In this study, therefore, human error recovery failure probabilities based on predefined human error modes were investigated by conducting experiments in the operation mockup of advanced/digital main control rooms (MCRs) in NPPs. To this end, 48 subjects majoring in nuclear engineering participated in the experiments. In the experiments, using the developed accident scenario based on tasks from the standard post trip action (SPTA), the steam generator tube rupture (SGTR), and predominant soft control tasks, which are derived from the loss of coolant accident (LOCA) and the excess steam demand event (ESDE), all error detection and recovery data based on human error modes were checked with the performance sheet and the statistical analysis of error recovery/detection was then

  9. The probability of containment failure by steam explosion in a PWR

    International Nuclear Information System (INIS)

    Briggs, A.J.

    1983-12-01

    The study of the risk associated with operation of a PWR includes assessment of severe accidents in which a combination of faults results in melting of the core. Probabilistic methods are used in such assessment, hence it is necessary to estimate the probability of key events. One such event is the occurrence of a large steam explosion when molten core debris slumps into the base of the reactor vessel. This report considers recent information, and recommends an upper limit to the range of probability values for containment failure by steam explosion for risk assessment for a plant such as the proposed Sizewell B station. (U.K.)

  10. Dependency models and probability of joint events

    International Nuclear Information System (INIS)

    Oerjasaeter, O.

    1982-08-01

    Probabilistic dependencies between components/systems are discussed with reference to a broad classification of potential failure mechanisms. Further, a generalized time-dependency model, based on conditional probabilities for estimation of the probability of joint events and event sequences is described. The applicability of this model is clarified/demonstrated by various examples. It is concluded that the described model of dependency is a useful tool for solving a variety of practical problems concerning the probability of joint events and event sequences where common cause and time-dependent failure mechanisms are involved. (Auth.)

  11. Calculation of the Incremental Conditional Core Damage Probability on the Extension of Allowed Outage Time

    International Nuclear Information System (INIS)

    Kang, Dae Il; Han, Sang Hoon

    2006-01-01

    RG 1.177 requires that the conditional risk (incremental conditional core damage probability and incremental conditional large early release probability: ICCDP and ICLERP), given that a specific component is out of service (OOS), be quantified for a permanent change of the allowed outage time (AOT) of a safety system. An AOT is the length of time that a particular component or system is permitted to be OOS while the plant is operating. The ICCDP is defined as: ICCDP = [(conditional CDF with the subject equipment OOS) - (baseline CDF with nominal expected equipment unavailabilities)] x [duration of the single AOT under consideration]. Any event rendering the component OOS can start the time clock for the limiting condition of operation of a nuclear power plant. Thus, the largest ICCDP among the ICCDPs estimated from any occurrence of the basic events in the component fault tree should be selected for determining whether the AOT can be extended or not. If the component is under preventive maintenance, the conditional risk can be calculated straightforwardly without changing the CCF probability. The main concern is the estimation of the CCF probabilities, because there is the possibility of failures of other similar components due to the same root causes. The quantification of the risk, given that the subject equipment is in a failed state, is performed by setting the identified event of the subject equipment to TRUE. The CCF probabilities are also changed according to the identified failure cause. In the previous studies, however, the ICCDP was quantified with the consideration of the possibility of a simultaneous occurrence of two CCF events. Based on the above, we derived formulas for the CCF probabilities for the cases where a specific component is in a failed state, and we present sample calculation results of the ICCDP for the low pressure safety injection system (LPSIS) of Ulchin Unit 3.
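
    Numerically, the ICCDP is a small product. The sketch below uses hypothetical CDF values (per reactor-year) and a 72-hour AOT; in practice the conditional CDF would come from requantifying the PSA model with the component's basic event set to TRUE and the CCF probabilities adjusted as described:

      # hypothetical values, per reactor-year
      cdf_conditional = 5.0e-5   # CDF with the subject equipment out of service
      cdf_baseline = 2.0e-5      # CDF with nominal equipment unavailabilities

      aot_hours = 72.0
      iccdp = (cdf_conditional - cdf_baseline) * (aot_hours / 8760.0)
      print(f"ICCDP = {iccdp:.2e}")   # compared against, e.g., the 5e-7 guideline
                                      # commonly cited from RG 1.177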

  12. Mechanistic considerations used in the development of the probability of failure in transient increases in power (PROFIT) pellet-zircaloy cladding (thermo-mechanical-chemical) interactions (pci) fuel failure model

    International Nuclear Information System (INIS)

    Pankaskie, P.J.

    1980-05-01

    A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) interactions (PCI) failure model for estimating the Probability of Failure in Transient Increases in Power (PROFIT) was developed. PROFIT is based on (1) standard statistical methods applied to available PCI fuel failure data and (2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate dependent Strain Energy Absorption to Failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding

  13. Differentiated protection services with failure probability guarantee for workflow-based applications

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng

    2010-12-01

    A cost-effective and service-differentiated provisioning strategy is very desirable to service providers so that they can offer users satisfactory services, while optimizing network resource allocation. Providing differentiated protection services to connections for surviving link failure has been extensively studied in recent years. However, the differentiated protection services for workflow-based applications, which consist of many interdependent tasks, have scarcely been studied. This paper investigates the problem of providing differentiated services for workflow-based applications in optical grid. In this paper, we develop three differentiated protection services provisioning strategies which can provide security level guarantee and network-resource optimization for workflow-based applications. The simulation demonstrates that these heuristic algorithms provide protection cost-effectively while satisfying the applications' failure probability requirements.

  14. Failure probability estimate of type 304 stainless steel piping

    International Nuclear Information System (INIS)

    Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.; Mehta, H.S.; Ranganath, S.

    1989-01-01

    The primary source of in-service degradation of the SRS production reactor process water piping is intergranular stress corrosion cracking (IGSCC). IGSCC has occurred in a limited number of weld heat affected zones, areas known to be susceptible to IGSCC. A model has been developed to combine crack growth rates, crack size distributions, in-service examination reliability estimates and other considerations to estimate the pipe large-break frequency. This frequency estimates the probability that an IGSCC crack will initiate, escape detection by ultrasonic (UT) examination, and grow to instability prior to extending through-wall and being detected by the sensitive leak detection system. These events are combined as the product of four factors: (1) the probability that a given weld heat affected zone contains IGSCC; (2) the conditional probability, given the presence of IGSCC, that the cracking will escape detection during UT examination; (3) the conditional probability, given a crack escapes detection by UT, that it will not grow through-wall and be detected by leakage; (4) the conditional probability, given a crack is not detected by leakage, that it grows to instability prior to the next UT exam. These four factors estimate the occurrence of several conditions that must coexist in order for a crack to lead to a large break of the process water piping. When evaluated for the SRS production reactors, they produce an extremely low break frequency. The objective of this paper is to present the assumptions, methodology, results and conclusions of a probabilistic evaluation for the direct failure of the primary coolant piping resulting from normal operation and seismic loads. This evaluation was performed to support the ongoing PRA effort and to complement deterministic analyses addressing the credibility of a double-ended guillotine break
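
    The four-factor product structure is simple to evaluate once each conditional probability has been estimated; the numbers below are hypothetical placeholders, not the SRS values:

      # hypothetical per-weld conditional probabilities for one inspection interval
      p_igscc = 1e-2       # (1) weld heat affected zone contains IGSCC
      p_miss_ut = 1e-1     # (2) cracking escapes UT detection, given IGSCC
      p_no_leak = 1e-2     # (3) crack not detected by leakage, given a UT miss
      p_unstable = 1e-3    # (4) crack grows to instability before the next UT exam

      n_welds = 400        # hypothetical number of susceptible weld heat affected zones
      break_freq = n_welds * p_igscc * p_miss_ut * p_no_leak * p_unstable
      print(f"large-break frequency ~ {break_freq:.1e} per inspection interval")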

  15. Reactor pressure vessel failure probability following through-wall cracks due to pressurized thermal shock events

    International Nuclear Information System (INIS)

    Simonen, F.A.; Garnich, M.R.; Simonen, E.P.; Bian, S.H.; Nomura, K.K.; Anderson, W.E.; Pedersen, L.T.

    1986-04-01

    A fracture mechanics model was developed at the Pacific Northwest Laboratory (PNL) to predict the behavior of a reactor pressure vessel following a through-wall crack that occurs during a pressurized thermal shock (PTS) event. This study, which contributed to a US Nuclear Regulatory Commission (NRC) program to study PTS risk, was coordinated with the Integrated Pressurized Thermal Shock (IPTS) Program at Oak Ridge National Laboratory (ORNL). The PNL fracture mechanics model uses the critical transients and probabilities of through-wall cracks from the IPTS Program. The PNL model predicts the arrest, reinitiation, and direction of crack growth for a postulated through-wall crack and thereby predicts the mode of vessel failure. A Monte-Carlo type of computer code was written to predict the probabilities of the alternative failure modes. This code treats the fracture mechanics properties of the various welds and plates of a vessel as random variables. Plant-specific calculations were performed for the Oconee-1, Calvert Cliffs-1, and H.B. Robinson-2 reactor pressure vessels for the conditions of postulated transients. The model predicted that 50% or more of the through-wall axial cracks will turn to follow a circumferential weld. The predicted failure mode is a complete circumferential fracture of the vessel, which results in a potential vertically directed missile consisting of the upper head assembly. Missile arrest calculations for the three nuclear plants predict that such vertical missiles, as well as all potential horizontally directed fragmentation type missiles, will be confined to the vessel enclosure cavity. The PNL failure mode model is recommended for use in future evaluations of other plants, to determine the failure modes that are most probable for postulated PTS events.

  16. Search times and probability of detection in time-limited search

    Science.gov (United States)

    Wilson, David; Devitt, Nicole; Maurer, Tana

    2005-05-01

    When modeling the search and target acquisition process, the probability of detection as a function of time is important to war games and physical entity simulations. Recent US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate modeling of search and detection has focused on time-limited search. The development of the relationship between detection probability and search time as a differential equation is explored. One of the parameters in the current formula for the probability of detection in time-limited search corresponds to the mean time to detect in time-unlimited search. However, the mean time to detect in time-limited search is shorter than the mean time to detect in time-unlimited search, and a simple mathematical relationship between these two mean times is derived.
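
    A plausible reconstruction of the relationship described above, assuming the common exponential search model in which the detection probability grows as P(t) = 1 - e^(-t/tau), with tau the mean time to detect in time-unlimited search (the specific NVESD formulation may differ):

      \[
      \mathbb{E}[T \mid T \le t_L]
        = \frac{\int_0^{t_L} t\,\tau^{-1} e^{-t/\tau}\,dt}{1 - e^{-t_L/\tau}}
        = \tau - \frac{t_L\, e^{-t_L/\tau}}{1 - e^{-t_L/\tau}} \;<\; \tau .
      \]

    The mean detection time observed in a search limited to time t_L is therefore always shorter than the time-unlimited mean tau, consistent with the abstract.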

  17. Quantum processes: probability fluxes, transition probabilities in unit time and vacuum vibrations

    International Nuclear Information System (INIS)

    Oleinik, V.P.; Arepjev, Ju D.

    1989-01-01

    Transition probabilities in unit time and probability fluxes are compared in studying elementary quantum processes - the decay of a bound state under the action of time-varying and constant electric fields. It is shown that the difference between these quantities may be considerable, and so the use of transition probabilities W instead of probability fluxes Π in calculating the particle fluxes may lead to serious errors. The quantity W represents the rate of change with time of the population of the energy levels relating partly to the real states and partly to the virtual ones, and it cannot be directly measured in experiment. The vacuum background is shown to be continuously distorted when a perturbation acts on a system. Because of this the viewpoint of an observer on the physical properties of real particles continuously varies with time. This fact is not taken into consideration in the conventional theory of quantum transitions based on using the notion of probability amplitude. As a result, the probability amplitudes lose their physical meaning. All the physical information on quantum dynamics of a system is contained in the mean values of physical quantities. The existence of considerable differences between the quantities W and Π permits one in principle to make a choice of the correct theory of quantum transitions on the basis of experimental data. (author)

  18. The HYDROMED model and its application to semi-arid Mediterranean catchments with hill reservoirs 3: Reservoir storage capacity and probability of failure model

    Directory of Open Access Journals (Sweden)

    R. Ragab

    2001-01-01

    Full Text Available This paper addresses the issue of "what reservoir storage capacity is required to maintain a yield with a given probability of failure?". It is an important issue in terms of construction and cost. HYDROMED offers a solution based on the modified Gould probability matrix method. This method has the advantage of sampling all years of data without reference to their sequence and is therefore particularly suitable for catchments with patchy data. In the HYDROMED model, the probability of failure is calculated on a monthly basis. The model has been applied to the El-Gouazine catchment in Tunisia using a long rainfall record from Kairouan together with the estimated Hortonian runoff, class A pan evaporation data and estimated abstraction data. The probability of failure differed from winter to summer, and it approaches zero when the reservoir capacity is 500,000 m3. The 25% probability of failure (75% success) is achieved with a reservoir capacity of 58,000 m3 in June and 95,000 m3 in January. The probability of failure for a 240,000 m3 capacity reservoir (close to the 233,000 m3 storage capacity of El-Gouazine) is approximately 5% in November, December and January, 3% in March, and 1.1% in May and June. Consequently, there is no high risk of El-Gouazine being unable to meet its requirements at a capacity of 233,000 m3, and the benefit, in terms of probability of failure, of increasing the reservoir volume of El-Gouazine beyond 250,000 m3 is not high. This is important for the design engineers and the funding organizations. However, the analysis is based on the existing water abstraction policy, the absence of siltation rate data and the assumption that the present climate will prevail during the lifetime of the reservoir. Should these conditions change, a new analysis should be carried out. Keywords: HYDROMED, reservoir, storage capacity, probability of failure, Mediterranean
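
    The record does not reproduce the Gould matrix algebra, but the quantity it computes, a monthly probability of failure for a given capacity, can be approximated with a plain behaviour simulation. The sketch below does that under invented inflow and demand figures; it is not the HYDROMED implementation, which uses the modified Gould probability matrix method.

    ```python
    import numpy as np

    # Hedged sketch: monthly probability of reservoir failure (empty storage)
    # by behaviour simulation. Inflow/demand figures are invented, not the
    # El-Gouazine data, and this method is a stand-in for the Gould matrix.

    rng = np.random.default_rng(3)
    K = 240_000.0                 # reservoir capacity, m3
    demand = 15_000.0             # monthly abstraction + evaporation, m3
    n_years = 10_000
    fail = np.zeros(12)

    for _ in range(n_years):
        s = K / 2.0               # start half full
        inflow = rng.gamma(shape=2.0, scale=9_000.0, size=12)  # synthetic runoff
        for month in range(12):
            s = min(K, s + inflow[month]) - demand
            if s < 0.0:
                fail[month] += 1
                s = 0.0

    print("monthly probability of failure:", np.round(fail / n_years, 3))
    ```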

  19. A multifactorial likelihood model for MMR gene variant classification incorporating probabilities based on sequence bioinformatics and tumor characteristics: a report from the Colon Cancer Family Registry.

    Science.gov (United States)

    Thompson, Bryony A; Goldgar, David E; Paterson, Carol; Clendenning, Mark; Walters, Rhiannon; Arnold, Sven; Parsons, Michael T; Walsh, Michael D; Gallinger, Steven; Haile, Robert W; Hopper, John L; Jenkins, Mark A; Lemarchand, Loic; Lindor, Noralane M; Newcomb, Polly A; Thibodeau, Stephen N; Young, Joanne P; Buchanan, Daniel D; Tavtigian, Sean V; Spurdle, Amanda B

    2013-01-01

    Mismatch repair (MMR) gene sequence variants of uncertain clinical significance are often identified in suspected Lynch syndrome families, and this constitutes a challenge for both researchers and clinicians. Multifactorial likelihood model approaches provide a quantitative measure of MMR variant pathogenicity, but first require input of likelihood ratios (LRs) for different MMR variation-associated characteristics from appropriate, well-characterized reference datasets. Microsatellite instability (MSI) and somatic BRAF tumor data for unselected colorectal cancer probands of known pathogenic variant status were used to derive LRs for tumor characteristics using the Colon Cancer Family Registry (CFR) resource. These tumor LRs were combined with variant segregation within families, and estimates of prior probability of pathogenicity based on sequence conservation and position, to analyze 44 unclassified variants identified initially in Australasian Colon CFR families. In addition, in vitro splicing analyses were conducted on a subset of variants selected on the basis of bioinformatic splicing predictions. The LR in favor of pathogenicity was estimated to be ~12-fold for a colorectal tumor with a BRAF mutation-negative MSI-H phenotype. For 31 of the 44 variants, the posterior probabilities of pathogenicity were such that altered clinical management would be indicated. Our findings provide a working multifactorial likelihood model for classification that carefully considers mode of ascertainment for gene testing. © 2012 Wiley Periodicals, Inc.
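
    The arithmetic at the heart of a multifactorial likelihood model is Bayesian: a prior probability of pathogenicity is converted to odds, multiplied by the likelihood ratios from each (assumed independent) evidence source, and converted back. The sketch below shows that step only; the prior and the second LR are invented, while the ~12-fold tumor LR echoes the figure quoted above.

    ```python
    # Hedged sketch of combining a prior probability of pathogenicity with
    # likelihood ratios (LRs) from independent evidence sources. Values are
    # illustrative, not Colon CFR results.

    def posterior_probability(prior, likelihood_ratios):
        """Posterior P(pathogenic) from a prior probability and a list of LRs,
        assuming the evidence sources are independent."""
        odds = prior / (1.0 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1.0 + odds)

    prior = 0.1            # e.g. from sequence conservation/position (assumed)
    lrs = [12.0, 0.8]      # e.g. MSI-H/BRAF-negative tumor LR, segregation LR
    print(f"posterior probability of pathogenicity: "
          f"{posterior_probability(prior, lrs):.3f}")
    ```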

  20. Heart failure and sudden cardiac death in heritable thoracic aortic disease caused by pathogenic variants in the SMAD3 gene.

    Science.gov (United States)

    Backer, Julie De; Braverman, Alan C

    2018-05-01

    Predominant cardiovascular manifestations in the spectrum of Heritable Thoracic Aortic Disease include, by default, aortic root aneurysms and dissections, which may be associated with aortic valve disease. Mitral and tricuspid valve prolapse are other commonly recognized features. Myocardial disease, characterized by heart failure and/or malignant arrhythmias, has been reported in humans and in animal models harboring pathogenic variants in the Fibrillin1 gene. Description of clinical history of three cases from one family in Ghent (Belgium) and one family in St. Louis (US). We report on three cases from two families presenting end-stage heart failure (in two) and lethal arrhythmias associated with moderate left ventricular dilatation (in one). All three cases harbor a pathogenic variant in the SMAD3 gene, known to cause aneurysm osteoarthritis syndrome, Loeys-Dietz syndrome type 3 or isolated Heritable Thoracic Aortic Disease. These unusual presentations warrant awareness for myocardial disease in patients harboring pathogenic variants in genes causing Heritable Thoracic Aortic Disease and indicate the need for prospective studies in larger cohorts. © 2018 The Authors. Molecular Genetics & Genomic Medicine published by Wiley Periodicals, Inc.

  1. A new approach for reliability analysis with time-variant performance characteristics

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2013-01-01

    Reliability represents the safety level in industrial practice and may vary over time due to time-variant operating conditions and component deterioration throughout a product's life cycle. Thus, the capability to perform time-variant reliability analysis is of vital importance in practical engineering applications. This paper presents a new approach, referred to as nested extreme response surface (NERS), that efficiently tackles the time dependency issue in time-variant reliability analysis and enables such problems to be solved by easy integration with advanced time-independent tools. The key of the NERS approach is to build a nested response surface of time corresponding to the extreme value of the limit state function by employing a Kriging model. To obtain the data for the Kriging model, the efficient global optimization technique is integrated with NERS to extract the extreme time responses of the limit state function for any given system input. An adaptive response prediction and model maturation mechanism is developed based on mean square error (MSE) to concurrently improve the accuracy and computational efficiency of the proposed approach. With the nested response surface of time, the time-variant reliability analysis can be converted into time-independent reliability analysis, and existing advanced reliability analysis methods can be used. Three case studies are used to demonstrate the efficiency and accuracy of the NERS approach
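
    The essence of the conversion, replacing the time axis by the extreme response over the interval, fits in a few lines. The toy below optimizes the limit state over time directly for each Monte Carlo sample; the actual NERS approach instead trains a Kriging surrogate of the extreme response with EGO, which is omitted here, and the limit state and input distributions are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy sketch of the NERS idea (not the authors' implementation): turn the
    # time-variant limit state g(x, t) into a time-independent one by taking
    # its worst value over [0, t_max], then run plain Monte Carlo.

    rng = np.random.default_rng(0)

    def g(x, t):
        # illustrative limit state; failure when g < 0
        return 1.8 - x[0] * np.sin(0.5 * t) - x[1] * (t / 10.0)

    def extreme_response(x, t_max=10.0):
        # minimum of g over the service interval -> equivalent static limit state
        res = minimize_scalar(lambda t: g(x, t), bounds=(0.0, t_max),
                              method="bounded")
        return res.fun

    n = 2_000
    X = rng.normal(loc=[1.0, 0.5], scale=[0.3, 0.2], size=(n, 2))
    n_fail = sum(extreme_response(x) < 0.0 for x in X)
    print(f"time-variant failure probability ~ {n_fail / n:.4f}")
    ```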

  2. Modeling tumor control probability for spatially inhomogeneous risk of failure based on clinical outcome data

    DEFF Research Database (Denmark)

    Lühr, Armin; Löck, Steffen; Jakobi, Annika

    2017-01-01

    PURPOSE: Objectives of this work are (1) to derive a general clinically relevant approach to model tumor control probability (TCP) for spatially variable risk of failure and (2) to demonstrate its applicability by estimating TCP for patients planned for photon and proton irradiation. METHODS AND ...

  3. Pharmacodynamic Impact of Carboxylesterase 1 Gene Variants in Patients with Congestive Heart Failure Treated with Angiotensin-Converting Enzyme Inhibitors

    DEFF Research Database (Denmark)

    Nelveg-Kristensen, Karl Emil; Bie, Peter; Ferrero, Laura

    2016-01-01

    BACKGROUND: Variation in the carboxylesterase 1 gene (CES1) may contribute to the efficacy of ACEIs. Accordingly, we examined the impact of CES1 variants on plasma angiotensin II (ATII)/angiotensin I (ATI) ratio in patients with congestive heart failure (CHF) that underwent ACEI dose titrations. ...

  4. Component failure data base of TRIGA reactors

    International Nuclear Information System (INIS)

    Djuricic, M.

    2004-10-01

    This compilation provides failure data such as first criticality, component type description (reactor component, population, cumulative calendar time, cumulative operating time, demands, failure mode, failures, failure rate, failure probability) and specific information on each type of component of TRIGA Mark-II reactors in Austria, Bangladesh, Germany, Finland, Indonesia, Italy, Slovenia and Romania. (nevyjel)
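
    The point estimates such a database supports follow directly from the tabulated fields: a failure rate from failures over cumulative operating time, and a per-demand failure probability from failures over demands. The sketch below shows both; the record values are invented placeholders, not TRIGA data.

    ```python
    # Hedged sketch of the basic estimates a component failure database yields.
    # The records below are invented placeholders, not TRIGA data.

    records = [
        # (component, failures, operating_hours, demands)
        ("control rod drive", 2, 120_000.0, 4_500),
        ("primary pump",      1,  95_000.0,   800),
    ]

    for name, failures, hours, demands in records:
        rate = failures / hours          # failures per operating hour
        p_demand = failures / demands    # failure probability per demand
        print(f"{name}: rate = {rate:.2e} /h, P(fail | demand) = {p_demand:.2e}")
    ```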

  5. MAI-free performance of PMU-OFDM transceiver in time-variant environment

    Science.gov (United States)

    Tadjpour, Layla; Tsai, Shang-Ho; Kuo, C.-C. J.

    2005-06-01

    A precoded multi-user OFDM transceiver that reduces the multi-access interference (MAI) due to carrier frequency offset (CFO) to a negligible amount was introduced by Tsai, Lin and Kuo. In this work, we investigate the performance of this precoded multi-user (PMU) OFDM system in a time-variant channel environment. We analyze and compare the MAI effect caused by time-variant channels in the PMU-OFDM and the OFDMA systems. Generally speaking, the MAI effect consists of two parts. The first part is due to the loss of orthogonality among subchannels for all users while the second part is due to the CFO effect caused by the Doppler shift. Simulation results show that, although OFDMA outperforms the PMU-OFDM transceiver in a fast time-variant environment without CFO, PMU-OFDM outperforms OFDMA in a slow time-variant channel via the use of M/2 symmetric or anti-symmetric codewords of M Hadamard-Walsh codes.

  6. FAILPROB-A Computer Program to Compute the Probability of Failure of a Brittle Component; TOPICAL

    International Nuclear Information System (INIS)

    WELLMAN, GERALD W.

    2002-01-01

    FAILPROB is a computer program that applies the Weibull statistics characteristic of brittle failure of a material along with the stress field resulting from a finite element analysis to determine the probability of failure of a component. FAILPROB uses the statistical techniques for fast fracture prediction (but not the coding) from the N.A.S.A. - CARES/life ceramic reliability package. FAILPROB provides the analyst at Sandia with a more convenient tool than CARES/life because it is designed to behave in the tradition of structural analysis post-processing software such as ALGEBRA, in which the standard finite element database format EXODUS II is both read and written. This maintains compatibility with the entire SEACAS suite of post-processing software. A new technique to deal with the high local stresses computed for structures with singularities such as glass-to-metal seals and ceramic-to-metal braze joints is proposed and implemented. This technique provides failure probability computation that is insensitive to the finite element mesh employed in the underlying stress analysis. Included in this report are a brief discussion of the computational algorithms employed, user instructions, and example problems that both demonstrate the operation of FAILPROB and provide a starting point for verification and validation
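
    The core computation, integrating two-parameter Weibull flaw statistics over the finite element stress field, can be sketched in a few lines. The volume-flaw form and all parameter values below are our assumptions for illustration; they are not FAILPROB's coding, its singularity-insensitive technique, or the CARES/Life algorithms.

    ```python
    import numpy as np

    # Hedged weakest-link sketch: risk-of-rupture summed over elements, then
    # P_fail = 1 - exp(-risk). Parameters and stresses are illustrative.

    m = 10.0          # Weibull modulus
    sigma_0 = 300.0   # characteristic strength of unit volume, MPa

    stress = np.array([120.0, 250.0, 180.0, 90.0])   # element principal stresses, MPa
    volume = np.array([1.0, 0.5, 0.8, 2.0])          # element volumes

    s = np.clip(stress, 0.0, None)                   # tension drives fast fracture
    risk = np.sum(volume * (s / sigma_0) ** m)       # risk-of-rupture integral
    p_fail = 1.0 - np.exp(-risk)
    print(f"component failure probability: {p_fail:.3e}")
    ```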

  7. Uncertainties in container failure time predictions

    International Nuclear Information System (INIS)

    Williford, R.E.

    1990-01-01

    Stochastic variations in the local chemical environment of a geologic waste repository can cause corresponding variations in container corrosion rates and failure times, and thus in radionuclide release rates. This paper addresses how well the future variations in repository chemistries must be known in order to predict container failure times that are bounded by a finite time period within the repository lifetime. Preliminary results indicate that a 5000 year scatter in predicted container failure times requires that repository chemistries be known to within ±10% over the repository lifetime. These are small uncertainties compared to current estimates. 9 refs., 3 figs

  8. Estimation of failure probability of the end induced current depending on uncertain parameters of a transmission line

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper addresses the risk analysis of an EMC failure using a statistical approach based on reliability methods. The probability of failure (i.e., the probability of exceeding a threshold) of a current induced by crosstalk is computed by taking into account uncertainties in the input parameters influencing extreme levels of interference in the context of transmission lines. Results are compared to Monte Carlo simulation (MCS). (authors)

  9. Reliability Analysis of Cooling Towers: Influence of Rebars Corrosion on Failure

    International Nuclear Information System (INIS)

    Sudret, Bruno; Pendola, Maurice

    2002-01-01

    Natural-draught cooling towers are used in nuclear power plants as heat exchangers. These structures are submitted to environmental loads such as wind and thermal gradients that are stochastic in nature. A probabilistic framework has been developed by EDF (Electricite de France) for assessing the durability of such structures. In this paper, the corrosion of the rebars due to concrete carbonation and the corresponding weakening of the reinforced concrete sections is considered. Due to the presence of time in the definition of the limit state function associated with the loss of serviceability of the cooling tower, time-variant reliability analysis has to be used. A novel approach is proposed to take into account the random 'initiation time', which corresponds to the time necessary for the carbonation to reach the rebars. Results are given in terms of the probability of failure of the structure over its lifetime. (authors)

  10. Probability of Accurate Heart Failure Diagnosis and the Implications for Hospital Readmissions.

    Science.gov (United States)

    Carey, Sandra A; Bass, Kyle; Saracino, Giovanna; East, Cara A; Felius, Joost; Grayburn, Paul A; Vallabhan, Ravi C; Hall, Shelley A

    2017-04-01

    Heart failure (HF) is a complex syndrome with inherent diagnostic challenges. We studied the scope of possibly inaccurately documented HF in a large health care system among patients assigned a primary diagnosis of HF at discharge. Through a retrospective record review and a classification schema developed from published guidelines, we assessed the probability of the documented HF diagnosis being accurate and determined factors associated with HF-related and non-HF-related hospital readmissions. An arbitration committee of 3 experts reviewed a subset of records to corroborate the results. We assigned a low probability of accurate diagnosis to 133 (19%) of the 712 patients. A subset of patients was also reviewed by an expert panel, which concluded that 13% to 35% of patients probably did not have HF (inter-rater agreement, kappa = 0.35). Low-probability HF was predictive of being readmitted more frequently for non-HF causes (p = 0.018), as well as documented arrhythmias (p = 0.023), and age >60 years (p = 0.006). Documented sleep apnea (p = 0.035), percutaneous coronary intervention (p = 0.006), non-white race (p = 0.047), and B-type natriuretic peptide >400 pg/ml (p = 0.007) were determined to be predictive of HF readmissions in this cohort. In conclusion, approximately 1 in 5 patients documented to have HF were found to have a low probability of actually having it. Moreover, the determination of low-probability HF was twice as likely to result in readmission for non-HF causes and, thus, should be considered a determinant for all-cause readmissions in this population. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Path probabilities of continuous time random walks

    International Nuclear Information System (INIS)

    Eule, Stephan; Friedrich, Rudolf

    2014-01-01

    Employing the path integral formulation of a broad class of anomalous diffusion processes, we derive the exact relations for the path probability densities of these processes. In particular, we obtain a closed analytical solution for the path probability distribution of a Continuous Time Random Walk (CTRW) process. This solution is given in terms of the waiting time distribution and the short-time propagator of the corresponding random walk, as the solution of a Dyson equation. Applying our analytical solution we derive generalized Feynman–Kac formulae. (paper)

  12. Probability distribution of machining center failures

    International Nuclear Information System (INIS)

    Jia Yazhou; Wang Molin; Jia Zhixin

    1995-01-01

    Through field tracing research on 24 Chinese cutter-changeable CNC machine tools (machining centers) over a period of one year, a database of operation and maintenance for machining centers was built. The failure data were fitted to the Weibull and exponential distributions, the goodness of the fits was tested, and the failure distribution pattern of machining centers was identified. Finally, the reliability characterizations for machining centers are proposed
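
    The fitting-and-testing step described here is routine to reproduce. The sketch below fits both candidate distributions to synthetic time-between-failure data and runs a Kolmogorov-Smirnov check on each; the data are generated, not the study's database.

    ```python
    import numpy as np
    from scipy import stats

    # Hedged sketch: fit Weibull and exponential distributions to
    # time-between-failure data and compare goodness of fit. Data invented.

    rng = np.random.default_rng(1)
    tbf = stats.weibull_min.rvs(1.1, scale=400.0, size=80, random_state=rng)

    shape, loc, scale = stats.weibull_min.fit(tbf, floc=0)   # Weibull MLE
    lam = 1.0 / tbf.mean()                                   # exponential MLE

    ks_w = stats.kstest(tbf, "weibull_min", args=(shape, 0.0, scale))
    ks_e = stats.kstest(tbf, "expon", args=(0.0, 1.0 / lam))
    print(f"Weibull: shape={shape:.2f}, scale={scale:.0f}, KS p={ks_w.pvalue:.2f}")
    print(f"Exponential: KS p={ks_e.pvalue:.2f}")
    ```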

  13. Burden of rare variants in ALS genes influences survival in familial and sporadic ALS.

    Science.gov (United States)

    Pang, Shirley Yin-Yu; Hsu, Jacob Shujui; Teo, Kay-Cheong; Li, Yan; Kung, Michelle H W; Cheah, Kathryn S E; Chan, Danny; Cheung, Kenneth M C; Li, Miaoxin; Sham, Pak-Chung; Ho, Shu-Leong

    2017-10-01

    Genetic variants are implicated in the development of amyotrophic lateral sclerosis (ALS), but it is unclear whether the burden of rare variants in ALS genes has an effect on survival. We performed whole genome sequencing on 8 familial ALS (FALS) patients with superoxide dismutase 1 (SOD1) mutation and whole exome sequencing on 46 sporadic ALS (SALS) patients living in Hong Kong and found that 67% had at least 1 rare variant in the exons of 40 ALS genes; 22% had 2 or more. Patients with 2 or more rare variants had lower probability of survival than patients with 0 or 1 variant (p = 0.001). After adjusting for other factors, each additional rare variant increased the risk of respiratory failure or death by 60% (p = 0.0098). The presence of the rare variant was associated with the risk of ALS (Odds ratio 1.91, 95% confidence interval 1.03-3.61, p = 0.03), and ALS patients had higher rare variant burden than controls (MB, p = 0.004). Our findings support an oligogenic basis with the burden of rare variants affecting the development and survival of ALS. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  14. Corrosion induced failure analysis of subsea pipelines

    International Nuclear Information System (INIS)

    Yang, Yongsheng; Khan, Faisal; Thodi, Premkumar; Abbassi, Rouzbeh

    2017-01-01

    Pipeline corrosion is one of the main causes of subsea pipeline failure. It is necessary to monitor and analyze pipeline condition to effectively predict likely failure. This paper presents an approach to analyze the observed abnormal events to assess the condition of subsea pipelines. First, it focuses on establishing a systematic corrosion failure model by Bow-Tie (BT) analysis, and subsequently the BT model is mapped into a Bayesian Network (BN) model. The BN model facilitates the modelling of interdependency of identified corrosion causes, as well as the updating of failure probabilities depending on the arrival of new information. Furthermore, an Object-Oriented Bayesian Network (OOBN) has been developed to better structure the network and to provide an efficient updating algorithm. Based on this OOBN model, probability updating and probability adaptation are performed at regular intervals to estimate the failure probabilities due to corrosion and potential consequences. This results in an interval-based condition assessment of subsea pipelines subject to corrosion. The estimated failure probabilities would help prioritize action to prevent and control failures. Practical application of the developed model is demonstrated using a case study. - Highlights: • A Bow-Tie (BT) based corrosion failure model linking causation with the potential losses. • A novel Object-Oriented Bayesian Network (OOBN) based corrosion failure risk model. • Probability of failure updating and adaptation with respect to time using OOBN model. • Application of the proposed model to develop and test strategies to minimize failure risk.
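
    The probability-updating step at the heart of the BN/OOBN model is ordinary Bayesian conditioning. The two-node sketch below revises the probability of severe corrosion when an inspection alarm arrives; the structure and numbers are invented for illustration and are far simpler than the authors' network.

    ```python
    # Hedged two-node sketch of Bayesian updating of a corrosion state given
    # new inspection evidence. Structure and numbers are illustrative only.

    prior = {"severe": 0.05, "minor": 0.95}
    # assumed sensor model: P(wall-loss alarm | corrosion state)
    likelihood = {"severe": 0.90, "minor": 0.10}

    # evidence observed: a wall-loss alarm
    unnorm = {state: prior[state] * likelihood[state] for state in prior}
    z = sum(unnorm.values())
    posterior = {state: p / z for state, p in unnorm.items()}
    print(posterior)   # severe rises from 0.05 to about 0.32
    ```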

  15. Estimation of the common cause failure probabilities on the component group with mixed testing scheme

    International Nuclear Information System (INIS)

    Hwang, Meejeong; Kang, Dae Il

    2011-01-01

    Highlights: ► This paper presents a method to estimate the common cause failure probabilities on the common cause component group with mixed testing schemes. ► The CCF probabilities are dependent on the testing schemes such as staggered testing or non-staggered testing. ► There are many CCCGs with specific mixed testing schemes in real plant operation. ► Therefore, a general formula which is applicable to both alternate periodic testing scheme and train level mixed testing scheme was derived. - Abstract: This paper presents a method to estimate the common cause failure (CCF) probabilities on the common cause component group (CCCG) with mixed testing schemes such as the train level mixed testing scheme or the alternate periodic testing scheme. In the train level mixed testing scheme, the components are tested in a non-staggered way within the same train, but the components are tested in a staggered way between the trains. The alternate periodic testing scheme indicates that all components in the same CCCG are tested in a non-staggered way during the planned maintenance period, but they are tested in a staggered way during normal plant operation. Since the CCF probabilities are dependent on the testing schemes such as staggered testing or non-staggered testing, CCF estimators have two kinds of formulas in accordance with the testing schemes. Thus, there are general formulas to estimate the CCF probability on the staggered testing scheme and non-staggered testing scheme. However, in real plant operation, there are many CCCGs with specific mixed testing schemes. Recently, Barros () and Kang () proposed a CCF factor estimation method to reflect the alternate periodic testing scheme and the train level mixed testing scheme. In this paper, a general formula which is applicable to both the alternate periodic testing scheme and the train level mixed testing scheme was derived.

  16. Age replacement policy based on imperfect repair with random probability

    International Nuclear Information System (INIS)

    Lim, J.H.; Qu, Jian; Zuo, Ming J.

    2016-01-01

    In most of the literature on age replacement policies, failures before the planned replacement age can be either minimally repaired or perfectly repaired based on the type of failure, the cost of repair and so on. In this paper, we propose an age replacement policy based on imperfect repair with random probability. The proposed policy incorporates the case in which such an intermittent failure can be either minimally repaired or perfectly repaired with random probabilities. The mathematical formulas of the expected cost rate per unit time are derived for both the infinite-horizon case and the one-replacement-cycle case. For each case, we show that the optimal replacement age exists and is finite. - Highlights: • We propose a new age replacement policy with random probability of perfect repair. • We develop the expected cost per unit time. • We discuss the optimal age for replacement minimizing the expected cost rate.
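
    For orientation, the classical age-replacement cost rate (without the paper's random-probability repair extension) is C(T) = [c_f F(T) + c_p (1 - F(T))] / ∫0^T (1 - F(t)) dt, minimized over the replacement age T. The sketch below evaluates and minimizes it numerically for an invented Weibull lifetime.

    ```python
    from scipy import integrate, optimize, stats

    # Hedged sketch of the classical age-replacement cost-rate model (not the
    # paper's imperfect-repair extension). Costs and lifetime are illustrative.

    c_f, c_p = 10.0, 1.0                          # corrective vs preventive cost
    life = stats.weibull_min(2.0, scale=100.0)    # increasing-hazard lifetime

    def cost_rate(T):
        F = life.cdf
        cycle_cost = c_f * F(T) + c_p * (1.0 - F(T))
        cycle_length, _ = integrate.quad(lambda t: 1.0 - F(t), 0.0, T)
        return cycle_cost / cycle_length

    res = optimize.minimize_scalar(cost_rate, bounds=(1.0, 300.0), method="bounded")
    print(f"optimal replacement age T* = {res.x:.1f}, cost rate = {res.fun:.4f}")
    ```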

  17. Uncertainty analysis of reactor safety systems with statistically correlated failure data

    International Nuclear Information System (INIS)

    Dezfuli, H.; Modarres, M.

    1985-01-01

    The probability of occurrence of the top event of a fault tree is estimated from the failure probability of components that constitute the fault tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. Most fault tree evaluations have so far been based on uncorrelated component failure data. The subject of this paper is the description of a method of assessing the probability intervals for the top event failure probability of fault trees when component failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte-Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment matching technique is used to obtain the probability distribution function of the top event through fitting a Johnson S_B distribution. The computer program (CORRELATE) was developed to perform the calculations necessary for the implementation of the method developed. The CORRELATE code is very efficient and consumes minimal computer time. This is primarily because it does not employ the time-consuming Monte-Carlo method. (author)
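
    A first-order version of the moment-propagation idea can be sketched for a two-component AND gate whose component probabilities are correlated; the paper carries the expansion to second order and fits a Johnson S_B distribution, both omitted here, and the numbers below are invented.

    ```python
    import numpy as np

    # Hedged first-order (Taylor) moment sketch for P_top = p1 * p2 with
    # correlated component failure probabilities. Illustrative values only.

    mu = np.array([1e-3, 2e-3])      # mean component failure probabilities
    sd = np.array([5e-4, 8e-4])      # standard deviations
    rho = 0.6                        # correlation between p1 and p2
    cov = np.array([[sd[0] ** 2,          rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1] ** 2         ]])

    mean_top = mu[0] * mu[1]
    grad = np.array([mu[1], mu[0]])  # dP/dp1, dP/dp2 evaluated at the mean
    var_top = grad @ cov @ grad      # the cross-term carries the correlation

    print(f"top event: mean = {mean_top:.2e}, std = {np.sqrt(var_top):.2e}")
    ```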

  18. Estimation of the common cause failure probabilities of the components under mixed testing schemes

    International Nuclear Information System (INIS)

    Kang, Dae Il; Hwang, Mee Jeong; Han, Sang Hoon

    2009-01-01

    For the case where trains or channels of standby safety systems consisting of more than two redundant components are tested in a staggered manner, the standby safety components within a train can be tested simultaneously or consecutively. In this case, mixed testing schemes, staggered and non-staggered testing schemes, are used for testing the components. Approximate formulas, based on the basic parameter method, were developed for the estimation of the common cause failure (CCF) probabilities of the components under mixed testing schemes. The developed formulas were applied to the four redundant check valves of the auxiliary feed water system as a demonstration study for their appropriateness. For a comparison, we estimated the CCF probabilities of the four redundant check valves for the mixed, staggered, and non-staggered testing schemes. The CCF probabilities of the four redundant check valves for the mixed testing schemes were estimated to be higher than those for the staggered testing scheme, and lower than those for the non-staggered testing scheme.

  19. Importance Sampling for Failure Probabilities in Computing and Data Transmission

    DEFF Research Database (Denmark)

    Asmussen, Søren

    We study efficient simulation algorithms for estimating P(X > x), where X is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling, where one tries to identify a good importance distribution via an asymptotic description of the conditional distribution of T given X > x. If T ≡ t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér type root γ(t) is available. However, we also discuss an algorithm avoiding the root-finding. If T is random, particular attention is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different types of conditional limits occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using ...

  20. Importance sampling for failure probabilities in computing and data transmission

    DEFF Research Database (Denmark)

    Asmussen, Søren

    2009-01-01

    In this paper we study efficient simulation algorithms for estimating P(X > x), where X is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling, where a good importance distribution is identified via an asymptotic description of the conditional distribution of T given X > x. If T ≡ t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér-type root, γ(t), is available. However, we also discuss an algorithm that avoids finding the root. If T is random, particular attention is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different types of conditional limit occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using ...
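
    To make the setting concrete, the sketch below simulates the restart model by crude Monte Carlo for constant ideal time T ≡ t: the job restarts from scratch at each failure, with failures arriving at Poisson rate lam. It is a baseline only; the point of these papers is that importance sampling (for instance exponential tilting via a Cramér-type root) is needed once the threshold x makes the event rare. Parameters are illustrative.

    ```python
    import numpy as np

    # Crude Monte Carlo baseline for the RESTART/total-time model (hedged
    # sketch; not the papers' importance-sampling algorithms).

    rng = np.random.default_rng(2)
    lam, t, x = 0.5, 2.0, 12.0        # failure rate, ideal time, threshold

    def total_time():
        total = 0.0
        while True:
            failure_at = rng.exponential(1.0 / lam)
            if failure_at >= t:       # the attempt completes before failing
                return total + t
            total += failure_at       # wasted work; restart from scratch

    n = 100_000
    hits = sum(total_time() > x for _ in range(n))
    print(f"crude MC estimate of P(X > x): {hits / n:.2e}")
    ```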

  1. Estimation of Partial Safety Factors and Target Failure Probability Based on Cost Optimization of Rubble Mound Breakwaters

    DEFF Research Database (Denmark)

    Kim, Seung-Woo; Suh, Kyung-Duck; Burcharth, Hans F.

    2010-01-01

    Breakwaters are designed with cost optimization in mind because human risk is seldom a consideration. Most breakwaters, however, were constructed without considering cost optimization. In this study, the optimum return period, target failure probability and the partial safety factors ...

  2. The First Report of a Patient with Probable Variant Creutzfeldt-Jakob Disease in Turkey

    Directory of Open Access Journals (Sweden)

    Demet Özbabalık Adapınar

    2011-12-01

    Full Text Available Variant Creutzfeldt-Jakob disease (vCJD was first reported in the UK in 1996. Here, we report the first Turkish case of vCJD. A 47-year-old man, who has never lived outside of Turkey and had had no transfusion, was admitted to the University Hospital with speech disorder, cognitive decline and ataxia following depression, irritability, and personality change. The immunoassay of the 14-3-3 protein in the cerebrospinal fluid was negative. Brain magnetic resonance imaging revealed high-signal lesions involving the bilateral caudate and lentiform nucleus on T2- and diffusion-weighted imaging. The patient developed akinetic mutism 10 months after disease onset. The clinical presentation and neuroimaging findings were compatible with the vCJD cases reported since 1996 and met the World Health Organization’s case definition for probable vCJD.

  3. Failure analysis of real-time systems

    International Nuclear Information System (INIS)

    Jalashgar, A.; Stoelen, K.

    1998-01-01

    This paper highlights essential aspects of real-time software systems that are strongly related to failures and their course of propagation. The significant influence of means-oriented and goal-oriented system views on describing, understanding and analysing those aspects is elaborated. The importance of performing failure analysis prior to reliability analysis of real-time systems is equally addressed. Problems of software reliability growth models that take the properties of such systems into account are discussed. Finally, the paper presents a preliminary study of a goal-oriented approach to model the static and dynamic characteristics of real-time systems, so that the corresponding analysis can be based on a more descriptive and informative picture of failures, their effects and the possibility of their occurrence. (author)

  4. Flexural strength and the probability of failure of cold isostatic pressed zirconia core ceramics.

    Science.gov (United States)

    Siarampi, Eleni; Kontonasaki, Eleana; Papadopoulou, Lambrini; Kantiranis, Nikolaos; Zorba, Triantafillia; Paraskevopoulos, Konstantinos M; Koidis, Petros

    2012-08-01

    The flexural strength of zirconia core ceramics must predictably withstand the high stresses developed during oral function. The in-depth interpretation of strength parameters and the probability of failure during clinical performance could assist the clinician in selecting the optimum materials while planning treatment. The purpose of this study was to evaluate the flexural strength based on survival probability and Weibull statistical analysis of 2 zirconia cores for ceramic restorations. Twenty bar-shaped specimens were milled from 2 core ceramics, IPS e.max ZirCAD and Wieland ZENO Zr, and were loaded until fracture according to ISO 6872 (3-point bending test). An independent samples t test was used to assess significant differences of fracture strength (α=.05). Weibull statistical analysis of the flexural strength data provided 2 parameter estimates: Weibull modulus (m) and characteristic strength (σ(0)). The fractured surfaces of the specimens were evaluated by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). The investigation of the crystallographic state of the materials was performed with x-ray diffraction analysis (XRD) and Fourier transform infrared (FTIR) spectroscopy. A higher mean flexural strength was found for the Wieland ZENO Zr (WZ) ceramics. Both groups primarily sustained the tetragonal phase of zirconia and a negligible amount of the monoclinic phase. Although both zirconia ceramics presented similar fractographic and crystallographic properties, the higher flexural strength of WZ ceramics was associated with a lower m and more voids in their microstructure. These findings suggest a greater scattering of strength values and a flaw distribution that are expected to increase failure probability. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
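
    The Weibull parameters reported in such studies are commonly obtained by linearizing the two-parameter CDF, ln ln(1/(1 - Pf)) = m ln σ - m ln σ0, and regressing over ranked strengths. The sketch below does exactly that on invented strength values; it is a generic procedure, not the study's dataset or exact estimator.

    ```python
    import numpy as np

    # Hedged sketch: estimate Weibull modulus m and characteristic strength
    # sigma_0 from ranked flexural strengths. Strength values are invented.

    strengths = np.sort(np.array([612., 655., 680., 701., 723., 748., 760.,
                                  781., 802., 830., 851., 879., 903., 941.]))
    n = strengths.size
    pf = (np.arange(1, n + 1) - 0.5) / n        # rank-based failure probability

    x = np.log(strengths)
    y = np.log(np.log(1.0 / (1.0 - pf)))        # linearized Weibull CDF
    m, intercept = np.polyfit(x, y, 1)
    sigma_0 = np.exp(-intercept / m)

    print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma_0:.0f} MPa")
    s = 500.0                                    # a service stress of interest
    print(f"P(failure at {s:.0f} MPa) = {1.0 - np.exp(-(s / sigma_0) ** m):.4f}")
    ```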

  5. Time-variant flexural reliability of RC beams with externally bonded CFRP under combined fatigue-corrosion actions

    International Nuclear Information System (INIS)

    Bigaud, David; Ali, Osama

    2014-01-01

    The time-variant reliability of RC highway bridges strengthened with carbon fibre reinforced polymer (CFRP) laminates, under four possible competing damage modes (concrete crushing, steel rupture after yielding, CFRP rupture and FRP plate debonding) and three degradation factors, is analyzed in terms of the reliability index β using FORM. The first degradation factor is chloride-attack corrosion, which induces reduction in steel area and concrete cover cracking at characteristic key times (corrosion initiation, severe surface cover cracking). The second degradation factor considered is fatigue, which leads to damage in concrete and steel rebar. Interaction between corrosion and fatigue crack growth in steel reinforcing bars is implemented. The third degradation phenomenon is the deterioration of CFRP properties due to aging. Considering these three degradation factors, the time-dependent flexural reliability profile of a typical simple 15 m-span intermediate girder of a RC highway bridge is constructed under various traffic volumes and under different corrosion environments. The bridge design options follow AASHTO-LRFD specifications. Results of the study have shown that the reliability is very sensitive to factors governing the corrosion. Concrete damage due to fatigue slightly affects the reliability profile of the non-strengthened section, while service life after strengthening is strongly related to fatigue damage in concrete. - Highlights: • We propose a method to follow the time-variant reliability of strengthened RC beams. • We consider multiple competing failure modes of CFRP strengthened RC beams. • We consider combined degradation mechanisms (corrosion, fatigue, ageing of CFRP)

  6. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  7. Retrieval system for emplaced spent unreprocessed fuel (SURF) in salt bed depository: accident event analysis and mechanical failure probabilities. Final report

    International Nuclear Information System (INIS)

    Bhaskaran, G.; McCleery, J.E.

    1979-10-01

    This report provides support in developing an accident prediction event tree diagram, with an analysis of the baseline design concept for the retrieval of emplaced spent unreprocessed fuel (SURF) contained in a degraded Canister. The report contains an evaluation check list, accident logic diagrams, accident event tables, fault trees/event trees and discussions of failure probabilities for the following subsystems as potential contributors to a failure: (a) Canister extraction, including the core and ram units; (b) Canister transfer at the hoist area; and (c) Canister hoisting. This report is the second volume of a series. It continues and expands upon the report Retrieval System for Emplaced Spent Unreprocessed Fuel (SURF) in Salt Bed Depository: Baseline Concept Criteria Specifications and Mechanical Failure Probabilities. This report draws upon the baseline conceptual specifications contained in the first report

  8. Temperature Analysis and Failure Probability of the Fuel Element in HTR-PM

    International Nuclear Information System (INIS)

    Yang Lin; Liu Bing; Tang Chunhe

    2014-01-01

    Spherical fuel elements are used in the 200-MW High Temperature Reactor-Pebble-bed Modular (HTR-PM). Each spherical fuel element contains approximately 12,000 coated fuel particles in an inner graphite matrix 50 mm in diameter, which forms the fuel zone, while the outer shell, with a thickness of 5 mm, is a fuel-free zone made of the same graphite material. Under high-burnup irradiation, the temperature of the fuel element rises and the resulting stresses can damage the fuel element. The purpose of this study is to analyze the temperature of the fuel element and to discuss the stress and failure probability. (author)

  9. VALIDATION OF SPRING OPERATED PRESSURE RELIEF VALVE TIME TO FAILURE AND THE IMPORTANCE OF STATISTICALLY SUPPORTED MAINTENANCE INTERVALS

    Energy Technology Data Exchange (ETDEWEB)

    Gross, R; Stephen Harris, S

    2009-02-18

    The Savannah River Site operates a Relief Valve Repair Shop certified by the National Board of Pressure Vessel Inspectors to NB-23, The National Board Inspection Code. Local maintenance forces perform inspection, testing, and repair of approximately 1200 spring-operated relief valves (SORV) each year as the valves are cycled in from the field. The Site now has over 7000 certified test records in the Computerized Maintenance Management System (CMMS); a summary of that data is presented in this paper. In previous papers, several statistical techniques were used to investigate failure on demand and failure rates, including a quantal response method for predicting the failure probability as a function of time in service. The non-conservative failure mode for SORV is commonly termed 'stuck shut', defined by industry as the valve opening at greater than or equal to 1.5 times the cold set pressure. The actual time to failure is typically not known, only that failure occurred some time since the last proof test (censored data). This paper attempts to validate the assumptions underlying the statistical lifetime prediction results using Monte Carlo simulation. It employs an aging model for lift pressure as a function of set pressure, valve manufacturer, and a time-related aging effect. This paper attempts to answer two questions: (1) what is the predicted failure rate over the chosen maintenance/inspection interval; and (2) do we understand aging well enough to estimate risk when basing proof test intervals on proof test results?
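
    The Monte Carlo idea described above can be caricatured in a few lines: draw a population of valves, age each lift pressure with a random time-dependent drift, and count the fraction meeting the stuck-shut criterion. The drift and scatter parameters below are invented, not the fitted SRS aging model.

    ```python
    import numpy as np

    # Hedged toy of the lift-pressure aging simulation. The "stuck shut"
    # criterion (lift >= 1.5x cold set pressure) follows the definition above;
    # all aging parameters are invented placeholders.

    rng = np.random.default_rng(4)
    set_pressure = 100.0              # cold set pressure, psig (illustrative)
    n = 200_000                       # simulated valves

    for years in (2, 5, 10):
        as_left = rng.normal(0.0, 0.03, size=n)                 # setpoint scatter
        drift = rng.normal(0.01 * years, 0.05 * np.sqrt(years), size=n)
        lift = set_pressure * (1.0 + as_left + drift)
        p_fail = np.mean(lift >= 1.5 * set_pressure)
        print(f"{years:2d} years in service: P(stuck shut) ~ {p_fail:.1e}")
    ```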

  10. Differential subsidence and its effect on subsurface infrastructure: predicting probability of pipeline failure (STOOP project)

    Science.gov (United States)

    de Bruijn, Renée; Dabekaussen, Willem; Hijma, Marc; Wiersma, Ane; Abspoel-Bukman, Linda; Boeije, Remco; Courage, Wim; van der Geest, Johan; Hamburg, Marc; Harmsma, Edwin; Helmholt, Kristian; van den Heuvel, Frank; Kruse, Henk; Langius, Erik; Lazovik, Elena

    2017-04-01

    Due to heterogeneity of the subsurface in the delta environment of the Netherlands, differential subsidence over short distances results in tension and subsequent wear of subsurface infrastructure, such as water and gas pipelines. Due to uncertainties in the build-up of the subsurface, however, it is unknown where this problem is the most prominent. This is a problem for asset managers deciding when a pipeline needs replacement: damaged pipelines endanger security of supply and pose a significant threat to safety, yet premature replacement raises needless expenses. In both cases, costs - financial or other - are high. Therefore, an interdisciplinary research team of geotechnicians, geologists and Big Data engineers from research institutes TNO, Deltares and SkyGeo developed a stochastic model to predict differential subsidence and the probability of consequent pipeline failure on a (sub-)street level. In this project pipeline data from company databases is combined with a stochastic geological model and information on (historical) groundwater levels and overburden material. Probability of pipeline failure is modelled by a coupling with a subsidence model and two separate models on pipeline behaviour under stress, using a probabilistic approach. The total length of pipelines (approx. 200,000 km operational in the Netherlands) and the complexity of the model chain that is needed to calculate a chance of failure, results in large computational challenges, as it requires massive evaluation of possible scenarios to reach the required level of confidence. To cope with this, a scalable computational infrastructure has been developed, composing a model workflow in which components have a heterogeneous technological basis. Three pilot areas covering an urban, a rural and a mixed environment, characterised by different groundwater-management strategies and different overburden histories, are used to evaluate the differences in subsidence and uncertainties that come with

  11. Evolution of simeprevir-resistant variants over time by ultra-deep sequencing in HCV genotype 1b.

    Science.gov (United States)

    Akuta, Norio; Suzuki, Fumitaka; Sezaki, Hitomi; Suzuki, Yoshiyuki; Hosaka, Tetsuya; Kobayashi, Masahiro; Kobayashi, Mariko; Saitoh, Satoshi; Ikeda, Kenji; Kumada, Hiromitsu

    2014-08-01

    Using ultra-deep sequencing technology, the present study was designed to investigate the evolution of simeprevir-resistant variants (amino acid substitutions at the aa80, aa155, aa156, and aa168 positions in the HCV NS3 region) over time. In Toranomon Hospital, 18 Japanese patients infected with HCV genotype 1b received triple therapy of simeprevir/PEG-IFN/ribavirin (DRAGON or CONCERT study). The sustained virological response rate was 67%, and it was significantly higher in patients with IL28B rs8099917 TT than in those with non-TT. Six patients, who did not achieve sustained virological response, were tested for resistant variants by ultra-deep sequencing at the baseline, at the time of re-elevation of viral loads, and at 96 weeks after the completion of treatment. Twelve of 18 resistant variants, detected at re-elevation of viral load, were de novo resistant variants. Ten of 12 de novo resistant variants became undetectable over time, whereas five of seven resistant variants detected at baseline persisted over time. In one patient, variants of Q80R at baseline (0.3%) increased at 96 weeks after the cessation of treatment (10.2%), and de novo resistant variants of D168E (0.3%) also increased at 96 weeks after the cessation of treatment (9.7%). In conclusion, the present study indicates that the emergence of simeprevir-resistant variants after the start of treatment could not be predicted at baseline, and the majority of de novo resistant variants become undetectable over time. Further large-scale prospective studies should be performed to investigate the clinical utility of detecting simeprevir-resistant variants. © 2014 Wiley Periodicals, Inc.

  12. Adaptive time-variant models for fuzzy-time-series forecasting.

    Science.gov (United States)

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  13. The relative impact of sizing errors on steam generator tube failure probability

    International Nuclear Information System (INIS)

    Cizelj, L.; Dvorsek, T.

    1998-01-01

    The Outside Diameter Stress Corrosion Cracking (ODSCC) at tube support plates is currently the major degradation mechanism affecting the steam generator tubes made of Inconel 600. This caused development and licensing of degradation-specific maintenance approaches, which addressed two main failure modes of the degraded piping: tube rupture and excessive leakage through degraded tubes. A methodology aiming at assessing the efficiency of a given set of possible maintenance approaches has already been proposed by the authors. It pointed out better performance of the degradation-specific over generic approaches in (1) lower probability of single and multiple steam generator tube rupture (SGTR), (2) lower estimated accidental leak rates and (3) less tubes plugged. A sensitivity analysis was also performed pointing out the relative contributions of uncertain input parameters to the tube rupture probabilities. The dominant contribution was assigned to the uncertainties inherent to the regression models used to correlate the defect size and tube burst pressure. The uncertainties, which can be estimated from the in-service inspections, are further analysed in this paper. The defect growth was found to have significant and to some extent unrealistic impact on the probability of single tube rupture. Since the defect growth estimates were based on the past inspection records they strongly depend on the sizing errors. Therefore, an attempt was made to filter out the sizing errors and to arrive at more realistic estimates of the defect growth. The impact of different assumptions regarding sizing errors on the tube rupture probability was studied using a realistic numerical example. The data used are obtained from a series of inspection results from Krsko NPP with 2 Westinghouse D-4 steam generators. The results obtained are considered useful in safety assessment and maintenance of affected steam generators. (author)

  14. Modeling highway travel time distribution with conditional probability models

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2014-01-01

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless based global positioning systems. These telemetric data systems are subscribed and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provides a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. The major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information is added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
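
    The convolution step for route travel time under an independence assumption is easy to show; the study's contribution is precisely to replace that assumption with link-to-link conditional distributions, which the sketch below does not attempt. The discretized link distributions are invented.

    ```python
    import numpy as np

    # Hedged sketch: route travel time as the convolution of two discretized
    # link travel time distributions, assuming independent links (the study
    # itself models link-to-link conditional dependence instead).

    dt = 1.0                                              # bin width, minutes
    link_a = np.array([0.0, 0.1, 0.4, 0.3, 0.15, 0.05])   # P(time = k * dt)
    link_b = np.array([0.0, 0.2, 0.5, 0.2, 0.1])

    route = np.convolve(link_a, link_b)                   # distribution of the sum
    mean = dt * np.sum(np.arange(route.size) * route)
    print(f"route mean travel time: {mean:.2f} min; total probability: {route.sum():.3f}")
    ```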

  15. Failures probability calculation of the energy supply of the Angra-1 reactor rods assembly

    International Nuclear Information System (INIS)

    Borba, P.R.

    1978-01-01

    This work analyses the electric power system of the Angra I PWR plant. It is demonstrated that this system is closely coupled with the safety engineering features, i.e., the equipment provided to prevent, limit, or mitigate the release of radioactive material and to permit the safe reactor shutdown. Event trees are used to analyse the operation of those systems which can lead to the release of radioactivity following a specified initiating event. The fault tree technique is used to calculate the failure probability of the on-site electric power system [pt
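
    At the gate level, the fault-tree arithmetic reduces to simple probability algebra. The sketch below evaluates AND/OR gates under an independence assumption; the structure and every number are invented for illustration and are not the Angra-1 model, which would also treat dependencies such as common cause failures.

    ```python
    # Hedged sketch of fault-tree gate evaluation with independent basic
    # events. Tree structure and probabilities are invented placeholders.

    def and_gate(probs):
        p = 1.0
        for q in probs:
            p *= q
        return p

    def or_gate(probs):
        p_none = 1.0
        for q in probs:
            p_none *= (1.0 - q)
        return 1.0 - p_none

    p_offsite = 1e-2                           # loss of offsite power
    p_diesel = or_gate([5e-2, 1e-3])           # diesel fails to start OR to run
    p_onsite = and_gate([p_diesel, p_diesel])  # both redundant diesels fail
    print(f"P(loss of all AC power) = {and_gate([p_offsite, p_onsite]):.2e}")
    ```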

  16. A Rare Case of Acute Renal Failure Secondary to Rhabdomyolysis Probably Induced by Donepezil

    Directory of Open Access Journals (Sweden)

    Osman Zikrullah Sahin

    2014-01-01

    Full Text Available Introduction. Acute renal failure (ARF develops in 33% of the patients with rhabdomyolysis. The main etiologic factors are alcoholism, trauma, exercise overexertion, and drugs. In this report we present a rare case of ARF secondary to probably donepezil-induced rhabdomyolysis. Case Presentation. An 84-year-old male patient was admitted to the emergency department with a complaint of generalized weakness and reduced consciousness for two days. He had a history of Alzheimer’s disease for one year and he had taken donepezil 5 mg daily for two months. The patient’s physical examination revealed apathy, loss of cooperation, and decreased muscle strength. Laboratory studies revealed the following: urea: 128 mg/dL; Creatinine 6.06 mg/dL; creatine kinase: 3613 mg/dL. Donepezil was discontinued and the patient’s renal function tests improved gradually. Conclusion. Rhabdomyolysis-induced acute renal failure may develop secondary to donepezil therapy.

  17. Application of nonhomogeneous Poisson process to reliability analysis of repairable systems of a nuclear power plant with rates of occurrence of failures time-dependent

    International Nuclear Information System (INIS)

    Saldanha, Pedro L.C.; Simone, Elaine A. de; Melo, Paulo Fernando F.F. e

    1996-01-01

    Aging denotes the continuous process by which the physical characteristics of a system, structure or piece of equipment change with time or use. Its effects are increases in the failure probabilities of a system, structure or equipment, and they are calculated using time-dependent failure rate models. The purpose of this paper is to present an application of the nonhomogeneous Poisson process as a model to study rates of occurrence of failures when they are time-dependent. For this application, a reliability analysis of the service water pumps of a typical nuclear power plant is made, as the pumps are effectively repairable components. (author)
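
    A standard concrete choice for a time-dependent rate of occurrence of failures is the power-law (Crow-AMSAA) NHPP, sketched below; the form is a common convention assumed here for illustration, and the parameter values are invented rather than those of the pump study.

    ```python
    # Hedged sketch of a power-law NHPP for repairable equipment:
    #   rocof(t) = (beta / eta) * (t / eta) ** (beta - 1)
    #   E[N(0, t)] = (t / eta) ** beta
    # beta > 1 models aging (an increasing rate of occurrence of failures).

    beta, eta = 1.6, 2_000.0      # illustrative shape and scale (hours)

    def rocof(t):
        return (beta / eta) * (t / eta) ** (beta - 1.0)

    def expected_failures(t):
        return (t / eta) ** beta

    for t in (1_000.0, 5_000.0, 10_000.0):
        print(f"t = {t:>7.0f} h: ROCOF = {rocof(t):.2e} /h, "
              f"E[N(0,t)] = {expected_failures(t):.2f}")
    ```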

  18. [Predictive factors for extubation failure on two or more occasions among preterm newborns].

    Science.gov (United States)

    Tapia-Rombo, Carlos Antonio; De León-Gómez, Noé; Ballesteros-Del-Olmo, Julio César; Ruelas-Vargas, Consuelo; Cuevas-Urióstegui, María Luisa; Castillo-Pérez, José Juan

    2010-01-01

    Mechanical ventilatory support has prolonged the life of critically ill preterm newborns (PTNB), and during that period it is often necessary to reintubate the PTNB two or more times, with subsequent damage that draws the patient into a vicious circle of further injury with each reintubation. The objective of this study was to determine the factors that predict extubation failure on two or more occasions among PTNB of 28 to 36 weeks of gestational age. Extubation failure was defined as the need for reintubation within the first 72 hours after extubation, independent of its cause; the same criterion was applied to the second and subsequent extubations. During the period of September to December 2004, all PTNB interned in a third-level hospital who fulfilled the inclusion criteria were included in a retrospective study (a published study in which we considered the first extubation failure), and patients of the same hospital from January to October 2006 were included in a retrolective study. Two groups were formed: group A, the cases (those who failed extubation two or more times), and group B, the controls (those who failed extubation for the first time). Descriptive and inferential statistics were used: Student's t test or the Mann-Whitney U (Wilcoxon rank-sum) test, as appropriate, and the Chi-square or Fisher's exact test. Odds ratios (OR) and multivariate analysis were employed to study predictive factors for extubation failure. Statistical significance was considered at p < 0.05; a cut-off value > 2 yielded an OR of 5.3, 95% CI 1.3-21.4 (P = 0.02). In the bronchoscopy study, some anatomical alterations were found that explained the extubation failure the second time. We conclude that it is important to plan extubation in the PTNB when there has already been a previous failure, and to avoid the well-known predictive factors for extubation failure as much as possible

  19. Time-dependent fracture probability of bilayer, lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation

    Science.gov (United States)

    Anusavice, Kenneth J.; Jadaan, Osama M.; Esquivel-Upshaw, Josephine

    2013-01-01

    Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. Objective: The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Materials and methods: Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2) and a ceramic veneer (Empress 2 Veneer Ceramic). Results: Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). Conclusion: CARES/Life results support the proposed crown design and load orientation hypotheses. Significance: The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach to optimizing fixed dental prosthesis designs produced from dental ceramics and to predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses, minimizing the risk of clinical failures. PMID:24060349

  20. Time-dependent fracture probability of bilayer, lithium-disilicate-based, glass-ceramic, molar crowns as a function of core/veneer thickness ratio and load orientation.

    Science.gov (United States)

    Anusavice, Kenneth J; Jadaan, Osama M; Esquivel-Upshaw, Josephine F

    2013-11-01

    Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). CARES/Life results support the proposed crown design and load orientation hypotheses. The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represent an optimal approach to optimize fixed dental prosthesis designs produced from dental ceramics and to predict time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. Copyright © 2013 Academy of Dental Materials. All rights reserved.

  1. 201Tl uptake in variant angina: probable demonstration of myocardial reactive hyperemia in man

    International Nuclear Information System (INIS)

    Kronenberg, M.W.; Robertson, R.M.; Born, M.L.; Steckley, R.A.; Robertson, D.; Friesinger, G.C.

    1982-01-01

    Myocardial thallium scintigraphy was performed in four subjects with variant angina and in one subject with isolated, fixed coronary obstruction. Three subjects with variant angina had short episodes of ischemic ST-segment elevation that lasted 20-100 seconds. Thallium scintigrams demonstrated excess uptake in regions judged to be ischemic by angiographic and electrocardiographic criteria. Two subjects, one with variant angina and the other with a fixed coronary lesion, had prolonged episodes of ischemia that lasted 390-900 seconds. Both had reduced thallium uptake in the ischemic regions. We conclude that myocardial reactive hyperemia is the cause of excess thallium uptake in patients with variant angina who have short episodes of myocardial ischemia.

  2. Stationary Probability and First-Passage Time of Biased Random Walk

    International Nuclear Information System (INIS)

    Li Jing-Wen; Tang Shen-Li; Xu Xin-Ping

    2016-01-01

    In this paper, we consider the stationary probability and first-passage time of biased random walk on a 1D chain, where at each step the walker moves to the left and right with probabilities p and q respectively (0 ⩽ p, q ⩽ 1, p + q = 1). We derive exact analytical results for the stationary probability and first-passage time as a function of p and q for the first time. Our results suggest that the first-passage time shows a double power-law F ∼ (N − 1)^γ, where the exponent γ = 2 for N < |p − q|^(-1) and γ = 1 for N > |p − q|^(-1). Our study sheds useful insights into the biased random-walk process. (paper)
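    A quick Monte Carlo check of this first-passage scaling; the boundary conditions (reflecting wall on the left, absorption at the right end) are assumed here, since the abstract does not spell them out:

      import random

      def first_passage_time(N, q):
          """Steps for a walker hopping right with prob q (left with 1 - q),
          reflected at site 0, to first reach site N - 1."""
          pos, steps = 0, 0
          while pos < N - 1:
              pos = max(pos + (1 if random.random() < q else -1), 0)
              steps += 1
          return steps

      # For q = 0.6 the crossover scale is 1/|p - q| = 5; well above it the
      # mean first-passage time grows linearly in (N - 1).
      for N in (4, 16, 64):
          mean_fpt = sum(first_passage_time(N, 0.6) for _ in range(2000)) / 2000
          print(N, round(mean_fpt, 1))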

  3. Statin Treatment and Clinical Outcomes of Heart Failure Among Africans: An Inverse Probability Treatment Weighted Analysis.

    Science.gov (United States)

    Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha

    2017-04-01

    Randomized control trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized control trials may not always be generalizable. The aim was to determine whether statin use and statin type (lipophilic or hydrophilic) improve long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from first admission for HF (index admission) and followed up to time of all-cause, cardiovascular, and HF mortality or end of study. A multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of a marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statin compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
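    A minimal sketch of the weighting step with simulated data; the confounder names and model are illustrative, not the study's dataset or code. The weighted cohort would then be analysed with a weighted Cox model:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 3))            # e.g. age, ejection fraction, renal function
      treated = rng.integers(0, 2, size=500)   # 1 = statin at discharge (simulated)

      # Propensity score: P(treated | confounders), fitted by logistic regression.
      ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

      # Stabilized inverse-probability-of-treatment weights.
      p_treat = treated.mean()
      w = np.where(treated == 1, p_treat / ps, (1 - p_treat) / (1 - ps))
      print("weight range:", w.min().round(2), "-", w.max().round(2))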

  4. Definition of containment failure

    International Nuclear Information System (INIS)

    Cybulskis, P.

    1982-01-01

    Core meltdown accidents of the types considered in probabilistic risk assessments (PRA's) have been predicted to lead to pressures that will challenge the integrity of containment structures. Review of a number of PRA's indicates considerable variation in the predicted probability of containment failure as a function of pressure. Since the results of PRA's are sensitive to the prediction of the occurrence and the timing of containment failure, better understanding of realistic containment capabilities and a more consistent approach to the definition of containment failure pressures are required. Additionally, since the size and location of the failure can also significantly influence the prediction of reactor accident risk, further understanding of likely failure modes is required. The thresholds and modes of containment failure may not be independent

  5. Reliability modelling for wear out failure period of a single unit system

    OpenAIRE

    Arekar, Kirti; Ailawadi, Satish; Jain, Rinku

    2012-01-01

    The present paper deals with two time-shifted density models for the wear-out failure period of a single-unit system. The study considered the time-shifted Gamma and Normal distributions. Wear-out failures occur as a result of deterioration processes or mechanical wear, and their probability of occurrence increases with time. The failure rate as a function of time decreases in the early failure period and increases in the wear-out period. Failure rates for time-shifted distributions and expression for m...

  6. The extinction probability in systems randomly varying in time

    Directory of Open Access Journals (Sweden)

    Imre Pázsit

    2017-09-01

    The extinction probability of a branching process (a neutron chain in a multiplying medium) is calculated for a system randomly varying in time. The evolution of the first two moments of such a process was calculated previously by the authors for a system randomly shifting between two states of different multiplication properties. The same model is used here for the investigation of the extinction probability. It is seen that the determination of the extinction probability is significantly more complicated than that of the moments, and it can only be achieved by purely numerical methods. The numerical results indicate that for systems fluctuating between two subcritical or two supercritical states, the extinction probability behaves as expected, but for systems fluctuating between a supercritical and a subcritical state, there is a crucial and unexpected deviation from the predicted behaviour. The results bear some significance not only for neutron chains in a multiplying medium, but also for the evolution of biological populations in a time-varying environment.

  7. Prediction of dynamic expected time to system failure

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Lee, Chong Chul [Korea Nuclear Fuel Co., Ltd., Taejon (Korea, Republic of)

    1998-12-31

    The mean time to failure (MTTF), expressing the mean value of the system life, is a measure of system effectiveness. To estimate the remaining life of a component and/or system, the dynamic mean time to failure concept is suggested. It is a time-dependent property depending on the status of the components. The Kalman filter is used to estimate the reliability of components using on-line information (directly measured sensor output or device-specific diagnostics in the intelligent sensor) in the form of a numerical value (state factor). This factor considers the persistency of the fault condition and the confidence level in measurement. If there is a complex system with many components, the calculated reliabilities of the components are combined, which results in the dynamic MTTF of the system. Illustrative examples are discussed. The results show that the dynamic MTTF can well express component and system failure behaviour whether or not any kind of failure has occurred. 9 refs., 6 figs. (Author)

  8. Prediction of dynamic expected time to system failure

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Lee, Chong Chul [Korea Nuclear Fuel Co., Ltd., Taejon (Korea, Republic of)

    1997-12-31

    The mean time to failure (MTTF), expressing the mean value of the system life, is a measure of system effectiveness. To estimate the remaining life of a component and/or system, the dynamic mean time to failure concept is suggested. It is a time-dependent property depending on the status of the components. The Kalman filter is used to estimate the reliability of components using on-line information (directly measured sensor output or device-specific diagnostics in the intelligent sensor) in the form of a numerical value (state factor). This factor considers the persistency of the fault condition and the confidence level in measurement. If there is a complex system with many components, the calculated reliabilities of the components are combined, which results in the dynamic MTTF of the system. Illustrative examples are discussed. The results show that the dynamic MTTF can well express component and system failure behaviour whether or not any kind of failure has occurred. 9 refs., 6 figs. (Author)

  9. ANALYSIS OF RELIABILITY OF NONRECTORABLE REDUNDANT POWER SYSTEMS TAKING INTO ACCOUNT COMMON FAILURES

    Directory of Open Access Journals (Sweden)

    V. A. Anischenko

    2014-01-01

    A reliability analysis of nonrestorable redundant power systems of industrial plants and other consumers of electric energy was carried out. The main attention was paid to the influence of common failures, i.e., failures of all elements of a system due to one shared cause, and the main possible origins of such common failures are noted. Two main reliability indicators of nonrestorable systems are considered: the average time of no-failure operation and the mean probability of no-failure operation. Failures were modeled by dividing the investigated system into two series-connected subsystems, one subject to independent failures and the other to common failures. With joint modeling of single and common failures, the resulting failure intensity is the sum of two incompatible components: the intensity of statistically independent failures and the intensity of common failures of the elements and the system as a whole. The influence of common element failures on the average time of no-failure operation of the system is shown, and a scale of preference of systems is built according to the criterion of maximum average time of no-failure operation, depending on the share of common failures. It is noted that such common failures do not change the scale of preference, but they do change the time intervals determining the moments of system failures. Two problems of conditional optimization of system redundancy, taking into account reliability and cost, are discussed: the first minimizes system cost while ensuring a given mean probability of no-failure operation; the second maximizes the mean probability of no-failure operation subject to a limit on system cost.
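    A minimal sketch of the series decomposition described above, for a duplicated (1-out-of-2) nonrestorable system with an assumed beta-factor split between independent and common failures:

      import numpy as np

      lam, beta = 1e-3, 0.1                    # element failure rate (1/h), CCF share (assumed)
      lam_ind, lam_ccf = (1 - beta) * lam, beta * lam

      t = np.linspace(0.0, 20000.0, 2001)      # hours
      r_redundant = 1 - (1 - np.exp(-lam_ind * t))**2   # independent part, parallel pair
      r_common = np.exp(-lam_ccf * t)                   # common-failure part, in series
      r_system = r_redundant * r_common

      def mttf(r):
          """Average time of no-failure operation: trapezoidal integral of R(t)."""
          return float(np.sum((r[1:] + r[:-1]) / 2 * np.diff(t)))

      print(f"MTTF with common failures: {mttf(r_system):.0f} h")
      print(f"MTTF, independent failures only: {mttf(r_redundant):.0f} h")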

  10. Reliability analysis for the creep rupture mode of failure

    International Nuclear Information System (INIS)

    Vaidyanathan, S.

    1975-01-01

    An analytical study has been carried out to relate the factors of safety employed in the design of a component to the probability of failure in the thermal creep rupture mode. The analysis considers the statistical variations in the operating temperature, stress, and rupture time, and applies the life-fraction damage criterion as the indicator of failure. Typical results have been obtained for solution-annealed type 304 stainless steel for the temperature and stress variations expected in an LMFBR environment. The analytical problem was solved by considering the joint distribution of the independent variables and deriving the distribution for the function associated with the probability of failure by integrating over the proper regions as dictated by the deterministic design rule. This leads to a triple integral for the final probability of failure, where the coefficients of variation associated with the temperature, stress, and rupture time distributions can be specified by the user. The derivation is general, and can be used for time-varying stress histories and for the case of irradiated material, where the rupture time varies with accumulated fluence. Example calculations applied to solution-annealed type 304 stainless steel have been carried out for an assumed coefficient of variation of 2% for temperature and 6% for stress. The results show that the probability of failure associated with the time-dependent stress intensity limits specified in the ASME Boiler and Pressure Vessel Code, Section III, Code Case 1592 is less than 5x10^-8. Rupture under thermal creep conditions is a highly complicated phenomenon. It is believed that the present study will help in quantifying the reliability to be expected with deterministic design factors of safety.

  11. Resolving epidemic network failures through differentiated repair times

    DEFF Research Database (Denmark)

    Fagertun, Anna Manolova; Ruepp, Sarah Renée; Manzano, Marc

    2015-01-01

    In this study, the authors investigate epidemic failure spreading in large-scale transport networks under a generalised multi-protocol label switching control plane. By evaluating the effect of the epidemic failure spreading on the network, they design several strategies for cost-effective network performance improvement via differentiated repair times. First, they identify the most vulnerable and the most strategic nodes in the network. Then, via extensive event-driven simulations they show that strategic placement of resources for improved failure recovery has better performance than randomly assigning lower repair times among the network nodes. They believe that the event-driven simulation model can be highly beneficial for network providers, since it could be used during the network planning process for facilitating cost-effective network survivability design.

  12. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    Science.gov (United States)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  13. Performance comparison of various time variant filters

    Energy Technology Data Exchange (ETDEWEB)

    Kuwata, M [JEOL Engineering Co. Ltd., Akishima, Tokyo (Japan); Husimi, K

    1996-07-01

    This paper describes the advantage of the trapezoidal filter used in semiconductor detector systems compared with other time-variant filters. The trapezoidal filter is composed of a rectangular pre-filter and a gated integrator. We show that the best performance is obtained with the differential-integral summing type of rectangular pre-filter. This filter is not only superior in performance, but also has the useful feature that the rising edge of the output waveform is linear. We introduce an example of this feature used in a high-energy experiment. (author)

  14. Probable late lyme disease: a variant manifestation of untreated Borrelia burgdorferi infection

    Science.gov (United States)

    2012-01-01

    Background: Lyme disease, a bacterial infection with the tick-borne spirochete Borrelia burgdorferi, can cause early and late manifestations. The category of probable Lyme disease was recently added to the CDC surveillance case definition to describe patients with serologic evidence of exposure and physician-diagnosed disease in the absence of objective signs. We present a retrospective case series of 13 untreated patients with persistent symptoms of greater than 12 weeks duration who meet these criteria, and suggest a label of ‘probable late Lyme disease’ for this presentation. Methods: The sample for this analysis draws from a retrospective chart review of consecutive, adult patients presenting between August 2002 and August 2007 to the author (JA), an infectious disease specialist. Patients were included in the analysis if their current illness had lasted greater than or equal to 12 weeks at the time of evaluation. Results: Probable late Lyme patients with positive IgG serology but no history of previous physician-documented Lyme disease or appropriate Lyme treatment were found to represent 6% of our heterogeneous sample presenting with ≥ 12 weeks of symptom duration. Patients experienced a range of symptoms including fatigue, widespread pain, and cognitive complaints. Approximately one-third of this subset reported a patient-observed rash at illness onset, with a similar proportion having been exposed to non-recommended antibiotics or glucocorticosteroid treatment for their initial disease. A clinically significant response to antibiotic treatment was noted in the majority of patients with probable late Lyme disease, although post-treatment symptom recurrence was common. Conclusions: We suggest that patients with probable late Lyme disease share features with both confirmed late Lyme disease and post-treatment Lyme disease syndrome. Physicians should consider the recent inclusion of probable Lyme disease in the CDC Lyme disease surveillance case definition.

  15. [Survival analysis with competing risks: estimating failure probability].

    Science.gov (United States)

    Llorca, Javier; Delgado-Rodríguez, Miguel

    2004-01-01

    To show the impact of competing risks of death on survival analysis. We provide an example of survival time without chronic rejection after heart transplantation, where death before rejection acts as a competing risk. Using a computer simulation, we compare the Kaplan-Meier estimator and the multiple decrement model. The Kaplan-Meier method overestimated the probability of rejection. Next, we illustrate the use of the multiple decrement model to analyze secondary end points (in our example: death after rejection). Finally, we discuss Kaplan-Meier assumptions and why they fail in the presence of competing risks. Survival analysis should be adjusted for competing risks of death to avoid overestimation of the risk of rejection produced with the Kaplan-Meier method.
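    A minimal numerical illustration of the bias, using constant cause-specific hazards (values assumed): treating death as censoring, the Kaplan-Meier complement targets a hypothetical world without the competing risk, while the multiple decrement (cumulative incidence) function gives the real-world probability of rejection:

      import numpy as np

      lam_rej, lam_death = 0.10, 0.05          # assumed cause-specific hazards, per year
      t = 5.0

      km_complement = 1 - np.exp(-lam_rej * t)                # 1 - KM, death censored
      tot = lam_rej + lam_death
      cif_rejection = lam_rej / tot * (1 - np.exp(-tot * t))  # multiple decrement

      print(f"1 - KM at {t:.0f} y:               {km_complement:.3f}")  # overestimate
      print(f"cumulative incidence at {t:.0f} y: {cif_rejection:.3f}")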

  16. Brittle Creep Failure, Critical Behavior, and Time-to-Failure Prediction of Concrete under Uniaxial Compression

    Directory of Open Access Journals (Sweden)

    Yingchong Wang

    2015-01-01

    Understanding the time-dependent brittle deformation behavior of concrete, as a main building material, is fundamental for lifetime prediction and engineering design. Herein, we present experimental measurements of brittle creep failure, critical behavior, and the dependence of time-to-failure on the secondary creep rate of concrete under sustained uniaxial compression. A complete evolution process of creep failure is achieved. Three typical creep stages are observed: the primary (decelerating), secondary (steady-state), and tertiary (accelerating) creep stages. The time-to-failure shows sample-specificity although all samples exhibit a similar creep process. All specimens exhibit a critical power-law behavior with an exponent of −0.51 ± 0.06, approximately equal to the theoretical value of −1/2. All samples have a long-term secondary stage characterized by a constant strain rate that dominates the lifetime of a sample. The average creep rate, expressed as the total creep strain over the lifetime (tf − t0) of each specimen, shows a power-law dependence on the secondary creep rate with an exponent of −1. This could provide a clue to the prediction of the time-to-failure of concrete, based on monitoring of the creep behavior at the steady stage.
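    The exponent of −1 is consistent with a Monkman-Grant-type relation, in which lifetime scales inversely with the secondary creep rate, so a monitored steady-state rate yields a lifetime estimate. A toy sketch; the constant C_MG is assumed, not taken from the paper:

      C_MG = 0.012                        # assumed total creep strain over the lifetime
      for rate in (1e-7, 1e-6, 1e-5):     # monitored steady-state strain rates, 1/s
          t_f = C_MG / rate               # predicted time-to-failure, s
          print(f"rate {rate:.0e} /s -> time-to-failure ≈ {t_f / 3600:.1f} h")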

  17. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    To solve the problem that the dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirement of the fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. The Allan variances of the signals within the time-variant window are then computed to obtain the DAVAR of the FOG signal and describe the dynamic characteristics of the time-varying FOG signal. Additionally, a performance evaluation index for the algorithm based on a radar chart is proposed. Experimental results show that, compared with DAVAR methods using different fixed window lengths, the method identifies the change of the FOG signal with time effectively and enhances the performance evaluation index by at least 30%.

  18. Prediction of accident sequence probabilities in a nuclear power plant due to earthquake events

    International Nuclear Information System (INIS)

    Hudson, J.M.; Collins, J.D.

    1980-01-01

    This paper presents a methodology to predict accident probabilities in nuclear power plants subject to earthquakes. The resulting computer program accesses response data to compute component failure probabilities using fragility functions. Using logical failure definitions for systems, and the calculated component failure probabilities, initiating event and safety system failure probabilities are synthesized. The incorporation of accident sequence expressions allows the calculation of terminal event probabilities. Accident sequences, with their occurrence probabilities, are finally coupled to a specific release category. A unique aspect of the methodology is an analytical procedure for calculating top event probabilities based on the correlated failure of primary events
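    A minimal sketch of the fragility step, assuming the common lognormal fragility form and independence between primary events (the paper's procedure additionally handles correlated failures); the capacities, fault-tree logic, and demand value are illustrative:

      import math

      def phi(x):
          """Standard normal CDF."""
          return 0.5 * (1 + math.erf(x / math.sqrt(2)))

      def fragility(a, a_median, beta):
          """P(component fails | peak ground acceleration a), lognormal fragility."""
          return phi(math.log(a / a_median) / beta)

      a = 0.5                                          # demand, g (assumed)
      p_pump = fragility(a, a_median=1.2, beta=0.45)   # illustrative capacities
      p_tank = fragility(a, a_median=0.8, beta=0.35)

      # Fault-tree synthesis: top event = (both pumps fail) OR (tank fails).
      p_top = 1 - (1 - p_pump**2) * (1 - p_tank)
      print(f"P(top event | a = {a} g) = {p_top:.3e}")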

  19. Probability of failure of the waste hoist brake system at the Waste Isolation Pilot Plant (WIPP)

    International Nuclear Information System (INIS)

    Greenfield, M.A.; Sargent, T.J.; Stanford Univ., CA

    1998-01-01

    In its most recent report on the annual probability of failure of the waste hoist brake system at the Waste Isolation Pilot Plant (WIPP), the annual failure rate is calculated to be 1.3E(-7)(1/yr), rounded off from 1.32E(-7). A calculation by the Environmental Evaluation Group (EEG) produces a result that is about 4% higher, namely 1.37E(-7)(1/yr). The difference is due to a minor error in the US Department of Energy (DOE) calculations in the Westinghouse 1996 report. WIPP's hoist safety relies on a braking system consisting of a number of components including two crucial valves. The failure rate of the system needs to be recalculated periodically to accommodate new information on component failure, changes in maintenance and inspection schedules, occasional incidents such as a hoist traveling out-of-control, either up or down, and changes in the design of the brake system. This report examines DOE's last two reports on the redesigned waste hoist system. In its calculations, the DOE has accepted one EEG recommendation and is using more current information about the component failures rates, the Nonelectronic Parts Reliability Data (NPRD). However, the DOE calculations fail to include the data uncertainties which are described in detail in the NPRD reports. The US Nuclear Regulatory Commission recommended that a system evaluation include mean estimates of component failure rates and take into account the potential uncertainties that exist so that an estimate can be made on the confidence level to be ascribed to the quantitative results. EEG has made this suggestion previously and the DOE has indicated why it does not accept the NRC recommendation. Hence, this EEG report illustrates the importance of including data uncertainty using a simple statistical example

  20. Probability elicitation to inform early health economic evaluations of new medical technologies: a case study in heart failure disease management.

    Science.gov (United States)

    Cao, Qi; Postmus, Douwe; Hillege, Hans L; Buskens, Erik

    2013-06-01

    Early estimates of the commercial headroom available to a new medical device can assist producers of health technology in making appropriate product investment decisions. The purpose of this study was to illustrate how this quantity can be captured probabilistically by combining probability elicitation with early health economic modeling. The technology considered was a novel point-of-care testing device in heart failure disease management. First, we developed a continuous-time Markov model to represent the patients' disease progression under the current care setting. Next, we identified the model parameters that are likely to change after the introduction of the new device and interviewed three cardiologists to capture the probability distributions of these parameters. Finally, we obtained the probability distribution of the commercial headroom available per measurement by propagating the uncertainty in the model inputs to uncertainty in modeled outcomes. For a willingness-to-pay value of €10,000 per life-year, the median headroom available per measurement was €1.64 (interquartile range €0.05-€3.16) when the measurement frequency was assumed to be daily. In the subsequently conducted sensitivity analysis, this median value increased to a maximum of €57.70 for different combinations of the willingness-to-pay threshold and the measurement frequency. Probability elicitation can successfully be combined with early health economic modeling to obtain the probability distribution of the headroom available to a new medical technology. Subsequently feeding this distribution into a product investment evaluation method enables stakeholders to make more informed decisions regarding to which markets a currently available product prototype should be targeted. Copyright © 2013. Published by Elsevier Inc.

  1. Time dependent non-extinction probability for prompt critical systems

    International Nuclear Information System (INIS)

    Gregson, M. W.; Prinja, A. K.

    2009-01-01

    The time dependent non-extinction probability equation is presented for slab geometry. Numerical solutions are provided for a nested inner/outer iteration routine where the fission terms (both linear and non-linear) are updated and then held fixed over the inner scattering iteration. Time dependent results are presented highlighting the importance of the injection position and angle. The iteration behavior is also described as the steady state probability of initiation is approached for both small and large time steps. Theoretical analysis of the nested iteration scheme is shown and highlights poor numerical convergence for marginally prompt critical systems. An acceleration scheme for the outer iterations is presented to improve convergence of such systems. Theoretical analysis of the acceleration scheme is also provided and the associated decrease in computational run time addressed. (authors)

  2. Combinatorial analysis of systems with competing failures subject to failure isolation and propagation effects

    International Nuclear Information System (INIS)

    Xing Liudong; Levitin, Gregory

    2010-01-01

    This paper considers the reliability analysis of binary-state systems, subject to propagated failures with global effect, and failure isolation phenomena. Propagated failures with global effect are common-cause failures originated from a component of a system/subsystem causing the failure of the entire system/subsystem. Failure isolation occurs when the failure of one component (referred to as a trigger component) causes other components (referred to as dependent components) within the same system to become isolated from the system. On the one hand, failure isolation makes the isolated dependent components unusable; on the other hand, it prevents the propagation of failures originated from those dependent components. However, the failure isolation effect does not exist if failures originated in the dependent components already propagate globally before the trigger component fails. In other words, there exists a competition in the time domain between the failure of the trigger component that causes failure isolation and propagated failures originated from the dependent components. This paper presents a combinatorial method for the reliability analysis of systems subject to such competing propagated failures and failure isolation effect. Based on the total probability theorem, the proposed method is analytical, exact, and has no limitation on the type of time-to-failure distributions for the system components. An illustrative example is given to demonstrate the basics and advantages of the proposed method.

  3. Estimating Recovery Failure Probabilities in Off-normal Situations from Full-Scope Simulator Data

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yochan; Park, Jinkyun; Kim, Seunghwan; Choi, Sun Yeong; Jung, Wondea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    As part of this effort, KAERI developed the Human Reliability data EXtraction (HuREX) framework and is collecting full-scope simulator-based human reliability data into the OPERA (Operator PErformance and Reliability Analysis) database. In this study, continuing the series of estimations of HEPs and PSF effects, recovery failure probabilities (RFPs), significant information for quantitative HRA, were produced from the OPERA database. Unsafe acts can occur at any time in safety-critical systems, and operators often manage the systems by discovering their errors and eliminating or mitigating them. Several studies have categorized recovery behaviors in order to model recovery processes or recovery strategies. Because recent human error trends need to be considered in a human reliability analysis, the work of Jang et al. can be seen as an essential data collection effort. However, since those empirical results regarding soft controls were produced in a controlled laboratory environment with student participants, it is necessary to analyze a wide range of operator behaviors using full-scope simulators. This paper presents statistics related to human error recovery behaviors obtained from full-scope simulations in which on-site operators participated. Recovery effects due to shift changes or technical support centers were not considered owing to a lack of simulation data.

  4. Time Dependence of Collision Probabilities During Satellite Conjunctions

    Science.gov (United States)

    Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.

    2017-01-01

    The NASA Conjunction Assessment Risk Analysis (CARA) team has recently implemented updated software to calculate the probability of collision (Pc) for Earth-orbiting satellites. The algorithm can employ complex dynamical models for orbital motion, and account for the effects of non-linear trajectories as well as both position and velocity uncertainties. This “3D Pc” method entails computing a 3-dimensional numerical integral for each estimated probability. Our analysis indicates that the 3D method provides several new insights over the traditional “2D Pc” method, even when approximating the orbital motion using the relatively simple Keplerian two-body dynamical model. First, the formulation provides the means to estimate variations in the time derivative of the collision probability, or the probability rate, Rc. For close-proximity satellites, such as those orbiting in formations or clusters, Rc variations can show multiple peaks that repeat or blend with one another, providing insight into the ongoing temporal distribution of risk. For single, isolated conjunctions, Rc analysis provides the means to identify and bound the times of peak collision risk. Additionally, analysis of multiple actual archived conjunctions demonstrates that the commonly used “2D Pc” approximation can occasionally provide inaccurate estimates. These include cases in which the 2D method yields negligibly small probabilities (e.g., Pc < 10^-10), but the 3D estimates are sufficiently large to prompt increased monitoring or collision mitigation (e.g., Pc ≥ 10^-5). Finally, the archive analysis indicates that a relatively efficient calculation can be used to identify which conjunctions will have negligibly small probabilities. This small-Pc screening test can significantly speed the overall risk analysis computation for large numbers of conjunctions.
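    For contrast with the 3D method, the traditional 2D Pc reduces to integrating a bivariate Gaussian over the combined hard-body disc in the conjunction plane. A brute-force numerical sketch, with an assumed miss vector, covariance, and hard-body radius:

      import numpy as np

      mu = np.array([120.0, 40.0])              # miss vector in the plane, m (assumed)
      cov = np.array([[2500.0, 400.0],
                      [400.0, 900.0]])          # combined position covariance, m^2
      hbr = 20.0                                # combined hard-body radius, m

      inv, det = np.linalg.inv(cov), np.linalg.det(cov)
      xs = np.linspace(-hbr, hbr, 201)
      dx = xs[1] - xs[0]
      X, Y = np.meshgrid(xs, xs)
      mask = X**2 + Y**2 <= hbr**2              # integrate over the hard-body disc only
      dxv, dyv = X - mu[0], Y - mu[1]
      q = inv[0, 0]*dxv**2 + 2*inv[0, 1]*dxv*dyv + inv[1, 1]*dyv**2
      pc = np.exp(-0.5 * q)[mask].sum() * dx * dx / (2 * np.pi * np.sqrt(det))
      print(f"2D Pc ≈ {pc:.2e}")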

  5. Listeners' processing of a given reduced word pronunciation variant directly reflects their exposure to this variant: Evidence from native listeners and learners of French.

    Science.gov (United States)

    Brand, Sophie; Ernestus, Mirjam

    2018-05-01

    In casual conversations, words often lack segments. This study investigates whether listeners rely on their experience with reduced word pronunciation variants during the processing of single segment reduction. We tested three groups of listeners in a lexical decision experiment with French words produced either with or without word-medial schwa (e.g., /ʀəvy/ and /ʀvy/ for revue). Participants also rated the relative frequencies of the two pronunciation variants of the words. If the recognition accuracy and reaction times (RTs) for a given listener group correlate best with the frequencies of occurrence holding for that given listener group, recognition is influenced by listeners' exposure to these variants. Native listeners' relative frequency ratings correlated well with their accuracy scores and RTs. Dutch advanced learners' accuracy scores and RTs were best predicted by their own ratings. In contrast, the accuracy and RTs from Dutch beginner learners of French could not be predicted by any relative frequency rating; the rating task was probably too difficult for them. The participant groups showed behaviour reflecting their difference in experience with the pronunciation variants. Our results strongly suggest that listeners store the frequencies of occurrence of pronunciation variants, and consequently the variants themselves.

  6. The case of escape probability as linear in short time

    Science.gov (United States)

    Marchewka, A.; Schuss, Z.

    2018-02-01

    We derive rigorously the short-time escape probability of a quantum particle from its compactly supported initial state, which has a discontinuous derivative at the boundary of the support. We show that this probability is linear in time, which seems to be a new result. The novelty of our calculation is the inclusion of the boundary layer of the propagated wave function formed outside the initial support. This result has applications to the decay law of the particle, to the Zeno behaviour, quantum absorption, time of arrival, quantum measurements, and more.

  7. Fuzzy Failure Probability of Transmission Pipelines in the Niger ...

    African Journals Online (AJOL)

    We undertake the apportioning of failure possibility on twelve identified third party activities contributory to failure of transmission pipelines in the Niger Delta region of Nigeria, using the concept of fuzzy possibility scores. Expert elicitation technique generates linguistic variables that are transformed using fuzzy set theory ...

  8. Re‐estimated effects of deep episodic slip on the occurrence and probability of great earthquakes in Cascadia

    Science.gov (United States)

    Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy

    2013-01-01

    Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory-observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre-existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible for effective normal stresses of 10 MPa or more, and increases by only 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model, for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.

  9. Calculation of the pipes failure probability of the Rcic system of a nuclear power station by means of software WinPRAISE 07

    International Nuclear Information System (INIS)

    Jasso G, J.; Diaz S, A.; Mendoza G, G.; Sainz M, E.; Garcia de la C, F. M.

    2014-10-01

    Fatigue crack growth and propagation is a typical degradation mechanism present in the nuclear industry as in conventional industry; the unstable propagation of a crack can cause the catastrophic failure of a metallic component, even one of high ductility. For this reason, programmed maintenance activities have been established in industry using visual and/or ultrasonic inspection techniques at an established periodicity, making it possible to follow these growths and control their undesirable effects; however, these activities increase operating costs and, in the particular case of the nuclear industry, increase the radiation exposure of the participating personnel. Mathematical processes that integrate concepts of uncertainty, material properties, and the probability associated with inspection results have become a powerful tool for evaluating component reliability, reducing costs and exposure levels. In this work, the failure probability due to fatigue growth of preexisting cracks is evaluated for the pipes of the Reactor Core Isolation Cooling (RCIC) system of a nuclear power station. The WinPRAISE 07 (Piping Reliability Analysis Including Seismic Events) software, founded on the principles of probabilistic fracture mechanics, was used. The obtained failure probabilities evidenced good behavior of the analyzed pipes, with a maximum of the order of 1.0E-6; it is therefore concluded that the performance of these pipe lines is reliable even when the calculations are extrapolated to 10, 20, 30, and 40 years of service. (Author)
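    The record does not give the WinPRAISE inputs; a generic probabilistic-fracture-mechanics sketch of the same idea is Monte Carlo sampling of preexisting crack sizes and Paris-law growth to a critical depth. All distributions and constants below are assumed for illustration:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 20000
      a0 = rng.lognormal(np.log(1.0), 0.6, n)       # initial crack depth, mm (assumed)
      C = rng.lognormal(np.log(3e-12), 0.5, n)      # Paris coefficient (assumed)
      m = 3.0                                       # Paris exponent
      dsigma, Y, a_crit = 200.0, 1.12, 12.0         # stress range MPa, geometry, mm
      cycles = 2000 * 40                            # load cycles over 40 years

      # Closed-form Paris-law integration (m != 2), working in metres:
      # N_f = (a_c^(1-m/2) - a_0^(1-m/2)) / (C * (Y*dsigma*sqrt(pi))^m * (1-m/2))
      k = C * (Y * dsigma * np.sqrt(np.pi))**m * (1 - m / 2)
      n_fail = ((a_crit * 1e-3)**(1 - m/2) - (a0 * 1e-3)**(1 - m/2)) / k

      print(f"P(failure within 40 y) ≈ {np.mean(n_fail < cycles):.1e}")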

  10. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distribution on 16 types of failure time data collected in real software projects
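    The mean value function of such a model is Λ(t) = ω·Φ((t − μ)/σ), with ω the expected total number of faults. A sketch of fitting it by direct maximum likelihood (the paper instead develops an EM algorithm); the failure times are hypothetical:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      t = np.array([2., 5., 9., 14., 18., 21., 25., 28., 30., 33., 36., 41.])
      T = 45.0                                   # end of test observation, days

      def neg_loglik(p):
          omega, mu, sigma = p
          if omega <= 0 or sigma <= 0:
              return np.inf
          lam = omega * norm.pdf((t - mu) / sigma) / sigma   # NHPP intensity at failures
          return -(np.sum(np.log(lam)) - omega * norm.cdf((T - mu) / sigma))

      omega, mu, sigma = minimize(neg_loglik, x0=[15, 20, 10], method="Nelder-Mead").x
      print(f"estimated total faults ≈ {omega:.1f}, peak detection time ≈ {mu:.1f} d")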

  11. Pemodelan Markov Switching Dengan Time-varying Transition Probability

    OpenAIRE

    Savitri, Anggita Puri; Warsito, Budi; Rahmawati, Rita

    2016-01-01

    The exchange rate is an economic variable that reflects a country's state of economy. It fluctuates over time because of its ability to switch condition or regime under economic and political factors. The changes in the exchange rate are depreciation and appreciation. Therefore, it can be modeled using Markov Switching with Time-Varying Transition Probability, which observes the conditional changes and uses an information variable. From this model, time-varying transition probabilities...

  12. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation-maximization algorithm is proposed to construct a suitable continuous-time Markov chain that models the failure time data as the first time of reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm^2) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
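    The quantity such an EM algorithm maximizes is the likelihood of the absorption (failure) times of the chain. A sketch of evaluating that likelihood for an assumed two-transient-state, upper-triangular sub-generator:

      import numpy as np
      from scipy.linalg import expm

      S = np.array([[-1.2, 1.0],
                    [0.0, -0.5]])        # sub-generator over transient states (assumed)
      pi = np.array([1.0, 0.0])          # initial distribution (assumed)
      s = -S @ np.ones(2)                # exit rates into the absorbing (failure) state

      def density(t):
          """First-passage (failure-time) density f(t) = pi @ expm(S t) @ s."""
          return pi @ expm(S * t) @ s

      data = [0.7, 1.4, 2.9, 3.3, 5.1]   # hypothetical failure times
      loglik = sum(np.log(density(t)) for t in data)
      print(f"log-likelihood = {loglik:.3f}")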

  13. Use of fault tree technique to determine the failure probability of electrical systems of IE class in nuclear installations

    International Nuclear Information System (INIS)

    Cruz S, W.D.

    1988-01-01

    This paper refers to the emergency safety systems of the Angra NPP (Brazil, 1626 MW(e)), such as containment, heat removal, the emergency removal system, removal of radioactive elements from the containment environment, borated water injection, etc. Associated with these systems, the failure probability of the IE Class busbars is calculated; IE Class is a safety classification for electrical equipment essential to the systems mentioned above.

  14. Uncertainty about probability: a decision analysis perspective

    International Nuclear Information System (INIS)

    Howard, R.A.

    1988-01-01

    The issue of how to think about uncertainty about probability is framed and analyzed from the viewpoint of a decision analyst. The failure of nuclear power plants is used as an example. The key idea is to think of probability as describing a state of information on an uncertain event, and to pose the issue of uncertainty in this quantity as uncertainty about a number that would be definitive: it has the property that you would assign it as the probability if you knew it. Logical consistency requires that the probability to assign to a single occurrence in the absence of further information be the mean of the distribution of this definitive number, not the median as is sometimes suggested. Any decision that must be made without the benefit of further information must also be made using the mean of the definitive number's distribution. With this formulation, we find further that the probability of r occurrences in n exchangeable trials will depend on the first n moments of the definitive number's distribution. In making decisions, the expected value of clairvoyance on the occurrence of the event must be at least as great as that on the definitive number. If one of the events in question occurs, then the increase in probability of another such event is readily computed. This means, in terms of coin tossing, that unless one is absolutely sure of the fairness of a coin, seeing a head must increase the probability of heads, contrary to the usual intuition. A numerical example for nuclear power shows that the failure of one plant of a group with a low probability of failure can significantly increase the probability that must be assigned to failure of a second plant in the group.

  15. Clinical findings and survival time in dogs with advanced heart failure.

    Science.gov (United States)

    Beaumier, Amelie; Rush, John E; Yang, Vicky K; Freeman, Lisa M

    2018-04-10

    Dogs with advanced heart failure are a clinical challenge for veterinarians but there are no studies reporting clinical features and outcome of this population. To describe clinical findings and outcome of dogs with advanced heart failure caused by degenerative mitral valve disease (DMVD). Fifty-four dogs with advanced heart failure because of DMVD. For study purposes, advanced heart failure was defined as recurrence of congestive heart failure signs despite receiving the initially prescribed dose of pimobendan, angiotensin-converting-enzyme inhibitor (ACEI), and furosemide >4 mg/kg/day. Data were collected for the time of diagnosis of Stage C heart failure and time of diagnosis of advanced heart failure. Date of death was recorded. At the diagnosis of advanced heart failure, doses of pimobendan (n = 30), furosemide (n = 28), ACEI (n = 13), and spironolactone (n = 4) were increased, with ≥1 new medications added in most dogs. After initial diagnosis of advanced heart failure, 38 (70%) dogs had additional medications adjustments (median = 2 [range, 0-27]), with the final total medication number ranging from 2-10 (median = 5). Median survival time after diagnosis of advanced heart failure was 281 days (range, 3-885 days). Dogs receiving a furosemide dose >6.70 mg/kg/day had significantly longer median survival times (402 days [range, 3-885 days] versus 129 days [range 9-853 days]; P = .017). Dogs with advanced heart failure can have relatively long survival times. Higher furosemide dose and non-hospitalization were associated with longer survival. Copyright © 2018 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  16. A Semi-Continuous State-Transition Probability HMM-Based Voice Activity Detector

    Directory of Open Access Journals (Sweden)

    H. Othman

    2007-02-01

    We introduce an efficient hidden Markov model-based voice activity detection (VAD) algorithm with time-variant state-transition probabilities in the underlying Markov chain. The transition probabilities vary in an exponential charge/discharge scheme and are softly merged with the state conditional likelihood into a final VAD decision. Working in the domain of ITU-T G.729 parameters, with no additional cost for feature extraction, the proposed algorithm significantly outperforms G.729 Annex B VAD while providing a balanced tradeoff between clipping and false detection errors. The performance compares very favorably with the adaptive multirate VAD, option 2 (AMR2).
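    A toy sketch of the exponential charge/discharge idea: the speech self-transition probability is pulled toward a high target while speech is detected and decays toward a low one otherwise, and it is softly merged with the frame's likelihood ratio. All constants and likelihood values are illustrative, not those of G.729 or the paper:

      import math

      p_min, p_max, tau = 0.60, 0.99, 8.0     # bounds and time constant in frames (assumed)

      def update_self_transition(p, speech_detected):
          """Exponential charge toward p_max on speech, discharge toward p_min."""
          target = p_max if speech_detected else p_min
          return target + (p - target) * math.exp(-1.0 / tau)

      p = p_min
      for frame, llr in enumerate([-2.0, -1.5, 3.0, 4.0, 2.5, -0.5, -3.0]):
          odds = math.exp(llr) * p / (1.0 - p)   # likelihood ratio merged with prior odds
          speech = odds > 1.0                    # final VAD decision for this frame
          p = update_self_transition(p, speech)
          print(frame, speech, round(p, 3))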

  17. A Semi-Continuous State-Transition Probability HMM-Based Voice Activity Detector

    Directory of Open Access Journals (Sweden)

    Othman H

    2007-01-01

    We introduce an efficient hidden Markov model-based voice activity detection (VAD) algorithm with time-variant state-transition probabilities in the underlying Markov chain. The transition probabilities vary in an exponential charge/discharge scheme and are softly merged with the state conditional likelihood into a final VAD decision. Working in the domain of ITU-T G.729 parameters, with no additional cost for feature extraction, the proposed algorithm significantly outperforms G.729 Annex B VAD while providing a balanced tradeoff between clipping and false detection errors. The performance compares very favorably with the adaptive multirate VAD, option 2 (AMR2).

  18. Methods, apparatus and system for notification of predictable memory failure

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-01-03

    A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
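    A minimal sketch of the claimed flow (gather condition information, estimate a failure probability, derive a threshold, signal when exceeded); the probability model and threshold rule are placeholders, not the patented ones:

      def memory_failure_probability(ecc_rate, temperature_c, age_hours):
          """Placeholder model combining wear-out, ECC-error, and thermal terms."""
          wearout = 1e-9 * age_hours
          return min(1.0, wearout + 1e-4 * ecc_rate + 1e-6 * max(0, temperature_c - 70))

      def failure_threshold(job_hours):
          """Longer jobs tolerate less risk before checkpointing is advised."""
          return 0.01 / max(job_hours, 1)

      p = memory_failure_probability(ecc_rate=3.0, temperature_c=82, age_hours=40000)
      if p > failure_threshold(job_hours=12):
          print("signal: predicted memory failure - checkpoint or migrate the job")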

  19. Persistent trigeminal artery/persistent trigeminal artery variant and coexisting variants of the head and neck vessels diagnosed using 3 T MRA

    International Nuclear Information System (INIS)

    Bai, M.; Guo, Q.; Li, S.

    2013-01-01

    Aim: To report the prevalence and characteristic features of persistent trigeminal artery (PTA), PTA variant (PTAV), and other variants of the head and neck vessels, identified using magnetic resonance angiography (MRA). Materials and methods: The three-dimensional (3D) time of flight (TOF) MRA and 3D contrast-enhanced (CE) MRA images of 6095 consecutive patients who underwent 3 T MRA at Liaocheng People's Hospital from 1 September 2008 through 31 May 2012 were retrospectively reviewed and analysed. Thirty-two patients were excluded because of suboptimal image quality or internal carotid artery (ICA) occlusion. Results: The prevalence of both PTA and PTAV was 0.63% (PTA, 26 cases; PTAV, 12 cases). The prevalence of coexisting variants of the head and neck vessels in cases of PTA/PTAV was 52.6% (20 of 38 cases). The vascular variants that coexisted with cases of PTA/PTAV were as follows: the intracranial arteries varied in 10 cases, the origin of the supra-aortic arteries varied in nine cases, the vertebral artery (VA) varied in 14 cases, and six cases displayed fenestrations. Fifteen of the 20 cases contained more than two types of variants. Conclusion: The prevalence of both PTA and PTAV was 0.63%. Although PTA and PTAV are rare vascular variants, they frequently coexist with other variants of the head and neck vessels. Multiple vascular variations can coexist in a single patient. Recognizing PTA, PTAV, and other variants of the head and neck vessels is crucial when planning a neuroradiological intervention or surgery. Recognizing the medial PTA is very important in clinical practice when performing trans-sphenoidal surgery on the pituitary as failure to do so could result in massive haemorrhage

  20. Uncertainty analysis with statistically correlated failure data

    International Nuclear Information System (INIS)

    Modarres, M.; Dezfuli, H.; Roush, M.L.

    1987-01-01

    Likelihood of occurrence of the top event of a fault tree, or of the sequences of an event tree, is estimated from the failure probabilities of the components that constitute the events of the fault/event tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. At present, most fault tree calculations are based on uncorrelated component failure data. This chapter describes a methodology for assessing the probability intervals for the top-event failure probability of fault trees, or for the frequency of occurrence of event tree sequences, when event failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment matching technique is used to obtain the probability distribution function of the top event by fitting the Johnson S_B distribution. The computer program CORRELATE was developed to perform the calculations necessary for the implementation of the method. (author)
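    A minimal sketch of the second-order moment step, assuming an OR-gate top event and a given covariance matrix for the correlated component failure probabilities (the Johnson S_B fitting step is omitted):

        import numpy as np

        def top_event_or(p):
            """Top-event probability for an OR gate over the basic events."""
            return 1.0 - np.prod(1.0 - p)

        def moment_propagation(f, mu, cov, h=1e-4):
            """Second-order mean and first-order variance of f via Taylor expansion,
            keeping the covariance terms of the correlated inputs."""
            n = len(mu)
            grad = np.zeros(n)
            hess = np.zeros((n, n))
            for i in range(n):
                e = np.zeros(n); e[i] = h
                grad[i] = (f(mu + e) - f(mu - e)) / (2 * h)
            for i in range(n):
                for j in range(n):
                    ei = np.zeros(n); ei[i] = h
                    ej = np.zeros(n); ej[j] = h
                    hess[i, j] = (f(mu + ei + ej) - f(mu + ei - ej)
                                  - f(mu - ei + ej) + f(mu - ei - ej)) / (4 * h * h)
            mean = f(mu) + 0.5 * np.sum(hess * cov)   # 0.5 * trace(H @ cov)
            var = grad @ cov @ grad
            return mean, var

        mu = np.array([0.01, 0.02, 0.015])       # component failure probabilities
        sd = 0.3 * mu                            # 30% coefficient of variation
        cov = 0.5 * np.outer(sd, sd)             # correlation 0.5 off-diagonal
        np.fill_diagonal(cov, sd ** 2)
        mean, var = moment_propagation(top_event_or, mu, cov)
        print(f"top event: mean={mean:.4e}, sd={var ** 0.5:.4e}")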

  1. A conservative bound for the probability of failure of a 1-out-of-2 protection system with one hardware-only and one software-based protection train

    International Nuclear Information System (INIS)

    Bishop, Peter; Bloomfield, Robin; Littlewood, Bev; Popov, Peter; Povyakalo, Andrey; Strigini, Lorenzo

    2014-01-01

    Redundancy and diversity have long been used as means to obtain high reliability in critical systems. While it is easy to show that, say, a 1-out-of-2 diverse system will be more reliable than each of its two individual “trains”, assessing the actual reliability of such systems can be difficult because the trains cannot be assumed to fail independently. If we cannot claim independence of train failures, the computation of system reliability is difficult, because we would need to know the probability of failure on demand (pfd) for every possible demand. These are unlikely to be known in the case of software. Claims for software often concern its marginal pfd, i.e. the average across all possible demands. In this paper we consider the case of a 1-out-of-2 safety protection system in which one train contains software (and hardware), and the other train contains only hardware equipment. We show that a useful upper (i.e. conservative) bound can be obtained for the system pfd using only the unconditional pfd for software together with information about the variation of hardware failure probability across demands, which is likely to be known or estimable. The worst-case result is obtained by “allocating” software failure probability among demand “classes” so as to maximize the system pfd.
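    The worst-case allocation can be written as a small fractional-knapsack computation. The sketch below assumes known demand-class probabilities and per-class hardware pfd values and allocates the software failure probability to the classes with the highest hardware pfd; it is a simplified reading of the bound, not the paper's full derivation.

        def conservative_system_pfd(demand_probs, hw_pfd, sw_marginal_pfd):
            """Upper bound on the 1-out-of-2 system pfd.

            Maximizes sum_k q_k * pH_k * pS_k subject to
            sum_k q_k * pS_k = sw_marginal_pfd and 0 <= pS_k <= 1.
            """
            budget = sw_marginal_pfd
            classes = sorted(zip(demand_probs, hw_pfd),
                             key=lambda c: c[1], reverse=True)
            bound = 0.0
            for q, ph in classes:
                if budget <= 0.0 or q <= 0.0:
                    break
                ps = min(1.0, budget / q)   # put software failures where hardware is weakest
                bound += q * ph * ps
                budget -= q * ps
            return bound

        # three demand classes with illustrative hardware pfd values
        print(conservative_system_pfd([0.7, 0.2, 0.1], [1e-4, 1e-3, 1e-2], 1e-3))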

  2. Failure to thrive in babies and toddlers.

    Science.gov (United States)

    Goh, Lay Hoon; How, Choon How; Ng, Kar Hui

    2016-06-01

    Failure to thrive in a child is defined as 'lack of expected normal physical growth' or 'failure to gain weight'. Diagnosis requires repeated growth measurements over time using local, age-appropriate growth centile charts. Premature babies with appropriate growth velocity and children with 'catch-down' growth, constitutional growth delay or familial short stature show normal growth variants, and usually do not require further evaluation. In Singapore, the most common cause of failure to thrive in children is malnutrition secondary to psychosocial and caregiver factors. 'Picky eating' is common in the local setting and best managed with an authoritative feeding style from caregivers. Other causes are malabsorption and existing congenital or chronic medical conditions. Child neglect or abuse should always be ruled out. Iron deficiency is the most common complication. The family doctor plays a pivotal role in early detection, timely treatment, appropriate referrals and close monitoring of 'catch-up' growth in these children. Copyright: © Singapore Medical Association.

  3. Genetic Variants in Transcription Factors Are Associated With the Pharmacokinetics and Pharmacodynamics of Metformin

    Science.gov (United States)

    Goswami, S; Yee, SW; Stocker, S; Mosley, JD; Kubo, M; Castro, R; Mefford, JA; Wen, C; Liang, X; Witte, J; Brett, C; Maeda, S; Simpson, MD; Hedderson, MM; Davis, RL; Roden, DM; Giacomini, KM; Savic, RM

    2014-01-01

    One-third of type 2 diabetes patients do not respond to metformin. Genetic variants in metformin transporters have been extensively studied as a likely contributor to this high failure rate. Here, we investigate, for the first time, the effect of genetic variants in transcription factors on metformin pharmacokinetics (PK) and response. Overall, 546 patients and healthy volunteers contributed their genome-wide, pharmacokinetic (235 subjects), and HbA1c data (440 patients) for this analysis. Five variants in specificity protein 1 (SP1), a transcription factor that modulates the expression of metformin transporters, were associated with changes in treatment HbA1c (P < 0.01) and metformin secretory clearance (P < 0.05). Population pharmacokinetic modeling further confirmed a 24% reduction in apparent clearance in homozygous carriers of one such variant, rs784888. Genetic variants in other transcription factors, peroxisome proliferator–activated receptor-α and hepatocyte nuclear factor 4-α, were significantly associated with HbA1c change only. Overall, our study highlights the importance of genetic variants in transcription factors as modulators of metformin PK and response. PMID:24853734

  4. Imaging a Time-variant Earthquake Focal Region along an Interplate Boundary

    Science.gov (United States)

    Tsuruga, K.; Kasahara, J.; Hasada, Y.; Fujii, N.

    2010-12-01

    We show a preliminary result of a trial for detecting a time-variant earthquake focal region along an interplate boundary by means of a new imaging method, demonstrated through numerical simulation. Remarkable seismic reflections from the interplate boundaries of a subducting oceanic plate have been observed in the Japan Trench (Mochizuki et al., 2005) and in the Nankai Trough (Iidaka et al., 2003). These strong seismic reflections, occurring in currently aseismic zones, suggest the existence of fluid along the subduction boundary, and they are considered to be closely related to a future huge earthquake. Seismic ACROSS has the potential to monitor changes in the transfer function along the propagating ray paths, using accurately controlled, repeated transmission and reception of steady continuous signals (Kumazawa et al., 2000). If the physical state in a focal region along the interplate boundary changed sufficiently in time and space, for instance through increasing or decreasing fluid flow, we could detect differences in the amplitude and/or travel time of particular reflection phases from the time-variant target region. In this study, we first investigated the characteristics of the seismograms and their differences before and after a change of the target region through numerical simulation. Then, as one of the trials, we attempted to image such a time-variant target region by applying a finite-difference back-propagation technique in time and space to the waveform differences (after Kasahara et al., 2010). We used a 2-D seismic velocity model of central Japan (Tsuruga et al., 2005), assuming a time-variant target region of 200-m thickness along the subducting Philippine Sea plate at 30 km depth. Seismograms were calculated at 500-m intervals along a 260-km-long profile using FDM software (Larsen, 2000), for the case in which the P- and S-wave velocities (Vp and Vs) in the target region decreased by about 30% from before to after the change (e.g., Vp=3

  5. A technique for estimating the probability of radiation-stimulated failures of integrated microcircuits in low-intensity radiation fields: Application to the Spektr-R spacecraft

    Science.gov (United States)

    Popov, V. D.; Khamidullina, N. M.

    2006-10-01

    In developing the radio-electronic devices (RED) of spacecraft operating in the ionizing radiation fields of space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation exposure is the integrated microcircuits (IMC), especially those of large-scale (LSI) and very-large-scale (VLSI) integration. The main characteristic of an IMC that is taken into account when deciding whether to use a particular type of IMC in the onboard RED is the probability of non-failure operation (NFO) at the end of the spacecraft's lifetime. It should be noted that, until now, the NFO has been calculated only from the reliability characteristics, disregarding radiation effects. This paper presents a so-called “reliability” approach to determining the radiation tolerance of IMC, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to the RED onboard the Spektr-R spacecraft to be launched in 2007.

  6. Modelling the failure risk for water supply networks with interval-censored data

    International Nuclear Information System (INIS)

    García-Mora, B.; Debón, A.; Santamaría, C.; Carrión, A.

    2015-01-01

    In reliability analysis, some failures are not observed at the exact moment of occurrence; in such cases it can be more convenient to represent them by a time interval. In this study, we have used a generalized non-linear model developed for interval-censored data to treat the lifetime of a pipe from its time of installation until its failure. The aim of the analysis was to identify those network characteristics that may affect the risk of failure, and we carry out an exhaustive validation of the analysis. The results indicated that certain characteristics of the network adversely affect the risk of failure of a pipe: an increase in the length and pressure of the pipes, a small diameter, some materials used in the manufacture of the pipes, and the traffic on the street where the pipes are located. With the model correctly fitted to our data, we also provide simple tables that allow companies to easily calculate a pipe's probability of failure in the future. - Highlights: • We model the first failure time in a water supply company from Spain. • We fit arbitrarily interval-censored data with a generalized non-linear model. • The results are validated. We provide simple tables to easily calculate non-failure probabilities at different times.
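    For a single pipe without covariates, the interval-censored likelihood reduces to a few lines. The Weibull form and the toy inspection intervals below are assumptions standing in for the paper's generalized non-linear model:

        import numpy as np
        from scipy.optimize import minimize

        def neg_loglik(params, left, right):
            """Negative log-likelihood for interval-censored Weibull lifetimes:
            each failure is only known to lie in (left_i, right_i]."""
            shape, scale = np.exp(params)            # enforce positivity
            S = lambda t: np.exp(-(t / scale) ** shape)
            p = S(left) - S(right)                   # P(left < T <= right)
            return -np.sum(np.log(np.clip(p, 1e-12, None)))

        # toy data: failure times bracketed by successive inspection visits (years)
        left = np.array([5.0, 12.0, 8.0, 20.0])
        right = np.array([7.0, 15.0, 11.0, 25.0])
        res = minimize(neg_loglik, x0=np.log([1.5, 15.0]), args=(left, right))
        shape, scale = np.exp(res.x)
        print(f"Weibull shape={shape:.2f}, scale={scale:.1f}")
        print("P(no failure by t=10):", np.exp(-(10.0 / scale) ** shape))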

  7. SOME SIMPLE APPROACHES TO PLANNING THE INVENTORY OF SPARE COMPONENTS OF AN INDUSTRIAL SYSTEM

    Directory of Open Access Journals (Sweden)

    Alenka Brezavšek

    2016-02-01

    Full Text Available Two variants of a simple stochastic model for planning the inventory of spare components supporting the maintenance of an industrial system are developed. In both variants, the aim is to determine how many spare components are needed at the beginning of a planning interval to fulfil the demand for corrective replacements during that interval. Under the first variant, the acceptable probability of a spare shortage during the planning interval is chosen as the decision variable, while in the second variant the adequate spare inventory level is assessed from the expected number of component failures within the planning interval. The calculation of the number of spare components needed depends on the form of the probability density function of the component failure times. Different statistical density functions useful for describing this function are presented, and the advantages and disadvantages of using a particular density function in the model are discussed. The applicability of the model is demonstrated through illustrative numerical examples.
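    Under the common assumption of exponential component lifetimes (so that spare demand is Poisson), both variants reduce to one-liners; the failure rate and horizon below are illustrative:

        import math
        from scipy.stats import poisson

        def spares_variant1(failure_rate, horizon, shortage_prob=0.05):
            """Variant 1: smallest stock s with P(demand > s) <= shortage_prob."""
            mean_demand = failure_rate * horizon
            return int(poisson.ppf(1.0 - shortage_prob, mean_demand))

        def spares_variant2(failure_rate, horizon):
            """Variant 2: stock the expected number of failures, rounded up."""
            return math.ceil(failure_rate * horizon)

        print(spares_variant1(0.4, 10.0))   # 0.4 failures/year over 10 years -> 8
        print(spares_variant2(0.4, 10.0))   # -> 4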

  8. Reliability of structures by using probability and fatigue theories

    International Nuclear Information System (INIS)

    Lee, Ouk Sub; Kim, Dong Hyeok; Park, Yeon Chang

    2008-01-01

    Methodologies to calculate the failure probability and to estimate the reliability of fatigue-loaded structures are developed. The applicability of the methodologies is evaluated with the help of the fatigue crack growth models suggested by Paris and Walker. Probability methods such as FORM (first-order reliability method), SORM (second-order reliability method) and MCS (Monte Carlo simulation) are utilized. It is found that the failure probability decreases with an increase of the design fatigue life and the applied minimum stress, and with a decrease of the initial edge crack size, the applied maximum stress and the slope of the Paris equation. Furthermore, according to the sensitivity analysis of the random variables, the slope of the Paris equation affects the failure probability dominantly among the random variables in the Paris and Walker models
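    A minimal Monte Carlo version of such an analysis, assuming the Paris law with illustrative parameter distributions (units: MPa, m, cycles), is sketched below; FORM/SORM are replaced here by plain sampling:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        m_exp, Y = 3.0, 1.12                         # Paris exponent, geometry factor
        C = 10.0 ** rng.normal(-11.0, 0.2, n)        # Paris coefficient (lognormal)
        a0 = rng.lognormal(np.log(0.5e-3), 0.3, n)   # initial edge crack size [m]
        ds = rng.normal(80.0, 8.0, n)                # stress range [MPa]
        ac = 0.02                                    # critical crack size [m]

        # integrate da/dN = C * (Y * ds * sqrt(pi * a))**m from a0 to ac
        expo = 1.0 - m_exp / 2.0
        Nf = (ac ** expo - a0 ** expo) / (C * (Y * ds * np.sqrt(np.pi)) ** m_exp * expo)

        design_life = 1.0e6                          # design fatigue life [cycles]
        print(f"failure probability: {np.mean(Nf < design_life):.4f}")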

  9. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been compared using real-time control and flight dynamics application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of the proposed approach are more accurate in predicting software cumulative failure time than existing approaches

  10. A Recurrent De Novo Variant in NACC1 Causes a Syndrome Characterized by Infantile Epilepsy, Cataracts, and Profound Developmental Delay.

    Science.gov (United States)

    Schoch, Kelly; Meng, Linyan; Szelinger, Szabolcs; Bearden, David R; Stray-Pedersen, Asbjorg; Busk, Oyvind L; Stong, Nicholas; Liston, Eriskay; Cohn, Ronald D; Scaglia, Fernando; Rosenfeld, Jill A; Tarpinian, Jennifer; Skraban, Cara M; Deardorff, Matthew A; Friedman, Jeremy N; Akdemir, Zeynep Coban; Walley, Nicole; Mikati, Mohamad A; Kranz, Peter G; Jasien, Joan; McConkie-Rosell, Allyn; McDonald, Marie; Wechsler, Stephanie Burns; Freemark, Michael; Kansagra, Sujay; Freedman, Sharon; Bali, Deeksha; Millan, Francisca; Bale, Sherri; Nelson, Stanley F; Lee, Hane; Dorrani, Naghmeh; Goldstein, David B; Xiao, Rui; Yang, Yaping; Posey, Jennifer E; Martinez-Agosto, Julian A; Lupski, James R; Wangler, Michael F; Shashi, Vandana

    2017-02-02

    Whole-exome sequencing (WES) has increasingly enabled new pathogenic gene variant identification for undiagnosed neurodevelopmental disorders and provided insights into both gene function and disease biology. Here, we describe seven children with a neurodevelopmental disorder characterized by microcephaly, profound developmental delays and/or intellectual disability, cataracts, severe epilepsy including infantile spasms, irritability, failure to thrive, and stereotypic hand movements. Brain imaging in these individuals reveals delay in myelination and cerebral atrophy. We observe an identical recurrent de novo heterozygous c.892C>T (p.Arg298Trp) variant in the nucleus accumbens associated 1 (NACC1) gene in seven affected individuals. One of the seven individuals is mosaic for this variant. NACC1 encodes a transcriptional repressor implicated in gene expression and has not previously been associated with germline disorders. The probability of finding the same missense NACC1 variant by chance in 7 out of 17,228 individuals who underwent WES for diagnoses of neurodevelopmental phenotypes is extremely small and achieves genome-wide significance (p = 1.25 × 10^-14). Selective constraint against missense variants in NACC1 makes this excess of an identical missense variant in all seven individuals more remarkable. Our findings are consistent with a germline recurrent mutational hotspot associated with an allele-specific neurodevelopmental phenotype in NACC1. Copyright © 2017 American Society of Human Genetics. All rights reserved.

  11. Influence of reinforcement's corrosion into hyperstatic reinforced concrete beams: a probabilistic failure scenarios analysis

    Directory of Open Access Journals (Sweden)

    G. P. PELLIZZER

    Full Text Available AbstractThis work aims to study the mechanical effects of reinforcement's corrosion in hyperstatic reinforced concrete beams. The focus is the probabilistic determination of individual failure scenarios change as well as global failure change along time. The limit state functions assumed describe analytically bending and shear resistance of reinforced concrete rectangular cross sections as a function of steel and concrete resistance and section dimensions. It was incorporated empirical laws that penalize the steel yield stress and the reinforcement's area along time in addition to Fick's law, which models the chloride penetration into concrete pores. The reliability theory was applied based on Monte Carlo simulation method, which assesses each individual probability of failure. The probability of global structural failure was determined based in the concept of failure tree. The results of a hyperstatic reinforced concrete beam showed that reinforcements corrosion make change into the failure scenarios modes. Therefore, unimportant failure modes in design phase become important after corrosion start.

  12. PWR reactor pressure vessel failure probabilities

    International Nuclear Information System (INIS)

    Dufresne, J.; Lanore, J.M.; Lucia, A.C.; Elbaz, J.; Brunnhuber, R.

    1980-05-01

    To evaluate the rupture probability of an LWR vessel, a probabilistic method applying fracture mechanics in probabilistic form was proposed previously, but it appears that a more accurate evaluation is possible. Consequently, a joint collaboration agreement signed in 1976 between CEA, EURATOM, JRC Ispra and FRAMATOME set up and started a research program covering three parts: computer code development, data acquisition and processing, and a supporting experimental program aimed at clarifying the most important parameters used in the COVASTOL computer code.

  13. Integrating functional data to prioritize causal variants in statistical fine-mapping studies.

    Directory of Open Access Journals (Sweden)

    Gleb Kichaev

    2014-10-01

    Full Text Available Standard statistical approaches for prioritization of variants for functional testing in fine-mapping studies either use marginal association statistics or estimate posterior probabilities for variants to be causal under simplifying assumptions. Here, we present a probabilistic framework that integrates association strength with functional genomic annotation data to improve accuracy in selecting plausible causal variants for functional validation. A key feature of our approach is that it empirically estimates the contribution of each functional annotation to the trait of interest directly from summary association statistics while allowing for multiple causal variants at any risk locus. We devise efficient algorithms that estimate the parameters of our model across all risk loci to further increase performance. Using simulations starting from the 1000 Genomes data, we find that our framework consistently outperforms the current state-of-the-art fine-mapping methods, reducing the number of variants that need to be selected to capture 90% of the causal variants from an average of 13.3 to 10.4 SNPs per locus (as compared to the next-best performing strategy). Furthermore, we introduce a cost-to-benefit optimization framework for determining the number of variants to be followed up in functional assays and assess its performance using real and simulation data. We validate our findings using a large-scale meta-analysis of four blood lipid traits and find that the relative probability for causality is increased for variants in exons and transcription start sites and decreased in repressed genomic regions at the risk loci of these traits. Using these highly predictive, trait-specific functional annotations, we estimate causality probabilities across all traits and variants, reducing the size of the 90% confidence set from an average of 17.5 to 13.5 variants per locus in this data.

  14. High affinity complexes of pannexin channels and L-type calcium channel splice-variants in human lung: Possible role in clevidipine-induced dyspnea relief in acute heart failure

    Directory of Open Access Journals (Sweden)

    Gerhard P. Dahl

    2016-08-01

    Research in Context: Clevidipine lowers blood pressure by inhibiting calcium channels in vascular smooth muscle. In patients with acute heart failure, clevidipine was shown to relieve breathing problems. This was only partially related to the blood pressure lowering actions of clevidipine and not conferred by another calcium channel inhibitor. We here found calcium channel variants in human lung that are more selectively inhibited by clevidipine, especially when associated with pannexin channels. This study gives a possible mechanism for clevidipine's relief of breathing problems and supports future clinical trials testing the role of clevidipine in the treatment of acute heart failure.

  15. Semiparametric regression analysis of failure time data with dependent interval censoring.

    Science.gov (United States)

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to the health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association of the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating the model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Presynaptic congenital myasthenic syndrome with a homozygous sequence variant in LAMA5 combines myopia, facial tics, and failure of neuromuscular transmission.

    Science.gov (United States)

    Maselli, Ricardo A; Arredondo, Juan; Vázquez, Jessica; Chong, Jessica X; Bamshad, Michael J; Nickerson, Deborah A; Lara, Marian; Ng, Fiona; Lo, Victoria L; Pytel, Peter; McDonald, Craig M

    2017-08-01

    Defects in genes encoding the isoforms of the laminin alpha subunit have been linked to various phenotypic manifestations, including brain malformations, muscular dystrophy, ocular defects, cardiomyopathy, and skin abnormalities. We report here a severe defect of neuromuscular transmission in a consanguineous patient with a homozygous variant in the laminin alpha-5 subunit gene (LAMA5). The variant c.8046C>T (p.Arg2659Trp) is rare and has a predicted deleterious effect. The affected individual, who also carries a rare homozygous sequence variant in LAMA1, had muscle weakness, myopia, and facial tics. Magnetic resonance imaging of the brain showed mild volume loss and periventricular T2 prolongation. Repetitive nerve stimulation revealed 50% decrement of compound muscle action potential amplitudes and 250% facilitation immediately after exercise. Endplate studies identified a profound reduction of the endplate potential quantal content and endplates with normal postsynaptic folding that were denuded or partially occupied by small nerve terminals. Expression studies revealed that p.Arg2659Trp caused decreased binding of laminin alpha-5 to SV2A and impaired laminin-521 cell adhesion and cell-projection support in primary neuronal cultures. In summary, this report describing severe neuromuscular transmission failure in a patient with a LAMA5 mutation expands the list of phenotypes associated with defects in genes encoding alpha-laminins. © 2017 Wiley Periodicals, Inc.

  17. Deep sequencing analysis of HIV-1 reverse transcriptase at baseline and time of failure in patients receiving rilpivirine in the phase III studies ECHO and THRIVE.

    Science.gov (United States)

    Van Eygen, Veerle; Thys, Kim; Van Hove, Carl; Rimsky, Laurence T; De Meyer, Sandra; Aerssens, Jeroen; Picchio, Gaston; Vingerhoets, Johan

    2016-05-01

    Minority variants (1.0-25.0%) were evaluated by deep sequencing (DS) at baseline and virological failure (VF) in a selection of antiretroviral treatment-naïve, HIV-1-infected patients from the rilpivirine ECHO/THRIVE phase III studies. Linkage between frequently emerging resistance-associated mutations (RAMs) was determined. DS (Illumina®) and population sequencing (PS) results were available at baseline for 47 VFs, at time of failure for 48 VFs, and at baseline for 49 responders matched for baseline characteristics. Minority mutations were accurately detected at frequencies down to 1.2% of the HIV-1 quasispecies. No baseline minority rilpivirine RAMs were detected in VFs; one responder carried 1.9% F227C. Baseline minority mutations associated with resistance to other non-nucleoside reverse transcriptase inhibitors (NNRTIs) were detected in 8/47 VFs (17.0%) and 7/49 responders (14.3%). Baseline minority nucleoside/nucleotide reverse transcriptase inhibitor (NRTI) RAMs M184V and L210W were each detected in one VF (none in responders). At failure, two patients without NNRTI RAMs by PS carried minority rilpivirine RAMs K101E and/or E138K; and five additional patients carried other minority NNRTI RAMs V90I, V106I, V179I, V189I, and Y188H. Overall at failure, minority NNRTI RAMs and NRTI RAMs were found in 29/48 (60.4%) and 16/48 VFs (33.3%), respectively. Linkage analysis showed that E138K and K101E were usually not observed on the same viral genome. In conclusion, baseline minority rilpivirine RAMs and other NNRTI/NRTI RAMs were uncommon in the rilpivirine arm of the ECHO and THRIVE studies. DS at failure showed emerging NNRTI-resistant minority variants in seven rilpivirine VFs who had no detectable NNRTI RAMs by PS. © 2015 Wiley Periodicals, Inc.

  18. Least-cost failure diagnosis in uncertain reliability systems

    International Nuclear Information System (INIS)

    Cox, Louis Anthony; Chiu, Steve Y.; Sun Xiaorong

    1996-01-01

    In many textbook solutions for systems failure diagnosis problems studied using reliability theory and artificial intelligence, the prior probabilities of different failure states can be estimated and used to guide the sequential search for failed components after the whole system fails. In practice, however, both the component failure probabilities and the structure function of the system being examined--i.e., the mapping between the states of its components and the state of the system--may not be known with certainty. At best, the probabilities of different hypothesized system descriptions, each specifying the component failure probabilities and the system's structure function, may be known to a useful approximation, perhaps based on sample data and previous experience. Cost-effective diagnosis of the system's failure state is then a challenging problem. Although the probabilities of component failures are aleatory, uncertainties about these probabilities and about the system structure function are epistemic. This paper examines how to make best use of both epistemic prior probabilities for system descriptions and the information gleaned from costly inspections of component states after the system fails, to minimize the average cost of identifying the failure state. Two approaches are introduced for systems dominated by aleatory uncertainties, one motivated by information theory and the other based on the idea of trying to prove a hypothesis about the identity of the failure state as efficiently as possible. While the general problem of cost-effective failure diagnosis is computationally intractable (NP-hard), both heuristics provide useful approximations on small- to moderate-sized problems and optimal results for certain common types of reliability systems, including series, parallel, parallel-series, and k-out-of-n systems. A hybrid heuristic that adaptively chooses which heuristic to apply next after any sequence of observations (component test results
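    In the classical special case (a series system, exactly one failed component, known failure probabilities), the optimal inspection policy is the simple cost-to-probability ordering sketched below; the paper's contribution is extending such heuristics to epistemic uncertainty over the system description itself, which this sketch does not cover:

        def inspection_order(components):
            """Inspect in increasing cost/probability ratio; this minimizes the
            expected diagnosis cost when exactly one component has failed."""
            return sorted(components, key=lambda c: c["cost"] / c["p_fail"])

        comps = [
            {"name": "pump",  "cost": 5.0, "p_fail": 0.10},
            {"name": "valve", "cost": 1.0, "p_fail": 0.05},
            {"name": "seal",  "cost": 2.0, "p_fail": 0.40},
        ]
        print([c["name"] for c in inspection_order(comps)])   # seal, valve, pump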

  19. Adaptive lattice decision-feedback equalizers - Their performance and application to time-variant multipath channels

    Science.gov (United States)

    Ling, F.; Proakis, J. G.

    1985-04-01

    This paper presents two types of adaptive lattice decision-feedback equalizers (DFE): the least-squares (LS) lattice DFE and the gradient lattice DFE. Their performance has been investigated on both time-invariant and time-variant channels through computer simulations and compared to that of other kinds of equalizers. An analysis of the self-noise and tracking characteristics of the LS DFE and of the DFE employing the Widrow-Hoff least mean square adaptive algorithm (LMS DFE) is also given. The analysis and simulation results show that the LS lattice DFE has the faster initial convergence rate, while the gradient lattice DFE is computationally more efficient. The main advantages of the lattice DFEs are their numerical stability, their computational efficiency, the flexibility to change their length, and their excellent capability for tracking rapidly time-variant channels.

  20. Application of Probability Calculations to the Study of the Permissible Step and Touch Potentials to Ensure Personnel Safety

    International Nuclear Information System (INIS)

    Eisawy, E.A.

    2011-01-01

    The aim of this paper is to develop a practical method to evaluate the actual step and touch potential distributions in order to determine the risk of failure of a grounding system. The failure probability, indicating the safety level of the grounding system, is related to both the applied (stress) and the withstand (strength) step or touch potentials. The probability distributions of the applied step and touch potentials, as well as of the corresponding withstand step and touch potentials, which represent the capability of the human body to resist stress potentials, are presented. These two distributions are used to evaluate the failure probability of the grounding system, i.e. the probability that the applied potential exceeds the withstand potential. The method treats the resistance of the human body, the foot contact resistance and the fault clearing time as independent random variables, rather than as the fixed values used in previous analyses of the safety requirements for a given grounding system
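    The stress-strength formulation reduces to comparing two simulated distributions. The sketch below treats body resistance, foot contact resistance and fault clearing time as random, using the IEEE-style tolerable body-current limit I = 0.116/sqrt(t); every distribution and constant here is an illustrative assumption:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200_000

        # applied (stress) touch potential during the fault, volts
        v_applied = rng.lognormal(np.log(200.0), 0.5, n)

        # withstand (strength): random body and foot-contact resistances
        r_body = rng.normal(1000.0, 150.0, n)      # ohms
        r_foot = rng.normal(1500.0, 300.0, n)      # ohms (per foot)
        t_clear = rng.uniform(0.1, 0.5, n)         # fault clearing time, s
        i_safe = 0.116 / np.sqrt(t_clear)          # tolerable body current, A
        v_withstand = i_safe * (r_body + 0.5 * r_foot)   # two feet in parallel

        pf = np.mean(v_applied > v_withstand)      # P(stress > strength)
        print(f"failure probability of the grounding design: {pf:.4f}")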

  1. Reliability-based failure cause assessment of collapsed bridge during construction

    International Nuclear Information System (INIS)

    Choi, Hyun-Ho; Lee, Sang-Yoon; Choi, Il-Yoon; Cho, Hyo-Nam; Mahadevan, Sankaran

    2006-01-01

    Failure cause assessments in forensic reports have usually been carried out by a deterministic approach. However, a deterministic forensic investigation may lead to unreasonable results, far from the real collapse scenario, because it does not systematically take into account the uncertainties involved in the failures of structures. A reliability-based failure cause assessment (reliability-based forensic engineering) methodology is developed that can incorporate the uncertainties involved in structural failures and structures, and it is applied to a collapsed bridge in order to identify the most critical failure scenario and find the cause that triggered the collapse. Moreover, to save evaluation time and cost, an automated event tree analysis (ETA) algorithm is proposed, which makes it possible to automatically calculate the failure probabilities of the failure events and the occurrence probabilities of the failure scenarios. For the reliability analysis, uncertainties are estimated more reasonably by using a Bayesian approach based on the experimental laboratory testing data in the forensic report. To demonstrate its applicability, the proposed approach is applied to the Hang-ju Grand Bridge, which collapsed during construction, and compared with the deterministic approach

  2. A procedure for estimation of pipe break probabilities due to IGSCC

    International Nuclear Information System (INIS)

    Bergman, M.; Brickstad, B.; Nilsson, F.

    1998-06-01

    A procedure has been developed for estimating the failure probability of weld joints in nuclear piping susceptible to intergranular stress corrosion cracking. The procedure aims at a robust and rapid estimate of the failure probability for a specific weld with a known stress state. Randomness in the crack initiation rate, the initial crack length, the in-service inspection efficiency and the leak rate is taken into account. A computer realization of the procedure has been developed for user-friendly application by design engineers. Some examples are considered to investigate the sensitivity of the failure probability to different input quantities. (au)

  3. Failure Predictions for Graphite Reflector Bricks in the Very High Temperature Reactor with the Prismatic Core Design

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Gyanender, E-mail: sing0550@umn.edu [Department of Mechanical Engineering, University of Minnesota, 111, Church St. SE, Minneapolis, MN 55455 (United States); Fok, Alex [Minnesota Dental Research in Biomaterials and Biomechanics, School of Dentistry, University of Minnesota, 515, Delaware St. SE, Minneapolis, MN 55455 (United States); Department of Mechanical Engineering, University of Minnesota, 111, Church St. SE, Minneapolis, MN 55455 (United States); Mantell, Susan [Department of Mechanical Engineering, University of Minnesota, 111, Church St. SE, Minneapolis, MN 55455 (United States)

    2017-06-15

    Highlights: • Failure probability of VHTR reflector bricks predicted through crack modeling. • Criterion chosen for defining failure strongly affects the predictions. • Breaching of the CRC could be significantly delayed through crack arrest. • Capability to predict crack initiation and propagation demonstrated. - Abstract: Graphite is used in nuclear reactor cores as a neutron moderator, reflector and structural material. The dimensions and physical properties of graphite change when it is exposed to neutron irradiation. The non-uniform changes in the dimensions and physical properties lead to the build-up of stresses over the course of time in the core components. When the stresses reach the critical limit, i.e. the strength of the material, cracking occurs and ultimately the components fail. In this paper, an explicit crack modeling approach to predicting the probability of failure of a VHTR prismatic reactor core reflector brick is presented. Firstly, a constitutive model for graphite is constructed and used to predict the stress distribution in the reflector brick under in-reactor conditions of high temperature and irradiation. Fracture simulations are performed as part of a Monte Carlo analysis to predict the probability of failure. The failure probability is determined based on two different criteria for defining the failure time: (A) crack initiation and (B) crack extension to near the control rod channel. A significant difference is found between the failure probabilities based on the two criteria. It is predicted that the reflector bricks will start cracking during the period of 5–9 years, while breaching of the control rod channels will occur during the period of 11–16 years. The results show that, due to crack arrest, there is a significant delay between crack initiation and breaching of the control rod channel.

  4. Timing analysis of PWR fuel pin failures

    International Nuclear Information System (INIS)

    Jones, K.R.; Wade, N.L.; Katsma, K.R.; Siefken, L.J.; Straka, M.

    1992-09-01

    Research has been conducted to develop and demonstrate a methodology for calculating the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) four-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system availability, and main coolant pump trip on these times. The analysis was performed using the following codes: FRAPCON-2, for the calculation of steady-state fuel behavior; SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1, for the calculation of the transient thermal-hydraulic conditions in the reactor system; and FRAP-T6, for the calculation of transient fuel behavior. In addition to the calculation of fuel pin failure timing, this analysis provides a comparison of the predicted results of SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1 for large-break LOCA analysis. Using SCDAP/RELAP5/MOD3 thermal-hydraulic data, the shortest time intervals calculated between initiation of containment isolation and fuel pin failure are 10.4 seconds and 19.1 seconds for the B&W and W plants, respectively. Using data generated by TRAC-PF1/MOD1, the shortest intervals are 10.3 seconds and 29.1 seconds for the B&W and W plants, respectively. These intervals are for a double-ended, offset-shear, cold leg break, using the technical specification maximum peaking factor and applied to fuel with maximum design burnup. Using peaking factors commensurate with actual burnups would result in longer intervals for both reactor designs. This document also contains appendices A through J of this report.

  5. Optimal Release Time and Sensitivity Analysis Using a New NHPP Software Reliability Model with Probability of Fault Removal Subject to Operating Environments

    Directory of Open Access Journals (Sweden)

    Kwang Yoon Song

    2018-05-01

    Full Text Available With the latest technological developments, the software industry is at the center of the fourth industrial revolution. In today's complex and rapidly changing environment, where software applications must be developed quickly and easily, software must be focused on rapidly changing information technology. The basic goal of software engineering is to produce high-quality software at low cost. However, because of the complexity of software systems, software development can be time consuming and expensive. Software reliability models (SRMs) are used to estimate and predict the reliability, number of remaining faults, failure intensity, total and development cost, etc., of software. Additionally, it is very important to decide when, how, and at what cost to release the software to users. In this study, we propose a new nonhomogeneous Poisson process (NHPP) SRM with a fault detection rate function affected by the probability of fault removal on failure, subject to operating environments, and discuss the optimal release time and software reliability with the new NHPP SRM. The example results show a good fit of the proposed model, and we propose an optimal release time for a given change in the proposed model.
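    The cost trade-off behind an optimal release time can be shown with the classic Goel-Okumoto NHPP standing in for the paper's model (which additionally includes the fault-removal probability and operating-environment factors); all cost constants below are illustrative:

        import numpy as np

        def m(t, a=100.0, b=0.05):
            """Goel-Okumoto mean value function: expected faults found by time t."""
            return a * (1.0 - np.exp(-b * t))

        def total_cost(T, c_test=50.0, c_field=500.0, c_time=10.0, t_lc=500.0):
            """Cost of faults fixed in test, faults escaping to the field,
            and the testing time itself."""
            return c_test * m(T) + c_field * (m(t_lc) - m(T)) + c_time * T

        T = np.linspace(1.0, 300.0, 3000)
        T_opt = T[np.argmin(total_cost(T))]
        # software reliability over the next 10 time units after release:
        # R(x | T) = exp(-(m(T + x) - m(T))) for an NHPP
        R = np.exp(-(m(T_opt + 10.0) - m(T_opt)))
        print(f"optimal release time ~ {T_opt:.0f}, R(10 | T_opt) = {R:.3f}")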

  6. Time of Arrival Estimation in Probability-Controlled Generalized CDMA Systems

    Directory of Open Access Journals (Sweden)

    Hagit Messer

    2007-11-01

    Full Text Available In recent years, more and more wireless communications systems are required to also provide a positioning measurement. In code division multiple access (CDMA) communication systems, the positioning accuracy is significantly degraded by the multiple access interference (MAI) caused by other users in the system. This MAI is commonly managed by a power control mechanism, and yet MAI has a major effect on positioning accuracy. Probability control is a recently introduced interference management mechanism. In this mechanism, a user with excess power chooses not to transmit some of its symbols. The information in the nontransmitted symbols is recovered by an error-correcting code (ECC), while all other users receive more reliable data during these quiet periods. Previous research has shown that the implementation of a probability control mechanism can significantly reduce the MAI. In this paper, we show that probability control also improves the positioning accuracy. We focus on time-of-arrival (TOA) based positioning systems. We analyze the TOA estimation performance in a generalized CDMA system in which the probability control mechanism is employed, where the transmitted signal is noncontinuous with a symbol transmission probability smaller than 1. The accuracy of the TOA estimation is determined using appropriate modifications of the Cramer-Rao bound on delay estimation. Keeping the average transmission power constant, we show that the TOA accuracy of each user does not depend on its own transmission probability, while being a nondecreasing function of the transmission probability of any other user. Therefore, a generalized, noncontinuous CDMA system with a probability control mechanism can always achieve better positioning performance, for all users in the network, than a conventional, continuous CDMA system.

  7. Collapsing variant of focal segmental glomerulosclerosis in children = Variante colapsante de la glomeruloesclerosis focal y segmentaria en niños

    Directory of Open Access Journals (Sweden)

    Jonh Fredy Nieto Ríos

    2013-10-01

    Full Text Available The collapsing variant of focal segmental glomerulosclerosis (FSGS) refers to a renal lesion that may be idiopathic or associated with various factors, characterized by glomerular collapse leading to corticosteroid-resistant nephrotic syndrome and chronic progressive renal failure. This glomerulopathy has been scarcely studied in children, but in this age group most cases are idiopathic. In this paper we report 6 cases of the collapsing variant of FSGS in children, not associated with HIV, which were resistant to immunosuppressive management and carried a high mortality.

  8. Predictors of treatment failure and time to detection and switching in HIV-infected Ethiopian children receiving first line anti-retroviral therapy

    Directory of Open Access Journals (Sweden)

    Bacha Tigist

    2012-08-01

    Full Text Available Abstract Background The emergence of resistance to a first-line antiretroviral therapy (ART) regimen leads to the need for more expensive and less tolerable second-line drugs. Hence, it is essential to identify and address factors associated with an increased probability of first-line ART regimen failure. The objective of this article is to report on the predictors of first-line ART regimen failure, the detection rate of ART regimen failure, and the delay in switching to second-line ART drugs. Methods A retrospective cohort study was conducted from 2005 to 2011. All HIV-infected children under the age of 15 who took first-line ART for at least six months at the four major hospitals of Addis Ababa, Ethiopia were included. Data were collected, entered and analyzed using Epi Info/ENA version 3.5.1 and SPSS version 16. The Cox proportional-hazards model was used to assess the predictors of first-line ART failure. Results Data of 1186 children were analyzed. Five hundred seventy-seven (48.8%) were males, with a mean age of 6.22 (SD = 3.10) years. Of the 167 (14.1%) children who had treatment failure, 70 (5.9%) had only clinical failure, 79 (6.7%) had only immunologic failure, and 18 (1.5%) had both clinical and immunologic failure. Patients who had height-for-age in the third percentile or less at initiation of ART were found to have a higher probability of ART treatment failure [Adjusted Hazard Ratio (AHR) 3.25; 95% CI, 1.00-10.58]. Patients who were less than three years old [AHR 1.85; 95% CI, 1.24-2.76], had chronic diarrhea after initiation of antiretroviral treatment [AHR 3.44; 95% CI, 1.37-8.62], had an ART drug substitution [AHR 1.70; 95% CI, 1.05-2.73] or had a baseline CD4 count below 50 cells/mm3 [AHR 2.30; 95% CI, 1.28-4.14] were also found to be at higher risk of treatment failure. Of all the 167 first-line ART failure cases, only 24 (14.4%) were switched to second-line ART, with a mean delay of 24 (SD = 11.67) months. The remaining 143 (85.6%) cases were diagnosed

  9. Posterior Probability Matching and Human Perceptual Decision Making.

    Directory of Open Access Journals (Sweden)

    Richard F Murray

    2015-06-01

    Full Text Available Probability matching is a classic theory of decision making that was first developed in models of cognition. Posterior probability matching, a variant in which observers match their response probabilities to the posterior probability of each response being correct, is being used increasingly often in models of perception. However, little is known about whether posterior probability matching is consistent with the vast literature on vision and hearing that has developed within signal detection theory. Here we test posterior probability matching models using two tools from detection theory. First, we examine the models' performance in a two-pass experiment, where each block of trials is presented twice, and we measure the proportion of times that the model gives the same response twice to repeated stimuli. We show that at low performance levels, posterior probability matching models give highly inconsistent responses across repeated presentations of identical trials. We find that practised human observers are more consistent across repeated trials than these models predict, and we find some evidence that less practised observers are more consistent as well. Second, we compare the performance of posterior probability matching models on a discrimination task to the performance of a theoretical ideal observer that achieves the best possible performance. We find that posterior probability matching is very inefficient at low-to-moderate performance levels, and that human observers can be more efficient than is ever possible according to posterior probability matching models. These findings support classic signal detection models, and rule out a broad class of posterior probability matching models for expert performance on perceptual tasks that range in complexity from contrast discrimination to symmetry detection. However, our findings leave open the possibility that inexperienced observers may show posterior probability matching behaviour, and our methods
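    The two-pass logic is easy to reproduce for a Gaussian yes/no task. In this simplified sketch (equal priors, unit-variance noise redrawn on each pass to play the role of internal noise), the probability-matching observer is visibly less self-consistent than the maximum-posterior observer:

        import numpy as np

        rng = np.random.default_rng(2)
        n, d_prime = 20_000, 1.0
        signal = rng.integers(0, 2, n).astype(bool)      # same stimuli on both passes

        def respond():
            x = rng.normal(0.0, 1.0, n) + d_prime * signal   # fresh internal noise
            log_lr = d_prime * (x - d_prime / 2.0)           # N(d',1) vs N(0,1)
            post = 1.0 / (1.0 + np.exp(-log_lr))             # posterior of "signal"
            matching = rng.random(n) < post                  # probability matching
            ideal = post > 0.5                               # maximum posterior
            return matching, ideal

        m1, i1 = respond()
        m2, i2 = respond()
        print("matching: acc=%.3f agreement=%.3f" % ((m1 == signal).mean(), (m1 == m2).mean()))
        print("ideal:    acc=%.3f agreement=%.3f" % ((i1 == signal).mean(), (i1 == i2).mean()))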

  10. Application of importance sampling method in sliding failure simulation of caisson breakwaters

    Science.gov (United States)

    Wang, Yu-chi; Wang, Yuan-zhan; Li, Qing-mei; Chen, Liang-zhi

    2016-06-01

    It is assumed that a storm wave takes place once a year during the design period, and N histories of storm waves are generated on the basis of the wave spectrum corresponding to the N-year design period. The responses of the breakwater to the N histories of storm waves in the N-year design period are calculated by a mass-spring-dashpot model and taken as a set of samples. The failure probability of caisson breakwaters during the design period of N years is obtained by statistical analysis of many sets of samples. The key issue is to improve the efficiency of the common Monte Carlo simulation method for estimating the failure probability of caisson breakwaters over the complete life cycle. In this paper, the kernel method of importance sampling, which can greatly increase the efficiency of the failure probability calculation, is proposed to estimate the failure probability of caisson breakwaters over the complete life cycle. The effectiveness of the kernel method is investigated by an example, which indicates that the calculation efficiency of the kernel method is over 10 times that of the common Monte Carlo simulation method.
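    The variance-reduction idea can be seen on a toy limit state. The sketch below shifts the sampling density toward the failure region and reweights by the likelihood ratio; the paper's kernel method builds the importance density from response samples of the mass-spring-dashpot model instead of the fixed shift assumed here:

        import numpy as np
        from scipy.stats import norm

        beta = 3.0
        g = lambda x: beta - x          # failure when g(x) < 0, i.e. x > beta

        rng = np.random.default_rng(3)
        n = 10_000

        # crude Monte Carlo
        x = rng.standard_normal(n)
        pf_mc = np.mean(g(x) < 0)

        # importance sampling with h ~ N(beta, 1) centered on the failure region
        x_is = rng.normal(beta, 1.0, n)
        w = norm.pdf(x_is) / norm.pdf(x_is, loc=beta, scale=1.0)   # weight f/h
        pf_is = np.mean((g(x_is) < 0) * w)

        print(f"exact {norm.sf(beta):.2e}   MC {pf_mc:.2e}   IS {pf_is:.2e}")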

  11. Women-specific risk factors for heart failure: A genetic approach.

    Science.gov (United States)

    van der Kemp, Jet; van der Schouw, Yvonne T; Asselbergs, Folkert W; Onland-Moret, N Charlotte

    2018-03-01

    Heart failure is a complex disease, which presents differently in men and women. Several studies have shown that reproductive factors, such as age at natural menopause, parity and polycystic ovarian syndrome (PCOS), may play a role in the development of heart failure. Shared genetics may provide clues to underlying mechanisms; however, this has never been examined. Therefore, the aim of the current study was to explore whether any reproductive factor is potentially related to heart failure in women, based on genetic similarities. Through a systematic literature review, single nucleotide polymorphisms (SNPs) associated with reproductive factors, heart failure and its risk factors were extracted from recent genome-wide association studies. We tested whether the SNPs (and their proxies) for reproductive risk factors overlapped with those known for heart failure or its risk factors. In total, 520 genetic variants were found that are associated with reproductive factors, namely age at menarche, age at natural menopause, menstrual cycle length, PCOS, preeclampsia, preterm delivery and spontaneous dizygotic twinning. For heart failure and associated phenotypes, 25 variants were found. Genetic variants for reproductive factors did not overlap with those for heart failure. However, age at menarche, gestational diabetes and PCOS were found to be genetically linked to risk factors for heart failure, such as atrial fibrillation, diabetes and smoking. Corresponding implicated genes, such as TNNI3K, ErbB3, MKL2, MTNR1B and PRKD1, may explain the associations between reproductive factors and heart failure. The exact effector mechanisms of these genes remain to be investigated further. Copyright © 2017. Published by Elsevier B.V.

  12. Probability Learning: Changes in Behavior across Time and Development

    Science.gov (United States)

    Plate, Rista C.; Fulvio, Jacqueline M.; Shutts, Kristin; Green, C. Shawn; Pollak, Seth D.

    2018-01-01

    Individuals track probabilities, such as associations between events in their environments, but less is known about the degree to which experience--within a learning session and over development--influences people's use of incoming probabilistic information to guide behavior in real time. In two experiments, children (4-11 years) and adults…

  13. Failure analysis of the cement mantle in total hip arthroplasty with an efficient probabilistic method.

    Science.gov (United States)

    Kaymaz, Irfan; Bayrak, Ozgu; Karsan, Orhan; Celik, Ayhan; Alsaran, Akgun

    2014-04-01

    Accurate prediction of the long-term behaviour of cemented hip implants is very important, not only for patient comfort but also to eliminate revision operations due to implant failure. Therefore, a more realistic computer model was generated and then used for both deterministic and probabilistic analyses of the hip implant in this study. The deterministic failure analysis was carried out for the most common failure states of the cement mantle. On the other hand, most of the design parameters of the cemented hip are inherently uncertain quantities. Therefore, a probabilistic failure analysis was also carried out considering the fatigue failure of the cement mantle, since it is the most critical failure state. However, probabilistic analysis generally requires a large amount of time; thus, a response surface method proposed in this study was used to reduce the computation time for the analysis of the cemented hip implant. The results demonstrate that using an efficient probabilistic approach can significantly reduce the computation time for the failure probability of the cement from several hours to minutes. The results also show that even though the deterministic failure analyses do not indicate any failure of the cement mantle, with high safety factors, the probabilistic analysis predicts a failure probability of 8% for the cement mantle, which must be considered when evaluating the success of cemented hip implants.
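    A response surface analysis replaces the expensive model with a cheap surrogate before sampling. In the sketch below an analytic function stands in for the finite-element model of the cement mantle; a quadratic surface is fitted to a small design of experiments and then sampled heavily:

        import numpy as np

        def g_true(x):
            """Stand-in for the expensive limit-state function; failure when g < 0."""
            return (3.0 + 0.3 * x[:, 0] - 0.5 * x[:, 1]
                    + 0.1 * x[:, 0] * x[:, 1] - 0.2 * x[:, 1] ** 2)

        rng = np.random.default_rng(4)

        # 1) small design of experiments + quadratic response surface fit
        X = rng.standard_normal((50, 2))
        def features(x):
            return np.column_stack([np.ones(len(x)), x, x ** 2,
                                    (x[:, 0] * x[:, 1])[:, None]])
        coef, *_ = np.linalg.lstsq(features(X), g_true(X), rcond=None)

        # 2) cheap Monte Carlo on the fitted surface
        Xmc = rng.standard_normal((1_000_000, 2))
        pf = np.mean(features(Xmc) @ coef < 0.0)
        print(f"failure probability from the response surface: {pf:.2e}")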

  14. System reliability analysis using dominant failure modes identified by selective searching technique

    International Nuclear Information System (INIS)

    Kim, Dong-Seok; Ok, Seung-Yong; Song, Junho; Koh, Hyun-Moo

    2013-01-01

    The failure of a redundant structural system is often described by innumerable system failure modes such as combinations or sequences of local failures. An efficient approach is proposed to identify dominant failure modes in the space of random variables, and then perform system reliability analysis to compute the system failure probability. To identify dominant failure modes in the decreasing order of their contributions to the system failure probability, a new simulation-based selective searching technique is developed using a genetic algorithm. The system failure probability is computed by a multi-scale matrix-based system reliability (MSR) method. Lower-scale MSR analyses evaluate the probabilities of the identified failure modes and their statistical dependence. A higher-scale MSR analysis evaluates the system failure probability based on the results of the lower-scale analyses. Three illustrative examples demonstrate the efficiency and accuracy of the approach through comparison with existing methods and Monte Carlo simulations. The results show that the proposed method skillfully identifies the dominant failure modes, including those neglected by existing approaches. The multi-scale MSR method accurately evaluates the system failure probability with statistical dependence fully considered. The decoupling between the failure mode identification and the system reliability evaluation allows for effective applications to larger structural systems

  15. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

    Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: materials/device degradation; degradation kinetics; time-to-failure modeling; statistical tools; failure-rate modeling; accelerated testing; ramp-to-failure testing; important failure mechanisms for integrated circuits; important failure mechanisms for mechanical components; conversion of dynamic stresses into static equivalents; small design changes producing major reliability improvements; screening methods; heat generation and dissipation; sampling plans and confidence intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  16. A probabilistic approach for RIA fuel failure criteria

    International Nuclear Information System (INIS)

    Carlo Vitanza, Dr.

    2008-01-01

    Substantial experimental data have been produced in support of the definition of RIA safety limits for water reactor fuels at high burnup. Based on these data, fuel failure enthalpy limits can be derived using methods of varying complexity. However, regardless of sophistication, it is unlikely that any deterministic approach would perfectly predict all failure and non-failure data obtained in RIA tests. Accordingly, a probabilistic approach is proposed in this paper, in which, in addition to a best-estimate evaluation of the failure enthalpy, a RIA fuel failure probability distribution is defined within an enthalpy band surrounding the best-estimate failure enthalpy. The band width and the failure probability distribution within this band are determined on the basis of the whole data set, including failure and non-failure data and accounting for the actual scatter of the database. The present probabilistic approach can be used in conjunction with any deterministic model or correlation. For deterministic models or correlations with good predictive capability, the probability distribution will rise sharply within a narrow band around the best-estimate value. For deterministic predictions of lower quality, the resulting probability distribution will instead be broad and coarse.
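
    As a concrete, purely illustrative reading of the banded approach, the failure probability can be written as a smooth ramp from 0 to 1 across an enthalpy band centred on the best-estimate failure enthalpy. The cosine form, the 150 cal/g centre, and the 30 cal/g half-width below are invented for this sketch, not taken from the paper.

    import math

    def failure_probability(H, H_be=150.0, w=30.0):
        """P(failure) for fuel enthalpy H (cal/g): 0 below the band, 1 above
        it, rising smoothly (cosine ramp) inside [H_be - w, H_be + w]."""
        if H <= H_be - w:
            return 0.0
        if H >= H_be + w:
            return 1.0
        return 0.5 * (1.0 - math.cos(math.pi * (H - H_be + w) / (2.0 * w)))

    for H in (110, 140, 150, 170, 190):
        print(H, round(failure_probability(H), 3))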

  17. Evaluation of Failure Probability of BWR Vessel Under Cool-down and LTOP Transient Conditions Using PROFAS-RV PFM Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong-Min; Lee, Bong-Sang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The round robin project was proposed by the PFM Research Subcommittee of the Japan Welding Engineering Society to the Asian Society for Integrity of Nuclear Components (ASINCO) members, and is designated in Korea as Phase 2 of A-Pro2. The objective of this phase 2 round robin analysis is to compare the schemes and results related to the assessment of the structural integrity of RPVs for events that are important to safety in the design consideration but have relatively low fracture probability. In this study, probabilistic fracture mechanics analysis was performed for the round robin cases using the PROFAS-RV code. The effects of key parameters such as different transients, fluence levels, Cu and Ni contents, initial RT{sub NDT}, and the RT{sub NDT} shift model on the failure probability were systematically compared and reviewed. These efforts can minimize the uncertainty of the integrity evaluation for the reactor pressure vessel.

  18. Formulating informative, data-based priors for failure probability estimation in reliability analysis

    International Nuclear Information System (INIS)

    Guikema, Seth D.

    2007-01-01

    Priors play an important role in the use of Bayesian methods in risk analysis, and using all available information to formulate an informative prior can lead to more accurate posterior inferences. This paper examines the practical implications of using five different methods for formulating an informative prior for a failure probability based on past data. These methods are the method of moments, maximum likelihood (ML) estimation, maximum entropy estimation, starting from a non-informative 'pre-prior', and fitting a prior based on confidence/credible interval matching. The priors resulting from the use of these different methods are compared qualitatively, and the posteriors are compared quantitatively based on a number of different scenarios of observed data used to update the priors. The results show that the amount of information assumed in the prior makes a critical difference in the accuracy of the posterior inferences. For situations in which the data used to formulate the informative prior are an accurate reflection of the data that are later observed, the ML approach yields the minimum-variance posterior. However, the maximum entropy approach is more robust to differences between the data used to formulate the prior and the observed data because it maximizes the uncertainty in the prior subject to the constraints imposed by the past data.
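
    One of the five methods above, the method of moments, is easy to make concrete: match the mean and variance of past failure-fraction data to a Beta(a, b) prior, then update it conjugately with new binomial data. The data values and the Beta/binomial setting are illustrative assumptions for this sketch.

    import numpy as np

    past = np.array([0.02, 0.05, 0.03, 0.04, 0.06])  # past failure fractions
    m, v = past.mean(), past.var(ddof=1)

    # Method of moments for Beta(a, b): solve for a, b from mean and variance.
    common = m * (1.0 - m) / v - 1.0
    a, b = m * common, (1.0 - m) * common

    # Conjugate Bayesian update with r failures observed in n new trials.
    r, n = 2, 50
    a_post, b_post = a + r, b + (n - r)
    print(f"prior Beta({a:.2f}, {b:.2f}); posterior mean "
          f"{a_post / (a_post + b_post):.4f}")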

  19. Fragility estimation for seismically isolated nuclear structures by high confidence low probability of failure values and bi-linear regression

    International Nuclear Information System (INIS)

    Carausu, A.

    1996-01-01

    A method for the fragility estimation of seismically isolated nuclear power plant structures is proposed. The relationship between the ground motion intensity parameter (e.g. peak ground velocity or peak ground acceleration) and the response of isolated structures is expressed in terms of a bi-linear regression line, whose coefficients are estimated by the least-squares method from available data on seismic input and structural response. The notion of the high confidence low probability of failure (HCLPF) value is also used for deriving compound fragility curves for coupled subsystems. (orig.)

  20. Time to failure of hierarchical load-transfer models of fracture

    DEFF Research Database (Denmark)

    Vázquez-Prada, M; Gómez, J B; Moreno, Y

    1999-01-01

    The time to failure, T, of dynamical models of fracture for a hierarchical load-transfer geometry is studied. Using a probabilistic strategy and juxtaposing hierarchical structures of height n, we devise an exact method to compute T for structures of height n+1. Bounding T for large n, we are able to deduce that the time to failure tends to a nonzero value when n tends to infinity. This numerical conclusion is deduced for both power-law and exponential breakdown rules.

  1. Elapsed decision time affects the weighting of prior probability in a perceptual decision task

    Science.gov (United States)

    Hanks, Timothy D.; Mazurek, Mark E.; Kiani, Roozbeh; Hopp, Elizabeth; Shadlen, Michael N.

    2012-01-01

    Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (i) decisions that linger tend to arise from less reliable evidence, and (ii) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal cortex (LIP) of rhesus monkeys performing this task. PMID:21525274

  2. Cost-effectiveness analysis of timely dialysis referral after renal transplant failure in Spain

    Directory of Open Access Journals (Sweden)

    Villa Guillermo

    2012-08-01

    Background: A cost-effectiveness analysis of timely dialysis referral after renal transplant failure was undertaken from the perspective of the Public Administration. The current Spanish situation, where all the patients undergoing graft function loss are referred back to dialysis in a late manner, was compared to an ideal scenario where all the patients are timely referred. Methods: A Markov model was developed in which six health states were defined: hemodialysis, peritoneal dialysis, kidney transplantation, late referral hemodialysis, late referral peritoneal dialysis and death. The model carried out a simulation of the progression of renal disease for a hypothetical cohort of 1,000 patients aged 40, who were observed over a lifetime temporal horizon of 45 years. In-depth sensitivity analyses were performed in order to ensure the robustness of the results obtained. Results: Considering a discount rate of 3 %, timely referral showed an incremental cost of 211 €, compared to late referral. This cost increase was however a consequence of the incremental survival observed. The incremental effectiveness was 0.0087 quality-adjusted life years (QALY). When comparing both scenarios, an incremental cost-effectiveness ratio of 24,390 €/QALY was obtained, meaning that timely dialysis referral might be an efficient alternative if a willingness-to-pay threshold of 45,000 €/QALY is considered. This result proved to be independent of the proportion of late referral patients observed. The acceptance probability of timely referral was 61.90 %, while late referral was acceptable in 38.10 % of the simulations. If we however restrict the analysis to those situations not involving any loss of effectiveness, the acceptance probability of timely referral was 70.10 %, increasing twofold that of late referral (29.90 %). Conclusions: Timely dialysis referral after graft function loss might be an efficient alternative in Spain, improving both

  3. Cost-effectiveness analysis of timely dialysis referral after renal transplant failure in Spain.

    Science.gov (United States)

    Villa, Guillermo; Sánchez-Álvarez, Emilio; Cuervo, Jesús; Fernández-Ortiz, Lucía; Rebollo, Pablo; Ortega, Francisco

    2012-08-16

    A cost-effectiveness analysis of timely dialysis referral after renal transplant failure was undertaken from the perspective of the Public Administration. The current Spanish situation, where all the patients undergoing graft function loss are referred back to dialysis in a late manner, was compared to an ideal scenario where all the patients are timely referred. A Markov model was developed in which six health states were defined: hemodialysis, peritoneal dialysis, kidney transplantation, late referral hemodialysis, late referral peritoneal dialysis and death. The model carried out a simulation of the progression of renal disease for a hypothetical cohort of 1,000 patients aged 40, who were observed over a lifetime temporal horizon of 45 years. In-depth sensitivity analyses were performed in order to ensure the robustness of the results obtained. Considering a discount rate of 3 %, timely referral showed an incremental cost of 211 €, compared to late referral. This cost increase was however a consequence of the incremental survival observed. The incremental effectiveness was 0.0087 quality-adjusted life years (QALY). When comparing both scenarios, an incremental cost-effectiveness ratio of 24,390 €/QALY was obtained, meaning that timely dialysis referral might be an efficient alternative if a willingness-to-pay threshold of 45,000 €/QALY is considered. This result proved to be independent of the proportion of late referral patients observed. The acceptance probability of timely referral was 61.90 %, while late referral was acceptable in 38.10 % of the simulations. If we however restrict the analysis to those situations not involving any loss of effectiveness, the acceptance probability of timely referral was 70.10 %, increasing twofold that of late referral (29.90 %). Timely dialysis referral after graft function loss might be an efficient alternative in Spain, improving both patients' survival rates and health-related quality of life at an

  4. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

    The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of the failure probability density...

  5. Time-dependent methodology for fault tree evaluation

    International Nuclear Information System (INIS)

    Vesely, W.B.

    1976-01-01

    Any fault tree may be evaluated by applying the method called the kinetic theory of fault trees. The basic feature of this method, as presented here, is that any information on a primary failure, type failure or peak failure is derived from three characteristics: the probability of existence, the failure intensity and the failure density. Determining these three characteristics for a given phenomenon yields the remaining probabilistic information on the individual aspects of the failure and on their totality over the whole observed period. The probabilistic characteristics are determined by applying the analysis of phenomenon probability. The total time-dependent information on the peak failure is obtained by using the type failures (critical paths) of the fault tree. By applying this process, the total time-dependent information is obtained for every primary failure and type failure of the fault tree. In the application of the kinetic theory of fault trees, represented by the PREP and KITT programmes, the type failures are first obtained using the deterministic testing method or Monte Carlo simulation (PREP programme). The respective characteristics are then determined using the kinetic theory of fault trees (KITT programmes). (Oy)
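
    The three characteristics named above are easy to exercise for a single repairable primary failure with constant failure rate lam and repair rate mu, and to roll up through simple gates. This is a textbook sketch of the standard kinetic-tree relations, not the PREP/KITT codes themselves; all rates are illustrative.

    import numpy as np

    lam, mu = 1e-3, 1e-1                  # per hour (illustrative rates)
    t = np.linspace(0.0, 200.0, 5)

    # Probability of existence (unavailability) of the primary failure:
    q = lam / (lam + mu) * (1.0 - np.exp(-(lam + mu) * t))
    # Failure intensity: rate of failing now, given the component is up:
    w = lam * (1.0 - q)

    # Two independent identical components combined through gates:
    q_and = q * q                         # AND gate (both failed)
    q_or = 1.0 - (1.0 - q) ** 2           # OR gate (at least one failed)
    for ti, qi, wi, qa, qo in zip(t, q, w, q_and, q_or):
        print(f"t={ti:6.1f} h  q={qi:.2e}  w={wi:.2e}  "
              f"q_AND={qa:.2e}  q_OR={qo:.2e}")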

  6. Protein variants in Hiroshima and Nagasaki: tales of two cities.

    Science.gov (United States)

    Neel, J V; Satoh, C; Smouse, P; Asakawa, J; Takahashi, N; Goriki, K; Fujita, M; Kageoka, T; Hazama, R

    1988-12-01

    The results of 1,465,423 allele product determinations based on blood samples from Hiroshima and Nagasaki, involving 30 different proteins representing 32 different gene products, are analyzed in a variety of ways, with the following conclusions: (1) Sibships and their parents are included in the sample. Our analysis reveals that statistical procedures designed to reduce the sample to equivalent independent genomes do not, in population comparisons, compensate for the familial cluster effect of rare variants. Accordingly, the data set was reduced to one representative of each sibship (937,427 allele products). (2) Both χ²-type contrasts and a genetic distance measure (δ) reveal that rare variants (P < .01) are collectively as effective as polymorphisms in establishing genetic differences between the two cities. (3) We suggest that rare variants that individually exhibit significant intercity differences are probably the legacy of tribal private polymorphisms that occurred during prehistoric times. (4) Despite the great differences in the known histories of the two cities, both the overall frequency of rare variants and the number of different rare variants are essentially identical in the two cities. (5) The well-known differences in locus variability are confirmed, now after adjustment for sample size differences for the various locus products; in this large series we failed to detect variants at only three of 29 loci for which sample size exceeded 23,000. (6) The number of alleles identified per locus correlates positively with subunit molecular weight. (7) Loci supporting genetic polymorphisms are characterized by more rare variants than are loci at which polymorphisms were not encountered. (8) Loci whose products do not appear to be essential for health support more variants than do loci the absence of whose product is detrimental to health. (9) There is a striking excess of rare variants over the expectation under the neutral mutation

  7. The probability and the management of human error

    International Nuclear Information System (INIS)

    Dufey, R.B.; Saull, J.W.

    2004-01-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident cause. The consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with and derived from the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best-estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε) and stochastic occurrences while having a finite minimum rate: λ = 5×10⁻⁵ + ((1/ε) − 5×10⁻⁵) exp(−3ε). The future failure rate is entirely determined by the experience: thus the past defines the future.
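
    The learning-curve equation as reconstructed above can be evaluated directly; the experience values below are arbitrary points chosen only to show the decay from early inexperience toward the finite minimum rate of 5×10⁻⁵.

    import math

    def human_error_rate(eps):
        # lambda(eps) = 5e-5 + (1/eps - 5e-5) * exp(-3 * eps)
        return 5e-5 + (1.0 / eps - 5e-5) * math.exp(-3.0 * eps)

    for eps in (0.1, 0.5, 1.0, 2.0, 5.0):
        print(f"eps={eps:4.1f}  lambda={human_error_rate(eps):.3e}")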

  8. Probability and containment of turbine missiles

    International Nuclear Information System (INIS)

    Yeh, G.C.K.

    1976-01-01

    With the trend toward ever larger power generating plants with large high-speed turbines, an important plant design consideration is the potential for and consequences of mechanical failure of turbine rotors. Such rotor failure could result in high-velocity disc fragments (turbine missiles) perforating the turbine casing and jeopardizing vital plant systems. The designer must first estimate the probability of any turbine missile damaging any safety-related plant component for his turbine and his plant arrangement. If the probability is not low enough to be acceptable to the regulatory agency, he must design a shield to contain the postulated turbine missiles. Alternatively, the shield could be designed to retard (to reduce the velocity of) the missiles such that they would not damage any vital plant system. In this paper, some of the presently available references that can be used to evaluate the probability, containment and retardation of turbine missiles are reviewed; various alternative methods are compared; and subjects for future research are recommended. (Auth.)

  9. Selection of risk reduction portfolios under interval-valued probabilities

    International Nuclear Information System (INIS)

    Toppila, Antti; Salo, Ahti

    2017-01-01

    A central problem in risk management is that of identifying the optimal combination (or portfolio) of improvements that enhance the reliability of the system most through reducing failure event probabilities, subject to the availability of resources. This optimal portfolio can be sensitive with regard to epistemic uncertainties about the failure events' probabilities. In this paper, we develop an optimization model to support the allocation of resources to improvements that mitigate risks in coherent systems in which interval-valued probabilities defined by lower and upper bounds are employed to capture epistemic uncertainties. Decision recommendations are based on portfolio dominance: a resource allocation portfolio is dominated if there exists another portfolio that improves system reliability (i) at least as much for all feasible failure probabilities and (ii) strictly more for some feasible probabilities. Based on non-dominated portfolios, recommendations about improvements to implement are derived by inspecting in how many non-dominated portfolios a given improvement is contained. We present an exact method for computing the non-dominated portfolios. We also present an approximate method that simplifies the reliability function using total order interactions so that larger problem instances can be solved with reasonable computational effort. - Highlights: • Reliability allocation under epistemic uncertainty about probabilities. • Comparison of alternatives using dominance. • Computational methods for generating the non-dominated alternatives. • Deriving decision recommendations that are robust with respect to epistemic uncertainty.
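
    To make the dominance test above concrete, the sketch below checks a simplified version of it for a small coherent series-parallel system: because the reliability function is monotone in each failure probability, the comparison is made only at the corners of the probability intervals. The system layout, the intervals, and the pairing of corners across the two portfolios are all invented simplifications of the paper's formulation.

    from itertools import product

    def system_reliability(p):
        # Example coherent system: component 1 in series with a parallel
        # pair of components 2 and 3; p are failure probabilities.
        p1, p2, p3 = p
        return (1.0 - p1) * (1.0 - p2 * p3)

    # Interval failure probabilities after applying portfolio A / B.
    intervals_A = [(0.01, 0.03), (0.05, 0.10), (0.05, 0.10)]
    intervals_B = [(0.02, 0.04), (0.02, 0.08), (0.05, 0.10)]

    def dominates(iv_x, iv_y):
        # x dominates y if it is at least as reliable at every matched
        # interval corner and strictly better at some corner.
        ge_all, gt_some = True, False
        for corner in product((0, 1), repeat=len(iv_x)):
            rx = system_reliability([iv_x[i][c] for i, c in enumerate(corner)])
            ry = system_reliability([iv_y[i][c] for i, c in enumerate(corner)])
            ge_all &= rx >= ry
            gt_some |= rx > ry
        return ge_all and gt_some

    print("A dominates B:", dominates(intervals_A, intervals_B))
    print("B dominates A:", dominates(intervals_B, intervals_A))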

  10. Failure modes and natural control time for distributed vibrating systems

    International Nuclear Information System (INIS)

    Reid, R.M.

    1994-01-01

    The eigenstructure of the Gram matrix of frequency exponentials is used to study linear vibrating systems of hyperbolic type with distributed control. Using control norm as a practical measure of controllability and the vibrating string as a prototype, it is demonstrated that hyperbolic systems have a natural control time, even when only finitely many modes are excited. For shorter control times there are identifiable control failure modes which can be steered to zero only with very high cost in control norm. Both natural control time and the associated failure modes are constructed for linear fluids, strings, and beams, making note of the essential algorithms and Mathematica code, and displaying results graphically

  11. The Statistical Analysis of Failure Time Data

    CERN Document Server

    Kalbfleisch, John D

    2011-01-01

    Contains additional discussion and examples on left truncation as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulation in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.

  12. Probabilistic analysis of ''common mode failures''

    International Nuclear Information System (INIS)

    Easterling, R.G.

    1978-01-01

    Common mode failure is a topic of considerable interest in reliability and safety analyses of nuclear reactors. Common mode failures are often discussed in terms of examples: two systems fail simultaneously due to an external event such as an earthquake; two components in redundant channels fail because of a common manufacturing defect; two systems fail because a component common to both fails; the failure of one system increases the stress on other systems and they fail. The common thread running through these is a dependence of some sort, statistical or physical, among multiple failure events. However, the nature of the dependence is not the same in all these examples. An attempt is made to model situations, such as the above examples, which have been termed "common mode failures." In doing so, it is found that standard probability concepts and terms, such as statistically dependent and independent events, and conditional and unconditional probabilities, suffice. Thus, it is proposed that the term "common mode failures" be dropped, at least from technical discussions of these problems. A corollary is that the complementary term, "random failures," should also be dropped. The mathematical model presented may not cover all situations which have been termed "common mode failures," but provides insight into the difficulty of obtaining estimates of the probabilities of these events.

  13. Molecular Evolution of the VP1 Gene in Human Norovirus GII.4 Variants in 1974–2015

    Directory of Open Access Journals (Sweden)

    Takumi Motoya

    2017-12-01

    Human norovirus (HuNoV) is a leading cause of viral gastroenteritis worldwide, of which GII.4 is the most predominant genotype. Unlike other genotypes, GII.4 has created various variants that escaped from previously acquired immunity of the host and caused repeated epidemics. However, the molecular evolutionary differences among all GII.4 variants, including recently discovered strains, have not been elucidated. Thus, we conducted a series of bioinformatic analyses using numerous, globally collected, full-length GII.4 major capsid (VP1) gene sequences (466 strains) to compare the evolutionary patterns among GII.4 variants. The time-scaled phylogenetic tree constructed using the Bayesian Markov chain Monte Carlo (MCMC) method showed that the common ancestor of the GII.4 VP1 gene diverged from GII.20 in 1840. The GII.4 genotype emerged in 1932, and then formed seven clusters including 14 known variants after 1980. The evolutionary rate of GII.4 strains was estimated to be 7.68 × 10⁻³ substitutions/site/year. The evolutionary rates probably differed among variants as well as domains [protruding 1 (P1), shell, and P2 domains]. The Osaka 2007 variant strains probably contained more nucleotide substitutions than any other variant. Few conformational epitopes were located in the shell and P1 domains, although most were contained in the P2 domain, which, as previously established, is associated with attachment to host factors and antigenicity. We found that positive selection sites for the whole GII.4 genotype existed in the shell and P1 domains, while Den Haag 2006b, New Orleans 2009, and Sydney 2012 variants were under positive selection in the P2 domain. Amino acid substitutions overlapped with putative epitopes or were located around the epitopes in the P2 domain. The effective population sizes of the present strains increased stepwise for Den Haag 2006b, New Orleans 2009, and Sydney 2012 variants. These results suggest that HuNoV GII.4 rapidly

  14. Accelerated failure time regression for backward recurrence times and current durations

    DEFF Research Database (Denmark)

    Keiding, N; Fine, J P; Hansen, O H

    2011-01-01

    Backward recurrence times in stationary renewal processes and current durations in dynamic populations observed at a cross-section may yield estimates of underlying interarrival times or survival distributions under suitable stationarity assumptions. Regression models have been proposed for these situations, but accelerated failure time models have the particularly attractive feature that they are preserved when going from the backward recurrence times to the underlying survival distribution of interest. This simple fact has recently been noticed in a sociological context and is here illustrated by a study of current duration of time to pregnancy.

  15. Calculation of the tunneling time using the extended probability of the quantum histories approach

    International Nuclear Information System (INIS)

    Rewrujirek, Jiravatt; Hutem, Artit; Boonchui, Sutee

    2014-01-01

    The dwell time of quantum tunneling has been derived by Steinberg (1995) [7] as a function of the relation between the transmission and reflection times τ_t and τ_r, weighted by the transmissivity and the reflectivity. In this paper, we reexamine the dwell time using the extended probability approach. The dwell time is calculated as the weighted average of three mutually exclusive events. We also consider the scattering process due to a resonance potential in the long-time limit. The results show that the dwell time can be expressed as the weighted sum of the transmission, reflection and internal probabilities.

  16. M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey

    Science.gov (United States)

    Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.

    2016-01-01

    We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M >7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.
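
    The time-dependent ingredient above, the BPT conditional probability, fits in a few lines: the chance of rupture in the next dt years given T years elapsed since the last event, for mean recurrence mu and aperiodicity alpha. The parameter values are illustrative, not the fault-specific values of the paper.

    import numpy as np
    from scipy.integrate import quad

    def bpt_pdf(t, mu, alpha):
        # Brownian Passage Time (inverse Gaussian) recurrence density.
        return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
               np.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

    def conditional_probability(T, dt, mu, alpha):
        num, _ = quad(bpt_pdf, T, T + dt, args=(mu, alpha))
        den, _ = quad(bpt_pdf, T, np.inf, args=(mu, alpha))
        return num / den

    # e.g. 250-yr mean recurrence, alpha = 0.5, 120 yr elapsed, 30-yr window
    print(f"P(rupture in 30 yr) = "
          f"{conditional_probability(120.0, 30.0, 250.0, 0.5):.3f}")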

  17. Calculation of fuel pin failure timing under LOCA conditions

    International Nuclear Information System (INIS)

    Jones, K.R.; Wade, N.L.; Siefken, L.J.; Straka, M.; Katsma, K.R.

    1991-10-01

    The objective of this research was to develop and demonstrate a methodology for calculating the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) 4-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system (ECCS) availability, and main coolant pump trip on these items. The analysis was performed using a four-code approach comprising FRAPCON-2, SCDAP/RELAP5/MOD3, TRAC-PF1/MOD1, and FRAP-T6. In addition to the calculation of timing results, this analysis provided a comparison of the capabilities of SCDAP/RELAP5/MOD3 with TRAC-PF1/MOD1 for large-break LOCA analysis. This paper discusses the methodology employed and the code development efforts required to implement it. The shortest time intervals calculated between initiation of containment isolation and fuel pin failure were 11.4 s and 19.1 s for the B&W and W plants, respectively. The FRAP-T6 fuel pin failure times calculated using thermal-hydraulic data generated by SCDAP/RELAP5/MOD3 were more conservative than those calculated using data generated by TRAC-PF1/MOD1. 18 refs., 7 figs., 4 tabs

  18. Estimates of the risks associated with dam failure

    Energy Technology Data Exchange (ETDEWEB)

    Ayyaswamy, P.; Hauss, B.; Hseih, T.; Moscati, A.; Hicks, T.E.; Okrent, D.

    1974-03-01

    The probabilities and potential consequences of dam failure in California, primarily due to large earthquakes, were estimated, taking as examples eleven dams having relatively large populations downstream. Mortalities in the event of dam failure range from 11,000 to 260,000, while damage to property may be as high as $720 million. It was assumed that an intensity IX or X earthquake (on the Modified Mercalli Scale) would be sufficient to completely fail earthen dams. Predictions of dam failure were based on the recurrence times of such earthquakes. For the dams studied, the recurrence intervals for an intensity IX earthquake varied between 20 and 800 years; for an intensity X, between 50 and 30,000 years. For the Lake Chabot and San Pablo dams (recurrence times for an intensity X earthquake of 20 and 30 years, respectively), the associated consequences are: 34,000 (Lake Chabot) and 30,000 (San Pablo) people killed; damage of $140 million and $77 million. Evacuation was found to ameliorate the consequences only slightly in most cases because of the short time available. Calculations are based on demography, and assume 10-foot floodwaters will drown all in their path and destroy all one-unit homes in the flood area. Damage estimates reflect losses incurred by structural damage to buildings and do not include loss of income. Hence the economic impact is probably understated.

  19. Reliability and Availability Analysis of Some Systems with Common-Cause Failures Using SPICE Circuit Simulation Program

    Directory of Open Access Journals (Sweden)

    Muhammad Taher Abuelma'atti

    1999-01-01

    The effectiveness of the SPICE circuit simulation program in calculating probabilities, reliability, steady-state availability and mean time to failure of repairable systems described by Markov models is demonstrated. Two examples are presented. The first example is a warm standby system with common-cause failures and human errors. The second example is a non-identical-unit parallel system with common-cause failures. In both cases recourse to numerical solution is inevitable to obtain the Laplace transforms of the probabilities. Results obtained using SPICE are compared with previously published results obtained using the Laplace transform method. Full SPICE listings are included.
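
    The same kind of Markov model can be cross-checked without a circuit simulator by integrating the state equations dP/dt = Q^T P directly. The sketch below does this in Python for a two-unit parallel system with a common-cause failure rate; the states, rates and the absorbing-state simplification are illustrative, not the paper's exact examples.

    import numpy as np
    from scipy.integrate import solve_ivp

    lam, mu, lam_cc = 1e-3, 5e-2, 1e-4   # unit failure, repair, common cause

    # States: 0 = both up, 1 = one up, 2 = both down (treated as absorbing
    # here for brevity). Q[i, j] is the transition rate from state i to j.
    Q = np.array([
        [-(2*lam + lam_cc), 2*lam,       lam_cc],
        [ mu,              -(mu + lam),  lam   ],
        [ 0.0,              0.0,         0.0   ],
    ])

    sol = solve_ivp(lambda t, P: Q.T @ P, (0.0, 1000.0), [1.0, 0.0, 0.0],
                    t_eval=[100.0, 500.0, 1000.0])
    for t, P in zip(sol.t, sol.y.T):
        print(f"t={t:6.0f} h  P(system up)={P[0] + P[1]:.5f}")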

  20. Advantages and Drawbacks of Applying Periodic Time-Variant Modal Analysis to Spur Gear Dynamics

    DEFF Research Database (Denmark)

    Pedersen, Rune; Santos, Ilmar; Hede, Ivan Arthur

    2010-01-01

    ... to ensure sufficient accuracy of the results. The method of time-variant modal analysis is applied, and the changes in the fundamental and parametric resonance frequencies as a function of the rotational speed of the gears are found. By obtaining the stationary and parametric parts of the time-variant solution, the advantages and drawbacks of applying the methodology to wind turbine gearboxes are addressed and elucidated.

  1. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to retain their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage, unlike operational reliability: possible initial failures exist that are normally neglected in storage reliability models. In this paper, a new integrated technique is proposed to estimate and predict the storage reliability of products with possible initial failures, combining a non-parametric measure based on E-Bayesian estimates of current failure probabilities with a parametric measure based on the exponential reliability function. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that reliability test data of storage products, including items unexamined before and during the storage process, are available, providing more accurate estimates of both the initial failure probability and the storage failure probability. When storage reliability prediction, the main concern in this field, is to be made, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method, and a detailed comparison between the proposed and traditional methods, examining the rationality of the storage reliability assessment and prediction, is investigated. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying production quality.
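
    The E-Bayesian step above can be illustrated compactly: take the Bayes estimate of the failure probability under a Beta(1, b) prior and average it over a uniform hyperprior b ~ U(1, c). The closed form below, the hyperparameter c, and the inspection data are illustrative assumptions, not the paper's case study.

    import math

    def e_bayes_failure_prob(r, n, c=4.0):
        # Bayes posterior mean under Beta(1, b): (r + 1) / (n + 1 + b);
        # integrating b uniformly over [1, c] gives a logarithm.
        return (r + 1.0) / (c - 1.0) * math.log((n + 1.0 + c) / (n + 2.0))

    for year, (r, n) in {1: (0, 40), 2: (1, 35), 3: (2, 30)}.items():
        print(f"year {year}: p_hat = {e_bayes_failure_prob(r, n):.4f}")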

  2. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sallaberry, Cedric Jean-Marie. [Engineering Mechanics Corp. of Columbus, OH (United States)

    2018-02-01

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
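
    Margin (iii) above, (time at which the SL system fails) − (time at which the WL system fails), can be approximated with plain sampling when the link failure-time distributions are available; the Weibull placeholders below are invented, not the report's link models.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    t_wl = rng.weibull(2.0, n) * 50.0   # weak-link failure times (min)
    t_sl = rng.weibull(2.0, n) * 80.0   # strong-link failure times (min)
    margin = t_sl - t_wl

    # P(margin <= 0) is the probability of loss of assured safety here.
    print("P(LOAS) =", np.mean(margin <= 0.0))
    for x in np.quantile(margin, [0.05, 0.25, 0.50, 0.75, 0.95]):
        print(f"F({x:7.2f}) = {np.mean(margin <= x):.2f}")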

  3. Correlated patterns of neuropsychological and behavioral symptoms in frontal variant of Alzheimer disease and behavioral variant frontotemporal dementia: a comparative case study.

    Science.gov (United States)

    Li, Pan; Zhou, Yu-Ying; Lu, Da; Wang, Yan; Zhang, Hui-Hong

    2016-05-01

    Although the neuropathologic changes and diagnostic criteria for the neurodegenerative disorder Alzheimer's disease (AD) are well-established, the clinical symptoms vary largely. Symptomatically, frontal variant of AD (fv-AD) presents very similarly to behavioral variant frontotemporal dementia (bvFTD), which creates major challenges for differential diagnosis. Here, we report two patients who present with progressive cognitive impairment, early and prominent behavioral features, and significant frontotemporal lobe atrophy on magnetic resonance imaging, consistent with an initial diagnosis of probable bvFTD. However, multimodal functional neuroimaging revealed neuropathological data consistent with a diagnosis of probable AD for one patient (pathology distributed in the frontal lobes) and a diagnosis of probable bvFTD for the other patient (hypometabolism in the bilateral frontal lobes). In addition, the fv-AD patient presented with greater executive impairment and milder behavioral symptoms relative to the bvFTD patient. These cases highlight that recognition of these atypical syndromes using detailed neuropsychological tests, biomarkers, and multimodal neuroimaging will lead to greater accuracy in diagnosis and patient management.

  4. [Hazard function and life table: an introduction to the failure time analysis].

    Science.gov (United States)

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
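
    The identity underpinning the article, S(t) = exp(−∫₀ᵗ h(u) du), makes the life-table connection concrete: a piecewise-constant hazard gives a piecewise-exponential survival curve. The ages and hazards below are made up for the illustration.

    import numpy as np

    ages = np.array([0, 20, 40, 60, 80])           # interval starts (years)
    haz = np.array([0.002, 0.001, 0.005, 0.020])   # hazard per interval

    cum_haz = np.concatenate([[0.0], np.cumsum(haz * np.diff(ages))])
    survival = np.exp(-cum_haz)
    for a, s in zip(ages, survival):
        print(f"S({a:2d}) = {s:.3f}")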

  5. Bounds for the time to failure of hierarchical systems of fracture

    DEFF Research Database (Denmark)

    Gómez, J.B.; Vázquez-Prada, M.; Moreno, Y.

    1999-01-01

    For years limited Monte Carlo simulations have led to the suspicion that the time to failure of hierarchically organized load-transfer models of fracture is nonzero for sets of infinite size. This fact could have profound significance in engineering practice and also in geophysics. Here, we develop an exact algebraic iterative method to compute the successive time intervals for individual breaking in systems of height n in terms of the information calculated for the previous height n - 1. As a byproduct of this method, rigorous lower and upper bounds for the time to failure of very large systems...

  6. The mathematics of games an introduction to probability

    CERN Document Server

    Taylor, David G

    2014-01-01

    Dice, Coins, and Candy: Introduction; Probability; Candy (Yum)!. Wheels and More Dice: Roulette; Craps. Counting the Pokers: Cards and Counting; Seven Card Pokers; Texas Hold'Em; Bluffing. Windmills and Black Jacks?: Blackjack; Blackjack Variants. More Fun Dice!: Liar's Dice; Yahtzee; Zombie Dice. Board Games, Not "Bored" Games: Board Game Movement; Pay Day (The Board Game); Monopoly; Spread, Revisited. Can You Bet and Win?: Betting Systems; Gambler's Ruin. There Are More Games!: The Lottery; Bingo; Baccarat; Farkle; Backgammon; Memory. Appendices: A. Probabilities with Infinity; B. St. Petersburg Paradox; C. Prisoner's Dilemma and More Game Th

  7. Exploring Continuity of Care in Patients with Alcohol Use Disorders Using Time-Variant Measures

    NARCIS (Netherlands)

    S.C. de Vries (Sjoerd); A.I. Wierdsma (André)

    2008-01-01

    Background/Aims: We used time-variant measures of continuity of care to study fluctuations in long-term treatment use by patients with alcohol-related disorders. Methods: Data on service use were extracted from the Psychiatric Case Register for the Rotterdam Region, The Netherlands.

  8. Limited test data: The choice between confidence limits and inverse probability

    International Nuclear Information System (INIS)

    Nichols, P.

    1975-01-01

    For a unit which has been successfully designed to a high standard of reliability, any test programme of reasonable size will result in only a small number of failures. In these circumstances the failure rate estimated from the tests will depend on the statistical treatment applied. When a large number of units is to be manufactured, an unexpected high failure rate will certainly result in a large number of failures, so it is necessary to guard against optimistic unrepresentative test results by using a confidence limit approach. If only a small number of production units is involved, failures may not occur even with a higher than expected failure rate, and so one may be able to accept a method which allows for the possibility of either optimistic or pessimistic test results, and in this case an inverse probability approach, based on Bayes' theorem, might be used. The paper first draws attention to an apparently significant difference in the numerical results from the two methods, particularly for the overall probability of several units arranged in redundant logic. It then discusses a possible objection to the inverse method, followed by a demonstration that, for a large population and a very reasonable choice of prior probability, the inverse probability and confidence limit methods give the same numerical result. Finally, it is argued that a confidence limit approach is overpessimistic when a small number of production units is involved, and that both methods give the same answer for a large population. (author)
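
    The numerical gap between the two treatments is easiest to see in the zero-failure case. Below, the upper confidence limit for n successful tests is contrasted with a Bayesian posterior mean; the Jeffreys Beta(1/2, 1/2) prior is this sketch's choice, not necessarily the paper's.

    n, alpha = 100, 0.05

    # 95% upper confidence limit on p with 0 failures: solve (1 - p)^n = alpha.
    p_conf = 1.0 - alpha ** (1.0 / n)
    # Posterior mean under a Jeffreys Beta(1/2, 1/2) prior, 0 failures in n.
    p_bayes = 0.5 / (n + 1.0)

    print(f"95% upper confidence limit: {p_conf:.4f}")   # ~ 0.0295
    print(f"Bayes posterior mean:       {p_bayes:.4f}")  # ~ 0.0050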

  9. Reversion in variants from a duplication strain of Aspergillus nidulans

    International Nuclear Information System (INIS)

    Menezes, E.M.; Azevedo, J.L.

    1978-01-01

    Strains of Aspergillus nidulans with a chromosome segment in duplicate, one in normal position and one translocated to another chromosome, are unstable at mitosis. In addition to variants which result from deletions in either of the duplicate segments, which usually have improved morphology, they produce variants with deteriorated morphology. Three deteriorated variants reverted frequently to parental-type morphology, both spontaneously and after ultraviolet treatment. Of six reversions analysed genetically, five were due to suppressors and one was probably due to back mutation. The suppressors segregated as single genes and were not linked to the mutation which they suppress. The instability of these so-called 'deteriorated' variants is discussed in relation to mitotic instability phenomena in A. nidulans. (orig.)

  10. Analysis of Hepatitis C Virus Genotype 1b Resistance Variants in Japanese Patients Treated with Paritaprevir-Ritonavir and Ombitasvir.

    Science.gov (United States)

    Krishnan, Preethi; Schnell, Gretja; Tripathi, Rakesh; Beyer, Jill; Reisch, Thomas; Zhang, Xinyan; Setze, Carolyn; Rodrigues, Lino; Burroughs, Margaret; Redman, Rebecca; Chayama, Kazuaki; Kumada, Hiromitsu; Collins, Christine; Pilot-Matias, Tami

    2016-02-01

    Treatment of HCV genotype 1b (GT1b)-infected Japanese patients with paritaprevir (NS3/4A inhibitor boosted with ritonavir) and ombitasvir (NS5A inhibitor) in studies M12-536 and GIFT-I demonstrated high sustained virologic response (SVR) rates. The virologic failure rate was 3% (13/436) across the two studies. Analyses were conducted to evaluate the impact of baseline resistance-associated variants (RAVs) on treatment outcome and the emergence and persistence of RAVs in patients experiencing virologic failure. Baseline paritaprevir resistance-conferring variants in NS3 were infrequent, while Y93H in NS5A was the most prevalent ombitasvir resistance-conferring variant at baseline. A comparison of baseline prevalence of polymorphisms in Japanese and western patients showed that Q80L and S122G in NS3 and L28M, R30Q, and Y93H in NS5A were significantly more prevalent in Japanese patients. In the GIFT-I study, the prevalence of Y93H in NS5A varied between 13% and 21% depending on the deep-sequencing detection threshold. Among patients in whom Y93H comprised <1%, 1% to 40%, or >40% of the preexisting viral population, the 24-week SVR (SVR24) rates were >99% (276/277), 93% (38/41), and 76% (25/33), respectively, indicating that the prevalence of Y93H within a patient's viral population is a good predictor of treatment response. The predominant RAVs at the time of virologic failure were D168A/V in NS3 and Y93H alone or in combination with other variants in NS5A. While levels of NS3 RAVs declined over time, NS5A RAVs persisted through posttreatment week 48. Results from these analyses are informative in understanding the resistance profile of an ombitasvir- plus paritaprevir/ritonavir-based regimen in Japanese GT1b-infected patients. Copyright © 2016 Krishnan et al.

  11. Hydraulic mechanism and time-dependent characteristics of loose gully deposits failure induced by rainfall

    Directory of Open Access Journals (Sweden)

    Yong Wu

    2015-12-01

    Failure of loose gully deposits under the effect of rainfall contributes to the potential risk of debris flow. In the past decades, research on the hydraulic mechanism and time-dependent characteristics of loose deposit failure has frequently been reported; however, adequate practical measures for reducing debris flow are not available. In this context, a time-dependent model was established to determine the changes in the water table of loose deposits using hydraulic and topographic theories. In addition, the variation in water table with elapsed time was analyzed. Formulas for calculating the hydrodynamic and hydrostatic pressures on each strip and block unit of the deposit were proposed, and the slope stability and failure risk of the loose deposits were assessed based on the time-dependent hydraulic characteristics of the established model. Finally, the failure mechanism of deposits based on infinite slope theory was illustrated, with an example, by calculating the sliding force, anti-sliding force and residual sliding force applied to each slice. The results indicate that failure of gully deposits under the effect of rainfall is the result of continuously increasing hydraulic pressure and water table. The time-dependent characteristics of loose deposit failure are determined by the hydraulic properties, the drainage area of interest, the rainfall pattern, and the rainfall duration and intensity.
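
    The infinite-slope check described above reduces to a one-line factor of safety per slice; the sketch shows how a rising water table h_w drives the factor of safety down. All soil parameters are invented for illustration.

    import math

    def factor_of_safety(z=5.0, h_w=0.0, beta_deg=30.0,
                         c=5.0, phi_deg=32.0, gamma=19.0, gamma_w=9.81):
        # z: depth to slip surface (m); h_w: water height above it (m);
        # c: cohesion (kPa); phi: friction angle; gamma: unit weight (kN/m3).
        beta, phi = math.radians(beta_deg), math.radians(phi_deg)
        u = gamma_w * h_w                              # pore pressure (kPa)
        normal = gamma * z * math.cos(beta) ** 2       # normal stress (kPa)
        driving = gamma * z * math.sin(beta) * math.cos(beta)
        resisting = c + (normal - u) * math.tan(phi)
        return resisting / driving

    for h_w in (0.0, 1.0, 2.0):   # FS < 1 indicates slice failure
        print(f"h_w={h_w:.1f} m  FS={factor_of_safety(h_w=h_w):.2f}")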

  12. Modeling and real time simulation of an HVDC inverter feeding a weak AC system based on commutation failure study.

    Science.gov (United States)

    Mankour, Mohamed; Khiat, Mounir; Ghomri, Leila; Chaker, Abdelkader; Bessalah, Mourad

    2018-06-01

    This paper presents the modeling and study of a 12-pulse HVDC (High Voltage Direct Current) link based on real-time simulation, where the HVDC inverter is connected to a weak AC system. In order to study the dynamic performance of the HVDC link, two serious kinds of disturbance are applied at the HVDC converters: the first is a single-phase-to-ground AC fault and the second is a DC-link-to-ground fault. The study is based on two different modes of analysis: the first tests the performance of the DC control, and the second focuses on the effect of the protection function on the system behavior. This real-time simulation considers the strength of the AC system to which the inverter is connected and its relation to the capacity of the DC link. The results obtained are validated by means of the RT-lab platform using the digital real-time simulator Hypersim (OP-5600). The results show the effect of the DC control and the influence of the protection function in reducing the probability of commutation failures and in helping the inverter to recover from commutation failure even when the DC control fails to eliminate it. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Neoadjuvant chemotherapy prior to radical cystectomy for muscle-invasive bladder cancer with variant histology.

    Science.gov (United States)

    Vetterlein, Malte W; Wankowicz, Stephanie A M; Seisen, Thomas; Lander, Richard; Löppenberg, Björn; Chun, Felix K-H; Menon, Mani; Sun, Maxine; Barletta, Justine A; Choueiri, Toni K; Bellmunt, Joaquim; Trinh, Quoc-Dien; Preston, Mark A

    2017-11-15

    Neoadjuvant chemotherapy in pure urothelial bladder cancer provides a significant survival benefit. However, to the authors' knowledge, it is unknown whether this benefit persists in histological variants. The objective of the current study was to assess the effect of neoadjuvant chemotherapy on the probability of non-organ-confined disease and overall survival after radical cystectomy (RC) in patients with histological variants. Querying the National Cancer Data Base, the authors identified 2018 patients with histological variants who were undergoing RC for bladder cancer between 2003 and 2012. Variants were categorized as micropapillary or sarcomatoid differentiation, squamous cell carcinoma, adenocarcinoma, neuroendocrine tumors, and other histology. Logistic regression models estimated the odds of non-organ-confined disease at the time of RC for each histological variant, stratified by the receipt of neoadjuvant chemotherapy. Cox regression models were used to examine the effect of neoadjuvant chemotherapy on overall mortality in each variant subgroup. Patients with neuroendocrine tumors (odds ratio [OR], 0.16; 95% confidence interval [95% CI], 0.08-0.32 [P < .001]) had lower odds of non-organ-confined disease at the time of RC when treated with neoadjuvant chemotherapy. An overall survival benefit for neoadjuvant chemotherapy was only found in patients with neuroendocrine tumors (hazard ratio, 0.49; 95% CI, 0.33-0.74 [P = .001]). Patients with neuroendocrine tumors benefit from neoadjuvant chemotherapy, as evidenced by better overall survival and lower rates of non-organ-confined disease at the time of RC. For tumors with micropapillary differentiation, sarcomatoid differentiation, or adenocarcinoma, neoadjuvant chemotherapy decreased the frequency of non-organ-confined disease at the time of RC. However, this favorable effect did not translate into a statistically significant overall survival benefit for these patients, potentially due to the aggressive tumor biology. Cancer 2017;123:4346-55. © 2017 American Cancer Society.

  14. Estimating the probability of rare events: addressing zero failure data.

    Science.gov (United States)

    Quigley, John; Revie, Matthew

    2011-07-01

    Traditional statistical procedures for estimating the probability of an event result in an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but often these are ad hoc, relying on selecting methods dependent on the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than that in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero-event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The use of the minimax procedure provides a risk-averse inferential procedure where there are no events realized. A comparison is made with the MLE, and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be simply approximated by 1/(2.5n), where n is the number of trials. © 2011 Society for Risk Analysis.
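
    For intuition, the closing approximation can be placed next to the MLE it replaces when no events are realized; the sample sizes are arbitrary.

    def mle(r, n):
        return r / n                 # zero whenever no events are realized

    def minimax_zero(n):
        return 1.0 / (2.5 * n)       # the paper's simple approximation

    for n in (10, 50, 200):
        print(f"n={n:4d}  MLE={mle(0, n):.4f}  minimax~{minimax_zero(n):.4f}")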

  15. Predicting kidney graft failure using time-dependent renal function covariates

    NARCIS (Netherlands)

    de Bruijne, Mattheus H. J.; Sijpkens, Yvo W. J.; Paul, Leendert C.; Westendorp, Rudi G. J.; van Houwelingen, Hans C.; Zwinderman, Aeilko H.

    2003-01-01

    Chronic rejection and recurrent disease are the major causes of late graft failure in renal transplantation. To assess outcome, most researchers use Cox proportional hazard analysis with time-fixed covariates. We developed a model adding time-dependent renal function covariates to improve the

  16. First-passage Probability Estimation of an Earthquake Response of Seismically Isolated Containment Buildings

    International Nuclear Information System (INIS)

    Hahm, Dae-Gi; Park, Kwan-Soon; Koh, Hyun-Moo

    2008-01-01

    Awareness of seismic hazard and risk is increasing rapidly in the wake of huge earthquakes such as the 2008 Sichuan earthquake, which caused about 70,000 confirmed casualties and a 20 billion U.S. dollar economic loss. Since an earthquake load naturally contains various uncertainties, the safety of a structural system under earthquake excitation has been assessed by probabilistic approaches. In many structural applications of probabilistic safety assessment, the failure of a system is regarded as occurring when the response of the structure first crosses a limit barrier within a specified interval of time. The determination of such a failure probability is usually called the 'first-passage problem' and has been extensively studied during the last few decades. However, especially for structures which show significant nonlinear dynamic behavior, an effective and accurate method for estimating such a failure probability is not yet fully established. In this study, we present a new approach to evaluate the first-passage probability of the earthquake response of seismically isolated structures. The proposed method is applied to the seismic isolation system for the containment buildings of a nuclear power plant. From the numerical example, we verified that the proposed method gives accurate results with less computational effort than the conventional approaches.
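
    A brute-force reference for the quantity being approximated: the fraction of simulated excitation histories in which an SDOF surrogate of the isolated structure first crosses a barrier b within [0, T]. The oscillator parameters, noise scaling and barrier below are illustrative placeholders, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(2)
    wn, zeta, b = 2.0 * np.pi * 0.5, 0.15, 0.6   # 0.5 Hz, 15% damping
    dt, T, n_sim = 0.01, 20.0, 2000
    steps = int(T / dt)

    crossings = 0
    for _ in range(n_sim):
        x = v = 0.0
        a = rng.normal(0.0, np.sqrt(1.0 / dt), steps)  # white-noise accel
        for k in range(steps):
            # Semi-implicit Euler for x'' + 2*zeta*wn*x' + wn^2*x = -a(t)
            v += (-2.0 * zeta * wn * v - wn**2 * x - a[k]) * dt
            x += v * dt
            if abs(x) > b:
                crossings += 1
                break

    print(f"first-passage probability ~ {crossings / n_sim:.3f}")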

  17. Somatic cancer variant curation and harmonization through consensus minimum variant level data

    Directory of Open Access Journals (Sweden)

    Deborah I. Ritter

    2016-11-01

    Background: To truly achieve personalized medicine in oncology, it is critical to catalog and curate cancer sequence variants for their clinical relevance. The Somatic Working Group (WG) of the Clinical Genome Resource (ClinGen), in cooperation with ClinVar and multiple cancer variant curation stakeholders, has developed a consensus set of minimal variant level data (MVLD). MVLD is a framework of standardized data elements for curating cancer variants for clinical utility. With implementation of MVLD standards, and in a working partnership with ClinVar, we aim to streamline the somatic variant curation efforts in the community and reduce redundancy and time burden for the interpretation of cancer variants in clinical practice. Methods: We developed MVLD through a consensus approach by (i) reviewing clinical actionability interpretations from institutions participating in the WG, (ii) conducting an extensive literature search of clinical somatic interpretation schemas, and (iii) surveying cancer variant web portals. A forthcoming guideline on cancer variant interpretation from the Association for Molecular Pathology (AMP) can be incorporated into MVLD. Results: Along with harmonizing standardized terminology for allele interpretive and descriptive fields that are collected by many databases, the MVLD includes unique fields for cancer variants such as Biomarker Class, Therapeutic Context and Effect. In addition, MVLD includes recommendations for controlled semantics and ontologies. The Somatic WG is collaborating with ClinVar to evaluate MVLD use for somatic variant submissions. ClinVar is an open and centralized repository where sequencing laboratories can report summary-level variant data with clinical significance, and ClinVar accepts cancer variant data. Conclusions: We expect the use of the MVLD to streamline clinical interpretation of cancer variants, enhance interoperability among multiple redundant curation efforts, and increase submission of

  18. Probability of inadvertent operation of electrical components in harsh environments

    International Nuclear Information System (INIS)

    Knoll, A.

    1989-01-01

    A harsh environment, meaning here humidity and high temperature, can affect unsealed electrical components by causing leakage ground currents in ungrounded direct-current systems. The concern in a nuclear power plant is that such harsh environment conditions could cause inadvertent operation of normally deenergized components, which may have a safety-related isolation function. A harsh environment is a common-cause failure, and one way to approach the problem is to assume that all unsealed electrical components will simultaneously and inadvertently energize as a result of the environmental common cause. This assumption is unrealistically conservative. Test results indicated that the insulation resistances of any terminal block in harsh environments have a random distribution in the range of 1 to 270 kΩ, with a mean value of ∼59 kΩ. The objective of this paper is to evaluate a realistic conditional failure probability for inadvertent operation of electrical components in harsh environments. This value will thereafter be used in probabilistic safety evaluations of harsh environment events and will replace both the overconservative common-cause probability of 1 and the random failure probability used for mild environments

  19. A real-time expert system for nuclear power plant failure diagnosis and operational guide

    International Nuclear Information System (INIS)

    Naito, N.; Sakuma, A.; Shigeno, K.; Mori, N.

    1987-01-01

    A real-time expert system (DIAREX) has been developed to diagnose plant failures and to offer a corrective operational guide for boiling water reactor (BWR) power plants. The failure diagnosis model used in DIAREX was systematically developed, based mainly on deep knowledge supplemented by heuristics. Complex paradigms for knowledge representation were adopted, i.e., the process representation language and the failure propagation tree. The system is composed of a knowledge base, knowledge base editor, preprocessor, diagnosis processor, and display processor. The DIAREX simulation test has been carried out for many transient scenarios, including multiple failures, using a real-time full-scope simulator modeled after an 1100-MW(electric) BWR power plant. Test results showed that DIAREX was capable of diagnosing a plant failure quickly and of providing a corrective operational guide with a response time fast enough to offer valuable information to plant operators.

  20. Analysis of failure dependent test, repair and shutdown strategies for redundant trains

    International Nuclear Information System (INIS)

    Uryasev, S.; Samanta, P.

    1994-09-01

    Failure-dependent testing implies testing redundant components (or trains) when the failure of one component has been detected. The purpose of such testing is to detect any common cause failures (CCFs) of multiple components so that a corrective action, such as repair or plant shutdown, can be taken to reduce the residence time of multiple failures once a failure has been detected. This type of testing focuses on reducing the conditional risk of CCFs. Formulas for calculating the conditional failure probability of a two-train system under different test, repair and shutdown strategies are developed. A methodology is presented with an example calculation showing the risk-effectiveness of failure-dependent strategies for emergency diesel generators (EDGs) in nuclear power plants (NPPs)

  1. average probability of failure on demand estimation for burner

    African Journals Online (AJOL)

    HOD

    Pij – Probability from state i to j. 1. INTRODUCTION. In the process .... the numerical value of the PFD as result of components, sub-system ... ignored in probabilistic risk assessment it may lead to ...... Markov chains for a holistic modeling of SIS.

  2. Multifractals embedded in short time series: An unbiased estimation of probability moment

    Science.gov (United States)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant performance advantages over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance that is the focus of the present paper, the proposed factorial moment of continuous order can find various other uses, such as detecting nonextensive behaviors of a complex system and reconstructing the causality network between elements of a complex system.

  3. Late onset Pompe disease- new genetic variant: Case report ...

    African Journals Online (AJOL)

    The patient was not given enzyme replacement therapy due to cost but received high-protein therapy and oxygen supplementation using an oxygen extractor machine. She is worsening due to respiratory failure. Conclusion: This is a newly isolated genetic variant of late-onset Pompe disease, which presents with almost pure ...

  4. The probability of containment failure by direct containment heating in surry

    International Nuclear Information System (INIS)

    Pilch, M.M.; Allen, M.D.; Bergeron, K.D.; Tadios, E.L.; Stamps, D.W.; Spencer, B.W.; Quick, K.S.; Knudson, D.L.

    1995-05-01

    In a light-water reactor core melt accident, if the reactor pressure vessel (RPV) fails while the reactor coolant system (RCS) is at high pressure, the expulsion of molten core debris may pressurize the reactor containment building (RCB) beyond its failure pressure. A failure in the bottom head of the RPV, followed by melt expulsion and blowdown of the RCS, will entrain molten core debris in the high-velocity steam blowdown gas. This chain of events is called a high-pressure melt ejection (HPME). Four mechanisms may cause a rapid increase in pressure and temperature in the reactor containment: (1) blowdown of the RCS, (2) efficient debris-to-gas heat transfer, (3) exothermic metal-steam and metal-oxygen reactions, and (4) hydrogen combustion. These processes, which lead to increased loads on the containment building, are collectively referred to as direct containment heating (DCH). It is necessary to understand factors that enhance or mitigate DCH because the pressure load imposed on the RCB may lead to early failure of the containment.

  5. Analysis of immune-related loci identifies 48 new susceptibility variants for multiple sclerosis

    Science.gov (United States)

    Beecham, Ashley H; Patsopoulos, Nikolaos A; Xifara, Dionysia K; Davis, Mary F; Kemppinen, Anu; Cotsapas, Chris; Shahi, Tejas S; Spencer, Chris; Booth, David; Goris, An; Oturai, Annette; Saarela, Janna; Fontaine, Bertrand; Hemmer, Bernhard; Martin, Claes; Zipp, Frauke; D’alfonso, Sandra; Martinelli-Boneschi, Filippo; Taylor, Bruce; Harbo, Hanne F; Kockum, Ingrid; Hillert, Jan; Olsson, Tomas; Ban, Maria; Oksenberg, Jorge R; Hintzen, Rogier; Barcellos, Lisa F; Agliardi, Cristina; Alfredsson, Lars; Alizadeh, Mehdi; Anderson, Carl; Andrews, Robert; Søndergaard, Helle Bach; Baker, Amie; Band, Gavin; Baranzini, Sergio E; Barizzone, Nadia; Barrett, Jeffrey; Bellenguez, Céline; Bergamaschi, Laura; Bernardinelli, Luisa; Berthele, Achim; Biberacher, Viola; Binder, Thomas M C; Blackburn, Hannah; Bomfim, Izaura L; Brambilla, Paola; Broadley, Simon; Brochet, Bruno; Brundin, Lou; Buck, Dorothea; Butzkueven, Helmut; Caillier, Stacy J; Camu, William; Carpentier, Wassila; Cavalla, Paola; Celius, Elisabeth G; Coman, Irène; Comi, Giancarlo; Corrado, Lucia; Cosemans, Leentje; Cournu-Rebeix, Isabelle; Cree, Bruce A C; Cusi, Daniele; Damotte, Vincent; Defer, Gilles; Delgado, Silvia R; Deloukas, Panos; di Sapio, Alessia; Dilthey, Alexander T; Donnelly, Peter; Dubois, Bénédicte; Duddy, Martin; Edkins, Sarah; Elovaara, Irina; Esposito, Federica; Evangelou, Nikos; Fiddes, Barnaby; Field, Judith; Franke, Andre; Freeman, Colin; Frohlich, Irene Y; Galimberti, Daniela; Gieger, Christian; Gourraud, Pierre-Antoine; Graetz, Christiane; Graham, Andrew; Grummel, Verena; Guaschino, Clara; Hadjixenofontos, Athena; Hakonarson, Hakon; Halfpenny, Christopher; Hall, Gillian; Hall, Per; Hamsten, Anders; Harley, James; Harrower, Timothy; Hawkins, Clive; Hellenthal, Garrett; Hillier, Charles; Hobart, Jeremy; Hoshi, Muni; Hunt, Sarah E; Jagodic, Maja; Jelčić, Ilijas; Jochim, Angela; Kendall, Brian; Kermode, Allan; Kilpatrick, Trevor; Koivisto, Keijo; Konidari, Ioanna; Korn, Thomas; Kronsbein, Helena; Langford, Cordelia; Larsson, Malin; Lathrop, Mark; Lebrun-Frenay, Christine; Lechner-Scott, Jeannette; Lee, Michelle H; Leone, Maurizio A; Leppä, Virpi; Liberatore, Giuseppe; Lie, Benedicte A; Lill, Christina M; Lindén, Magdalena; Link, Jenny; Luessi, Felix; Lycke, Jan; Macciardi, Fabio; Männistö, Satu; Manrique, Clara P; Martin, Roland; Martinelli, Vittorio; Mason, Deborah; Mazibrada, Gordon; McCabe, Cristin; Mero, Inger-Lise; Mescheriakova, Julia; Moutsianas, Loukas; Myhr, Kjell-Morten; Nagels, Guy; Nicholas, Richard; Nilsson, Petra; Piehl, Fredrik; Pirinen, Matti; Price, Siân E; Quach, Hong; Reunanen, Mauri; Robberecht, Wim; Robertson, Neil P; Rodegher, Mariaemma; Rog, David; Salvetti, Marco; Schnetz-Boutaud, Nathalie C; Sellebjerg, Finn; Selter, Rebecca C; Schaefer, Catherine; Shaunak, Sandip; Shen, Ling; Shields, Simon; Siffrin, Volker; Slee, Mark; Sorensen, Per Soelberg; Sorosina, Melissa; Sospedra, Mireia; Spurkland, Anne; Strange, Amy; Sundqvist, Emilie; Thijs, Vincent; Thorpe, John; Ticca, Anna; Tienari, Pentti; van Duijn, Cornelia; Visser, Elizabeth M; Vucic, Steve; Westerlind, Helga; Wiley, James S; Wilkins, Alastair; Wilson, James F; Winkelmann, Juliane; Zajicek, John; Zindler, Eva; Haines, Jonathan L; Pericak-Vance, Margaret A; Ivinson, Adrian J; Stewart, Graeme; Hafler, David; Hauser, Stephen L; Compston, Alastair; McVean, Gil; De Jager, Philip; Sawcer, Stephen; McCauley, Jacob L

    2013-01-01

    Using the ImmunoChip custom genotyping array, we analysed 14,498 multiple sclerosis subjects and 24,091 healthy controls for 161,311 autosomal variants and identified 135 potentially associated regions (p-value < 1.0 × 10⁻⁴). These regions were followed up in an independent set of 14,802 multiple sclerosis subjects and 26,703 healthy controls. In these 80,094 individuals of European ancestry we identified 48 new susceptibility variants (p-value < 5.0 × 10⁻⁸), bringing the count of established multiple sclerosis risk variants to 110 in 103 discrete loci outside of the Major Histocompatibility Complex. With high-resolution Bayesian fine-mapping, we identified five regions where one variant accounted for more than 50% of the posterior probability of association. This study enhances the catalogue of multiple sclerosis risk variants and illustrates the value of fine-mapping in the resolution of GWAS signals. PMID:24076602

  6. Solutions to time variant problems of real-time expert systems

    Science.gov (United States)

    Yeh, Show-Way; Wu, Chuan-Lin; Hung, Chaw-Kwei

    1988-01-01

    Real-time expert systems for monitoring and control are driven by input data that change with time. One of the subtle problems of this field is the propagation of time-variant problems from rule to rule. This propagation problem is further complicated in a multiprogramming environment, where the expert system may issue test commands to the system to get data and may access time-consuming devices to retrieve data for concurrent reasoning. Two approaches are used to handle the flood of input data. Snapshots can be taken to freeze the system from time to time; the expert system treats the system as a stationary one and traces changes by comparing consecutive snapshots. In the other approach, when an input is available, the rules associated with it are evaluated. For both approaches, if the premise condition of a fired rule changes to false, the downstream rules should be deactivated. If the status change is due to the disappearance of a transient problem, actions taken by fired downstream rules that are no longer true may need to be undone. If a downstream rule is being evaluated, it should not be fired. Three mechanisms for solving this problem are discussed: tracing, backward checking, and censor setting. In the forward tracing mechanism, when the premise conditions of a fired rule become false, the premise conditions of downstream rules that have been fired or are being evaluated due to the firing of that rule are reevaluated; a tree with its root at the rule being deactivated is traversed. In the backward checking mechanism, when a rule is being fired, the expert system checks back on the premise conditions of the upstream rules that resulted in evaluation of the rule to see whether it should be fired; the root of the tree being traversed is the rule being fired. In the censor setting mechanism, when a rule is to be evaluated, a censor is constructed based on the premise conditions of the upstream rules and the censor is evaluated just before the rule is

  7. Hairy cell leukemia-variant

    International Nuclear Information System (INIS)

    Quadri, Mohammad I.; Al-Sheikh, Iman H.

    2001-01-01

    Hairy cell leukaemia variant is a very rare chronic lymphoproliferative disorder and is closely related to hairy cell leukaemia. We hereby describe a case of hairy cell leukaemia variant for the first time in Saudi Arabia. An elderly Saudi man presented with pallor, massive splenomegaly, and moderate hepatomegaly. Hemoglobin was 7.7 g/dl, platelets were 134 x 10⁹/l, and the white blood count was 140 x 10⁹/l, with 97% being abnormal lymphoid cells with cytoplasmic projections. The morphology, cytochemistry, and immunophenotype of the lymphoid cells were classical of hairy cell leukaemia variant. The bone marrow was easily aspirated and findings were consistent with hairy cell leukaemia variant. (author)

  8. Contributions of Function-Altering Variants in Genes Implicated in Pubertal Timing and Body Mass for Self-Limited Delayed Puberty.

    Science.gov (United States)

    Howard, Sasha R; Guasti, Leonardo; Poliandri, Ariel; David, Alessia; Cabrera, Claudia P; Barnes, Michael R; Wehkalampi, Karoliina; O'Rahilly, Stephen; Aiken, Catherine E; Coll, Anthony P; Ma, Marcella; Rimmington, Debra; Yeo, Giles S H; Dunkel, Leo

    2018-02-01

    Self-limited delayed puberty (DP) is often associated with a delay in physical maturation, but, although highly heritable, the causal genetic factors remain elusive. Genome-wide association studies of the timing of puberty have identified multiple loci for age at menarche in females and voice break in males, particularly in pathways controlling energy balance. We sought to assess the contribution of rare variants in such genes to the phenotype of familial DP. We performed whole-exome sequencing in 67 pedigrees (125 individuals with DP and 35 unaffected controls) from our unique cohort of familial self-limited DP. Using a whole-exome sequencing filtering pipeline, one candidate gene [fat mass and obesity-associated gene (FTO)] was identified. In silico, in vitro, and mouse model studies were performed to investigate the pathogenicity of FTO variants and the timing of puberty in FTO+/- mice. We identified potentially pathogenic, rare variants in 283 genes in linkage disequilibrium with genome-wide association study loci for age at menarche. Of these, five genes were implicated in the control of body mass. After filtering for segregation with trait, one candidate, FTO, was retained. Two FTO variants, found in 14 affected individuals from three families, were also associated with leanness in these patients with DP. One variant (p.Leu44Val) demonstrated altered demethylation activity of the mutant protein in vitro. Fto+/- mice displayed a significantly delayed timing of pubertal onset (P < 0.05). Variants in genes implicated in the timing of puberty in the general population may contribute to the pathogenesis of self-limited DP. Copyright © 2017 Endocrine Society

  9. Probability and Confidence Trade-space (PACT) Evaluation: Accounting for Uncertainty in Sparing Assessments

    Science.gov (United States)

    Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael

    2012-01-01

    There are two general shortcomings to the current annual sparing assessment: 1. The vehicle functions are currently assessed according to confidence targets, which can be misleading, being either overly conservative or optimistic. 2. The current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. There are two major categories of uncertainty that impact sparing assessment: (a) aleatory uncertainty: natural variability in the distribution of actual failures around a Mean Time Between Failures (MTBF); (b) epistemic uncertainty: lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF. We propose an approach to revise confidence targets and account for both categories of uncertainty, an approach we call Probability and Confidence Trade-space (PACT) evaluation.

  10. Association of Adiposity Genetic Variants With Menarche Timing in 92,105 Women of European Descent

    NARCIS (Netherlands)

    Fernández-Rhodes, L.; Demerath, E.W.; Cousminer, D.L.; Tao, R.; Dreyfus, J.G.; Esko, T.; Smith, A.V.; Gudnason, V.; Harris, T.B.; Launer, L.; McArdle, P.F.; Yerges-Armstrong, L.M.; Elks, C.E.; Strachan, D.P.; Kutalik, Z.; Vollenweider, P.; Feenstra, B.; Boyd, H.A.; Metspalu, A.; Mihailov, E.; Broer, L.; Zillikens, M.C.; Oostra, B.A.; van Duijn, C.M.; Lunetta, K.L.; Perry, J.R.; Murray, A.; Koller, D.L.; Lai, D.; Corre, T.; Toniolo, D.; Albrecht, E.; Stöckl, D.; Grallert, H.; Gieger, C.; Hayward, C.; Polasek, O.; Rudan, I.; Wilson, J.F.; He, C.; Kraft, P.; Hu, F.B.; Hunter, D.J.; Hottenga, J.J.; Willemsen, G.; Boomsma, D.I.; Byrne, E.M.; Martin, N.G.; Montgomery, G.W.; Warrington, N.M.; Pennell, C.E.; Stolk, L.; Visser, J.A.; Hofman, A.; Uitterlinden, A.G.; Rivadeneira, F.; Lin, P.; Fisher, S.L.; Bierut, L.J.; Crisponi, L.; Porcu, E.; Mangino, M.; Zhai, G.; Spector, T.D.; Buring, J.E.; Rose, L.M.; Ridker, P.M.; Poole, C.; Hirschhorn, J.N.; Murabito, J.M.; Chasman, D.I.; Widén, E.; North, K.E.; Ong, K.K.; Franceschini, N.

    2013-01-01

    Obesity is of global health concern. There are well-described inverse relationships between female pubertal timing and obesity. Recent genome-wide association studies of age at menarche identified several obesity-related variants. Using data from the ReproGen Consortium, we employed meta-analytical

  11. Pervasive within-Mitochondrion Single-Nucleotide Variant Heteroplasmy as Revealed by Single-Mitochondrion Sequencing

    Directory of Open Access Journals (Sweden)

    Jacqueline Morris

    2017-12-01

    Summary: A number of mitochondrial diseases arise from single-nucleotide variant (SNV) accumulation in multiple mitochondria. Here, we present a method for identification of variants present at the single-mitochondrion level in individual mouse and human neuronal cells, allowing for extremely high-resolution study of mitochondrial mutation dynamics. We identified extensive heteroplasmy between individual mitochondria, along with three high-confidence variants in mouse and one in human that were present in multiple mitochondria across cells. The pattern of variation revealed by single-mitochondrion data shows surprisingly pervasive levels of heteroplasmy in inbred mice. The distribution of SNV loci suggests inheritance of variants across generations, resulting in Poisson jackpot lines with large SNV load. Comparison of human and mouse variants suggests that the two species might employ distinct modes of somatic segregation. Single-mitochondrion resolution revealed mitochondrial mutational dynamics that we hypothesize to affect risk probabilities for mutations reaching disease thresholds. Morris et al. use independent sequencing of multiple individual mitochondria from mouse and human brain cells to show high pervasiveness of mutations. The mutations are heteroplasmic within single mitochondria and within and between cells. These findings suggest mechanisms by which mutations accumulate over time, resulting in mitochondrial dysfunction and disease. Keywords: single mitochondrion, single cell, human neuron, mouse neuron, single-nucleotide variation

  12. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution, using as data points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); we then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fits to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) relative to the original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations, ranging from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
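
    The winning strategy is straightforward to prototype. The sketch below simulates interval-censored retention times and fits a lognormal by least squares on the empirical cumulative distribution; the true parameters, sample size, 4-hour sampling grid, and optimiser settings are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from scipy import stats, optimize

# Sketch of the best-performing strategy from the abstract: fit a parametric
# CDF to the empirical cumulative distribution of interval-censored
# retention-time data.

rng = np.random.default_rng(1)
true_dist = stats.lognorm(s=0.6, scale=8.0)        # retention time, hours
samples = true_dist.rvs(200, random_state=rng)

edges = np.arange(4.0, 49.0, 4.0)                  # upper bounds of intervals
ecdf = np.array([(samples <= e).mean() for e in edges])

def sse(params):
    """Sum of squared deviations between fitted and empirical CDF."""
    s, scale = params
    if s <= 0 or scale <= 0:
        return np.inf
    return np.sum((stats.lognorm.cdf(edges, s, scale=scale) - ecdf) ** 2)

res = optimize.minimize(sse, x0=[1.0, 5.0], method="Nelder-Mead")
print("fitted (s, scale):", np.round(res.x, 3), " true: (0.6, 8.0)")
```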

  13. Development of component failure data for seismic risk analysis

    International Nuclear Information System (INIS)

    Fray, R.R.; Moulia, T.A.

    1981-01-01

    This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. A single-variable representation, using peak horizontal ground acceleration to characterize earthquake severity, was employed. A multiple-variable representation would allow direct consideration of vertical accelerations and the spectral nature of earthquakes but would have added such complexity that the study would not have been feasible. Vertical accelerations and spectral nature were indirectly considered because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. 'Anchor points' for ramp functions were selected by assuming a zero probability of failure at code-allowable stress levels and unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)
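
    The two fragility forms are simple enough to state directly in code. The sketch below implements them with illustrative anchor accelerations (in units of g); the anchor values themselves are assumptions, not figures from the study.

```python
# Minimal sketch of the two component fragility forms described above.

def ramp_failure_probability(a: float, a_allow: float, a_ult: float) -> float:
    """Ramp function: zero failure probability at the code-allowable
    acceleration, rising linearly to one at the ultimate acceleration."""
    if a <= a_allow:
        return 0.0
    if a >= a_ult:
        return 1.0
    return (a - a_allow) / (a_ult - a_allow)

def step_failure_probability(a: float, a_qual: float) -> float:
    """Step function: failure assumed certain above the qualification level."""
    return 1.0 if a > a_qual else 0.0

for a in (0.2, 0.5, 0.8):   # peak ground accelerations, g (illustrative)
    print(a, ramp_failure_probability(a, 0.4, 0.9),
          step_failure_probability(a, 0.6))
```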

  14. Estimating the empirical probability of submarine landslide occurrence

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model, specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test on age dates representing the main failure episode of the Holocene Storegga landslide complex.
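
    The core Poisson-Gamma update is compact. The sketch below shows the conjugate posterior for the rate λ and a closed-form predictive probability of at least one event in a future window; the prior hyperparameters, event count, and record length are illustrative, and the paper's treatment of age-dating uncertainty and open intervals is not reproduced here.

```python
from scipy import stats

# Conjugate Poisson-Gamma update: with a Gamma(alpha, beta) prior on the
# landslide rate lam (events per kyr) and n dated events in a record of
# length T kyr, the posterior is Gamma(alpha + n, beta + T).

alpha, beta = 1.0, 1.0          # hypothetical prior hyperparameters
n_events, T = 4, 20.0           # e.g. 4 landslides imaged in a 20 kyr record

posterior = stats.gamma(a=alpha + n_events, scale=1.0 / (beta + T))
lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior mean rate: {posterior.mean():.3f} events/kyr")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")

# Predictive probability of at least one event in the next dt kyr,
# integrating over lam (negative-binomial form).
dt = 1.0
p_next = 1 - ((beta + T) / (beta + T + dt)) ** (alpha + n_events)
print(f"P(at least one event in next {dt:.0f} kyr): {p_next:.3f}")
```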

  15. Estimating the Probability of a Rare Event Over a Finite Time Horizon

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; L'Ecuyer, Pierre; Rubino, Gerardo; Tuffin, Bruno

    2007-01-01

    We study an approximation for the zero-variance change of measure to estimate the probability of a rare event in a continuous-time Markov chain. The rare event occurs when the chain reaches a given set of states before some fixed time limit. The jump rates of the chain are expressed as functions of

  16. Hybrid time-variant reliability estimation for active control structures under aleatory and epistemic uncertainties

    Science.gov (United States)

    Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui

    2018-04-01

    Considering that multi-source uncertainties, from inherent nature as well as the external environment, are unavoidable and severely affect controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, uncertainty quantification analysis and time-variant reliability estimation for closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of the controlled response histories are first established, with the random quantities treated explicitly. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for vibration control systems and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the methodology developed.

  17. Statistical Analysis Of Failure Strength Of Material Using Weibull Distribution

    International Nuclear Information System (INIS)

    Entin Hartini; Mike Susmikanti; Antonius Sitompul

    2008-01-01

    In evaluating the strength of ceramic and glass materials, a statistical approach is necessary. The strength of ceramics and glass depends on specimen size and on the size distribution of flaws in these materials. The distribution of strength for a ductile material is narrow and close to a Gaussian distribution, while the strength of brittle materials such as ceramics and glass follows a Weibull distribution. The Weibull distribution is an indicator of the failure of material strength resulting from a distribution of flaw sizes. In this paper, the cumulative failure probability as a function of material strength, the cumulative probability of failure versus fracture stress, and the cumulative reliability of the material were calculated. Statistical criteria supporting the strength analysis of silicon nitride were computed using MATLAB. (author)
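
    The quantities named above follow directly from the two-parameter Weibull model. The sketch below evaluates the cumulative failure probability and reliability over a range of stresses; the Weibull modulus and characteristic strength are illustrative values for a brittle ceramic, not the paper's fitted parameters.

```python
import numpy as np

# Two-parameter Weibull strength statistics: the cumulative failure
# probability at stress sigma is F(sigma) = 1 - exp(-(sigma/s0)**m),
# and reliability is R = 1 - F.

m, s0 = 10.0, 450.0      # Weibull modulus, characteristic strength (MPa)
stress = np.array([200.0, 300.0, 400.0, 450.0, 500.0])

failure_prob = 1 - np.exp(-(stress / s0) ** m)
for s, f in zip(stress, failure_prob):
    print(f"sigma = {s:5.0f} MPa  P(failure) = {f:.4f}  R = {1 - f:.4f}")
```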

  18. Congenital anatomic variants of the kidney and ureter: a pictorial essay.

    Science.gov (United States)

    Srinivas, M R; Adarsh, K M; Jeeson, Riya; Ashwini, C; Nagaraj, B R

    2016-03-01

    Congenital renal parenchymal and pelvicalyceal abnormalities span a wide spectrum. Most, such as ectopia, crossed fused kidney, and horseshoe kidney, are asymptomatic, while a few become complicated, leading to renal failure and death. It is very important for the radiologist to identify these anatomic variants and guide clinicians in surgical and therapeutic procedures. Cross-sectional imaging with volume-rendered technique/maximum intensity projection has surpassed ultrasonography and IVU in the identification and interpretation of some of these variants.

  19. Differential Expression Profile of ZFX Variants Discriminates Breast Cancer Subtypes

    Science.gov (United States)

    Pourkeramati, Fatemeh; Asadi, Malek Hossein; Shakeri, Shahryar; Farsinejad, Alireza

    2018-05-13

    ZFX is a transcriptional regulator in embryonic stem cells that plays an important role in pluripotency and self-renewal. ZFX is widely expressed in pluripotent stem cells and is down-regulated during differentiation of embryonic stem cells. ZFX has five different variants that encode three different protein isoforms. While several reports have documented the overexpression of ZFX in a variety of somatic cancers, the expression of ZFX-spliced variants in cancer cells is not well understood. We investigated the expression of ZFX variants in a series of breast cancer tissues and cell lines using quantitative PCR. The expression of ZFX variant 1/3 was higher in tumor tissue compared to marginal tissue. In contrast, ZFX variant 5 was down-regulated in tumor tissues. While ZFX variant 1/3 and ZFX variant 5 expression significantly increased in low-grade tumors, ZFX variant 4 was strongly expressed in high-grade tumors and in those demonstrating lymphatic invasion. In addition, our results revealed a significant association between HER2 status and the expression of ZFX-spliced variants. Our data suggest that the expression of ZFX-spliced transcripts varies between different types of breast cancer and may contribute to the tumorigenesis process. Hence, ZFX-spliced transcripts could be considered novel tumor markers with probable value in the diagnosis, prognosis, and therapy of breast cancer.

  20. Reaction Times to Consecutive Automation Failures: A Function of Working Memory and Sustained Attention.

    Science.gov (United States)

    Jipp, Meike

    2016-12-01

    This study explored whether working memory and sustained attention influence cognitive lock-up, which is a delay in the response to consecutive automation failures. Previous research has demonstrated that the information that automation provides about failures, and the time pressure associated with a task, influence cognitive lock-up. Previous research has also demonstrated considerable variability in cognitive lock-up between participants, which is why individual differences might influence it. The present study tested whether working memory (including flexibility in executive functioning) and sustained attention might be crucial in this regard. Eighty-five participants were asked to monitor automated aircraft functions. The experimental manipulation consisted of whether or not an initial automation failure was followed by a consecutive failure. Reaction times to the failures were recorded. Participants' working-memory and sustained-attention abilities were assessed with standardized tests. As expected, participants' reactions to consecutive failures were slower than their reactions to initial failures. In addition, working-memory and sustained-attention abilities enhanced the speed with which participants reacted to failures, more so for consecutive than for initial failures. The findings highlight that operators with better working memory and sustained attention have small advantages when initial failures occur, but their advantages increase across consecutive failures. The results stress the need to consider personnel selection strategies to mitigate cognitive lock-up in general, and training procedures to enhance the performance of low-ability operators. © 2016, Human Factors and Ergonomics Society.

  1. Real time failure detection in unreinforced cementitious composites with triboluminescent sensor

    International Nuclear Information System (INIS)

    Olawale, David O.; Kliewer, Kaitlyn; Okoye, Annuli; Dickens, Tarik J.; Uddin, Mohammed J.; Okoli, Okenwa I.

    2014-01-01

    The in-situ triboluminescent optical fiber (ITOF) sensor has an integrated sensing and transmission component that converts the energy from damage events like impacts and crack propagation into optical signals that are indicative of the magnitude of damage in composite structures like concrete bridges. Utilizing the triboluminescence (TL) property of ZnS:Mn, the ITOF sensor has been successfully integrated into unreinforced cementitious composite beams to create multifunctional smart structures with in-situ failure detection capabilities. The fabricated beams were tested under flexural loading, and real time failure detection was made by monitoring the TL signals generated by the integrated ITOF sensor. Tested beam samples emitted distinctive TL signals at the instance of failure. In addition, we report herein a new and promising approach to damage characterization using TL emission profiles. Analysis of TL emission profiles indicates that the ITOF sensor responds to crack propagation through the beam even when not in contact with the crack. Scanning electron microscopy analysis indicated that fracto-triboluminescence was responsible for the TL signals observed at the instance of beam failure. -- Highlights: • Developed a new approach to triboluminescence (TL)-based sensing with ZnS:Mn. • Damage-induced excitation of ZnS:Mn enabled real time damage detection in composite. • Based on sensor position, correlation exists between TL signal and failure stress. • Introduced a new approach to damage characterization with TL profile analysis

  2. Low-abundance HIV drug-resistant viral variants in treatment-experienced persons correlate with historical antiretroviral use.

    Science.gov (United States)

    Le, Thuy; Chiarella, Jennifer; Simen, Birgitte B; Hanczaruk, Bozena; Egholm, Michael; Landry, Marie L; Dieckhaus, Kevin; Rosen, Marc I; Kozal, Michael J

    2009-06-29

    It is largely unknown how frequently low-abundance HIV drug-resistant variants at levels under the limit of detection of conventional genotyping (<20% of quasi-species) are present in antiretroviral-experienced persons experiencing virologic failure. Further, the clinical implications of low-abundance drug-resistant variants at the time of virologic failure are unknown. Plasma samples from 22 antiretroviral-experienced subjects collected at the time of virologic failure (viral load 1380 to 304,000 copies/mL) were obtained from a specimen bank (from 2004-2007). The prevalence and profile of drug-resistant mutations were determined using Sanger sequencing and ultra-deep pyrosequencing. Genotypes were interpreted using the Stanford HIV database algorithm. Antiretroviral treatment histories were obtained by chart review and correlated with drug-resistant mutations. Low-abundance drug-resistant mutations were detected in all 22 subjects by deep sequencing and in only 3 subjects by Sanger sequencing. In total they accounted for 90 of 247 mutations (36%) detected by deep sequencing; the majority of these (95%) were not detected by standard genotyping. A mean of 4 additional mutations per subject were detected by deep sequencing (p<0.0001, 95% CI: 2.85-5.53). The additional low-abundance drug-resistant mutations increased a subject's genotypic resistance to one or more antiretrovirals in 17 of 22 subjects (77%). When correlated with subjects' antiretroviral treatment histories, the additional low-abundance drug-resistant mutations correlated with the failing antiretroviral drugs in 21% of subjects and with historical antiretroviral use in 79% of subjects (OR, 13.73; 95% CI, 2.5-74.3, p = 0.0016). Low-abundance HIV drug-resistant mutations in antiretroviral-experienced subjects at the time of virologic failure can increase a subject's overall burden of resistance, yet commonly go unrecognized by conventional genotyping. The majority of unrecognized resistant mutations correlate with

  3. Low-abundance HIV drug-resistant viral variants in treatment-experienced persons correlate with historical antiretroviral use.

    Directory of Open Access Journals (Sweden)

    Thuy Le

    BACKGROUND: It is largely unknown how frequently low-abundance HIV drug-resistant variants at levels under the limit of detection of conventional genotyping (<20% of quasi-species) are present in antiretroviral-experienced persons experiencing virologic failure. Further, the clinical implications of low-abundance drug-resistant variants at the time of virologic failure are unknown. METHODOLOGY/PRINCIPAL FINDINGS: Plasma samples from 22 antiretroviral-experienced subjects collected at the time of virologic failure (viral load 1380 to 304,000 copies/mL) were obtained from a specimen bank (from 2004-2007). The prevalence and profile of drug-resistant mutations were determined using Sanger sequencing and ultra-deep pyrosequencing. Genotypes were interpreted using the Stanford HIV database algorithm. Antiretroviral treatment histories were obtained by chart review and correlated with drug-resistant mutations. Low-abundance drug-resistant mutations were detected in all 22 subjects by deep sequencing and in only 3 subjects by Sanger sequencing. In total they accounted for 90 of 247 mutations (36%) detected by deep sequencing; the majority of these (95%) were not detected by standard genotyping. A mean of 4 additional mutations per subject were detected by deep sequencing (p<0.0001, 95% CI: 2.85-5.53). The additional low-abundance drug-resistant mutations increased a subject's genotypic resistance to one or more antiretrovirals in 17 of 22 subjects (77%). When correlated with subjects' antiretroviral treatment histories, the additional low-abundance drug-resistant mutations correlated with the failing antiretroviral drugs in 21% of subjects and with historical antiretroviral use in 79% of subjects (OR, 13.73; 95% CI, 2.5-74.3, p = 0.0016). CONCLUSIONS/SIGNIFICANCE: Low-abundance HIV drug-resistant mutations in antiretroviral-experienced subjects at the time of virologic failure can increase a subject's overall burden of resistance, yet commonly go unrecognized by conventional

  4. Cardiac dysfunction in heart failure: the cardiologist's love affair with time.

    Science.gov (United States)

    Brutsaert, Dirk L

    2006-01-01

    Translating research into clinical practice has been a challenge throughout medical history. From the present review, it should be clear that this is particularly the case for heart failure. As a consequence, public awareness of this disease has been dishearteningly low, despite its prognosis being worse than that of most cancers and many other chronic diseases. We explore how, over the past 150 years since Ludwig and Marey, concepts about the evaluation of cardiac performance in patients with heart failure have emerged. From this historical-physiologic perspective, we have seen how 3 increasingly reductionist approaches or schools of thought have evolved in parallel, that is, an input-output approach, a hemodynamic pump approach, and a muscular pump approach. Each of these has provided complementary insights into the pathophysiology of heart failure and has resulted in measurements or derived indices, some of which are still in use in present-day cardiology. From the third, most reductionist, muscular pump approach, we have learned that myocardial and ventricular relaxation properties as well as temporal and spatial nonuniformities have been largely overlooked in the other two approaches (input-output and hemodynamic pump). A key message from the present review is that relaxation and nonuniformities can be fully understood only from within the time-space continuum of cardiac pumping. As cyclicity and rhythm are, in some way, the most basic aspects of cardiac function, considerations of time should dominate any measurement of cardiac performance as a muscular pump. Any measurement that is blind to the arrow of cardiac time should therefore be interpreted with caution. We have seen how the escape from the time domain, as with the calculation of LV ejection fraction, fascinating though it may be, has undoubtedly served to hinder a rational scientific debate on the recent, so-called systolic-diastolic heart failure controversy. Lacking appreciation of early

  5. De Novo Truncating Variants in SON Cause Intellectual Disability, Congenital Malformations, and Failure to Thrive.

    Science.gov (United States)

    Tokita, Mari J; Braxton, Alicia A; Shao, Yunru; Lewis, Andrea M; Vincent, Marie; Küry, Sébastien; Besnard, Thomas; Isidor, Bertrand; Latypova, Xénia; Bézieau, Stéphane; Liu, Pengfei; Motter, Connie S; Melver, Catherine Ward; Robin, Nathaniel H; Infante, Elena M; McGuire, Marianne; El-Gharbawy, Areeg; Littlejohn, Rebecca O; McLean, Scott D; Bi, Weimin; Bacino, Carlos A; Lalani, Seema R; Scott, Daryl A; Eng, Christine M; Yang, Yaping; Schaaf, Christian P; Walkiewicz, Magdalena A

    2016-09-01

    SON is a key component of the spliceosomal complex and a critical mediator of constitutive and alternative splicing. Additionally, SON has been shown to influence cell-cycle progression, genomic integrity, and maintenance of pluripotency in stem cell populations. The clear functional relevance of SON in coordinating essential cellular processes and its presence in diverse human tissues suggests that intact SON might be crucial for normal growth and development. However, the phenotypic effects of deleterious germline variants in SON have not been clearly defined. Herein, we describe seven unrelated individuals with de novo variants in SON and propose that deleterious variants in SON are associated with a severe multisystem disorder characterized by developmental delay, persistent feeding difficulties, and congenital malformations, including brain anomalies. Copyright © 2016 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  6. Asymptotic behavior of total times For jobs that must start over if a failure occurs

    DEFF Research Database (Denmark)

    Asmussen, Søren; Fiorini, Pierre; Lipsky, Lester

    the ready queue, or it may restart the task. The behavior of systems under the first two scenarios is well documented, but the third (RESTART) has resisted detailed analysis. In this paper we derive tight asymptotic relations between the distribution of task times without failures and the total time when failures are included, for any failure distribution. In particular, we show that if the task time distribution has an unbounded support then the total time distribution H is always heavy-tailed. Asymptotic expressions are given for the tail of H in various scenarios. The key ingredients of the analysis

  7. Asymptotic behaviour of total times for jobs that must start over if a failure occurs

    DEFF Research Database (Denmark)

    Asmussen, Søren; Fiorini, Pierre; Lipsky, Lester

    2008-01-01

    the ready queue, or it may restart the task. The behavior of systems under the first two scenarios is well documented, but the third (RESTART) has resisted detailed analysis. In this paper we derive tight asymptotic relations between the distribution of task times without failures and the total time when failures are included, for any failure distribution. In particular, we show that if the task-time distribution has an unbounded support, then the total-time distribution H is always heavy tailed. Asymptotic expressions are given for the tail of H in various scenarios. The key ingredients of the analysis
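
    The RESTART effect is easy to demonstrate numerically. The following Monte Carlo sketch assumes a fixed task time and exponentially distributed times between failures; it reproduces the known mean (e^(lam*t) - 1)/lam and shows the long upper tail of the total-time distribution. Note it illustrates the setting of these papers rather than their asymptotic analysis, and the provably heavy tails arise when the task time itself has unbounded support.

```python
import numpy as np

# RESTART model: a task needing t_task units of uninterrupted work restarts
# from scratch whenever a failure occurs (failure rate lam). Parameters are
# illustrative.

rng = np.random.default_rng(2)
t_task, lam, n_sim = 1.0, 1.5, 100_000

totals = np.zeros(n_sim)
for i in range(n_sim):
    total = 0.0
    while True:
        time_to_failure = rng.exponential(1.0 / lam)
        if time_to_failure >= t_task:      # task completes before next failure
            total += t_task
            break
        total += time_to_failure           # work lost, start over
    totals[i] = total

mean_theory = (np.exp(lam * t_task) - 1) / lam
print(f"mean total time: {totals.mean():.2f}  (theory: {mean_theory:.2f})")
print(f"99.9th percentile: {np.quantile(totals, 0.999):.1f}")
```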

  8. Effect of Remote Back-Up Protection System Failure on the Optimum Routine Test Time Interval of Power System Protection

    Directory of Open Access Journals (Sweden)

    Y Damchi

    2013-12-01

    Appropriate operation of the protection system is one of the factors needed to achieve desirable reliability in power systems, which vitally requires routine testing of the protection system. Precise determination of the optimum routine test time interval (ORTTI) plays a vital role in predicting the maintenance costs of the protection system. In most previous studies, the ORTTI has been determined while the remote back-up protection system was considered fully reliable. This assumption is not exactly correct, since the remote back-up protection system, like the primary protection system, may operate incorrectly or fail to operate. Therefore, in order to determine the ORTTI, an extended Markov model is proposed in this paper that considers the failure probability of the remote back-up protection system. In the proposed Markov model of the protection systems, the monitoring facility is taken into account. Moreover, it is assumed that the primary and back-up protection systems are maintained simultaneously. Results show that the effect of remote back-up protection system failures on the reliability indices and optimum routine test intervals of the protection system is considerable.
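
    As background to the Markov treatment, the classical single-component approximation below shows why an optimum interval exists at all: testing too rarely leaves failures undetected, while testing too often accrues test outage. The failure rate, test duration, and the unavailability formula U(T) = lam*T/2 + t_test/T are textbook assumptions, not the paper's extended model.

```python
import numpy as np

# Classical approximation for an optimum surveillance test interval: with
# failure rate lam and a test outage of t_test per test, mean unavailability
# over interval T is roughly U(T) = lam*T/2 + t_test/T, minimised at
# T* = sqrt(2*t_test/lam).

lam, t_test = 1e-5, 4.0          # failures/hour, hours out of service per test
T = np.linspace(200, 5000, 500)  # candidate test intervals, hours
U = lam * T / 2 + t_test / T
T_opt = np.sqrt(2 * t_test / lam)
print(f"numeric optimum ~ {T[np.argmin(U)]:.0f} h, analytic T* = {T_opt:.0f} h")
```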

  9. Study on real-time elevator brake failure predictive system

    Science.gov (United States)

    Guo, Jun; Fan, Jinwei

    2013-10-01

    This paper presents a real-time failure-prediction system for elevator brakes. By inspecting the running state of the brake coil with a high-precision, long-range, non-contact laser triangulation sensor, the displacement curve of the coil is gathered without interfering with the original system. By analyzing the displacement data with the diagnostic algorithm, hidden dangers in the brake system can be discovered in time, and the corresponding accidents avoided.

  10. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2018-03-01

    In the degradation process, the randomness and multiplicity of variables are difficult to describe with mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the object of analysis, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time point. In addition, degradation experiments on the copper bending pipe were performed and the thickness at each time point was obtained; the maximum stress response was then calculated by simulation. Further, with the help of a Monte Carlo method we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and that it can predict the replacement cycle of copper bending pipe under seawater-active corrosion more conveniently and accurately.
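
    The interference calculation at the heart of this approach can be sketched in a few lines. Below, reliability at each service time is estimated as the Monte Carlo fraction of samples in which strength exceeds stress; the normal distributions and the linear strength degradation are illustrative assumptions, not the paper's measured degradation law.

```python
import numpy as np

# Stress-strength interference by Monte Carlo: R(t) = P(strength(t) > stress).
# Strength degrades linearly in the mean here purely for illustration.

rng = np.random.default_rng(3)
n_sim = 200_000

for t in range(0, 21, 5):                               # service time, years
    strength = rng.normal(300 - 4.0 * t, 20.0, n_sim)   # MPa, degrading mean
    stress = rng.normal(180, 25.0, n_sim)               # MPa, operating stress
    reliability = np.mean(strength > stress)
    print(f"t = {t:2d} yr  R(t) ~= {reliability:.4f}")
```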

  11. Lecture notes: meantime to failure analysis

    International Nuclear Information System (INIS)

    Hanlen, R.C.

    1976-01-01

    A method is presented that affects the Quality Assurance Engineer's place in management decision making by giving him a working parameter on which to base sound engineering and management decisions. The theory used in reliability engineering to determine the mean time to failure of a component or system is reviewed. The method presented derives the probability density function for the parameter of the exponential distribution. The exponential distribution is commonly used by industry to determine the reliability of a component or system when the failure rate is assumed to be constant. Some examples of N Reactor performance data are used. To be specific: the ball system data, with 4.9 × 10⁶ unit hours of service and 7 individual failures, indicate a demonstrated 98.8 percent reliability at a 95 percent confidence level for a 12-month mission period, and the diesel starts data, with 7.2 × 10⁵ unit hours of service and 1 failure, indicate a demonstrated 94.4 percent reliability at a 95 percent confidence level for a 12-month mission period
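
    A standard calculation in this spirit uses a one-sided chi-square upper confidence bound on the constant failure rate. The sketch below reproduces the diesel figure (about 94.4 percent) under these textbook assumptions; the ball-system value comes out near 97.7 percent rather than the quoted 98.8, so the notes presumably use somewhat different assumptions there.

```python
from math import exp
from scipy.stats import chi2

# Demonstrated reliability from a constant-failure-rate model: the one-sided
# upper bound on lambda after r failures in T unit-hours uses chi-square with
# 2r + 2 degrees of freedom (time-terminated test convention), and mission
# reliability is exp(-lambda_upper * t_mission). The dof convention and the
# 8760-hour mission length are assumptions, not taken from the notes.

def demonstrated_reliability(r: int, T: float, t_mission: float,
                             confidence: float = 0.95) -> float:
    lam_upper = chi2.ppf(confidence, 2 * r + 2) / (2 * T)
    return exp(-lam_upper * t_mission)

t_mission = 8760.0  # 12 months in hours
print(f"diesel starts (r=1, T=7.2e5 h): "
      f"{demonstrated_reliability(1, 7.2e5, t_mission):.3f}")  # ~0.944
print(f"ball system  (r=7, T=4.9e6 h): "
      f"{demonstrated_reliability(7, 4.9e6, t_mission):.3f}")  # ~0.977
```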

  12. Probability of liquid radionuclide release of a near surface repository; Probabilidade de liberacao liquida de radionuclideos de um repositorio proximo a superficie

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, Lais A.; Melo, P.F. Frutuoso e [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: lais@con.ufrj.br; frutuoso@con.ufrj.br; Passos, Erivaldo; Alves, Antonio Sergio [ELETRONUCLEAR, Rio de Janeiro, RJ (Brazil). Div. de Seguranca Nuclear]. E-mail: epassos@eletronuclear.gov.br; asergi@eletronuclear.gov.br

    2005-07-01

    The safety analysis of a near surface repository for medium- and low-activity wastes leads to investigating accident scenarios related to water infiltration phenomena. The probability of radionuclide release through the infiltration water can be estimated with the aid of suitable probabilistic models. For the analysis, the repository system is divided into two subsystems: the first comprising the barriers against water infiltration (backfill material and container), and the second comprising the barriers against the leaching of radionuclides to the biosphere (solid matrix and geosphere). The repository system is assumed to have its components (barriers) working in an active parallel mode. The probability of system failure is obtained from the logical structure of a fault tree. The study was based on the Probabilistic Safety Assessment (PSA) technique for the most significant radionuclides within the system of low- and medium-activity radioactive packages, and the probability of system failure for each radionuclide during the period of institutional control was thus obtained. (author)
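
    Under the parallel-barrier logic described above, the top-event probability reduces to a product of AND gates. The sketch below encodes one plausible reading of that structure, assuming independent barrier failures; the probabilities are purely illustrative, and the actual gate structure and data belong to the paper's fault tree.

```python
# Sketch of the barrier logic: each subsystem's barriers act in active
# parallel, so a subsystem fails only if all of its barriers fail, and a
# liquid release requires both subsystems to fail. Independence is assumed.

p_backfill, p_container = 0.05, 0.02     # infiltration barriers (illustrative)
p_matrix, p_geosphere = 0.10, 0.01       # leaching barriers (illustrative)

p_infiltration = p_backfill * p_container   # AND gate: both barriers breached
p_leaching = p_matrix * p_geosphere         # AND gate: both barriers breached

p_release = p_infiltration * p_leaching     # top event: infiltration AND leaching
print(f"P(infiltration) = {p_infiltration:.2e}")
print(f"P(leaching)     = {p_leaching:.2e}")
print(f"P(release)      = {p_release:.2e}")
```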

  13. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    Science.gov (United States)

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
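
    A minimal version of this idea is easy to prototype: fit a nonstationary parametric density by maximum likelihood and extrapolate its parameters. The sketch below uses a Gaussian with a linearly drifting mean on simulated data; the model family, drift law, and data are illustrative assumptions, and the paper's systematic treatment of parameter uncertainty is omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Fit a nonstationary Gaussian density p(x; mu(t), sigma) with mu(t) = a + b*t
# by maximum likelihood, then extrapolate the fitted mean forward in time.

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 400)
x = 0.5 * t + 0.1 * rng.standard_normal(t.size)      # drifting observable

def neg_log_lik(params):
    """Negative log-likelihood (up to an additive constant)."""
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)                        # keep sigma positive
    mu = a + b * t
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + x.size * log_sigma

res = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0])
a, b, sigma = res.x[0], res.x[1], np.exp(res.x[2])
t_future = 1.5
print(f"fitted drift {b:.3f}, sigma {sigma:.3f}; "
      f"forecast mean at t={t_future}: {a + b * t_future:.3f}")
```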

  14. Knowing where is different from knowing what: Distinct response time profiles and accuracy effects for target location, orientation, and color probability.

    Science.gov (United States)

    Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt

    2017-11-01

    When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute.

  15. Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis.

    Directory of Open Access Journals (Sweden)

    Tomoaki Chiba

    Full Text Available In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of product shares from a multivariate time series of those shares. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of automobile shares, that the proposed method can uncover intrinsic transitions of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between the TOYOTA group and the GM group in the fiscal year when TOYOTA's sales overtook GM's, which is a reasonable scenario.
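
    The following is a toy sketch of the estimation idea, not the authors' algorithm: a row-stochastic transition matrix P is sought that maps each observed share vector toward the next one, with the stochasticity constraint enforced through a row-wise softmax parameterization. The share data are hypothetical.

```python
# Minimal sketch: least-squares fit of a row-stochastic transition matrix P
# such that x_{t+1} ~= x_t @ P, with rows parameterized through a softmax.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0.50, 0.30, 0.20],    # hypothetical share time series (rows sum to 1)
              [0.48, 0.32, 0.20],
              [0.46, 0.33, 0.21],
              [0.45, 0.33, 0.22]])
n = X.shape[1]

def unpack(theta):
    """Map free parameters to a row-stochastic matrix via row-wise softmax."""
    Z = theta.reshape(n, n)
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def objective(theta):
    P = unpack(theta)
    resid = X[1:] - X[:-1] @ P       # one-step prediction residuals
    return np.sum(resid ** 2)

fit = minimize(objective, x0=np.zeros(n * n), method="L-BFGS-B")
print(np.round(unpack(fit.x), 3))    # estimated transition probability matrix
```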

  16. Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis.

    Science.gov (United States)

    Chiba, Tomoaki; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2017-01-01

    In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of product shares from a multivariate time series of those shares. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of automobile shares, that the proposed method can uncover intrinsic transitions of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between the TOYOTA group and the GM group in the fiscal year when TOYOTA's sales overtook GM's, which is a reasonable scenario.

  17. Real-time sensor failure detection by dynamic modelling of a PWR plant

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.

    1992-06-01

    Signal validation and sensor failure detection are important problems in real-time nuclear power plant (NPP) surveillance. Although conventional sensor redundancy is, in a way, a solution, identification of the faulty sensor is necessary before further preventive actions can be taken. A comprehensive solution is to model the system so that any sensor reading is verified against its model-based estimated counterpart in real time. Such a realization is accomplished by means of dynamic state estimation using the Kalman filter modelling technique. The method is investigated using real-time data from the steam generator of the Borssele nuclear power plant, and it has proved satisfactory for real-time sensor failure detection as well as for model validation. (author). 5 refs.; 6 figs.; 1 tab
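
    A minimal sketch of the innovation-test idea behind such schemes, using a scalar stand-in for the plant model rather than the Borssele steam generator model; all parameters are hypothetical.

```python
# Sketch: a Kalman filter tracks the process, and a sensor reading is flagged
# when the innovation (measurement minus model prediction) exceeds a threshold
# derived from the innovation variance.
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.95, 0.01, 0.04          # state transition, process/measurement noise
x_hat, p = 0.0, 1.0                 # filter state estimate and its variance
x_true = 0.0

for k in range(300):
    x_true = a * x_true + np.sqrt(q) * rng.standard_normal()
    z = x_true + np.sqrt(r) * rng.standard_normal()
    if k >= 150:
        z += 2.0                    # persistent hard-over sensor bias from k = 150
    # Kalman prediction step
    x_pred, p_pred = a * x_hat, a * a * p + q
    s = p_pred + r                  # innovation variance
    nu = z - x_pred                 # innovation
    if abs(nu) > 4.0 * np.sqrt(s):  # model-based estimate rejects the reading
        print(f"sensor failure declared at sample {k} (innovation {nu:.2f})")
        break
    # Kalman update step
    kgain = p_pred / s
    x_hat = x_pred + kgain * nu
    p = (1.0 - kgain) * p_pred
```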

  18. Measuring survival time: a probability-based approach useful in healthcare decision-making.

    Science.gov (United States)

    2011-01-01

    In some clinical situations, the choice between treatment options takes into account their impact on patient survival time. Due to practical constraints (such as loss to follow-up), survival time is usually estimated using a probability calculation based on data obtained in clinical studies or trials. The two techniques most commonly used to estimate survival times are the Kaplan-Meier method and the actuarial method. Despite their limitations, they provide useful information when choosing between treatment options.
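
    For illustration, a minimal Kaplan-Meier product-limit estimator on hypothetical censored data (the actuarial method would instead group times into fixed intervals):

```python
# A minimal Kaplan-Meier product-limit estimator, for illustration only.
# times: observed times; events: 1 if the failure was observed, 0 if censored.
def kaplan_meier(times, events):
    pairs = sorted(zip(times, events))
    s, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        at_risk = sum(1 for tt, _ in pairs if tt >= t)
        if deaths:
            s *= 1.0 - deaths / at_risk     # product-limit survival update
            curve.append((t, s))
        i += sum(1 for tt, _ in pairs if tt == t)
    return curve

# Hypothetical data for 6 patients; events of 0 mark losses to follow-up.
print(kaplan_meier([3, 5, 5, 8, 10, 12], [1, 1, 0, 1, 0, 1]))
```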

  19. Evaluation of the probability of arrester failure in a high-voltage transmission line using a Q learning artificial neural network model

    International Nuclear Information System (INIS)

    Ekonomou, L; Karampelas, P; Vita, V; Chatzarakis, G E

    2011-01-01

    One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters in high voltage transmission lines can reduce or even prevent line failures. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is applied to evaluate the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by its application to operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reduced operational costs, and better continuity of service.

  20. Evaluation of the probability of arrester failure in a high-voltage transmission line using a Q learning artificial neural network model

    Science.gov (United States)

    Ekonomou, L.; Karampelas, P.; Vita, V.; Chatzarakis, G. E.

    2011-04-01

    One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters in high voltage transmission lines can reduce or even prevent line failures. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is applied to evaluate the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by its application to operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reduced operational costs, and better continuity of service.

  1. Uniform Estimate of the Finite-Time Ruin Probability for All Times in a Generalized Compound Renewal Risk Model

    Directory of Open Access Journals (Sweden)

    Qingwu Gao

    2012-01-01

    Full Text Available We discuss the uniformly asymptotic estimate of the finite-time ruin probability for all times in a generalized compound renewal risk model, where the interarrival times of successive accidents and all the claim sizes caused by an accident are two sequences of random variables following a wide dependence structure. This wide dependence structure allows random variables to be either negatively dependent or positively dependent.
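
    As a hedged illustration only, the following Monte Carlo sketch estimates a finite-time ruin probability in a plain compound Poisson surplus model, a much simpler special case than the dependent renewal model treated in the paper; all parameters are hypothetical.

```python
# Monte Carlo estimate of the finite-time ruin probability psi(x, T) for the
# surplus process U(t) = x + c*t - S(t) with Poisson claim arrivals and
# heavy-tailed (Pareto) claim sizes.
import numpy as np

rng = np.random.default_rng(2)
x0, c, lam, T = 10.0, 2.0, 1.0, 50.0   # initial capital, premium rate, claim rate, horizon

def ruined_once():
    t, surplus = 0.0, x0
    while True:
        dt = rng.exponential(1.0 / lam)        # time to the next claim
        t += dt
        if t > T:
            return False
        surplus += c * dt                      # premiums earned since the last claim
        surplus -= 1.0 + rng.pareto(2.5)       # heavy-tailed claim size (Pareto, x_m = 1)
        if surplus < 0.0:
            return True

n = 20_000
psi = sum(ruined_once() for _ in range(n)) / n
print(f"estimated finite-time ruin probability psi(x={x0}, T={T}) = {psi:.4f}")
```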

  2. Routine maintenance prolongs ESP time between failures

    International Nuclear Information System (INIS)

    Hurst, T.; Lannom, R.W.; Divine, D.L.

    1992-01-01

    This paper reports that routine maintenance of electric submersible pump (ESP) motors significantly lengthened the mean time between motor failures (MTBF), decreased operating costs, and extended motor run life in the Sacroc Unit of the Kelly-Snyder field in West Texas. After the oil price boom of the early 1980s, rapidly eroding profit margins from producing properties caused a much stronger focus on reducing operating costs. In Sacroc, ESP operating life and repair costs became a major target of cost reduction efforts. The routine ESP maintenance program has been in place for over 3 years.

  3. rpsftm: An R package for rank preserving structural failure time models

    OpenAIRE

    Allison, A.; White, I. R.; Bond, S.

    2017-01-01

    Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due ...

  4. Identifying pathogenicity of human variants via paralog-based yeast complementation.

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2017-05-01

    Full Text Available To better understand the health implications of personal genomes, we now face a largely unmet challenge to identify functional variants within disease-associated genes. Functional variants can be identified by trans-species complementation, e.g., by failure to rescue a yeast strain bearing a mutation in an orthologous human gene. Although orthologous complementation assays are powerful predictors of pathogenic variation, they are available for only a few percent of human disease genes. Here we systematically examine the question of whether complementation assays based on paralogy relationships can expand the number of human disease genes with functional variant detection assays. We tested over 1,000 paralogous human-yeast gene pairs for complementation, yielding 34 complementation relationships, of which 33 (97%) were novel. We found that paralog-based assays identified disease variants with success on par with that of orthology-based assays. Combining all homology-based assay results, we found that complementation can often identify pathogenic variants outside the homologous sequence region, presumably because of global effects on protein folding or stability. Within our search space, paralogy-based complementation more than doubled the number of human disease genes with a yeast-based complementation assay for disease variation.

  5. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common-mode failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U.S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure when common failure modes are considered. (U.S.)
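
    A sketch of a beta-factor style calculation in the spirit of the abstract (hypothetical numbers, not the paper's diesel-generator data): a fraction beta of each train's unavailability is assumed common to all trains, so redundancy cannot defend against it.

```python
# Each redundant train fails independently with probability (1 - beta) * q,
# and a common-mode failure takes out all trains at once with probability
# beta * q. The system fails if all trains fail independently or the
# common-mode event occurs.
def redundant_system_failure(q_total, beta, n_trains):
    q_ind = (1.0 - beta) * q_total       # independent part of train unavailability
    q_ccf = beta * q_total               # common-mode part shared by all trains
    return q_ind ** n_trains + q_ccf

q, n = 1e-2, 2                           # hypothetical per-train unavailability, 2 trains
print("independent failures only:", redundant_system_failure(q, beta=0.0, n_trains=n))
print("with common-mode failures:", redundant_system_failure(q, beta=0.1, n_trains=n))
```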

  6. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

    In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space-independent, one-group neutron point reactor model without delayed neutrons. We recall the generating function methodology and the analytical results obtained by G.I. Bell when the c₂ approximation is used, and we present numerical solutions in the general case, without this approximation. The neutron-source-induced distribution is calculated using the single-initial-neutron distribution, which satisfies a master (backward Kolmogorov) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation, and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology, where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time-dependent distributions, and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method. (author)
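
    As an illustration of inverting a generating function with a discrete Fourier transform, here applied to the Poisson generating function rather than the reactor model of the paper: G(z) is evaluated on the unit circle and the probabilities are recovered with an inverse FFT.

```python
# Recover a discrete probability distribution from its generating function:
# evaluate G(z) at the N-th roots of unity (with the sign convention that
# matches numpy's FFT) and invert with the inverse FFT.
import numpy as np
from scipy.stats import poisson

lam, N = 3.0, 64                           # Poisson mean; number of FFT points
z = np.exp(-2j * np.pi * np.arange(N) / N) # unit-circle nodes
G = np.exp(lam * (z - 1.0))                # Poisson generating function G(z)
p = np.fft.ifft(G).real                    # p[n] ~ P(N_neutrons = n) for n < N

print(np.allclose(p[:10], poisson.pmf(np.arange(10), lam), atol=1e-12))  # True
```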

  7. Two novel porcine epidemic diarrhea virus (PEDV) recombinants from a natural recombinant and distinct subtypes of PEDV variants.

    Science.gov (United States)

    Chen, Nanhua; Li, Shuangjie; Zhou, Rongyun; Zhu, Meiqin; He, Shan; Ye, Mengxue; Huang, Yucheng; Li, Shuai; Zhu, Cong; Xia, Pengpeng; Zhu, Jianzhong

    2017-10-15

    Porcine epidemic diarrhea virus (PEDV) has a devastating impact on the global pig-breeding industry, and current vaccines have become ineffective against the PEDV variants circulating since 2011. During an up-to-date investigation of PEDV prevalence in Fujian, China in 2016, PEDV was identified in vaccinated pig farms suffering severe diarrhea, while other common diarrhea-associated pathogens were not detected. Complete genomes of two PEDV representatives (XM1-2 and XM2-4) were determined. Genomic comparison showed that these two viruses share the highest nucleotide identities (99.10% and 98.79%) with the 2011 ZMDZY strain, but only 96.65% and 96.50% nucleotide identity with the attenuated CV777 strain. Amino acid alignment of the spike (S) proteins indicated that they have a mutation, insertion, and deletion pattern similar to other Chinese PEDV variants but also contain several unique substitutions. Phylogenetic analysis showed that the 2016 PEDV variants belong to the cluster of recombination strains but form a new branch. Recombination detection suggested that both XM1-2 and XM2-4 are inter-subgroup recombinants with breakpoints within ORF1b. Remarkably, the natural recombinant HNQX-3 isolate serves as a parental virus for both natural recombinants identified in this study. This investigation provides direct evidence that natural recombinants may serve as parental viruses to generate recombined PEDV progenies that are probably associated with vaccination failure. Copyright © 2017. Published by Elsevier B.V.

  8. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influenced-degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and shows the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component is positively correlated with the failure influenced degree of that component, which provides a theoretical basis for reliability allocation of the machine center system.
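
    A small sketch of the ranking step on a hypothetical cascading-failure digraph (not the paper's machine center data): a PageRank-style power iteration on the row-normalized adjacency matrix ranks components by failure influence.

```python
# Entry A[i, j] = 1 means a failure of component i propagates to component j.
import numpy as np

A = np.array([[0, 1, 1, 0],     # hypothetical propagation links among 4 components
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

def pagerank(adj, d=0.85, tol=1e-10):
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Row-normalize; dangling rows (no out-links) spread their weight uniformly.
    M = np.where(out > 0, adj / np.where(out == 0, 1.0, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (M.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Ranks components by failure-influenced degree; running it on A.T would
# instead rank by how strongly a component influences the others.
print(np.round(pagerank(A), 3))
```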

  9. FRELIB, Failure Reliability Index Calculation

    International Nuclear Information System (INIS)

    Parkinson, D.B.; Oestergaard, C.

    1984-01-01

    1 - Description of problem or function: Calculation of the reliability index given the failure boundary. A linearization point (design point) is found on the failure boundary for a stationary reliability index (min) and a stationary failure probability density function along the failure boundary, provided that the basic variables are normally distributed. 2 - Method of solution: Iteration along the failure boundary which must be specified - together with its partial derivatives with respect to the basic variables - by the user in a subroutine FSUR. 3 - Restrictions on the complexity of the problem: No distribution information included (first-order-second-moment-method). 20 basic variables (could be extended)
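
    As a hedged illustration of the iteration such a code performs (this is not FRELIB's source), the sketch below applies the standard Hasofer-Lind/Rackwitz-Fiessler update in standard normal space to a hypothetical limit state, playing the role the user subroutine FSUR would play.

```python
# HL-RF iteration along the failure boundary g(u) = 0 in standard normal
# space; the reliability index is the distance from the origin to the
# converged design (linearization) point.
import numpy as np

def g(u):          # hypothetical nonlinear limit state; failure when g < 0
    return 3.0 - u[0] - 0.25 * (u[1] - 1.0) ** 2

def grad_g(u):     # partial derivatives with respect to the basic variables
    return np.array([-1.0, -0.5 * (u[1] - 1.0)])

u = np.zeros(2)                              # start at the mean point
for _ in range(100):
    gv, gr = g(u), grad_g(u)
    u_new = (gr @ u - gv) / (gr @ gr) * gr   # HL-RF step toward the design point
    if np.linalg.norm(u_new - u) < 1e-10:
        break
    u = u_new

beta = np.linalg.norm(u)                     # Hasofer-Lind reliability index
print(f"design point {np.round(u, 4)}, beta = {beta:.4f}")
```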

  10. Landslide Probability Assessment by the Derived Distributions Technique

    Science.gov (United States)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological, and hydrological conditions and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slip", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and are triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for estimating the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model

  11. Probability of fracture and life extension estimate of the high-flux isotope reactor vessel

    International Nuclear Information System (INIS)

    Chang, S.J.

    1998-01-01

    The state of vessel steel embrittlement as a result of neutron irradiation can be measured by the increase in the ductile-brittle transition temperature (DBTT) for fracture, often denoted RT_NDT for carbon steel. This transition temperature can be calibrated by the drop-weight test and, sometimes, by the Charpy impact test. The life extension for the high-flux isotope reactor (HFIR) vessel is calculated by using the method of fracture mechanics, incorporating the effect of the DBTT change. The failure probability of the HFIR vessel, and hence the life of the vessel, is limited by the reactor core melt probability of 10^-4. The operating safety of the reactor is ensured by a periodic hydrostatic pressure test (hydrotest). The hydrotest is performed in order to determine a safe vessel static pressure. The fracture probability resulting from the hydrostatic pressure test is calculated and is used to determine the life of the vessel. Failure to perform the hydrotest limits the life of the vessel. Conventional methods of fracture probability calculation, such as those used by the NRC-sponsored PRAISE code and the FAVOR code developed in this Laboratory, are based on Monte Carlo simulation. Heavy computations are required. An alternative method of fracture probability calculation by direct probability integration is developed in this paper. The present approach offers simple and expedient ways to obtain numerical results without losing any generality. In this paper, numerical results on (1) the probability of vessel fracture, (2) the hydrotest time interval, and (3) the hydrotest pressure as a result of the DBTT increase are obtained.

  12. Protein aggregates and novel presenilin gene variants in idiopathic dilated cardiomyopathy.

    Science.gov (United States)

    Gianni, Davide; Li, Airong; Tesco, Giuseppina; McKay, Kenneth M; Moore, John; Raygor, Kunal; Rota, Marcello; Gwathmey, Judith K; Dec, G William; Aretz, Thomas; Leri, Annarosa; Semigran, Marc J; Anversa, Piero; Macgillivray, Thomas E; Tanzi, Rudolph E; del Monte, Federica

    2010-03-16

    Heart failure is a debilitating condition resulting in severe disability and death. In a subset of cases, clustered as idiopathic dilated cardiomyopathy (iDCM), the origin of heart failure is unknown. In the brain of patients with dementia, proteinaceous aggregates and abnormal oligomeric assemblies of beta-amyloid impair cell function and lead to cell death. We have similarly characterized fibrillar and oligomeric assemblies in the hearts of iDCM patients, pointing to abnormal protein aggregation as a determinant of iDCM. We also showed that oligomers alter myocyte Ca(2+) homeostasis. Additionally, we have identified 2 new sequence variants in the presenilin-1 (PSEN1) gene promoter leading to reduced gene and protein expression. We also show that presenilin-1 coimmunoprecipitates with SERCA2a. On the basis of these findings, we propose that 2 mechanisms may link protein aggregation and cardiac function: oligomer-induced changes on Ca(2+) handling and a direct effect of PSEN1 sequence variants on excitation-contraction coupling protein function.

  13. Weighting sequence variants based on their annotation increases power of whole-genome association studies

    DEFF Research Database (Denmark)

    Sveinbjornsson, Gardar; Albrechtsen, Anders; Zink, Florian

    2016-01-01

    The consensus approach to genome-wide association studies (GWAS) has been to assign equal prior probability of association to all sequence variants tested. However, some sequence variants, such as loss-of-function and missense variants, are more likely than others to affect protein function. ... for the family-wise error rate (FWER), using as weights the enrichment of sequence annotations among association signals. We show that this weighted adjustment increases the power to detect association over the standard Bonferroni correction. We use the enrichment of associations by sequence annotation we have ...

  14. A time-variant analysis of the 1/f^(2) phase noise in CMOS parallel LC-Tank quadrature oscillators

    DEFF Research Database (Denmark)

    Andreani, Pietro

    2006-01-01

    This paper presents a study of 1/f² phase noise in quadrature oscillators built by connecting two differential LC-tank oscillators in a parallel fashion. The analysis clearly demonstrates the necessity of adopting a time-variant theory of phase noise, where a more simplistic, time...

  15. Probabilistic analysis for fatigue failure of leg-supported liquid containers under random earthquake-type excitation

    International Nuclear Information System (INIS)

    Fujita, Takafumi

    1981-01-01

    Leg-supported cylindrical containers, frequently used in nuclear power plants and chemical plants, and leg-supported rectangular containers, such as water and fuel tanks, are structures whose reliability during earthquakes is a concern. In this study, the structural reliability of such leg-supported liquid containers under earthquakes was analyzed from the viewpoint of fatigue failure at the joints between the tanks and the supporting legs and at the fixing parts of the legs. The second-order unsteady coupled probability density of response displacement and response velocity and the first- and second-order unsteady probability densities of the response displacement envelope were determined; using these results, the expected value, variance, and unsteady probability density of cumulative damage were obtained on the basis of Miner's law, and the structural reliability of the system was thus analyzed. The result of the analysis was verified against the results of vibration tests using many simulated earthquake waves, and a fatigue failure experiment on a model under sine wave vibration was carried out. The mechanical model for the analysis, the unsteady probability densities described above, the analysis of structural reliability, and the experiments are reported. (Kako, I.)

  16. An estimation method of system failure frequency using both structure and component failure data

    International Nuclear Information System (INIS)

    Takaragi, Kazuo; Sasaki, Ryoichi; Shingai, Sadanori; Tominaga, Kenji

    1981-01-01

    In recent years, the importance of reliability analysis has come to be appreciated for large systems such as nuclear power plants. A reliability analysis method is described for a whole system, using structure failure data for its main working subsystem and component failure data for its safety protection subsystem. The subsystem named the main working system operates normally, and the subsystem named the safety protection system acts as standby or protection. Thus the main and protection systems are given mutually different failure data; moreover, between the subsystems there exists common mode failure, i.e. component failure affecting the reliability of both. A calculation formula for system failure frequency is first derived. Then, a calculation method with digraphs is proposed for the conditional system failure probability. Finally, the results of numerical calculations are given for the purpose of explanation. (J.P.N.)

  17. Failure probabilities of SiC clad fuel during a LOCA in public acceptable simple SMR (PASS)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youho, E-mail: euo@kaist.ac.kr; Kim, Ho Sik, E-mail: hskim25@kaist.ac.kr; NO, Hee Cheon, E-mail: hcno@kaist.ac.kr

    2015-10-15

    Highlights: • Graceful operating conditions of SMRs markedly lower SiC cladding stress. • Steady-state fracture probabilities of SiC cladding are below 10^-7 in SMRs. • PASS demonstrates fuel coolability (T < 1300 °C) with sole radiation in LOCA. • SiC cladding failure probabilities of PASS are ~10^-2 in LOCA. • Cold gas gap pressure controls SiC cladding tensile stress level in LOCA. - Abstract: The structural integrity of SiC clad fuels in reference Small Modular Reactors (SMRs) (NuScale, SMART, IRIS) and a commercial pressurized water reactor (PWR) is assessed with a multi-layered SiC cladding structural analysis code. Featured with low fuel pin power and temperature, SMRs demonstrate markedly reduced in-core-residence fracture probabilities, below ~10^-7, compared with those of commercial PWRs, ~10^-6 to 10^-1. This demonstrates that SMRs can serve as a near-term deployment fit for SiC cladding with sound management of its statistical brittle fracture. We proposed a novel SMR named Public Acceptable Simple SMR (PASS), which features 14 × 14 assemblies of SiC clad fuels arranged in a square ring layout. PASS aims to rely on radiative cooling of fuel rods during a loss of coolant accident (LOCA) by fully leveraging the high temperature tolerance of SiC cladding. An overarching assessment of SiC clad fuel performance in PASS was conducted with a combined methodology: (1) FRAPCON-SiC for steady-state performance analysis of PASS fuel rods, (2) the computational fluid dynamics code FLUENT for the radiative cooling rate of fuel rods during a LOCA, and (3) the multi-layered SiC cladding structural analysis code with previously developed SiC recession correlations under steam environments for both steady state and LOCA. The results show that PASS simultaneously maintains a desirable fuel cooling rate with sole radiation and sound structural integrity of the fuel rods for over 36 days of a LOCA without water supply. The stress level of

  18. Calculation of Fire Severity Factors and Fire Non-Suppression Probabilities For A DOE Facility Fire PRA

    International Nuclear Information System (INIS)

    Elicson, Tom; Harwood, Bentley; Lucek, Heather; Bouchard, Jim

    2011-01-01

    Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant-specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and discusses modeling techniques for: development of time-dependent fire heat release rate profiles (required as input to CFAST); calculation of fire severity factors based on CFAST detailed fire modeling; and calculation of fire non-suppression probabilities.
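
    A minimal sketch of a NUREG/CR-6850-style time-dependent non-suppression model with hypothetical inputs: once the fire is detected, suppression succeeds at a constant rate, so the probability that suppression has not succeeded by the component damage time decays exponentially in the available time window.

```python
# P_ns = exp(-lambda * (t_damage - t_detect)), floored at zero available time.
# The rate and times below are hypothetical, not the facility's values.
import numpy as np

lam = 0.1                                 # suppression rate per minute (hypothetical)
t_detect = 2.0                            # detection time from fire modeling (min)
t_damage = np.array([5.0, 10.0, 20.0])    # component damage times (min, hypothetical)

p_ns = np.exp(-lam * np.clip(t_damage - t_detect, 0.0, None))
for td, p in zip(t_damage, p_ns):
    print(f"damage at {td:4.1f} min -> non-suppression probability {p:.3f}")
```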

  19. Reliability analysis based on the losses from failures.

    Science.gov (United States)

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the

  20. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E.

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)

  1. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waler, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1977-01-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure-rate parameter λ. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for λ. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this paper is to present a methodology which can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate λ simultaneously satisfies the probability statements P(λ < 1.0 × 10^-3) = 0.50 and P(λ < 1.0 × 10^-5) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure-rate percentiles illustrated above, one can use the induced negative-log gamma prior distribution which satisfies the probability statements P(R(t_0) < 0.99) = 0.50 and P(R(t_0) < 0.99999) = 0.95 for some operating time t_0. Also, the paper includes graphs for selected percentiles which assist an engineer in applying the methodology.
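
    A sketch of how the two failure-rate percentile statements can be translated into gamma parameters (the paper uses graphs; this uses root finding): for a gamma distribution the ratio of two quantiles depends only on the shape, so the shape is solved first and the scale follows.

```python
# Solve for the gamma prior matching P(lambda < x50) = 0.50 and
# P(lambda < x05) = 0.05 from the worked example above.
from scipy.optimize import brentq
from scipy.stats import gamma

x50, x05 = 1.0e-3, 1.0e-5

def quantile_ratio_mismatch(shape):
    # The ppf ratio is scale-free, so it pins down the shape alone.
    return gamma.ppf(0.50, shape) / gamma.ppf(0.05, shape) - x50 / x05

shape = brentq(quantile_ratio_mismatch, 0.1, 10.0)
scale = x50 / gamma.ppf(0.50, shape)
print(f"gamma prior: shape = {shape:.4f}, scale = {scale:.3e}")
print("check:", gamma.cdf(x50, shape, scale=scale),   # ~0.50
      gamma.cdf(x05, shape, scale=scale))             # ~0.05
```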

  2. On the Possibility of Assigning Probabilities to Singular Cases, or: Probability Is Subjective Too!

    Directory of Open Access Journals (Sweden)

    Mark R. Crovelli

    2009-06-01

    Full Text Available Both Ludwig von Mises and Richard von Mises claimed that numerical probability could not be legitimately applied to singular cases. This paper challenges this aspect of the von Mises brothers' theory of probability. It is argued that their denial that numerical probability could be applied to singular cases was based solely upon Richard von Mises' exceptionally restrictive definition of probability. This paper challenges Richard von Mises' definition of probability by arguing that the definition of probability necessarily depends upon whether the world is governed by time-invariant causal laws. It is argued that if the world is governed by time-invariant causal laws, a subjective definition of probability must be adopted. It is further argued that the nature of human action and the relative frequency method for calculating numerical probabilities both presuppose that the world is indeed governed by time-invariant causal laws. It is finally argued that the subjective definition of probability undercuts the von Mises claim that numerical probability cannot legitimately be applied to singular, non-replicable cases.

  3. Sensor failure and multivariable control for airbreathing propulsion systems. Ph.D. Thesis - Dec. 1979 Final Report

    Science.gov (United States)

    Behbehani, K.

    1980-01-01

    A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time-varying and time-invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. The failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed, and the effects of model degradation are studied.
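
    A hedged, minimal illustration of a GLR detector for a hard-over (bias-jump) sensor failure in a Gaussian residual sequence; the parameters are hypothetical and this is not the QCSEE implementation.

```python
# For each candidate onset time k, the log likelihood ratio of "bias after k"
# versus "no bias", maximized over the unknown bias, reduces to a scaled
# squared mean of the residuals after k; the detector takes the maximum over k.
import numpy as np

rng = np.random.default_rng(3)
sigma2, n, k_true = 1.0, 300, 200
resid = rng.standard_normal(n)
resid[k_true:] += 1.5                   # hard-over failure: bias jump at k_true

def glr(r, var):
    best_stat, best_k = 0.0, None
    for k in range(1, len(r)):
        tail = r[k:]
        m = tail.size
        stat = m * tail.mean() ** 2 / (2.0 * var)  # max log-LR for onset at k
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k

stat, k_hat = glr(resid, sigma2)
print(f"GLR statistic {stat:.1f}, estimated failure onset {k_hat} (true {k_true})")
```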

  4. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    Science.gov (United States)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo

    2017-03-01

    The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit. In this case, the failures considered are Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank. The failure probability can be determined by using Fault Tree Analysis (FTA). In addition, the impact of the generated heat radiation is calculated. Fault trees for BLEVE and jet fire in the storage tank were determined, yielding a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the P&ID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After this customization, the failure probability obtained is 4.22 × 10^-6.

  5. Bounding probabilistic safety assessment probabilities by reality

    International Nuclear Information System (INIS)

    Fragola, J.R.; Shooman, M.L.

    1991-01-01

    The investigation of failure in systems where failure is a rare event makes continual comparison between developed probabilities and empirical evidence difficult. The comparison of the predictions of rare event risk assessments with historical reality is essential to prevent probabilistic safety assessment (PSA) predictions from drifting into fantasy. One approach to performing such comparisons is to search out and assign probabilities to natural events which, while extremely rare, have a basis in the history of natural phenomena or human activities. For example, the Segovian aqueduct and some of the Roman fortresses in Spain have existed for several millennia and in many cases show no physical signs of earthquake damage. This evidence could be used to bound the probability of earthquakes above a certain magnitude to less than 10^-3 per year. On the other hand, there is evidence that some repetitive actions can be performed with extremely low historical probabilities when operators are properly trained and motivated, and sufficient warning indicators are provided. The point is not that low probability estimates are impossible, but that analysis assumptions require continual reassessment and that analysis predictions must be bounded by historical reality. This paper reviews the probabilistic predictions of PSA in this light, attempts to develop, in a general way, the limits which can be historically established and the consequent bounds that these limits place upon the predictions, and illustrates the methodology used in computing such limits. Further, the paper discusses the use of empirical evidence and the requirement for disciplined systematic approaches within the bounds of reality and the associated impact on PSA probabilistic estimates.

  6. A Population Based Study of the Genetic Association between Catecholamine Gene Variants and Spontaneous Low-Frequency Fluctuations in Reaction Time.

    Directory of Open Access Journals (Sweden)

    Jojanneke A Bastiaansen

    Full Text Available The catecholamines dopamine and noradrenaline have been implicated in spontaneous low-frequency fluctuations in reaction time, which are associated with attention deficit hyperactivity disorder (ADHD and subclinical attentional problems. The molecular genetic substrates of these behavioral phenotypes, which reflect frequency ranges of intrinsic neuronal oscillations (Slow-4: 0.027-0.073 Hz; Slow-5: 0.010-0.027 Hz, have not yet been investigated. In this study, we performed regression analyses with an additive model to examine associations between low-frequency fluctuations in reaction time during a sustained attention task and genetic markers across 23 autosomal catecholamine genes in a large young adult population cohort (n = 964, which yielded greater than 80% power to detect a small effect size (f(2 = 0.02 and 100% power to detect a small/medium effect size (f(2 = 0.15. At significance levels corrected for multiple comparisons, none of the gene variants were associated with the magnitude of low-frequency fluctuations. Given the study's strong statistical power and dense coverage of the catecholamine genes, this either indicates that associations between low-frequency fluctuation measures and catecholamine gene variants are absent or that they are of very small effect size. Nominally significant associations were observed between variations in the alpha-2A adrenergic receptor gene (ADRA2A and the Slow-5 band. This is in line with previous reports of an association between ADRA2A gene variants and general reaction time variability during response selection tasks, but the specific association of these gene variants and low-frequency fluctuations requires further confirmation. Pharmacological challenge studies could in the future provide convergent evidence for the noradrenergic modulation of both general and time sensitive measures of intra-individual variability in reaction time.

  7. Markov transition probability-based network from time series for characterizing experimental two-phase flow

    International Nuclear Information System (INIS)

    Gao Zhong-Ke; Hu Li-Dan; Jin Ning-De

    2013-01-01

    We generate a directed weighted complex network by a method based on Markov transition probability to represent an experimental two-phase flow. We first systematically carry out gas-liquid two-phase flow experiments for measuring the time series of flow signals. Then we construct directed weighted complex networks from various time series in terms of a network generation method based on Markov transition probability. We find that the generated network inherits the main features of the time series in the network structure. In particular, the networks from time series with different dynamics exhibit distinct topological properties. Finally, we construct two-phase flow directed weighted networks from experimental signals and associate the dynamic behavior of gas-liquid two-phase flow with the topological statistics of the generated networks. The results suggest that the topological statistics of two-phase flow networks allow quantitative characterization of the dynamic flow behavior in the transitions among different gas-liquid flow patterns. (general)
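
    A small sketch of the network generation idea on a synthetic signal (not the experimental flow data): the signal is discretized into amplitude states, and the edge weight from state i to state j is the empirical Markov transition probability.

```python
# Build a directed weighted network from a time series: the amplitude states
# are the nodes, and W[i, j] is the empirical probability of moving from
# state i to state j at the next sample.
import numpy as np

rng = np.random.default_rng(4)
signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.2 * rng.standard_normal(2000)

n_states = 8
edges = np.linspace(signal.min(), signal.max(), n_states + 1)
states = np.clip(np.digitize(signal, edges) - 1, 0, n_states - 1)

W = np.zeros((n_states, n_states))
for i, j in zip(states[:-1], states[1:]):
    W[i, j] += 1.0
row_sums = W.sum(axis=1, keepdims=True)
W = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

print(np.round(W, 2))   # weighted adjacency matrix of the directed network
```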

  8. Dynamic failure of dry and fully saturated limestone samples based on incubation time concept

    Directory of Open Access Journals (Sweden)

    Yuri V. Petrov

    2017-02-01

    Full Text Available This paper outlines the results of an experimental study of dynamic rock failure based on the comparison of dry and saturated limestone samples obtained during dynamic compression and split tests. The tests were performed using the Kolsky method and its modifications for dynamic splitting. The mechanical data (e.g. strength, time, and energy characteristics) of this material at high strain rates are obtained. It is shown that these characteristics are sensitive to the strain rate. A unified interpretation of these rate effects, based on the structural-temporal approach, is hereby presented. It is demonstrated that the temporal dependence of the dynamic compressive and split tensile strengths of dry and saturated limestone samples can be predicted by the incubation time criterion. Previously discovered possibilities to optimize (minimize) the energy input for the failure process are discussed in connection with industrial rock failure processes. It is shown that the optimal energy input value associated with the critical load, which is required to initiate failure in the rock media, strongly depends on the incubation time and the impact duration. The optimal load shapes, which minimize the momentum for a single failure impact, are demonstrated. Through this investigation, a possible approach to reducing the specific energy required for rock cutting by means of high-frequency vibrations is also discussed.
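
    A minimal numerical sketch of the incubation time criterion under hypothetical pulse and material constants: failure is declared at the first instant when the average stress over the preceding incubation window reaches the static strength.

```python
# Incubation time criterion: failure when (1/tau) * integral of sigma over
# [t - tau, t] first reaches sigma_c. Pulse shape and constants are
# hypothetical, not the limestone data of the paper.
import numpy as np

tau, sigma_c, dt = 10e-6, 60e6, 1e-7                 # window (s), strength (Pa), step (s)
t = np.arange(0.0, 50e-6, dt)
stress = 90e6 * np.exp(-((t - 20e-6) / 8e-6) ** 2)   # hypothetical stress pulse

window = max(1, int(round(tau / dt)))
# Trailing moving average; zero padding before t = 0 matches the pulse shape.
avg = np.convolve(stress, np.ones(window) / window, mode="full")[: t.size]

idx = np.argmax(avg >= sigma_c)
if avg[idx] >= sigma_c:
    print(f"failure initiated at t = {t[idx] * 1e6:.2f} us")
else:
    print("no failure: windowed average stress stays below sigma_c")
```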

  9. Importance analysis for the systems with common cause failures

    International Nuclear Information System (INIS)

    Pan Zhijie; Nonaka, Yasuo

    1995-01-01

    This paper extends the importance analysis technique to the field of common cause failures, evaluating the structure importance, probability importance, and β-importance for systems with common cause failures. These importance measures help reliability analysts delimit the common cause failure analysis framework and find efficient defence strategies against common cause failures.

  10. The ruin probability of a discrete time risk model under constant interest rate with heavy tails

    NARCIS (Netherlands)

    Tang, Q.

    2004-01-01

    This paper investigates the ultimate ruin probability of a discrete time risk model with a positive constant interest rate. Under the assumption that the gross loss of the company within one year is subexponentially distributed, a simple asymptotic relation for the ruin probability is derived and

  11. F-15 inlet/engine test techniques and distortion methodologies studies. Volume 2: Time variant data quality analysis plots

    Science.gov (United States)

    Stevens, C. H.; Spong, E. D.; Hammock, M. S.

    1978-01-01

    Time variant data quality analysis plots were used to determine if peak distortion data taken from a subscale inlet model can be used to predict peak distortion levels for a full scale flight test vehicle.

  12. A multi-component and multi-failure mode inspection model based on the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure, applied to all defects. This is an approximation, but it has been proven to be valid when the number of components is large. However, for a system with just a few key components subject to few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than for individual components, we then formulate the inspection model for the case where the time to the next inspection from the point of a component failure renewal is random. This introduces some complication into the model, and an asymptotic solution was found. Simulation algorithms are also proposed as a comparison with the analytical results. A numerical example is presented to demonstrate the model.
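
    A Monte Carlo sketch of the delay time concept for a single component and failure mode, with hypothetical rates: defects arrive as a Poisson process, each turns into a failure after a random delay, and a perfect inspection every T time units removes defects that have not yet failed.

```python
# A defect arriving at time t fails only if its delay expires before the
# first inspection after t; otherwise the inspection finds and removes it.
import numpy as np

rng = np.random.default_rng(5)
rate, mean_delay, horizon = 0.05, 20.0, 1000.0    # defects/day, days, days

def failure_rate_per_day(T, n_runs=2000):
    total = 0
    for _ in range(n_runs):
        t = 0.0
        while True:
            t += rng.exponential(1.0 / rate)      # next defect arrival
            if t > horizon:
                break
            delay = rng.exponential(mean_delay)   # defect-to-failure delay time
            next_inspection = np.ceil(t / T) * T  # first inspection after arrival
            if t + delay < next_inspection:       # fails before it can be found
                total += 1
    return total / (n_runs * horizon)

for T in (10.0, 30.0, 90.0):
    print(f"inspection every {T:5.1f} days -> failure rate {failure_rate_per_day(T):.4f}/day")
```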

  13. PROBABILITY OF FAILURE OF THE TRUDOCK CRANE SYSTEM AT THE WASTE ISOLATION PILOT PLANT (WIPP)

    International Nuclear Information System (INIS)

    Greenfield, M.A.; Sargent, T.J.

    2000-01-01

    This probabilistic analysis of WIPP TRUDOCK crane failure is based on two sources of failure data. The source for operator errors is the report by Swain and Guttman, NUREG/CR-1278-F, August 1983. The estimate for crane cable hook breaks was initially made in WIPP/WID-96-2196, Rev. 0, using relatively old (1970s) U.S. Navy data (NUREG-0612). However, a helpful analysis by R.K. Deremer of PLG guided the authors to values that were more realistic and more conservative, with the recommendation that the crane cable/hook failure rate should be 2.5 × 10^-6 per demand. This value was adopted and used. Based on these choices, a mean failure rate of 9.70 × 10^-3 per year was calculated. However, a mean rate by itself does not reveal the level of confidence to be associated with this number. Guidance for making confidence calculations came from the report by Swain and Guttman, who stated that failure data could be described by lognormal distributions. This is in agreement with the widely used reports (by DOE and others) NPRD-95 and NPRD-91 on failure data. The calculations of confidence levels showed that the mean failure rate of 9.70 × 10^-3 per year corresponds to a percentile value of approximately 71; i.e., there is a 71% likelihood that the failure rate is less than 9.70 × 10^-3 per year. One also calculated that there is a 95% likelihood that the failure rate is less than 29.6 × 10^-3 per year. Or, as stated previously, there is a 71% likelihood that not more than one dropped load will occur in 103 years. Also, there is a 95% likelihood that not more than one dropped load will occur in approximately 34 years. It is the responsibility of DOE to select the confidence level at which it desires to operate.
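
    A sketch of the lognormal confidence arithmetic described above, with illustrative numbers chosen to roughly reproduce the reported 71st percentile (not the report's exact inputs): for a lognormal rate, the mean sits at the percentile Phi(sigma/2), which depends only on the log-space sigma.

```python
# Given a median and a 95th percentile of a lognormal failure rate, recover
# the log-space sigma, the mean rate, and the percentile at which the mean
# sits. All input values below are illustrative assumptions.
from math import exp, log
from scipy.stats import norm

median, p95 = 5.3e-3, 29.6e-3                  # hypothetical rate percentiles (1/yr)
sigma = log(p95 / median) / norm.ppf(0.95)     # log-space sigma from the two percentiles
mean = median * exp(sigma ** 2 / 2.0)          # lognormal mean rate
pct_of_mean = norm.cdf(sigma / 2.0)            # P(rate < mean) = Phi(sigma / 2)
print(f"mean rate {mean:.2e}/yr sits at the {100 * pct_of_mean:.0f}th percentile")
```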

  14. Effects of preparation time and trial type probability on performance of anti- and pro-saccades.

    Science.gov (United States)

    Pierce, Jordan E; McDowell, Jennifer E

    2016-02-01

    Cognitive control optimizes responses to relevant task conditions by balancing bottom-up stimulus processing with top-down goal pursuit. It can be investigated using the ocular motor system by contrasting basic prosaccades (look toward a stimulus) with complex antisaccades (look away from a stimulus). Furthermore, the amount of time allotted between trials, the need to switch task sets, and the time allowed to prepare for an upcoming saccade all impact performance. In this study the relative probabilities of anti- and pro-saccades were manipulated across five blocks of interleaved trials, while the inter-trial interval and trial type cue duration were varied across subjects. Results indicated that inter-trial interval had no significant effect on error rates or reaction times (RTs), while a shorter trial type cue led to more antisaccade errors and faster overall RTs. Responses following a shorter cue duration also showed a stronger effect of trial type probability, with more antisaccade errors in blocks with a low antisaccade probability and slower RTs for each saccade task when its trial type was unlikely. A longer cue duration yielded fewer errors and slower RTs, with a larger switch cost for errors compared to a short cue duration. Findings demonstrated that when the trial type cue duration was shorter, visual motor responsiveness was faster and subjects relied upon the implicit trial probability context to improve performance. When the cue duration was longer, increased fixation-related activity may have delayed saccade motor preparation and slowed responses, guiding subjects to respond in a controlled manner regardless of trial type probability. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. A Retrospective Study of Success, Failure, and Time Needed to Perform Awake Intubation.

    Science.gov (United States)

    Joseph, Thomas T; Gal, Jonathan S; DeMaria, Samuel; Lin, Hung-Mo; Levine, Adam I; Hyman, Jaime B

    2016-07-01

    Awake intubation is the standard of care for management of the anticipated difficult airway. The performance of awake intubation may be perceived as complex and time-consuming, potentially leading clinicians to avoid this technique of airway management. This retrospective review of awake intubations at a large academic medical center was performed to determine the average time taken to perform awake intubation, its effects on hemodynamics, and the incidence and characteristics of complications and failure. Anesthetic records from 2007 to 2014 were queried for the performance of an awake intubation. Of the 1,085 awake intubations included for analysis, 1,055 involved the use of a flexible bronchoscope. Each awake intubation case was propensity matched with two controls (1:2 ratio), with similar comorbidities and intubations performed after the induction of anesthesia (n = 2,170). The time from entry into the operating room until intubation was compared between groups. The anesthetic records of all patients undergoing awake intubation were also reviewed for failure and complications. The median time to intubation for patients intubated post induction was 16.0 min (interquartile range: 13 to 22) from entrance into the operating room. The median time to intubation for awake patients was 24.0 min (interquartile range: 19 to 31). The complication rate was 1.6% (17 of 1,085 cases). The most frequent complications observed were mucous plug, endotracheal tube cuff leak, and inadvertent extubation. The failure rate for attempted awake intubation was 1% (n = 10). Awake intubations have a high rate of success and low rate of serious complications and failure. Awake intubations can be performed safely and rapidly.

  16. The Human Bathtub: Safety and Risk Predictions Including the Dynamic Probability of Operator Errors

    International Nuclear Information System (INIS)

    Duffey, Romney B.; Saull, John W.

    2006-01-01

    Reactor safety and risk are dominated by the potential for and major contribution of human error in the design, operation, control, management, regulation, and maintenance of the plant, and hence in all accidents. Given the possibility of accidents and errors, we now need to determine the outcome (error) probability, or the chance of failure. Conventionally, reliability engineering is associated with the failure rate of components, systems, or mechanisms, not of human beings in and interacting with a technological system. The probability of failure requires prior knowledge of the total number of outcomes, which for any predictive purposes we do not know or have. Analysis of failure rates due to human error and the rate of learning allows a new determination of the dynamic human error rate in technological systems, consistent with and derived from the available world data. The basis for the analysis is the 'learning hypothesis': that humans learn from experience, and consequently the accumulated experience defines the failure rate. A new 'best' equation has been derived for the human error, outcome, or failure rate, which allows for calculation and prediction of the probability of human error. We also provide comparisons to the empirical Weibull parameter fitting used in and by conventional reliability engineering and probabilistic safety analysis methods. These new analyses show that arbitrary Weibull fitting parameters and typical empirical hazard function techniques cannot be used to predict the dynamics of human errors and outcomes in the presence of learning. Comparisons of these new insights show agreement with human error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator actions and transient control behavior observed in transients in both plants and simulators. The results demonstrate that the human error probability (HEP) is dynamic, and that it may be predicted using the learning hypothesis and the minimum

  17. Time series modeling of pathogen-specific disease probabilities with subsampled data.

    Science.gov (United States)

    Fisher, Leigh; Wakefield, Jon; Bauer, Cici; Self, Steve

    2017-03-01

    Many diseases arise due to exposure to one of multiple possible pathogens. We consider the situation in which disease counts are available over time from a study region, along with a measure of clinical disease severity, for example, mild or severe. In addition, we suppose a subset of the cases are lab tested in order to determine the pathogen responsible for disease. In such a context, we focus interest on modeling the probabilities of disease incidence given pathogen type. The time course of these probabilities is of great interest, as is the association with time-varying covariates such as meteorological variables. In this setup, a natural Bayesian approach would be based on imputation of the unsampled pathogen information using Markov chain Monte Carlo, but this is computationally challenging. We describe a practical approach to inference that is easy to implement. We use an empirical Bayes procedure in a first step to estimate summary statistics. We then treat these summary statistics as the observed data and develop a Bayesian generalized additive model. We analyze data on hand, foot, and mouth disease (HFMD) in China in which there are two pathogens of primary interest, enterovirus 71 (EV71) and Coxsackie A16 (CA16). We find that both EV71 and CA16 are associated with temperature, relative humidity, and wind speed, with reasonably similar functional forms for both pathogens. The important issue of confounding by time is modeled using a penalized B-spline model with a random effects representation. The level of smoothing is addressed by a careful choice of the prior on the tuning variance. © 2016, The International Biometric Society.
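
    As a toy illustration of the first (empirical Bayes) step, the sketch below shrinks weekly pathogen proportions, estimated from a small lab-tested subsample, toward a common beta prior fitted by crude moment matching; all data and names are synthetic assumptions, not the HFMD analysis itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic weekly data: number of lab-tested cases and EV71-positive tests.
    n_weeks = 52
    tested = rng.integers(5, 30, n_weeks)
    true_p = 0.4 + 0.2 * np.sin(np.arange(n_weeks) / 8.0)
    ev71_pos = rng.binomial(tested, true_p)

    # Fit a common Beta(a, b) prior to the raw weekly proportions by moment
    # matching, then shrink each weekly estimate toward the prior mean.
    p_hat = ev71_pos / tested
    mu, var = p_hat.mean(), p_hat.var()
    ab_sum = mu * (1 - mu) / var - 1              # implied a + b
    a, b = mu * ab_sum, (1 - mu) * ab_sum
    p_shrunk = (ev71_pos + a) / (tested + a + b)  # posterior-mean estimates
    print(np.round(p_shrunk[:6], 3))
    ```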

  18. Left passage probability of Schramm-Loewner Evolution

    Science.gov (United States)

    Najafi, M. N.

    2013-06-01

    SLE(κ,ρ⃗) is a variant of Schramm-Loewner Evolution (SLE) which describes curves that are not conformally invariant, but are self-similar due to the presence of some other preferred points on the boundary. In this paper we study the left passage probability (LPP) of SLE(κ,ρ⃗) in a field-theoretical framework and find the differential equation governing this probability. This equation is solved numerically for the special case κ=2 and hρ=0, in which hρ is the conformal weight of the boundary condition changing (bcc) operator. This case may be related to the loop-erased random walk (LERW) and the Abelian sandpile model (ASM) with a sink on its boundary. For a curve which starts from ξ0 and is conditioned by a change of boundary conditions at x0, we find that this probability depends significantly on the factor x0-ξ0. We also present the perturbative general solution for large x0. As a prototype, we apply this formalism to SLE(κ,κ-6), which governs the curves that start from and end on the real axis.
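
    For orientation, standard chordal SLE_κ (the case ρ⃗ = 0) already has a closed-form left passage probability, Schramm's formula; the commonly quoted statement is reproduced below as background (an assumption of this note, not an equation taken from the paper): for a point re^{iθ} in the upper half-plane,

    ```latex
    P\left[\gamma \text{ passes to the left of } re^{i\theta}\right]
      = \frac{1}{2}
      + \frac{\Gamma(4/\kappa)}{\sqrt{\pi}\,\Gamma\!\left(\tfrac{8-\kappa}{2\kappa}\right)}
        \cot\theta \;
        {}_{2}F_{1}\!\left(\tfrac{1}{2},\ \tfrac{4}{\kappa};\ \tfrac{3}{2};\ -\cot^{2}\theta\right)
    ```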

  19. Retrieval system for emplaced spent unreprocessed fuel (SURF) in salt bed depository. Baseline concept criteria specifications and mechanical failure probabilities

    International Nuclear Information System (INIS)

    Hudson, E.E.; McCleery, J.E.

    1979-05-01

    One of the integral elements of the Nuclear Waste Management Program is the material handling task of retrieving canisters containing spent unreprocessed fuel from their emplacement in a deep geologic salt bed depository. This report is predicated on a study of the retrieval-concept data base. In it, alternative concepts for the tasks are illustrated and critiqued, a baseline concept is derived in scenario form, and basic retrieval subsystem specifications are presented with predicted cyclic failure probabilities. The report is based on the following assumptions: (a) during retrieval, a temporary radiation seal is placed over each canister emplacement; (b) a sleeve surrounding the canister was installed during the original emplacement; (c) the emplacement room's physical and environmental conditions established in this report are maintained while the task is performed

  20. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    Energy Technology Data Exchange (ETDEWEB)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul [Stanford University, Stanford, California 94394 (United States); Varian Medical Systems, Palo Alto, California 94304 (United States); Stanford University, Stanford, California 94394 (United States)

    2010-12-15

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN≥125 were recommended to be tested monthly. Failure modes with RPN<125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures
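
    A minimal sketch of the RPN bookkeeping described above, with hypothetical ratings (the failure-mode names echo the paper's test categories, but the scores are invented):

    ```python
    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        name: str
        occurrence: int     # 1-10 rating
        severity: int       # 1-10 rating
        detectability: int  # 1-10 rating (10 = hardest to detect)

        @property
        def rpn(self) -> int:
            # Risk probability number = occurrence x severity x detectability
            return self.occurrence * self.severity * self.detectability

    modes = [FailureMode("coordinate transform error", 3, 9, 6),
             FailureMode("excess system latency", 5, 7, 4),
             FailureMode("MLC leaf positional error", 4, 8, 5)]

    for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
        schedule = "monthly QA" if m.rpn >= 125 else "commissioning/annual QA"
        print(f"{m.name}: RPN={m.rpn} -> {schedule}")
    ```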

  1. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems.

    Science.gov (United States)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-12-01

    To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be approximately 193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was approximately 35 min, while that taken for comprehensive testing was approximately 3.5 h. FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation

  2. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    International Nuclear Information System (INIS)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN≥125 were recommended to be tested monthly. Failure modes with RPN<125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation

  3. Imaging findings in the rare catastrophic variant of the primary antiphospholipid syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Thuerl, Christina; Altehoefer, Carsten; Laubenberger, Joerg [Freiburg Univ. (Germany). Abt. Radiologie; Spyridonidis, Alexandros [Freiburg Univ. (DE). Abt. Innere Medizin 1 (Haematologie und Onkologie)

    2002-03-01

    We report imaging findings in a case of the rare catastrophic variant of antiphospholipid syndrome (CAPS) characterized by widespread microvascular occlusions, which may lead to multiple organ failure. We present a case of a 66-year-old woman with bone marrow necrosis, acute acalculous cholecystitis (AAC), focal liver necrosis, subtle patchy splenic infarctions, and bilateral adrenal infarction. The demonstration of multiple microvascular organ involvement (three or more) is crucial for the diagnosis of the catastrophic variant of APS. This can be performed radiologically intra-vitam. Imaging can even reveal subclinical microinfarctions, which are often only diagnosed at autopsy. (orig.)

  4. Imaging findings in the rare catastrophic variant of the primary antiphospholipid syndrome

    International Nuclear Information System (INIS)

    Thuerl, Christina; Altehoefer, Carsten; Laubenberger, Joerg

    2002-01-01

    We report imaging findings in a case of the rare catastrophic variant of antiphospholipid syndrome (CAPS) characterized by widespread microvascular occlusions, which may lead to multiple organ failure. We present a case of a 66-year-old woman with bone marrow necrosis, acute acalculous cholecystitis (AAC), focal liver necrosis, subtle patchy splenic infarctions, and bilateral adrenal infarction. The demonstration of multiple microvascular organ involvement (three or more) is crucial for the diagnosis of the catastrophic variant of APS. This can be performed radiologically intra-vitam. Imaging can even reveal subclinical microinfarctions, which are often only diagnosed at autopsy. (orig.)

  5. Risk-based decision making to manage water quality failures caused by combined sewer overflows

    Science.gov (United States)

    Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.

    2017-12-01

    Regulatory authorities set environmental permits for water utilities such that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of paying penalties or receiving negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure that improves the system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models; hence, any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failure caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction of the flow contribution of catchment surfaces, is selected as planning measures to avoid the water quality failure. Failure is defined as exceedance, with a certain frequency, of a concentration-duration threshold based on Austrian emission standards for ammonia (De Toffol, 2006). For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); however, unlike the failure probability, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean
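
    A minimal numerical sketch of the two risk measures, assuming the buffered failure probability is taken as the largest tail fraction whose tail average still reaches the threshold (a sample-based reading of the Rockafellar & Royset definition); the event counts are synthetic:

    ```python
    import numpy as np

    def failure_probabilities(x, z):
        """Empirical P(X > z) and a sample-based buffered failure probability."""
        x = np.sort(np.asarray(x))[::-1]                      # descending
        pf = np.mean(x > z)
        tail_means = np.cumsum(x) / np.arange(1, len(x) + 1)  # non-increasing
        k = np.searchsorted(-tail_means, -z, side="right")    # count of means >= z
        return pf, k / len(x)

    # Synthetic Monte Carlo output: annual counts of threshold-exceeding spills.
    rng = np.random.default_rng(7)
    events = rng.gamma(shape=2.0, scale=1.5, size=10_000)
    pf, pbf = failure_probabilities(events, z=5.0)
    print(f"pf = {pf:.4f}, buffered pf = {pbf:.4f}")          # pbf >= pf
    ```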

  6. Improved methods for dependent failure analysis in PSA

    International Nuclear Information System (INIS)

    Ballard, G.M.; Games, A.M.

    1988-01-01

    The basic design principle used in ensuring the safe operation of nuclear power plant is defence in depth. This normally takes the form of redundant equipment and systems which provide protection even if a number of equipment failures occur. Such redundancy is particularly effective in ensuring that multiple, independent equipment failures with the potential for jeopardising reactor safety will be rare events. However, the achievement of high reliability has served to highlight the potentially dominant role of multiple, dependent failures of equipment and systems. Analysis of reactor operating experience has shown that dependent failure events are the major contributors to safety system failures and reactor incidents and accidents. In parallel, PSA studies have shown that the results of a safety analysis are sensitive to assumptions made about the dependent failure (CCF) probability for safety systems. Thus, a Westinghouse analysis showed that increasing system dependent failure probabilities by a factor of 5 led to a factor of 4 increase in core damage frequency. This paper refers particularly to the engineering concepts underlying dependent failure assessment, touching briefly on aspects of data. It is specifically not the intent of our work to develop a new mathematical model of CCF but to aid the use of existing models

  7. Clear-cell variant urothelial carcinoma of the bladder: a case report and review of the literature

    Directory of Open Access Journals (Sweden)

    Hossein Tezval

    2012-10-01

    Clear cell variants of transitional cell carcinoma (TCC) of the bladder are extremely rare tumors; only 6 cases have been reported to date. We report on a 67-year-old man who presented with rapidly growing tumor disease. While the initial diagnosis showed a localized bladder tumor, final histopathology revealed a pT4, G3, L1 urothelial carcinoma with clear cell differentiation. No more than 14 weeks after the initial diagnosis the patient died from multi-organ failure after an unsuccessful salvage laparotomy, which showed massive tumor burden within the pelvis and peritoneal carcinosis. This case demonstrated extremely fast tumor growth; therefore, patients with clear cell urothelial carcinoma should be treated vigorously and without delay. We present a case of the clear cell variant of TCC which exhibited extremely aggressive behavior. To our knowledge this is the fifth report of this rare disease.

  8. Time to second prostate specific antigen (PSA) failure is a surrogate endpoint for prostate cancer death in prospective trials of therapy for localized disease

    Energy Technology Data Exchange (ETDEWEB)

    Zietman, A L; Dallow, K C; Shipley, W U; Heney, N M; McManus, P L

    1995-07-01

    Purpose: In assessing the efficacy of the competing curative therapies for prostate cancer, the most relevant endpoint is cancer-specific death. Due to the long natural history of the disease and the use of salvage androgen suppression, prospective trials need to mature for at least a decade to provide meaningful results. An endpoint that predicted cancer death with high probability would allow more rapid completion of prospective studies, hopefully before the tested therapies become outdated. Materials and methods: 202 patients entered into a single-institution prospective randomized study for T3-4 prostate cancer between 1982 and 1992 were evaluated. All received radical irradiation to either a standard dose of 67.2 Gy or a higher dose of 75.6 Gy (the latter employing a proton beam boost). 76 men received androgen suppression or orchiectomy for salvage following relapse (median follow-up 6.9 years). Of this group, 35 experienced a second relapse heralded by a rise in the serum PSA. Second failure was scored on the date that the serum PSA rose to greater than 10% above the post-androgen-suppression nadir. Kaplan-Meier analysis was made of survival from the time of second PSA failure, and the cause of death was established in all patients who subsequently died. Results: The median duration of response to hormone therapy following first failure was 27.2 months. The actuarial survival from the time of second biochemical relapse was 93%, 66%, 35%, and 0% at 1, 2, 3, and 4 years, respectively (50% at 32 months). 16 patients have so far died after second failure, all from causes related to their prostate cancer. Conclusion: Second PSA failure appears to be a secure surrogate for impending prostate cancer death. Its use as an endpoint in prospective studies should allow earlier reporting by 2-3 years.

  9. Re-Ranking Sequencing Variants in the Post-GWAS Era for Accurate Causal Variant Identification

    Science.gov (United States)

    Faye, Laura L.; Machiela, Mitchell J.; Kraft, Peter; Bull, Shelley B.; Sun, Lei

    2013-01-01

    Next generation sequencing has dramatically increased our ability to localize disease-causing variants by providing base-pair level information at costs increasingly feasible for the large sample sizes required to detect complex-trait associations. Yet, identification of causal variants within an established region of association remains a challenge. Counter-intuitively, certain factors that increase power to detect an associated region can decrease power to localize the causal variant. First, combining GWAS with imputation or low coverage sequencing to achieve the large sample sizes required for high power can have the unintended effect of producing differential genotyping error among SNPs. This tends to bias the relative evidence for association toward better genotyped SNPs. Second, re-use of GWAS data for fine-mapping exploits previous findings to ensure genome-wide significance in GWAS-associated regions. However, using GWAS findings to inform fine-mapping analysis can bias evidence away from the causal SNP toward the tag SNP and SNPs in high LD with the tag. Together these factors can reduce power to localize the causal SNP by more than half. Other strategies commonly employed to increase power to detect association, namely increasing sample size and using higher density genotyping arrays, can, in certain common scenarios, actually exacerbate these effects and further decrease power to localize causal variants. We develop a re-ranking procedure that accounts for these adverse effects and substantially improves the accuracy of causal SNP identification, often doubling the probability that the causal SNP is top-ranked. Application to the NCI BPC3 aggressive prostate cancer GWAS with imputation meta-analysis identified a new top SNP at 2 of 3 associated loci and several additional possible causal SNPs at these loci that may have otherwise been overlooked. This method is simple to implement using R scripts provided on the author's website. PMID:23950724

  10. Probability of extreme interference levels computed from reliability approaches: application to transmission lines with uncertain parameters

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper deals with the risk analysis of an EMC fault using a statistical approach based on reliability methods from probabilistic engineering mechanics. The probability of failure (i.e., the probability of exceeding a threshold) of a current induced by crosstalk is computed, taking into account uncertainties in the input parameters that influence interference levels in the context of transmission lines. The study allowed us to evaluate the probability of failure of the induced current using reliability methods with a relatively low computational cost compared to Monte Carlo simulation. (authors)
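
    The reliability machinery alluded to above can be illustrated with a toy first-order reliability method (FORM) calculation; the coupling model, threshold and distributions below are invented stand-ins, not the paper's limit state:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def limit_state(u):
        """Toy limit state in standard-normal space: g <= 0 means failure."""
        z_term = 50.0 * np.exp(0.2 * u[0])         # termination impedance (ohm)
        length = 2.0 + 0.3 * u[1]                  # coupling length (m)
        i_induced = 0.1 * length / z_term          # crude crosstalk model (A)
        return 6.0e-3 - i_induced                  # 6 mA threshold

    # Design point: point on the failure surface g(u) = 0 closest to the origin.
    res = minimize(lambda u: u @ u, x0=np.array([-1.0, 1.0]),
                   constraints={"type": "eq", "fun": limit_state})
    beta = np.sqrt(res.fun)                        # Hasofer-Lind reliability index
    print(f"beta = {beta:.2f}, pf ~ {norm.cdf(-beta):.2e}")
    ```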

  11. Serviceability Assessment for Cascading Failures in Water Distribution Network under Seismic Scenario

    Directory of Open Access Journals (Sweden)

    Qing Shuang

    2016-01-01

    The stability of water service is a hot topic in industrial production, public safety, and academic research. This paper establishes a service evaluation model for the water distribution network (WDN). Serviceability is measured in three aspects: (1) the functionality of structural components under a disaster environment; (2) the recognition of the cascading failure process; and (3) the calculation of system reliability. The node and edge failures in a WDN are interrelated under seismic excitations. The cascading failure process is modeled with the balance of water supply and demand. The matrix-based system reliability (MSR) method is used to represent the system events and calculate the non-failure probability. An example is used to illustrate the proposed method. The cascading failure processes with different node failures are simulated, the serviceability is analyzed, and the critical node can be identified. The results show that an aged network has a greater influence on system service under a seismic scenario and that maintenance can improve the anti-disaster ability of a WDN. Priority should be given to controlling the time between the initial failure and the first secondary failure, since taking post-disaster emergency measures within this period can largely cut down the spread of the cascade effect in the whole WDN.

  12. Change over time in the effect of grade and ER on risk of distant failure in patients treated with breast-conserving therapy

    International Nuclear Information System (INIS)

    Gelman, Rebecca; Nixon, Asa J.; O'Neill, Anne; Harris, Jay R.

    1996-01-01

    violated the assumption of proportional hazards. The risk of distant failure associated with grade III is high at first, but steadily decreases over at least 8 years of follow-up. In patients who are free of distant disease at 3 years of follow-up, the subsequent risk of distant failure associated with a grade III tumor was actually lower than that associated with a grade II tumor. By 10 years, the cumulative rate of distant failure was the same for grade III and grade II. These results suggest that grade may affect the growth rate of distant micrometastases present at the time of original diagnosis but may not affect the probability that such micrometastases are present. An alternative explanation is that grade III tumors represent a heterogeneous group, one subset of which all have very early distant failure. An analysis of hazard ratios over time is important in understanding prognostic factors and the proportional hazards model

  13. Time-variant random interval natural frequency analysis of structures

    Science.gov (United States)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random-interval structural natural frequency problem. In the proposed approach, a random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating, accurately and efficiently, the day-to-day time-variant natural frequency of structures under the intrinsic creep effect of concrete with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through an optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertainty variables are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
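
    The hybrid idea can be caricatured with a one-degree-of-freedom oscillator: random stiffness handled by sampling (standing in for the perturbation step) and an interval mass scanned at Chebyshev nodes, with the extreme statistics taken over the nodes. All numbers are assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    k = rng.normal(1.0e6, 5.0e4, 20_000)        # N/m, probabilistic stiffness
    m_lo, m_hi = 95.0, 110.0                    # kg, interval-valued mass

    # Chebyshev nodes over the interval (the surrogate-collocation idea):
    n = 9
    nodes = 0.5 * (m_lo + m_hi) + 0.5 * (m_hi - m_lo) * np.cos(
        (2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))

    means, stds = [], []
    for m in nodes:                             # statistics at each node
        f = np.sqrt(k / m) / (2 * np.pi)        # natural frequency (Hz)
        means.append(f.mean())
        stds.append(f.std())

    print(f"mean(f) in [{min(means):.3f}, {max(means):.3f}] Hz")
    print(f"std(f)  in [{min(stds):.4f}, {max(stds):.4f}] Hz")
    ```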

  14. Bidirectional Cardio-Respiratory Interactions in Heart Failure

    Directory of Open Access Journals (Sweden)

    Nikola N. Radovanović

    2018-03-01

    We investigated cardio-respiratory coupling in patients with heart failure by quantification of bidirectional interactions between cardiac (RR intervals) and respiratory signals with complementary measures of time series analysis. Heart failure patients were divided into three groups of twenty age- and gender-matched subjects: with sinus rhythm (HF-Sin), with sinus rhythm and ventricular extrasystoles (HF-VES), and with permanent atrial fibrillation (HF-AF). We included patients with an indication for implantation of an implantable cardioverter defibrillator or a cardiac resynchronization therapy device. ECG and respiratory signals were simultaneously acquired during 20 min in supine position at spontaneous breathing frequency in 20 healthy control subjects and in patients before device implantation. We used coherence, Granger causality and cross-sample entropy analysis as complementary measures of bidirectional interactions between RR intervals and respiratory rhythm. In heart failure patients with arrhythmias (HF-VES and HF-AF) there is no coherence between signals (p < 0.01), while in HF-Sin it is reduced (p < 0.05), compared with control subjects. In all heart failure groups causality between signals is diminished, but with significantly stronger causality from the RR signal to the respiratory signal in HF-VES. Cross-sample entropy analysis revealed the strongest synchrony between the respiratory and RR signals in the HF-VES group. Besides respiratory sinus arrhythmia there is another type of cardio-respiratory interaction based on the synchrony between cardiac and respiratory rhythm. Both of them are altered in heart failure patients. Respiratory sinus arrhythmia is reduced in HF-Sin patients and vanishes in heart failure patients with arrhythmias. In contrast, in the HF-Sin and HF-VES groups synchrony increased, probably as a consequence of dominant neural compensatory mechanisms. The coupling of cardiac and respiratory rhythm in heart failure patients varies depending on the presence of atrial/ventricular arrhythmias.

  15. Probabilistic analysis on the failure of reactivity control for the PWR

    Science.gov (United States)

    Sony Tjahyani, D. T.; Deswandri; Sunaryo, G. R.

    2018-02-01

    The fundamental safety functions of a power reactor are to control reactivity, to remove heat from the reactor, and to confine radioactive material. Safety analysis is used to ensure that each function is fulfilled during the design and is done by deterministic and probabilistic methods. The analysis of reactivity control is important because it affects the other fundamental safety functions. The purpose of this research is to determine the failure probability of reactivity control and the contributions to its failure for a PWR design. The analysis is carried out by determining the intermediate events which cause the failure of reactivity control. The basic events are then determined by the deductive method using fault tree analysis. The AP1000 is used as the object of research. The probability data on component failure and human error used in the analysis were collected from IAEA, Westinghouse, NRC and other published documents. The results show that there are six intermediate events which can cause the failure of reactivity control: uncontrolled rod bank withdrawal at low power or at full power, malfunction of boron dilution, misalignment of control rod withdrawal, improper position of a fuel assembly, and ejection of a control rod. The failure probability of reactivity control is 1.49E-03 per year. The causes of failure that are affected by the human factor are boron dilution, misalignment of control rod withdrawal and improper positioning of a fuel assembly. Based on the assessment, it is concluded that the failure probability of reactivity control for the PWR is still within the IAEA criteria.
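
    The deductive (fault tree) step reduces, at the point-estimate level, to combining basic-event probabilities through AND/OR gates; the sketch below uses invented numbers, not the study's data:

    ```python
    def or_gate(*p):
        """P(at least one event), assuming independence."""
        prob = 1.0
        for pi in p:
            prob *= 1.0 - pi
        return 1.0 - prob

    def and_gate(*p):
        """P(all events), assuming independence."""
        prob = 1.0
        for pi in p:
            prob *= pi
        return prob

    # Hypothetical per-year probabilities for the intermediate events.
    rod_withdrawal = or_gate(2e-4, 1e-4)      # at low power OR at full power
    boron_dilution = and_gate(5e-3, 2e-2)     # hardware fault AND operator error
    top = or_gate(rod_withdrawal, boron_dilution, 3e-4, 1e-4, 1e-5)
    print(f"top event probability ~ {top:.2e} per year")
    ```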

  16. Bidirectional Cardio-Respiratory Interactions in Heart Failure.

    Science.gov (United States)

    Radovanović, Nikola N; Pavlović, Siniša U; Milašinović, Goran; Kirćanski, Bratislav; Platiša, Mirjana M

    2018-01-01

    We investigated cardio-respiratory coupling in patients with heart failure by quantification of bidirectional interactions between cardiac (RR intervals) and respiratory signals with complementary measures of time series analysis. Heart failure patients were divided into three groups of twenty, age and gender matched, subjects: with sinus rhythm (HF-Sin), with sinus rhythm and ventricular extrasystoles (HF-VES), and with permanent atrial fibrillation (HF-AF). We included patients with indication for implantation of implantable cardioverter defibrillator or cardiac resynchronization therapy device. ECG and respiratory signals were simultaneously acquired during 20 min in supine position at spontaneous breathing frequency in 20 healthy control subjects and in patients before device implantation. We used coherence, Granger causality and cross-sample entropy analysis as complementary measures of bidirectional interactions between RR intervals and respiratory rhythm. In heart failure patients with arrhythmias (HF-VES and HF-AF) there is no coherence between signals (p < 0.01), while in HF-Sin it is reduced (p < 0.05), compared with control subjects. In all heart failure groups causality between signals is diminished, but with significantly stronger causality from the RR signal to the respiratory signal in HF-VES. Cross-sample entropy analysis revealed the strongest synchrony between respiratory and RR signal in HF-VES group. Besides respiratory sinus arrhythmia there is another type of cardio-respiratory interaction based on the synchrony between cardiac and respiratory rhythm. Both of them are altered in heart failure patients. Respiratory sinus arrhythmia is reduced in HF-Sin patients and vanished in heart failure patients with arrhythmias. In contrast, in HF-Sin and HF-VES groups, synchrony increased, probably as consequence of some dominant neural compensatory mechanisms. The coupling of cardiac and respiratory rhythm in heart failure patients varies depending on the presence of atrial/ventricular arrhythmias and it could be revealed by complementary methods of time series analysis.

  17. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waller, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1976-07-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure rate lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this report is to present a methodology that can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate lambda simultaneously satisfies the probability statements P(lambda < 1.0 × 10⁻³) = 0.50 and P(lambda < 1.0 × 10⁻⁵) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure rate percentiles illustrated above, it is possible to use the induced negative-log gamma prior distribution which satisfies the probability statements P(R(t₀) < 0.99) = 0.50 and P(R(t₀) < 0.99999) = 0.95, for some operating time t₀. The report also includes graphs for selected percentiles which assist an engineer in applying the procedure. 28 figures, 16 tables
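
    The percentile-matching step is easy to reproduce numerically: because the ratio of two gamma quantiles depends only on the shape parameter, one can solve for the shape first and then recover the scale. A sketch using the failure-rate percentiles quoted above:

    ```python
    from scipy.stats import gamma
    from scipy.optimize import brentq

    q50, q05 = 1.0e-3, 1.0e-5   # P(lambda < q50) = 0.50, P(lambda < q05) = 0.05

    def ratio_gap(shape):
        return gamma.ppf(0.50, shape) / gamma.ppf(0.05, shape) - q50 / q05

    shape = brentq(ratio_gap, 0.01, 10.0)   # quantile ratio is monotone in shape
    scale = q50 / gamma.ppf(0.50, shape)
    print(f"shape = {shape:.3f}, scale = {scale:.3e}")
    print(gamma.cdf(q50, shape, scale=scale), gamma.cdf(q05, shape, scale=scale))
    ```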

  18. Estimation of Extreme Responses and Failure Probability of Wind Turbines under Normal Operation by Controlled Monte Carlo Simulation

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri

    An alternative approach for estimation of the first excursion probability of any system is based on calculating the evolution of the Probability Density Function (PDF) of the process and integrating it on the specified domain. Clearly this provides the most accurate results among the three classes of methods... The solution of the Fokker-Planck-Kolmogorov (FPK) equation for systems governed by a stochastic differential equation driven by Gaussian white noise will give the sought time variation of the probability density function. However, the analytical solution of the FPK is available for only a few dynamic systems... The problem... of the evolution of the PDF of a stochastic process; hence an alternative to the FPK. The considerable advantage of the introduced method over the FPK is that its solution does not require high computational cost, which extends its range of applicability to high-order structural dynamic problems.

  19. Screening for common copy-number variants in cancer genes.

    Science.gov (United States)

    Tyson, Jess; Majerus, Tamsin M O; Walker, Susan; Armour, John A L

    2010-12-01

    For most cases of colorectal cancer that arise without a family history of the disease, it is proposed that an appreciable heritable component of predisposition is the result of contributions from many loci. Although progress has been made in identifying single nucleotide variants associated with colorectal cancer risk, the involvement of low-penetrance copy number variants is relatively unexplored. We have used multiplex amplifiable probe hybridization (MAPH) in a fourfold multiplex (QuadMAPH), positioned at an average resolution of one probe per 2 kb, to screen a total of 1.56 Mb of genomic DNA for copy number variants around the genes APC, AXIN1, BRCA1, BRCA2, CTNNB1, HRAS, MLH1, MSH2, and TP53. Two deletion events were detected, one upstream of MLH1 in a control individual and the other in APC in a colorectal cancer patient, but these do not seem to correspond to copy number polymorphisms with measurably high population frequencies. In summary, by means of our QuadMAPH assay, copy number measurement data were of sufficient resolution and accuracy to detect any copy number variants with high probability. However, this study has demonstrated a very low incidence of deletion and duplication variants within intronic and flanking regions of these nine genes, in both control individuals and colorectal cancer patients. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. Impact of Dual-Link Failures on Impairment-Aware Routed Networks

    DEFF Research Database (Denmark)

    Georgakilas, Konstantinos N; Katrinis, Kostas; Tzanakaki, Anna

    2010-01-01

    This paper evaluates the impact of dual-link failures on single-link failure resilient networks, while physical layer constraints are taken into consideration during demand routing, as dual link failures and equivalent situations appear to be quite probable in core optical networks. In particular...

  1. Evaluation of burst probability for tubes by Weibull distributions

    International Nuclear Information System (INIS)

    Kao, S.

    1975-10-01

    The investigation of candidate distributions that best describe the burst-pressure failure probability characteristics of nuclear power steam generator tubes has been continued. To date it has been found that the Weibull distribution provides an acceptable fit for the available data from both the statistical and physical viewpoints. The reasons for the acceptability of the Weibull distribution are stated, together with the results of tests for suitability of fit. In exploring the acceptability of the Weibull distribution for the fitting, a graphical method called the "density-gram" is employed instead of the usual histogram. With this method a more sensible graphical observation of the empirical density may be made in cases where the available data are very limited. Based on these methods, estimates of failure pressure are made for the left-tail probabilities
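
    A modern equivalent of the fitting step, on synthetic data (the numbers are placeholders, not tube measurements):

    ```python
    from scipy.stats import weibull_min

    # Hypothetical burst-pressure sample (MPa).
    bursts = weibull_min.rvs(4.0, scale=35.0, size=60, random_state=5)

    # Two-parameter Weibull fit (location fixed at zero), then a left-tail
    # failure probability of the kind discussed in the report.
    c_hat, loc, scale_hat = weibull_min.fit(bursts, floc=0)
    print(f"shape = {c_hat:.2f}, scale = {scale_hat:.1f} MPa")
    print(f"P(burst < 20 MPa) = {weibull_min.cdf(20.0, c_hat, scale=scale_hat):.3f}")
    ```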

  2. Systemic risk in dynamical networks with stochastic failure criterion

    Science.gov (United States)

    Podobnik, B.; Horvatic, D.; Bertella, M. A.; Feng, L.; Huang, X.; Li, B.

    2014-06-01

    We model complex non-linear interactions between banks and assets by two time-dependent Erdős-Rényi network models in which each node, representing a bank, can invest either in a single asset (model I) or in multiple assets (model II). We use a dynamical network approach to evaluate collective financial failure —systemic risk— quantified by the fraction of active nodes. The systemic risk can be calculated over any future time period, divided into sub-periods, where within each sub-period banks may contiguously fail due to links to either i) assets or ii) other banks, controlled by two parameters, the probability of internal failure p and the threshold Th (a "solvency" parameter). The systemic risk decreases with the average network degree faster when all assets are equally distributed across banks than when assets are randomly distributed. The more inactive banks each bank can sustain (smaller Th), the smaller the systemic risk; for some Th values in model I we report a discontinuity in systemic risk. When contiguous spreading becomes stochastic ii), i.e., the condition for a bank to be solvent (active) is stochastic and controlled by probability p2, the systemic risk decreases with decreasing p2. We also analyse the asset allocation for U.S. banks.
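
    A deliberately simplified cascade in the spirit of the model (the failure rule, parameters and network size are assumptions; the paper's models I and II are richer):

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(11)

    def fraction_active(n=200, k_avg=6.0, p=0.03, th=0.5, steps=20):
        """Toy bank cascade on an Erdos-Renyi graph.

        Each step a bank fails internally with probability p; it also fails
        when the fraction of its still-active neighbours drops below th
        (so a smaller th lets a bank sustain more failed peers).
        """
        g = nx.gnp_random_graph(n, k_avg / (n - 1), seed=11)
        active = np.ones(n, dtype=bool)
        for _ in range(steps):
            active &= rng.random(n) > p                  # internal failures
            for v in g.nodes:
                nbrs = list(g.neighbors(v))
                if active[v] and nbrs and np.mean(active[nbrs]) < th:
                    active[v] = False                    # contagion failure
        return active.mean()

    print(fraction_active(th=0.3), fraction_active(th=0.7))
    ```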

  3. Employment status at time of first hospitalization for heart failure is associated with a higher risk of death and rehospitalization for heart failure

    DEFF Research Database (Denmark)

    Rørth, Rasmus; Fosbøl, Emil L; Mogensen, Ulrik M

    2018-01-01

    AIMS: Employment status at time of first heart failure (HF) hospitalization may be an indicator of both self-perceived and objective health status. In this study, we examined the association between employment status and the risk of all-cause mortality and recurrent HF hospitalization in a nation...

  4. Regression analysis of case K interval-censored failure time data in the presence of informative censoring.

    Science.gov (United States)

    Wang, Peijie; Zhao, Hui; Sun, Jianguo

    2016-12-01

    Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For this problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and for estimation, a two-step procedure is presented. In addition, the asymptotic properties of the proposed estimators of the regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.
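
    For readers unfamiliar with the data structure, the sketch below generates "case K" interval-censored observations, in which each failure time is only known to lie between two of the K examination times; here the examination process is independent of the failure time, whereas the article's setting allows the two to be related:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t_fail = rng.weibull(1.5, 5) * 10.0       # latent failure times
    for t in t_fail:
        visits = np.sort(rng.uniform(0.0, 20.0, size=rng.integers(3, 8)))
        before = visits[visits < t]
        after = visits[visits >= t]
        left = before[-1] if before.size else 0.0
        right = after[0] if after.size else np.inf
        print(f"true t = {t:5.2f}  observed interval = ({left:5.2f}, {right:5.2f}]")
    ```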

  5. Relationship between Future Time Orientation and Item Nonresponse on Subjective Probability Questions: A Cross-Cultural Analysis.

    Science.gov (United States)

    Lee, Sunghee; Liu, Mingnan; Hu, Mengyao

    2017-06-01

    Time orientation is an unconscious yet fundamental cognitive process that provides a framework for organizing personal experiences in temporal categories of past, present and future, reflecting the relative emphasis given to these categories. Culture lies central to individuals' time orientation, leading to cultural variations in time orientation. For example, people from future-oriented cultures tend to emphasize the future and store information relevant for the future more than those from present- or past-oriented cultures. For survey questions that ask respondents to report expected probabilities of future events, this may translate into culture-specific question difficulties, manifested through systematically varying "I don't know" item nonresponse rates. This study drew on the time orientation theory and examined culture-specific nonresponse patterns on subjective probability questions using methodologically comparable population-based surveys from multiple countries. The results supported our hypothesis. Item nonresponse rates on these questions varied significantly in the way that future-orientation at the group as well as individual level was associated with lower nonresponse rates. This pattern did not apply to non-probability questions. Our study also suggested potential nonresponse bias. Examining culture-specific constructs, such as time orientation, as a framework for measurement mechanisms may contribute to improving cross-cultural research.

  6. A new method for evaluating the availability, reliability, and maintainability whatever may be the probability law

    International Nuclear Information System (INIS)

    Doyon, L.R.; CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette

    1975-01-01

    A simple method is presented for solving by computer any system model (availability, reliability, and maintenance) with intervals between failures and repair durations distributed according to any probability law, and for any maintenance policy. A matrix equation is obtained using Markov diagrams. An example is given with its solution by the APAFS program (Algorithme Pour l'Analyse de la Fiabilite des Systemes).
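
    For the special case of exponential laws the matrix mechanics look as follows (a two-state up/down sketch with assumed rates; the point of the paper is precisely that arbitrary laws require a larger expanded state space, but the matrix equation has the same flavor):

    ```python
    import numpy as np
    from scipy.linalg import expm

    lam, mu = 1.0e-3, 5.0e-2          # failure and repair rates (1/h), assumed
    Q = np.array([[-lam,  lam],
                  [  mu,  -mu]])      # generator matrix (rows sum to zero)

    p0 = np.array([1.0, 0.0])         # start in the "up" state
    for t in (10.0, 100.0, 1000.0):
        pt = p0 @ expm(Q * t)         # transient state probabilities
        print(f"A({t:6.0f} h) = {pt[0]:.4f}")
    print(f"steady state  = {mu / (lam + mu):.4f}")
    ```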

  7. Prediction of the time-dependent failure rate for normally operating components taking into account the operational history

    International Nuclear Information System (INIS)

    Vrbanic, I.; Simic, Z.; Sljivac, D.

    2008-01-01

    The prediction of the time-dependent failure rate has been studied taking into account the operational history of a component. Such predictions are used, for example, in system modeling for probabilistic safety analysis in order to evaluate the impact of equipment aging and maintenance strategies on the risk measures considered. We have selected a time-dependent model for the failure rate which is based on the Weibull distribution and the principle of proportional age reduction by equipment overhauls. Estimation of the parameters that determine the failure rate is considered, including the definition of the operational history model and the likelihood function for Bayesian analysis of the parameters for normally operating repairable components. The operational history is provided as a time axis with defined times of overhauls and failures. A demonstration example is described, with prediction of future behavior for seven different operational histories. (orig.)
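
    A sketch of a Weibull hazard combined with proportional age reduction (each overhaul rejuvenates the component by a factor eps); the parameter values and the exact rejuvenation rule are illustrative assumptions rather than the paper's calibrated model:

    ```python
    import numpy as np

    def effective_age(t, overhauls, eps):
        """Effective age when each overhaul scales accumulated age by (1 - eps)."""
        age, prev = 0.0, 0.0
        for s in sorted(overhauls):
            if s > t:
                break
            age = (1.0 - eps) * (age + (s - prev))
            prev = s
        return age + (t - prev)

    def failure_rate(t, beta=2.5, eta=8.0, overhauls=(), eps=0.6):
        v = effective_age(t, overhauls, eps)
        return (beta / eta) * (v / eta) ** (beta - 1)

    # Hypothetical history: overhauls at 5 and 10 years, aging component (beta > 1).
    for t in (4.9, 5.1, 9.9, 10.1, 14.0):
        print(f"t = {t:4.1f} y  lambda(t) = {failure_rate(t, overhauls=(5.0, 10.0)):.4f}")
    ```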

  8. A modified GO-FLOW methodology with common cause failure based on Discrete Time Bayesian Network

    International Nuclear Information System (INIS)

    Fan, Dongming; Wang, Zili; Liu, Linlin; Ren, Yi

    2016-01-01

    Highlights: • Identification of particular causes of failure for common cause failure analysis. • Comparison of the two formalisms (GO-FLOW and Discrete Time Bayesian Network) and establishment of the correlation between them. • Mapping of the GO-FLOW model into a Bayesian network model. • Calculation of the GO-FLOW model with common cause failures based on the DTBN. - Abstract: The GO-FLOW methodology is a success-oriented system reliability modelling technique for multi-phase missions involving complex time-dependent, multi-state and common cause failure (CCF) features. However, the analysis algorithm cannot easily handle multiple shared signals and CCFs. In addition, the simulative algorithm is time-consuming when many multi-state components exist in the model, and the multiple time points of phased-mission problems increase the difficulty of the analysis method. In this paper, the Discrete Time Bayesian Network (DTBN) and the GO-FLOW methodology are integrated by unified mapping rules. Based on these rules, the operators can be mapped into the DTBN; subsequently, a complete GO-FLOW model with complex characteristics (e.g. phased mission, multi-state, and CCF) can be converted to the isomorphic DTBN and easily analyzed by utilizing the DTBN. With mature algorithms and tools, the multi-phase mission reliability parameter can be efficiently obtained via the proposed approach without special treatment of the shared signals and the various complex logic operations. Meanwhile, CCF can also be accounted for in the computing process.

  9. The modulation of simple reaction time by the spatial probability of a visual stimulus

    Directory of Open Access Journals (Sweden)

    Carreiro L.R.R.

    2003-01-01

    Simple reaction time (SRT) in response to visual stimuli can be influenced by many stimulus features. The speed and accuracy with which observers respond to a visual stimulus may be improved by prior knowledge about the stimulus location, which can be obtained by manipulating the spatial probability of the stimulus. However, when higher spatial probability is achieved by holding the stimulus location constant throughout successive trials, the resulting improvement in performance can also be due to local sensory facilitation caused by the recurrent spatial location of a visual target (position priming). The main objective of the present investigation was to quantitatively evaluate the modulation of SRT by the spatial probability structure of a visual stimulus. In two experiments the volunteers had to respond as quickly as possible to a visual target presented on a computer screen by pressing an optic key with the index finger of the dominant hand. Experiment 1 (N = 14) investigated how SRT changed as a function of both the different levels of spatial probability and the subject's explicit knowledge about the precise probability structure of visual stimulation. We found a gradual decrease in SRT with increasing spatial probability of the visual target, regardless of the observer's previous knowledge concerning the spatial probability of the stimulus. Error rates, below 2%, were independent of the spatial probability structure of the visual stimulus, suggesting the absence of a speed-accuracy trade-off. Experiment 2 (N = 12) examined whether changes in SRT in response to a spatially recurrent visual target might be accounted for simply by sensory and temporally local facilitation. The findings indicated that the decrease in SRT brought about by a spatially recurrent target was associated with its spatial predictability, and could not be accounted for solely in terms of sensory priming.

  10. Two viewpoints for software failures and their relation in probabilistic safety assessment of digital instrumentation and control systems

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2015-01-01

    As the use of digital systems in nuclear power plants increases, the reliability of software becomes one of the important issues in probabilistic safety assessment. In this paper, two viewpoints on a software failure during the operation of a digital system or a statistical software test are identified, and the relation between them is provided. In conventional software reliability analysis, a failure is mainly viewed with respect to the system operation; here a new viewpoint with respect to the system input is suggested. The failure probability density functions for the two viewpoints are defined, and the relation between them is derived, so that each failure probability density function can be obtained from the other by applying the derived relation. The usefulness of this relation is demonstrated by applying it to failure data obtained from the software testing of a real system. The two viewpoints and their relation, as identified in this paper, are expected to help extend our understanding of the reliability of safety-critical software. (author)

  11. Probability distribution of wave packet delay time for strong overlapping of resonance levels

    International Nuclear Information System (INIS)

    Lyuboshits, V.L.

    1983-01-01

    The time behaviour of nuclear reactions in the case of high level densities is investigated based on the theory of overlapping resonances. In the framework of a model of n equivalent channels, an analytical expression is obtained for the probability distribution function of the wave packet delay time in compound nucleus production. It is shown that for strong overlapping of the resonance levels the relative fluctuation of the delay time is small at the stage of compound nucleus production. A possible increase in the duration of nuclear reactions with rising excitation energy is discussed

  12. Development of a subway operation incident delay model using accelerated failure time approaches.

    Science.gov (United States)

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models, including log-logistic, lognormal and Weibull models with fixed and random parameters, are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, a Weibull model with gamma heterogeneity is considered for comparison of model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating subway incident delay. The results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption and crashes involving a casualty. Vehicle failure has the least impact on the increase of subway operation incident delay. According to these results, several possible measures, such as the use of short-distance wireless communication technology (e.g., Wi-Fi and ZigBee), are suggested to shorten the delay caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time. Copyright © 2014 Elsevier Ltd. All rights reserved.
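
    Fitting such a model is straightforward with the lifelines package; the records below are synthetic stand-ins for the registry data, generated from a log-logistic AFT mechanism so that the fitter has something sensible to recover:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import LogLogisticAFTFitter

    rng = np.random.default_rng(8)
    n = 300
    power_cable = rng.integers(0, 2, n)
    casualty = rng.integers(0, 2, n)

    # Log-logistic AFT: covariates act multiplicatively on the time scale.
    scale = 20.0 * np.exp(0.8 * power_cable + 1.1 * casualty)
    u = rng.random(n)
    delay = scale * np.sqrt(u / (1.0 - u))        # shape parameter 2

    df = pd.DataFrame({"delay": delay, "event": 1,
                       "power_cable": power_cable, "casualty": casualty})
    aft = LogLogisticAFTFitter()
    aft.fit(df, duration_col="delay", event_col="event")
    aft.print_summary()   # positive coefficients lengthen the expected delay
    ```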

  13. Transmission of single and multiple viral variants in primary HIV-1 subtype C infection.

    Directory of Open Access Journals (Sweden)

    Vladimir Novitsky

    2011-02-01

    To address whether sequences of viral gag and env quasispecies collected during the early post-acute period can be utilized to determine the multiplicity of transmitted HIV variants, recently developed approaches for analysis of viral evolution in acute HIV-1 infection [1,2] were applied. Specifically, phylogenetic reconstruction, inter- and intra-patient distribution of maximum and mean genetic distances, analysis of Poisson fitness, shape of highlighter plots, recombination analysis, and estimation of time to the most recent common ancestor (tMRCA) were utilized for resolving the multiplicity of HIV-1 transmission in a set of viral quasispecies collected within 50 days post-seroconversion (p/s) in 25 HIV-infected individuals with estimated time of seroconversion. The decision on multiplicity of HIV infection was made based on the model's fit with, or failure to explain, the observed extent of viral sequence heterogeneity. The initial analysis was based on phylogeny, inter-patient distribution of maximum and mean distances, and Poisson fitness, and was able to resolve the multiplicity of HIV transmission in 20 of 25 (80%) cases. Additional analysis involved the distribution of individual viral distances, highlighter plots, recombination analysis, and estimation of tMRCA, and resolved 4 of the 5 remaining cases. Overall, transmission of a single viral variant was identified in 16 of 25 (64%) cases, and transmission of multiple variants was evident in 8 of 25 (32%) cases. In one case the multiplicity of HIV-1 transmission could not be determined. In primary HIV-1 subtype C infection, samples collected within 50 days p/s and analyzed by a single-genome amplification/sequencing technique can provide reliable identification of transmission multiplicity in 24 of 25 (96%) cases. The observed transmission frequencies of a single viral variant and of multiple viral variants were within the ranges of 64% to 68% and 32% to 36%, respectively.

  14. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, M; Abazeed, M; Woody, N; Stephans, K; Videtic, G; Xia, P; Zhuang, T [The Cleveland Clinic Foundation, Cleveland, OH (United States)

    2016-06-15

    Purpose: To explore possible correlations between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009–2013 we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15±7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
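
    A rough 2D analogue of the GLCM computation (the study used 3D matrices in Matlab) can be written with scikit-image; the image here is random toy data standing in for a CT patch cropped to the GTV, and a recent scikit-image (0.19 or later, where the function is spelled graycomatrix) is assumed.

```python
import numpy as np
from skimage.feature import graycomatrix

rng = np.random.default_rng(0)
img = rng.integers(0, 64, size=(64, 64)).astype(np.uint8)   # toy 6-bit "CT" patch

glcm = graycomatrix(img, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)
p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # GLCM entropy
print(f"entropy = {entropy:.3f} bits, histogram std = {img.std():.2f}")
```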

  15. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    International Nuclear Information System (INIS)

    Andrews, M; Abazeed, M; Woody, N; Stephans, K; Videtic, G; Xia, P; Zhuang, T

    2016-01-01

    Purpose: To explore possible correlations between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009–2013 we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15±7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.

  16. Expert Performance and Time Pressure: Implications for Automation Failures in Aviation

    Science.gov (United States)

    2016-09-30

    settled by these two studies. To help resolve the disagreement between the previous research findings, the present work used a computerized chess... communication between the automation and the pilots should also be helpful, but it is doubtful that the system designer or the real-time automation can...

  17. Time course of recovery following resistance training leading or not to failure.

    Science.gov (United States)

    Morán-Navarro, Ricardo; Pérez, Carlos E; Mora-Rodríguez, Ricardo; de la Cruz-Sánchez, Ernesto; González-Badillo, Juan José; Sánchez-Medina, Luis; Pallarés, Jesús G

    2017-12-01

    To describe the acute and delayed time course of recovery following resistance training (RT) protocols differing in the number of repetitions (R) performed in each set (S) out of the maximum possible number (P). Ten resistance-trained men undertook three RT protocols [S × R(P)]: (1) 3 × 5(10), (2) 6 × 5(10), and (3) 3 × 10(10) in the bench press (BP) and full squat (SQ) exercises. Selected mechanical and biochemical variables were assessed at seven time points (from −12 h to +72 h post-exercise). Countermovement jump height (CMJ) and movement velocity against the load that elicited a 1 m·s⁻¹ mean propulsive velocity (V1) and 75% 1RM in the BP and SQ were used as mechanical indicators of neuromuscular performance. Training to muscle failure in each set [3 × 10(10)], even when compared to completing the same total exercise volume [6 × 5(10)], resulted in a significantly higher acute decline of CMJ and velocity against the V1 and 75% 1RM loads in both BP and SQ. In contrast, recovery from the 3 × 5(10) and 6 × 5(10) protocols was significantly faster between 24 and 48 h post-exercise compared to 3 × 10(10). Markers of acute (ammonia, growth hormone) and delayed (creatine kinase) fatigue showed a markedly different course of recovery between protocols, suggesting that training to failure slows down recovery up to 24-48 h post-exercise. RT leading to failure considerably increases the time needed for the recovery of neuromuscular function and metabolic and hormonal homeostasis. Avoiding failure would allow athletes to be in a better neuromuscular condition to undertake a new training session or competition in a shorter period of time.

  18. Factor VII deficiency: a novel missense variant and genotype-phenotype correlation in patients from Southern Italy.

    Science.gov (United States)

    Tiscia, Giovanni; Favuzzi, Giovanni; Chinni, Elena; Colaizzo, Donatella; Fischetti, Lucia; Intrieri, Mariano; Margaglione, Maurizio; Grandone, Elvira

    2017-01-01

    This study aimed to correlate genotype and phenotype in factor VII deficiency. Here, we present molecular and clinical findings for 10 patients with factor VII deficiency. From 2013 to 2016, 10 subjects were referred to our center because of a prolonged prothrombin time identified during routine or presurgery examinations or after a laboratory assessment of a bleeding episode. Mutation characterization was performed using the bioinformatics applications PROMO, SIFT, and PolyPhen-2. Structural changes in the factor VII protein were analyzed using the SPDB viewer tool. Of the 10 variants we identified, 1 was responsible for a novel missense change (c.1199G>C, p.Cys400Ser); in 2 cases we identified the c.-54G>A and c.509G>A (p.Arg170His) polymorphic variants in the 5'-upstream region of the factor VII gene and exon 6, respectively. To our knowledge, neither of these polymorphic variants has been described previously in factor VII-deficient patients. In silico predictions showed differences in binding sites for transcription factors caused by the c.-54G>A variant and a probable damaging effect of the p.Cys400Ser missense change on the factor VII active conformation, leading to breaking of the Cys400-Cys428 disulfide bridge. Our findings further suggest that, independently of factor VII levels and of variants potentially affecting factor VII levels, environmental factors, e.g., trauma, could heavily influence the clinical phenotype of factor VII-deficient patients.

  19. A physical probabilistic model to predict failure rates in buried PVC pipelines

    International Nuclear Information System (INIS)

    Davis, P.; Burn, S.; Moglia, M.; Gould, S.

    2007-01-01

    For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from large volumes of existing historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exists and confident forecasts of future pipe failures cannot be made from historical data alone. To solve this problem, this paper presents a physical probabilistic model, which has been developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, inherent defect size is treated as a stochastic variable, and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects, subjected to typical operating conditions. As with inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate to model uncertainty in predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length/per year) as a function of pipe age. To validate the model, predicted failure rates are compared to aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. In the absence of actual operating pressure data in the UKWIR database, typical
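
    The Monte Carlo step of such a model can be sketched as follows: defect depths are drawn from a 2-parameter Weibull distribution and mapped to lifetimes, from which an age-dependent hazard is tabulated. The power-law lifetime() function below is only a placeholder for the paper's fracture-mechanics calculation, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale_mm = 1.8, 0.4          # assumed Weibull parameters for defect depth
a0 = scale_mm * rng.weibull(shape, size=100_000)   # initial defect depths (mm)

def lifetime(a0_mm, k=80.0, m=2.5):
    """Placeholder crack-growth law: larger inherent defects fail sooner."""
    return k / np.maximum(a0_mm, 1e-6) ** m        # years

t = lifetime(a0)

# Empirical failure rate (per pipe per year) versus age, from sampled lifetimes:
# hazard ~ failures in [t, t+dt) / survivors at t.
ages = np.arange(0.0, 100.0, 5.0)
for lo, hi in zip(ages[:-1], ages[1:]):
    at_risk = np.sum(t >= lo)
    failed = np.sum((t >= lo) & (t < hi))
    if at_risk:
        print(f"age {lo:3.0f}-{hi:3.0f} y: hazard ~ {failed / at_risk / (hi - lo):.2e} /yr")
```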

  20. Detection probabilities for time-domain velocity estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1991-01-01

    programs, it is demonstrated that the probability of correct estimation depends on the signal-to-noise ratio, transducer bandwidth, number of A-lines and number of samples used in the correlation estimate. The influence of applying a stationary echo-canceler is explained. The echo canceling can be modeled...
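
    The underlying estimator, time-shift estimation by cross-correlation of successive A-lines, is easy to demonstrate; the pulse model, shift and noise level below are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
rf1 = np.convolve(rng.standard_normal(n), np.hanning(16), mode="same")  # toy A-line
true_shift = 7                                 # sample shift between A-lines
rf2 = np.roll(rf1, true_shift) + 0.3 * rng.standard_normal(n)  # noisy shifted copy

xc = np.correlate(rf2, rf1, mode="full")       # time-domain correlation estimate
est_shift = int(np.argmax(xc)) - (n - 1)
print(f"true shift {true_shift}, estimated {est_shift} samples")
# Velocity follows from shift / (pulse repetition interval * sampling rate); the
# probability of a correct estimate depends on SNR, bandwidth and sample count,
# as the abstract notes.
```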

  1. Piping failures in United States nuclear power plants 1961-1995

    International Nuclear Information System (INIS)

    Bush, S.H.; Do, M.J.; Slavich, A.L.; Chockie, A.D.

    1996-01-01

    Over 1500 reported piping failures were identified and summarized based on an extensive review of tens of thousands of event reports that have been submitted to the US regulatory agencies over the last 35 years. The data base contains only piping failures; failures in vessels, pumps, valves and steam generators or any cracks that were not through-wall are not included. It was observed that there has been a marked decrease in the number of failures after 1983 for almost all sizes of pipes. This is likely due to the changes in the reporting requirements at that time and the corrective actions taken by utilities to minimize fatigue failures of small lines and IGSCC in BWRs. One failure mechanism that continues to occur is erosion-corrosion, which accounts for most of the ruptures reported and probably is responsible for the absence of downward trends in ruptures. Fatigue-vibration is also a significant contributor to piping failures. However, most of such events occur in lines approx. one inch or less in diameter. Together, erosion-corrosion and fatigue-vibration account for over 43 per cent of the failures. The overwhelming majority of failures have been leaks, over half the failures occurred in pipes with a diameter of one inch or less. Included in the report is a listing of the number of welds in various systems in LWRs

  2. Interference Cancellation Using Replica Signal for HTRCI-MIMO/OFDM in Time-Variant Large Delay Spread Longer Than Guard Interval

    Directory of Open Access Journals (Sweden)

    Yuta Ida

    2012-01-01

    Full Text Available Orthogonal frequency division multiplexing (OFDM) and multiple-input multiple-output (MIMO) are generally known as effective techniques for high data rate services. In MIMO/OFDM systems, channel estimation (CE) is very important for obtaining accurate channel state information (CSI). However, since orthogonal pilot-based CE requires a large number of pilot symbols, the total transmission rate is degraded. To mitigate this problem, a high time resolution carrier interferometry (HTRCI) scheme for MIMO/OFDM has been proposed. In wireless communication systems, if the maximum delay spread is longer than the guard interval (GI), the system performance is significantly degraded due to intersymbol interference (ISI) and intercarrier interference (ICI). However, the conventional HTRCI-MIMO/OFDM does not consider the case of a time-variant large delay spread longer than the GI. In this paper, we propose ISI and ICI compensation methods for HTRCI-MIMO/OFDM in a time-variant large delay spread longer than the GI.
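
    The failure mode addressed here, loss of subcarrier orthogonality when the delay spread exceeds the guard interval, can be shown in a few lines of numpy; the FFT size, guard length and channel taps are invented, and ISI from the preceding symbol is ignored so that only the residual ICI is visible.

```python
import numpy as np

rng = np.random.default_rng(3)
N, cp = 64, 4                                       # FFT size, guard interval
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # QPSK symbols
x = np.fft.ifft(X)
tx = np.concatenate([x[-cp:], x])                   # prepend cyclic prefix

h = np.zeros(9, dtype=complex)
h[0], h[8] = 1.0, 0.5                               # delay spread (9 taps) > GI (4)
rx = np.convolve(tx, h)[: cp + N]                   # ISI from prior symbol ignored

Y = np.fft.fft(rx[cp:])                             # remove GI and demodulate
H = np.fft.fft(h, N)
print("residual ICI after one-tap equalisation:",
      f"{np.abs(Y / H - X).mean():.3f}  (~0 when the GI covers the channel)")
```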

  3. Quantitative functional failure analysis of a thermal-hydraulic passive system by means of bootstrapped Artificial Neural Networks

    International Nuclear Information System (INIS)

    Zio, E.; Apostolakis, G.E.; Pedroni, N.

    2010-01-01

    The estimation of the functional failure probability of a thermal-hydraulic (T-H) passive system can be done by Monte Carlo (MC) sampling of the epistemic uncertainties affecting the system model and the numerical values of its parameters, followed by the computation of the system response by a mechanistic T-H code for each sample. The computational effort associated with this approach can be prohibitive, because a large number of lengthy T-H code simulations must be performed (one for each sample) for accurate quantification of the functional failure probability and the related statistics. In this paper, the computational burden is reduced by replacing the long-running, original T-H code by a fast-running, empirical regression model: in particular, an Artificial Neural Network (ANN) model is considered. It is constructed on the basis of a limited-size set of data representing examples of the input/output nonlinear relationships underlying the original T-H code; once the model is built, it is used for performing, in an acceptable computational time, the numerous system response calculations needed for an accurate failure probability estimation, uncertainty propagation and sensitivity analysis. The empirical approximation of the system response provided by the ANN model introduces an additional source of (model) uncertainty, which needs to be evaluated and accounted for. A bootstrapped ensemble of ANN regression models is built here for quantifying, in terms of confidence intervals, the (model) uncertainties associated with the estimates provided by the ANNs. For demonstration purposes, an application to the functional failure analysis of an emergency passive decay heat removal system in a simple steady-state model of a Gas-cooled Fast Reactor (GFR) is presented. The functional failure probability of the system is estimated together with global Sobol sensitivity indices. The bootstrapped ANN regression model built with low computational time on few (e.g., 100) data
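
    A conceptual sketch of the bootstrapped-surrogate idea follows: an ensemble of small regression networks is trained on bootstrap resamples of a limited design set, and the spread of the resulting failure-probability estimates yields a confidence interval. The th_code() function and the failure threshold below are stand-ins, not the paper's T-H model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def th_code(u):                              # placeholder system response
    return 1.5 - u[:, 0] ** 2 + 0.5 * u[:, 1]

U = rng.uniform(-1, 1, size=(100, 2))        # small training design
Y = th_code(U)

U_mc = rng.uniform(-1, 1, size=(20_000, 2))  # cheap MC on the surrogate
pf = []
for b in range(50):                          # bootstrap ensemble of ANNs
    idx = rng.integers(0, len(U), len(U))
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                       random_state=b).fit(U[idx], Y[idx])
    pf.append(np.mean(ann.predict(U_mc) < 0.0))   # failure: response < 0
lo, hi = np.percentile(pf, [2.5, 97.5])
print(f"P_f ~ {np.mean(pf):.4f}, 95% bootstrap CI [{lo:.4f}, {hi:.4f}]")
```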

  4. Quantitative functional failure analysis of a thermal-hydraulic passive system by means of bootstrapped Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Zio, E., E-mail: enrico.zio@polimi.i [Energy Department, Politecnico di Milano, Via Ponzio 34/3, 20133 Milan (Italy); Apostolakis, G.E., E-mail: apostola@mit.ed [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307 (United States); Pedroni, N. [Energy Department, Politecnico di Milano, Via Ponzio 34/3, 20133 Milan (Italy)

    2010-05-15

    The estimation of the functional failure probability of a thermal-hydraulic (T-H) passive system can be done by Monte Carlo (MC) sampling of the epistemic uncertainties affecting the system model and the numerical values of its parameters, followed by the computation of the system response by a mechanistic T-H code for each sample. The computational effort associated with this approach can be prohibitive, because a large number of lengthy T-H code simulations must be performed (one for each sample) for accurate quantification of the functional failure probability and the related statistics. In this paper, the computational burden is reduced by replacing the long-running, original T-H code by a fast-running, empirical regression model: in particular, an Artificial Neural Network (ANN) model is considered. It is constructed on the basis of a limited-size set of data representing examples of the input/output nonlinear relationships underlying the original T-H code; once the model is built, it is used for performing, in an acceptable computational time, the numerous system response calculations needed for an accurate failure probability estimation, uncertainty propagation and sensitivity analysis. The empirical approximation of the system response provided by the ANN model introduces an additional source of (model) uncertainty, which needs to be evaluated and accounted for. A bootstrapped ensemble of ANN regression models is built here for quantifying, in terms of confidence intervals, the (model) uncertainties associated with the estimates provided by the ANNs. For demonstration purposes, an application to the functional failure analysis of an emergency passive decay heat removal system in a simple steady-state model of a Gas-cooled Fast Reactor (GFR) is presented. The functional failure probability of the system is estimated together with global Sobol sensitivity indices. The bootstrapped ANN regression model built with low computational time on few (e.g., 100) data

  5. The self-adaptation to dynamic failures for efficient virtual organization formations in grid computing context

    International Nuclear Information System (INIS)

    Han Liangxiu

    2009-01-01

    Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. However, due to the nature of heterogeneous and dynamic resources, dynamic failures in the distributed grid environment occur more often than on traditional computation platforms, causing VO formations to fail. In this paper, we develop a novel self-adaptive mechanism for dealing with dynamic failures during VO formation. Such a self-adaptive scheme allows an individual member of a VO to automatically find another available or replaceable one once a failure happens, and therefore makes the system recover automatically from dynamic failures. We define the dynamic failure situations of a system using two standard indicators: mean time between failures (MTBF) and mean time to recover (MTTR). We model both MTBF and MTTR as Poisson distributions. We investigate and analyze the efficiency of the proposed self-adaptation mechanism by comparing the success probability of VO formations before and after adopting it in three different cases: (1) different failure situations; (2) different organizational structures and scales; (3) different task complexities. The experimental results show that the proposed scheme can automatically adapt to dynamic failures and effectively improve dynamic VO formation performance in the event of node failures, which provides a valuable addition to the field.
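
    A back-of-envelope simulation conveys why replacement helps: with steady-state node availability MTBF/(MTBF+MTTR), a static VO fails whenever any member is down, while a self-adaptive scheme survives as long as enough spares are up. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
mtbf, mttr = 100.0, 10.0
avail = mtbf / (mtbf + mttr)          # per-node steady-state availability
m, s, trials = 8, 4, 100_000          # VO members, spare nodes, MC trials

up_members = rng.random((trials, m)) < avail
up_spares = (rng.random((trials, s)) < avail).sum(axis=1)
failed = m - up_members.sum(axis=1)

p_static = np.mean(failed == 0)                  # no self-adaptation
p_adaptive = np.mean(failed <= up_spares)        # failed members replaced
print(f"P(success) static={p_static:.3f}, self-adaptive={p_adaptive:.3f}")
```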

  6. OL-DEC-MDP Model for Multiagent Online Scheduling with a Time-Dependent Probability of Success

    Directory of Open Access Journals (Sweden)

    Cheng Zhu

    2014-01-01

    Full Text Available Focusing on the on-line multiagent scheduling problem, this paper considers time-dependent success probability and processing duration and proposes an OL-DEC-MDP (opportunity loss-decentralized Markov decision process) model that includes opportunity loss in the scheduling decision to improve overall performance. The success probability of job processing, as well as the processing duration, depends on the time at which processing starts. The probability of an agent completing its assigned job is higher when processing starts earlier, but the opportunity loss can also be high due to the longer engagement duration. The OL-DEC-MDP model therefore introduces a reward function that accounts for opportunity loss, estimated by predicting the upcoming jobs with a sampling method on the job arrivals. Heuristic strategies are introduced for computing the best starting time for an incoming job by each agent, and an incoming job is always scheduled to the agent with the highest reward among all agents under their best starting policies. The simulation experiments show that the OL-DEC-MDP model improves overall scheduling performance compared with models that do not consider opportunity loss in heavy-load environments.
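
    The opportunity-loss trade-off can be caricatured numerically: starting earlier raises the success probability but engages the agent longer, forfeiting expected reward from upcoming jobs. The functional forms below are assumptions for illustration, not the paper's model.

```python
import numpy as np

def p_success(t):                        # earlier start, higher success probability
    return np.clip(1.0 - 0.08 * t, 0.0, 1.0)

def duration(t):                         # earlier start, longer engagement
    return np.clip(5.0 - 0.3 * t, 1.0, None)

def reward(t, job_value=10.0, arrival_rate=0.2, mean_value=6.0):
    opportunity_loss = arrival_rate * mean_value * duration(t)   # forfeited jobs
    return p_success(t) * job_value - opportunity_loss

ts = np.linspace(0.0, 10.0, 101)
best = ts[np.argmax(reward(ts))]
print(f"best starting time ~ {best:.1f}, reward {reward(best):.2f}")
```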

  7. Irreversibility and conditional probability

    International Nuclear Information System (INIS)

    Stuart, C.I.J.M.

    1989-01-01

    The mathematical entropy - unlike physical entropy - is simply a measure of uniformity for probability distributions in general. So understood, conditional entropies have the same logical structure as conditional probabilities. If, as is sometimes supposed, conditional probabilities are time-reversible, then so are conditional entropies and, paradoxically, both then share this symmetry with physical equations of motion. The paradox is, of course, that probabilities yield a direction to time both in statistical mechanics and quantum mechanics, while the equations of motion do not. The supposed time-reversibility of both conditionals seems also to involve a form of retrocausality that is related to, but possibly not the same as, that described by Costa de Beauregard. The retrocausality is paradoxically at odds with the generally presumed irreversibility of the quantum mechanical measurement process. Further paradox emerges if the supposed time-reversibility of the conditionals is linked with the idea that the thermodynamic entropy is the same thing as 'missing information', since this confounds the thermodynamic and mathematical entropies. However, it is shown that irreversibility is a formal consequence of conditional entropies and, hence, of conditional probabilities also. 8 refs. (Author)

  8. Parameters governing the failure of steel components

    International Nuclear Information System (INIS)

    Schmitt, W.

    1977-01-01

    The most important feature of any component is the ability to safely carry the load it is designed for. The strength of the component is influenced mainly by three groups of parameters: (1) the loading of the structure - the possible loading cases are normal operation, testing, emergency and faulted conditions, and the kinds of loading can be divided into internal pressure, external forces and moments, and temperature loading; (2) the defects in the structure - cavities and inclusions, pores, flaws or cracks; (3) the material properties - Young's modulus, yield and ultimate strength, absorbed Charpy energy, fracture toughness, etc. For different failure modes one has to take into account different material properties; the loading and the defects are assumed to be within certain deterministic bounds, from which deterministic safety factors can be determined with respect to any failure mode and failure criterion. However, since all parameters have a certain scatter about a mean value, there is a probability of exceeding the given bounds. From the extrapolation of the distribution, a value for the failure probability can be estimated. (orig.) [de]
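
    The closing remark has a classic quantitative form: if the load effect S and the resistance R scatter as independent normals, the probability of exceeding the deterministic bounds has a closed-form expression via the reliability index. The values below are illustrative.

```python
from scipy.stats import norm

mu_R, sd_R = 400.0, 30.0    # e.g. yield strength (MPa), mean and scatter
mu_S, sd_S = 300.0, 40.0    # e.g. peak stress under faulted conditions (MPa)

beta = (mu_R - mu_S) / (sd_R**2 + sd_S**2) ** 0.5   # reliability index
print(f"failure probability P(S > R) = {norm.cdf(-beta):.2e}")
```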

  9. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  10. Real-time minimal bit error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  11. The use of lifetime functions in the optimization of interventions on existing bridges considering maintenance and failure costs

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Seung-Ie [Department of Civil, Environmental, and Architectural Engineering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: yangsione@dreamwiz.com; Frangopol, Dan M. [Department of Civil, Environmental, and Architectural Engineering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: dan.frangopol@colorado.edu; Kawakami, Yoriko [Hanshin Expressway Public Corporation, Kobe Maintenance Department, 16-1 Shinko-cho Chuo-ku Kobe City, Hyogo, 650-0041 (Japan)]. E-mail: yoriko-kawakami@hepc.go.jp; Neves, Luis C. [Department of Civil, Environmental, and Architectural Engineering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: lneves@civil.uminho.pt

    2006-06-15

    In the last decade, it became clear that life-cycle cost analysis of existing civil infrastructure must be used to optimally manage the growing number of aging and deteriorating structures. The uncertainties associated with deteriorating structures require the use of probabilistic methods to properly evaluate their lifetime performance. In this paper, the deterioration and the effect of maintenance actions are analyzed considering the performance of existing structures characterized by lifetime functions. These functions allow, in a simple manner, the consideration of the effect of aging on the decrease of the probability of survival of a structure, as well as the effect of maintenance actions. Models for the effects of proactive and reactive preventive maintenance, and essential maintenance actions are presented. Since the probability of failure is different from zero during the entire service life of a deteriorating structure and depends strongly on the maintenance strategy, the cost of failure is included in this analysis. The failure of one component in a structure does not usually lead to failure of the structure and, as a result, the safety of existing structures must be analyzed using a system reliability framework. The optimization consists of minimizing the sum of the cumulative maintenance and expected failure cost during the prescribed time horizon. Two examples of application of the proposed methodology are presented. In the first example, the sum of the maintenance and failure costs of a bridge in Colorado is minimized considering essential maintenance only and a fixed minimum acceptable probability of failure. In the second example, the expected lifetime cost, including maintenance and expected failure costs, of a multi-girder bridge is minimized considering reactive preventive maintenance actions.
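
    A toy numeric version of the optimization can be set up by choosing the essential-maintenance interval that minimizes cumulative maintenance cost plus expected failure cost over the horizon; the Weibull lifetime function, renewal approximation and costs below are all invented.

```python
import numpy as np

horizon, cm, cf = 75.0, 1.0, 200.0      # years, maintenance and failure costs

def p_fail_in_interval(T, shape=2.5, scale=60.0):
    return 1.0 - np.exp(-(T / scale) ** shape)   # Weibull F(T), renewed each T

def total_cost(T):
    n = int(horizon // T)                         # number of interventions
    return n * cm + (n + 1) * cf * p_fail_in_interval(T)

Ts = np.arange(5.0, 75.0, 2.5)
best = min(Ts, key=total_cost)
print(f"optimal maintenance interval ~ {best} y, cost {total_cost(best):.1f}")
```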

  12. The use of lifetime functions in the optimization of interventions on existing bridges considering maintenance and failure costs

    International Nuclear Information System (INIS)

    Yang, Seung-Ie; Frangopol, Dan M.; Kawakami, Yoriko; Neves, Luis C.

    2006-01-01

    In the last decade, it became clear that life-cycle cost analysis of existing civil infrastructure must be used to optimally manage the growing number of aging and deteriorating structures. The uncertainties associated with deteriorating structures require the use of probabilistic methods to properly evaluate their lifetime performance. In this paper, the deterioration and the effect of maintenance actions are analyzed considering the performance of existing structures characterized by lifetime functions. These functions allow, in a simple manner, the consideration of the effect of aging on the decrease of the probability of survival of a structure, as well as the effect of maintenance actions. Models for the effects of proactive and reactive preventive maintenance, and essential maintenance actions are presented. Since the probability of failure is different from zero during the entire service life of a deteriorating structure and depends strongly on the maintenance strategy, the cost of failure is included in this analysis. The failure of one component in a structure does not usually lead to failure of the structure and, as a result, the safety of existing structures must be analyzed using a system reliability framework. The optimization consists of minimizing the sum of the cumulative maintenance and expected failure cost during the prescribed time horizon. Two examples of application of the proposed methodology are presented. In the first example, the sum of the maintenance and failure costs of a bridge in Colorado is minimized considering essential maintenance only and a fixed minimum acceptable probability of failure. In the second example, the expected lifetime cost, including maintenance and expected failure costs, of a multi-girder bridge is minimized considering reactive preventive maintenance actions

  13. How would a decline in sperm concentration over time influence the probability of pregnancy?

    DEFF Research Database (Denmark)

    Slama, R.; Jensen, Tina Kold; Scheike, T.

    2004-01-01

    BACKGROUND: Reports have suggested a decline in sperm concentration during the second half of the 20th century. The effect of this decline on fecundability (the monthly probability of pregnancy) could be detected in principle by a study of time to pregnancy. In practice, the amplitude of this expected effect is not well known and the statistical power of time-to-pregnancy studies to detect it has not been explored. METHODS: We developed a nonparametric model to describe a temporal decline in sperm concentration using data on French semen donors. We then applied this model to 419 Danish couples planning a first pregnancy in 1992, to predict their time to pregnancy as if the pregnancy attempt had begun during earlier decades with higher sperm concentrations. Finally, we used bootstrap simulations to estimate the statistical power of prospective or retrospective studies that compared fecundability...

  14. rpsftm: An R Package for Rank Preserving Structural Failure Time Models.

    Science.gov (United States)

    Allison, Annabel; White, Ian R; Bond, Simon

    2017-12-04

    Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due to Robins and Tsiatis (1991) and has been developed by White et al. (1997, 1999). The method is randomisation based and uses only the randomised treatment group, observed event times, and treatment history in order to estimate a causal treatment effect. The treatment effect, ψ, is estimated by balancing counter-factual event times (those that would be observed if no treatment were received) between treatment groups. G-estimation is used to find the value of ψ such that a test statistic Z(ψ) = 0; this is usually the test statistic used in the intention-to-treat analysis, for example the log-rank test statistic. We present rpsftm, an R package that implements the method.
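
    The G-estimation step can be sketched in Python (the package itself is in R): counter-factual times U(ψ) = T_off + exp(ψ)·T_on are computed over a grid of ψ, and the value making the log-rank statistic closest to zero is taken. Toy uncensored data are generated under the model; the real method's re-censoring is omitted.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
n, psi_true = 200, -0.5                      # negative psi: treatment extends life
U0 = rng.exponential(10.0, n)                # counter-factual (untreated) times
arm = rng.integers(0, 2, n)                  # randomised arm; arm 1 treated throughout
t_on = np.where(arm == 1, U0 * np.exp(-psi_true), 0.0)   # time spent on treatment
t_off = np.where(arm == 1, 0.0, U0)          # time spent off treatment

def chi2(psi):
    U = t_off + np.exp(psi) * t_on           # counter-factual times under psi
    return logrank_test(U[arm == 1], U[arm == 0]).test_statistic

grid = np.linspace(-1.5, 0.5, 81)
psi_hat = grid[np.argmin([chi2(p) for p in grid])]
print(f"psi_hat ~ {psi_hat:.2f} (true {psi_true})")
```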

  15. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    Science.gov (United States)

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology combining odor-source probability estimation and a multi-robot search is proposed. The estimation phase consists of two steps: first, a separate probability-distribution map of the odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; second, the separate maps estimated by different robots at different times are fused into a combined map by distance-based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides prior knowledge for the search, while the search verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
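
    The search phase can be caricatured as a standard PSO over an estimated probability map used as the fitness surface; the map, inertia and acceleration constants below are invented, and no airflow model is included.

```python
import numpy as np

rng = np.random.default_rng(7)
grid = rng.random((50, 50))                     # stand-in odor-source probability map
grid[30, 40] = 5.0                              # pretend hot spot near the source

def fitness(pos):
    i, j = np.clip(pos.astype(int), 0, 49)
    return grid[i, j]

pos = rng.uniform(0, 49, size=(10, 2))          # 10 robots
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])

for _ in range(100):                            # standard PSO velocity/position update
    gbest = pbest[np.argmax(pbest_f)]
    r1, r2 = rng.random((2, 10, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 49)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
print("swarm best cell:", pbest[np.argmax(pbest_f)].astype(int))
```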

  16. A multiple shock model for common cause failures using discrete Markov chain

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Kang, Chang Soon

    1992-01-01

    The most widely used models in common cause analysis are (single) shock models such as the BFR and the MFR. However, a single shock model cannot treat individual common causes separately and rests on some unrealistic assumptions. Here a multiple shock model for common cause failures is developed using Markov chain theory. This model treats each common cause shock as a separately and sequentially occurring event, to capture the change in the failure probability distribution due to each common cause shock. The final failure probability distribution is evaluated and compared with that from the BFR model. The results show that the multiple shock model, which minimizes the assumptions in the BFR model, is more realistic and conservative than the BFR model. Further work toward application is the estimation of parameters, such as the common cause shock rate and the component failure probability given a shock, p, through data analysis.
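
    The multiple-shock mechanism can be illustrated with a small Markov chain over the number of failed components, each shock independently failing every surviving component with probability p; the parameters are illustrative.

```python
import numpy as np
from scipy.stats import binom

n, p = 4, 0.3                      # components, failure probability given a shock
P = np.zeros((n + 1, n + 1))       # transition matrix over #failed = 0..n
for i in range(n + 1):
    # from state i, the number of new failures is Binomial(n - i, p)
    P[i, i:] = binom.pmf(np.arange(0, n - i + 1), n - i, p)

dist = np.zeros(n + 1)
dist[0] = 1.0                      # start with all components working
for shock in range(3):             # three successive common cause shocks
    dist = dist @ P
    print(f"after shock {shock + 1}: P(all {n} failed) = {dist[n]:.4f}")
```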

  17. Integration of 60 000 exomes and ACMG guidelines question the role of Catecholaminergic Polymorphic Ventricular Tachycardia associated variants

    DEFF Research Database (Denmark)

    Paludan-Müller, Christian; Ahlberg, Gustav; Ghouse, Jonas

    2017-01-01

    of potential false-positive pathogenic variants was conducted by searching The Exome Aggregation Consortium (ExAC) database (n=60 706) for variants reported to be associated with CPVT. The pathogenicity of the interrogated variants was assessed using guidelines from the American College of Medical Genetics and Genomics (ACMG) and in silico prediction tools. Thirty-eight out of 246 variants (15%) previously associated with CPVT were identified in the ExAC database. We predicted the CPVT prevalence to be 1:132. The ACMG standards classified 29% of ExAC variants as pathogenic or likely pathogenic. The in silico predictions showed a reduced probability of a disease-causing effect for the variants identified in the exome database (P<0.001). We have observed a large overrepresentation of previously CPVT-associated variants in a large exome database. Based on the frequency of CPVT in the general population, it is less...

  18. Noise and signal processing in a microstrip detector with a time variant readout system

    International Nuclear Information System (INIS)

    Cattaneo, P.W.

    1995-01-01

    This paper treats the noise and signal processing by a time variant filter in a microstrip detector. In particular, the noise sources in the detector-electronics chain and the signal losses that cause a substantial decrease of the original signal are thoroughly analyzed. This work has been motivated by the analysis of the data of the microstrip detectors designed for the ALEPH minivertex detector. Hence, even if the discussion will be kept as general as possible, concrete examples will be presented referring to the specific ALEPH design. (orig.)

  19. An unreliable group arrival queue with k stages of service, retrial under variant vacation policy

    Science.gov (United States)

    Radha, J.; Indhira, K.; Chandrasekaran, V. M.

    2017-11-01

    In this research work we considered repairable retrial queue with group arrival and the server utilize the variant vacations. A server gives service in k stages. Any arriving group of units finds the server free, one from the group entering the first stage of service and the rest are joining into the orbit. After completion of the i th stage of service, the customer may have the option to choose (i+1)th stage of service with probability θi , with probability pi may join into orbit as feedback customer or may leave the system with probability {q}i=≤ft\\{\\begin{array}{l}1-{p}i-{θ }i,i=1,2,\\cdots k-1\\ 1-{p}i,i=k\\end{array}\\right\\}. If the orbit is empty at the service completion of each stage service, the server takes modified vacation until at least one customer appears in the orbit on the server returns from a vacation. Busy server may get to breakdown and the service channel will fail for a short interval of time. By using the supplementary variable method, steady state probability generating function for system size, some system performance measures are discussed.

  20. Evidence that multiple genetic variants of MC4R play a functional role in the regulation of energy expenditure and appetite in Hispanic children1234

    Science.gov (United States)

    Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A

    2010-01-01

    Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825

  1. Accurate genotyping across variant classes and lengths using variant graphs

    DEFF Research Database (Denmark)

    Sibbesen, Jonas Andreas; Maretty, Lasse; Jensen, Jacob Malte

    2018-01-01

    ...collecting a set of candidate variants across discovery methods, individuals and databases, and then realigning the reads to the variants and reference simultaneously. However, this realignment problem has proved computationally difficult. Here, we present a new method (BayesTyper) that uses exact alignment of read k-mers to a graph representation of the reference and variants to efficiently perform unbiased, probabilistic genotyping across the variation spectrum. We demonstrate that BayesTyper generally provides superior variant sensitivity and genotyping accuracy relative to existing methods when used to integrate variants across discovery approaches and individuals. Finally, we demonstrate that including a 'variation-prior' database containing already known variants significantly improves sensitivity.

  2. Effects of the combined action of axial and transversal loads on the failure time of a wooden beam under fire

    International Nuclear Information System (INIS)

    Nubissie, A.; Kingne Talla, E.; Woafo, P.

    2012-01-01

    Highlights: ► A wooden beam subjected to fire and to axial and transversal loads is considered. ► The failure time is found to decrease as the intensity of the loads increases. ► Implications for safety are indicated. -- Abstract: This paper presents the variation of the failure time of a wooden beam (Baillonella toxisperma, also called Moabi) in fire subjected to the combined effect of axial and transversal loads. Using the recommendation of the structural Eurocodes that failure can be assumed to occur when the deflection attains 1/300 of the length of the beam or when the bending moment attains the resistant moment, the partial differential equation describing the beam dynamics is solved numerically and the failure time calculated. It is found that the failure time decreases when either the axial or the transversal load increases.

  3. Right Limbic FDG-PET Hypometabolism Correlates with Emotion Recognition and Attribution in Probable Behavioral Variant of Frontotemporal Dementia Patients.

    Directory of Open Access Journals (Sweden)

    Chiara Cerami

    Full Text Available The behavioural variant of frontotemporal dementia (bvFTD) is a rare disease mainly affecting the social brain. FDG-PET fronto-temporal hypometabolism is a supportive feature for the diagnosis. It may also provide specific functional metabolic signatures for altered socio-emotional processing. In this study, we evaluated emotion recognition and attribution deficits and FDG-PET cerebral metabolic patterns at the group and individual levels in a sample of sporadic bvFTD patients, exploring cognitive-functional correlations. Seventeen probable mild bvFTD patients (10 male and 7 female; age 67.8±9.9) were administered standardized and validated versions of social cognition tasks assessing the recognition of basic emotions and the attribution of emotions and intentions (i.e., the Ekman 60-Faces test, Ek60F, and the Story-based Empathy Task, SET). FDG-PET was analysed using an optimized voxel-based SPM method at the single-subject and group levels. Severe deficits of emotion recognition and processing characterized the bvFTD condition. At the group level, metabolic dysfunction in the right amygdala, temporal pole, and middle cingulate cortex was highly correlated with the emotion recognition and attribution performances. At the single-subject level, however, heterogeneous impairments on the social cognition tasks emerged, and different metabolic patterns, involving limbic structures and prefrontal cortices, were also observed. The derangement of a right limbic network is associated with altered socio-emotional processing in bvFTD patients, but different hypometabolic FDG-PET patterns and heterogeneous performances on social tasks exist at the individual level.

  4. A ghrelin gene variant may predict crossover rate from restricting-type anorexia nervosa to other phenotypes of eating disorders: a retrospective survival analysis.

    Science.gov (United States)

    Ando, Tetsuya; Komaki, Gen; Nishimura, Hiroki; Naruo, Tetsuro; Okabe, Kenjiro; Kawai, Keisuke; Takii, Masato; Oka, Takakazu; Kodama, Naoki; Nakamoto, Chiemi; Ishikawa, Toshio; Suzuki-Hotta, Mari; Minatozaki, Kazunori; Yamaguchi, Chikara; Nishizono-Maher, Aya; Kono, Masaki; Kajiwara, Sohei; Suematsu, Hiroyuki; Tomita, Yuichiro; Ebana, Shoichi; Okamoto, Yuri; Nagata, Katsutaro; Nakai, Yoshikatsu; Koide, Masanori; Kobayashi, Nobuyuki; Kurokawa, Nobuo; Nagata, Toshihiko; Kiriike, Nobuo; Takenaka, Yoshito; Nagamine, Kiyohide; Ookuma, Kazuyoshi; Murata, Shiho

    2010-08-01

    Patients with anorexia nervosa restricting type (AN-R) often develop bulimic symptoms and cross over to AN binge-eating/purging type (AN-BP) or to bulimia nervosa (BN). We have reported earlier that genetic variants of the orexigenic peptide ghrelin are associated with BN. Here, the relationship between a ghrelin gene variant and the rate of change from AN-R to other phenotypes of eating disorders (EDs) was investigated. Participants were 165 patients with EDs, initially diagnosed as AN-R. The dates of their AN-R onset and of changes in diagnosis to other subtypes of ED were investigated retrospectively. The ghrelin gene 3056 T-->C SNP (single nucleotide polymorphism) was genotyped. Probabilities and hazard ratios were analyzed using life table analysis and Cox's proportional hazards regression model, in which the starting point was the time of AN-R onset and the outcome events were (i) the onset of binge eating, that is, when patients changed to binge-eating AN or BN, and (ii) the recovery of normal weight, that is, when patients changed to BN or remission. Patients with the TT genotype at 3056 T-->C had a higher probability and hazard ratio for the recovery of normal weight. The ghrelin SNP was not related to the onset of binge eating. The 3056 T-->C SNP of the ghrelin gene is related to the probability and the rate of recovery of normal body weight from restricting-type AN.

  5. Probability problems in seismic risk analysis and load combinations for nuclear power plants

    International Nuclear Information System (INIS)

    George, L.L.

    1983-01-01

    This workshop describes some probability problems in power plant reliability and maintenance analysis. The problems are seismic risk analysis, loss of load probability, load combinations, and load sharing. The seismic risk problem is to compute power plant reliability given an earthquake and the resulting risk. Component survival occurs if its peak random response to the earthquake does not exceed its strength. Power plant survival is a complicated Boolean function of component failures and survivals. The responses and strengths of components are dependent random processes, and the peak responses are maxima of random processes. The resulting risk is the expected cost of power plant failure

  6. A Novel RSPF Approach to Prediction of High-Risk, Low-Probability Failure Events

    Data.gov (United States)

    National Aeronautics and Space Administration — Particle filters (PF) have been established as the de facto state of the art in failure prognosis, and particularly in the representation and management of...

  7. Detection of S-gene 'a' determinant variants in hepatitis B patients with both positive HBsAg and HBsAb markers

    International Nuclear Information System (INIS)

    Wu Yueping; Ling Yongwu; Huang Songping; Wang Shipeng; Chen Yufeng; Mao Liping; Lu Jianrong

    2005-01-01

    Objective: To explore S-gene 'a' determinant variants in hepatitis B patients with both positive HBsAg and HBsAb markers and their effect on the antigenicity of HBsAg. Methods: Quantitative determination of HBV-DNA with a competitive PCR microfluidic chip method was performed on eight serum specimens from seven hepatitis B patients with both positive HBsAg and HBsAb markers. The HBV S-gene was amplified with nested PCR, and the PCR product was directly examined for any sequence variants of the amino acids. HBV markers were tested with the very sensitive ELISA/MEIA method in these seven patients. The above tests were also performed in 15 children after failed immunization with hepatitis B vaccine and in 9 recipients of liver transplantation for terminal hepatitis B treated with HBIG and lamivudine, serving as controls. Results: The HBsAb contents in the seven patients were all below 80 mIU/ml. Two of the patients with positive HBV-DNA showed no 'a' determinant variant. Two of the four HBV-DNA negative patients demonstrated amino-acid variants (126, 131). One patient who was originally HBV-DNA positive but later turned negative after treatment with interferon and lamivudine demonstrated a variant (126). In the 9 patients after successful liver transplantation, the HBsAb contents were all about 150 mIU/ml with negative HBV-DNA and no variant. In the 15 immunization failures, HBV-DNA was positive in 14, with 2 cases of variant at 145, 1 case at 126 and 1 case at 134. Conclusion: In some patients with chronic hepatitis B with both positive HBsAg and HBsAb markers, as well as in some vaccine immunization failures, there were 'a' determinant variants, which might alter the antigenicity of HBsAg and allow escape from neutralization by low-level HBsAb. The 'a' determinant variant might also affect the replication of the virus. In this study, no variant was shown in patients after successful liver transplantation. However, the number of patients was too small and the result was of no

  8. Non-coding keratin variants associate with liver fibrosis progression in patients with hemochromatosis.

    Directory of Open Access Journals (Sweden)

    Pavel Strnad

    Full Text Available BACKGROUND: Keratins 8 and 18 (K8/K18) are intermediate filament proteins that protect the liver from various forms of injury. Exonic K8/K18 variants associate with adverse outcome in acute liver failure and with liver fibrosis progression in patients with chronic hepatitis C infection or primary biliary cirrhosis. Given the association of K8/K18 variants with end-stage liver disease and progression in several chronic liver disorders, we studied the importance of keratin variants in patients with hemochromatosis. METHODS: The entire K8/K18 exonic regions were analyzed in 162 hemochromatosis patients carrying homozygous C282Y HFE (hemochromatosis gene) mutations. 234 liver-healthy subjects were used as controls. Exonic regions were PCR-amplified and analyzed using denaturing high-performance liquid chromatography and DNA sequencing. Previously generated transgenic mice overexpressing K8 G62C were studied for their susceptibility to iron overload. Susceptibility to iron toxicity of primary hepatocytes that express K8 wild-type and G62C was also assessed. RESULTS: We identified amino-acid-altering keratin heterozygous variants in 10 of 162 hemochromatosis patients (6.2%) and non-coding heterozygous variants in 6 additional patients (3.7%). Two novel K8 variants (Q169E/R275W) were found. K8 R341H was the most common amino-acid-altering variant (4 patients), and was exclusively associated with an intronic KRT8 IVS7+10delC deletion. Intronic, but not amino-acid-altering, variants associated with the development of liver fibrosis. In mice, or ex vivo, the K8 G62C variant did not affect iron accumulation in response to an iron-rich diet or the extent of iron-induced hepatocellular injury. CONCLUSION: In patients with hemochromatosis, intronic but not exonic K8/K18 variants associate with liver fibrosis development.

  9. A case cluster of variant Creutzfeldt-Jakob disease linked to the Kingdom of Saudi Arabia.

    Science.gov (United States)

    Coulthart, Michael B; Geschwind, Michael D; Qureshi, Shireen; Phielipp, Nicolas; Demarsh, Alex; Abrams, Joseph Y; Belay, Ermias; Gambetti, Pierluigi; Jansen, Gerard H; Lang, Anthony E; Schonberger, Lawrence B

    2016-10-01

    As of mid-2016, 231 cases of variant Creutzfeldt-Jakob disease (the human form of a prion disease of cattle, bovine spongiform encephalopathy) have been reported from 12 countries. With few exceptions, the affected individuals had histories of extended residence in the UK or other Western European countries during the period (1980-96) of maximum global risk for human exposure to bovine spongiform encephalopathy. However, the possibility remains that other geographic foci of human infection exist, identification of which may help to foreshadow the future of the epidemic. We report results of a quantitative analysis of country-specific relative risks of infection for three individuals diagnosed with variant Creutzfeldt-Jakob disease in the USA and Canada. All were born and raised in Saudi Arabia, but had histories of residence and travel in other countries. To calculate country-specific relative probabilities of infection, we aligned each patient's life history with published estimates of the probability distributions of the incubation period and age-at-infection parameters from a UK cohort of 171 variant Creutzfeldt-Jakob disease cases. The distributions were then partitioned into probability density fractions according to the time intervals of the patient's residence and travel history, and the density fractions were combined by country. This calculation was performed for the incubation period alone, for age at infection alone, and jointly for incubation period and age at infection. Country-specific fractions were normalized either to the total density between the individual's dates of birth and symptom onset ('lifetime'), or to that between 1980 and 1996, for a total of six combinations of parameter and interval. The country-specific relative probability of infection for Saudi Arabia clearly ranked highest under each of the six combinations of parameter × interval for Patients 1 and 2, with values ranging from 0.572 to 0.998, respectively, for Patient 2 (age at infection × lifetime) and
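
    The density-partition step can be imitated with an assumed incubation-period distribution sliced by residence intervals and normalized by country; the gamma parameters and the travel history below are made up, not the patients' actual data.

```python
from scipy.stats import gamma

incubation = gamma(a=6.0, scale=2.0)     # assumed incubation-period law (years)

# (country, years-since-exposure-window) residence history, purely illustrative:
history = [("Saudi Arabia", (0.0, 15.0)), ("UK", (15.0, 16.0)),
           ("USA", (16.0, 25.0))]

# density mass falling inside each residence interval, then normalized
mass = {c: incubation.cdf(b) - incubation.cdf(a) for c, (a, b) in history}
total = sum(mass.values())
for country, m in sorted(mass.items(), key=lambda kv: -kv[1]):
    print(f"{country:12s} relative probability {m / total:.3f}")
```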

  10. Importance of competing risks in the analysis of anti-epileptic drug failure

    Directory of Open Access Journals (Sweden)

    Sander Josemir W

    2007-03-01

    Full Text Available Abstract. Background: Retention time (time to treatment failure) is a commonly used outcome in antiepileptic drug (AED) studies. Methods: Two datasets are used to demonstrate the issues in a competing risks analysis of AEDs. First, data collection and follow-up considerations are discussed with reference to information from 15 monotherapy trials. Recommendations for improved data collection and cumulative incidence analysis are then illustrated using the SANAD trial dataset. The results are compared to the more common approach using standard survival analysis methods. Results: A non-significant difference in overall treatment failure time between gabapentin and topiramate (logrank test statistic = 0.01, 1 degree of freedom, p-value = 0.91) masked highly significant differences in opposite directions, with gabapentin resulting in fewer withdrawals due to side effects (Gray's test statistic = 11.60, 1 degree of freedom, p = 0.0007) but more due to poor seizure control (Gray's test statistic = 14.47, 1 degree of freedom, p-value = 0.0001). The significant difference in overall treatment failure time between lamotrigine and carbamazepine (logrank test statistic = 5.6, 1 degree of freedom, p-value = 0.018) was due entirely to a significant benefit of lamotrigine in terms of side effects (Gray's test statistic = 10.27, 1 degree of freedom, p = 0.001). Conclusion: Treatment failure time can be measured reliably, but care is needed to collect sufficient information on reasons for drug withdrawal to allow a competing risks analysis. Important differences between the profiles of AEDs may be missed unless appropriate statistical methods are used to fully investigate treatment failure time. Cumulative incidence analysis allows comparison of the probability of failure between two AEDs and is likely to be a more powerful approach than logrank analysis for most comparisons of standard and new antiepileptic drugs.
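
    The cumulative incidence function referred to here can be computed in a few lines: CIF_k(t) accumulates the overall survival just before each event time multiplied by the cause-specific hazard at that time. The data below are toy values, not the SANAD trial.

```python
import numpy as np

time = np.array([2., 3., 3., 5., 7., 8., 9., 12.])   # times to withdrawal/censoring
cause = np.array([1, 0, 2, 1, 2, 0, 1, 2])           # 1 = side effects,
                                                     # 2 = poor seizure control,
                                                     # 0 = censored
order = np.argsort(time)
time, cause = time[order], cause[order]
n = len(time)

surv, cif = 1.0, {1: 0.0, 2: 0.0}
for i, (t, c) in enumerate(zip(time, cause)):        # ties handled sequentially
    at_risk = n - i
    if c in cif:
        cif[c] += surv * (1.0 / at_risk)             # S(t-) * hazard of cause c
    surv *= 1.0 - (c != 0) / at_risk                 # overall KM survival update
print({f"cause {k}": round(v, 3) for k, v in cif.items()})
```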

  11. Mixing Bayes and empirical Bayes inference to anticipate the realization of engineering concerns about variant system designs

    International Nuclear Information System (INIS)

    Quigley, John; Walls, Lesley

    2011-01-01

    Mixing Bayes and Empirical Bayes inference provides reliability estimates for variant system designs by using relevant failure data - observed and anticipated - about engineering changes arising due to modification and innovation. A coherent inference framework is proposed to predict the realization of engineering concerns during product development so that informed decisions can be made about the system design and the analysis conducted to prove reliability. The proposed method involves combining subjective prior distributions for the number of engineering concerns with empirical priors for the non-parametric distribution of time to realize these concerns in such a way that we can cross-tabulate classes of concerns to failure events within time partitions at an appropriate level of granularity. To support efficient implementation, a computationally convenient hypergeometric approximation is developed for the counting distributions appropriate to our underlying stochastic model. The accuracy of our approximation over first-order alternatives is examined, and demonstrated, through an evaluation experiment. An industrial application illustrates model implementation and shows how estimates can be updated using information arising during development test and analysis.

  12. Vibratory Urticaria Associated with a Missense Variant in ADGRE2.

    Science.gov (United States)

    Boyden, Steven E; Desai, Avanti; Cruse, Glenn; Young, Michael L; Bolan, Hyejeong C; Scott, Linda M; Eisch, A Robin; Long, R Daniel; Lee, Chyi-Chia R; Satorius, Colleen L; Pakstis, Andrew J; Olivera, Ana; Mullikin, James C; Chouery, Eliane; Mégarbané, André; Medlej-Hashim, Myrna; Kidd, Kenneth K; Kastner, Daniel L; Metcalfe, Dean D; Komarow, Hirsh D

    2016-02-18

    Patients with autosomal dominant vibratory urticaria have localized hives and systemic manifestations in response to dermal vibration, with coincident degranulation of mast cells and increased histamine levels in serum. We identified a previously unknown missense substitution in ADGRE2 (also known as EMR2), which was predicted to result in the replacement of cysteine with tyrosine at amino acid position 492 (p.C492Y), as the only nonsynonymous variant cosegregating with vibratory urticaria in two large kindreds. The ADGRE2 receptor undergoes autocatalytic cleavage, producing an extracellular subunit that noncovalently binds a transmembrane subunit. We showed that the variant probably destabilizes an autoinhibitory subunit interaction, sensitizing mast cells to IgE-independent vibration-induced degranulation. (Funded by the National Institutes of Health.)

  13. Real-time RT-PCR high-resolution melting curve analysis and multiplex RT-PCR to detect and differentiate grapevine leafroll-associated virus 3 variant groups I, II, III and VI

    Directory of Open Access Journals (Sweden)

    Bester Rachelle

    2012-09-01

    Background: Grapevine leafroll-associated virus 3 (GLRaV-3) is the main contributing agent of leafroll disease worldwide. Four of the six GLRaV-3 variant groups known have been found in South Africa, but their individual contribution to leafroll disease is unknown. In order to study the pathogenesis of leafroll disease, a sensitive and accurate diagnostic assay is required that can detect different variant groups of GLRaV-3. Methods: In this study, a one-step real-time RT-PCR, followed by high-resolution melting (HRM) curve analysis for the simultaneous detection and identification of GLRaV-3 variants of groups I, II, III and VI, was developed. A melting point confidence interval for each variant group was calculated to include at least 90% of all melting points observed. A multiplex RT-PCR protocol was developed to detect these four variant groups in order to assess the efficacy of the real-time RT-PCR HRM assay. Results: A universal primer set for GLRaV-3 targeting the heat shock protein 70 homologue (Hsp70h) gene of GLRaV-3 was designed that is able to detect GLRaV-3 variant groups I, II, III and VI and differentiate between them with high-resolution melting curve analysis. The real-time RT-PCR HRM and the multiplex RT-PCR were optimized using 121 GLRaV-3 positive samples. Due to considerable variation in the melting profile observed within each GLRaV-3 group, a confidence interval above 90% was calculated for each variant group, based on the range and distribution of melting points. The intervals of groups I and II could not be distinguished, and a 95% joint confidence interval was calculated for simultaneous detection of group I and II variants. An additional primer pair targeting GLRaV-3 ORF1a was developed that can be used in a subsequent real-time RT-PCR HRM to differentiate between variants of groups I and II. Additionally, the multiplex RT-PCR successfully validated 94.64% of the infections detected with the real-time RT-PCR HRM

  14. Cellulase variants

    Science.gov (United States)

    Blazej, Robert; Toriello, Nicholas; Emrich, Charles; Cohen, Richard N.; Koppel, Nitzan

    2015-07-14

    This invention provides novel variant cellulolytic enzymes having improved activity and/or stability. In certain embodiments the variant cellulolytic enzymes comprise a glycoside hydrolase with or comprising a substitution at one or more positions corresponding to one or more of residues F64, A226, and/or E246 in the Thermobifida fusca Cel9A enzyme. In certain embodiments the glycoside hydrolase is a variant of a family 9 glycoside hydrolase. In certain embodiments the glycoside hydrolase is a variant of a theme B family 9 glycoside hydrolase.

  15. Stochastic Stability for Time-Delay Markovian Jump Systems with Sector-Bounded Nonlinearities and More General Transition Probabilities

    Directory of Open Access Journals (Sweden)

    Dan Ye

    2013-01-01

    This paper is concerned with delay-dependent stochastic stability for time-delay Markovian jump systems (MJSs) with sector-bounded nonlinearities and more general transition probabilities. Different from previous results, where the transition probability matrix is completely known, a more general transition probability matrix is considered, which includes completely known elements, boundary-known elements, and completely unknown ones. In order to obtain a less conservative criterion, the state and transition probability information is used as much as possible to construct the Lyapunov-Krasovskii functional and carry out the stability analysis. Delay-dependent sufficient conditions are derived in terms of linear matrix inequalities to guarantee the stability of the systems. Finally, numerical examples are exploited to demonstrate the effectiveness of the proposed method.

  16. Prolonged warm ischemia time is associated with graft failure and mortality after kidney transplantation.

    Science.gov (United States)

    Tennankore, Karthik K; Kim, S Joseph; Alwayn, Ian P J; Kiberd, Bryce A

    2016-03-01

    Warm ischemia time is a potentially modifiable insult to transplanted kidneys, but little is known about its effect on long-term outcomes. Here we conducted a study of United States kidney transplant recipients (years 2000-2013) to determine the association between warm ischemia time (the time from organ removal from cold storage to reperfusion with warm blood) and death/graft failure. Times under 10 minutes were potentially attributable to coding error. Therefore, the 10-to-under-20-minute interval was chosen as the reference group. The primary outcome was mortality and graft failure (return to chronic dialysis or preemptive retransplantation) adjusted for recipient, donor, immunologic, and surgical factors. The study included 131,677 patients with 35,901 events. Relative to the reference patients, times of 20 to under 30, 30 to under 40, 40 to under 50, 50 to under 60, and 60 or more minutes were associated with hazard ratios of 1.07 (95% confidence interval, 0.99-1.15), 1.13 (1.06-1.22), 1.17 (1.09-1.26), 1.20 (1.12-1.30), and 1.23 (1.15-1.33) for the composite event, respectively. The association between prolonged warm ischemia time and death/graft failure persisted after stratification by donor type (living vs. deceased donor) and delayed graft function status. Thus, warm ischemia time is associated with adverse long-term patient and graft survival after kidney transplantation. Identifying strategies to reduce warm ischemia time is an important consideration for future study. Copyright © 2015 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.
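
    Associations of this kind are typically estimated with a Cox proportional hazards model with warm ischemia time entered as categorical bands; a minimal sketch using the lifelines library, with invented column names and toy data rather than the registry dataset:

        import pandas as pd
        from lifelines import CoxPHFitter

        # Toy data: follow-up years, composite event flag (death/graft failure),
        # and warm ischemia time (WIT) band; '10-<20' min is the reference level.
        df = pd.DataFrame({
            "years": [5.1, 2.3, 7.8, 1.2, 4.4, 6.0, 3.3, 8.9],
            "event": [1, 1, 0, 1, 0, 1, 1, 0],
            "wit": ["10-<20", "20-<30", "10-<20", ">=60",
                    "30-<40", ">=60", "20-<30", "30-<40"],
        })
        df = pd.get_dummies(df, columns=["wit"], dtype=float)
        df = df.drop(columns=["wit_10-<20"])  # drop the reference band

        cph = CoxPHFitter()
        cph.fit(df, duration_col="years", event_col="event")
        cph.print_summary()  # exp(coef) gives the hazard ratio per WIT band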

  17. Conditional probability of the tornado missile impact given a tornado occurrence

    International Nuclear Information System (INIS)

    Goodman, J.; Koch, J.E.

    1982-01-01

    Using an approach based on statistical mechanics, an expression for the probability of the first missile strike is developed. The expression depends on two generic parameters (injection probability eta(F) and height distribution psi(Z,F)), which are developed in this study, and one plant-specific parameter (number of potential missiles N_p). The expression for the joint probability of simultaneous impact of multiple targets is also developed. This expression is applicable to calculation of the probability of common cause failure due to tornado missiles. It is shown that the probability of the first missile strike can be determined using a uniform missile distribution model. It is also shown that the conditional probability of the second strike, given the first, is underestimated by the uniform model. The probability of the second strike is greatly increased if the missiles are in clusters large enough to cover both targets.
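
    The uniform-distribution baseline mentioned in the abstract can be made concrete with a toy calculation (all quantities below are invented; the paper's eta(F), psi(Z,F) formulation is not reproduced here):

        # Toy uniform missile-strike model (illustrative assumptions only).
        def first_strike_probability(n_missiles: int, target_area: float,
                                     swath_area: float) -> float:
            """P(at least one of n missiles hits the target), each landing
            independently and uniformly over the tornado swath."""
            p_single = target_area / swath_area
            return 1.0 - (1.0 - p_single) ** n_missiles

        # Example: 5000 potential missiles, 100 m^2 target, 10 km^2 swath.
        print(first_strike_probability(5000, 100.0, 10_000_000.0))  # ~0.049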

  18. Time variant layer control in atmospheric pressure chemical vapor deposition based growth of graphene

    KAUST Repository

    Qaisi, Ramy M.; Smith, Casey; Hussain, Muhammad Mustafa

    2013-01-01

    Graphene is a semi-metallic, transparent, atomic crystal structure material which is promising for its high mobility, strength and transparency, potentially applicable to radio frequency (RF) circuitry and to energy harvesting and storage applications. Uniform (same number of layers), continuous (not torn or discontinuous), large-area (100 mm to 200 mm wafer scale), low-cost, reliable growth is the foremost challenge for its commercialization prospects. We show time-variant, uniform (layer-controlled) growth of bi- to multi-layer graphene using an atmospheric pressure chemical vapor deposition system. We use Raman spectroscopy for physical characterization, supported by electrical property analysis. © 2013 IEEE.

  19. Time variant layer control in atmospheric pressure chemical vapor deposition based growth of graphene

    KAUST Repository

    Qaisi, Ramy M.

    2013-04-01

    Graphene is a semi-metallic, transparent, atomic crystal structure material which is promising for its high mobility, strength and transparency, potentially applicable to radio frequency (RF) circuitry and to energy harvesting and storage applications. Uniform (same number of layers), continuous (not torn or discontinuous), large-area (100 mm to 200 mm wafer scale), low-cost, reliable growth is the foremost challenge for its commercialization prospects. We show time-variant, uniform (layer-controlled) growth of bi- to multi-layer graphene using an atmospheric pressure chemical vapor deposition system. We use Raman spectroscopy for physical characterization, supported by electrical property analysis. © 2013 IEEE.

  20. Continuous time random walk model with asymptotical probability density of waiting times via inverse Mittag-Leffler function

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2018-04-01

    The mean squared displacement (MSD) of traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model has been employed to characterize this ultraslow diffusion dynamics by taking the heavy-tailed logarithmic function, and variations of it, as the asymptotic waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic model. Very long waiting times occur with the highest probability in the inverse Mittag-Leffler case, compared with the power law and logarithmic models. Monte Carlo simulations of the one-dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting general ultraslow random motion.
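
    A CTRW of this kind is simple to simulate once waiting times can be sampled. The sketch below uses the power-law comparison case (Pareto waits via inverse-transform sampling), since drawing from the inverse Mittag-Leffler density itself requires specialized methods; all parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def ctrw_msd(n_walkers=2000, t_max=1e4, alpha=0.6):
            """Monte Carlo MSD of a 1D CTRW with Pareto waiting times
            w = u**(-1/alpha) (heavy tail, infinite mean for alpha < 1)
            and unit +/-1 jumps."""
            sample_t = np.logspace(0, np.log10(t_max), 30)
            msd = np.zeros_like(sample_t)
            for _ in range(n_walkers):
                t, x, k = 0.0, 0.0, 0
                traj = np.zeros_like(sample_t)
                while t < t_max:
                    t += rng.random() ** (-1.0 / alpha)   # next waiting time
                    while k < len(sample_t) and sample_t[k] < t:
                        traj[k] = x                       # position before jump
                        k += 1
                    x += rng.choice((-1.0, 1.0))          # unit jump
                msd += traj ** 2
            return sample_t, msd / n_walkers

        t, m = ctrw_msd()
        print(np.c_[t, m][::6])  # MSD grows sublinearly, ~t**alpha for Pareto waits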

  1. Earth reencounter probabilities for aborted space disposal of hazardous nuclear waste

    Science.gov (United States)

    Friedlander, A. L.; Feingold, H.

    1977-01-01

    A quantitative assessment is made of the long-term risk of earth reencounter and reentry associated with aborted disposal of hazardous material in the space environment. Numerical results are presented for 10 candidate disposal options covering a broad spectrum of disposal destinations and deployment propulsion systems. Based on representative models of system failure, the probability that a single payload will return and collide with earth within a period of 250,000 years is found to lie in the range 0.0002-0.006. Proportionately smaller risk attaches to shorter time intervals. Risk-critical factors related to trajectory geometry and system reliability are identified as possible mechanisms of hazard reduction.

  2. Canonical failure modes of real-time control systems: insights from cognitive theory

    Science.gov (United States)

    Wallace, Rodrick

    2016-04-01

    Newly developed necessary-conditions statistical models from cognitive theory are applied to a generalisation of the data-rate theorem for real-time control systems. Rather than degrading gracefully under stress, automatons and man/machine cockpits appear prone to characteristic sudden failure under demanding fog-of-war conditions. Critical dysfunctions span a spectrum of phase transition analogues, ranging from a ground state of 'all targets are enemies' to more standard data-rate instabilities. Insidious pathologies also appear possible, akin to inattentional blindness consequent on overfocus on an expected pattern. Via no-free-lunch constraints, different equivalence classes of systems, having structure and function determined by 'market pressures' in a large sense, will be inherently unreliable under different but characteristic canonical stress landscapes, suggesting that deliberate induction of failure may often be relatively straightforward. Focusing on two recent military case histories, these results provide a caveat emptor against blind faith in the current path-dependent evolutionary trajectory of automation for critical real-time processes.

  3. Understanding failures in petascale computers

    International Nuclear Information System (INIS)

    Schroeder, Bianca; Gibson, Garth A

    2007-01-01

    With petascale computers only a year or two away, there is a pressing need to anticipate and compensate for a probable increase in failure and application interruption rates. Researchers, designers and integrators have available to them far too little detailed information on the failures and interruptions that even smaller terascale computers experience. The information that is available suggests that application interruptions will become far more common in the coming decade, and the largest applications may surrender large fractions of the computer's resources to taking checkpoints and restarting from a checkpoint after an interruption. This paper reviews sources of failure information for compute clusters and storage systems, projects failure rates and the corresponding decrease in application effectiveness, and discusses coping strategies such as application-level checkpoint compression and system-level process-pairs fault tolerance for supercomputing. The need for a public repository of detailed failure and interruption records is particularly pressing, as projections from one architectural family of machines to another are widely disputed. To this end, this paper introduces the Computer Failure Data Repository and issues a call for failure history data to publish in it.
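
    One standard way to reason about the checkpoint overheads discussed here is the Young/Daly first-order approximation for the optimal checkpoint interval, tau ~ sqrt(2 * C * MTBF) for checkpoint cost C; a small illustration with made-up machine parameters:

        import math

        def young_daly_interval(checkpoint_cost_s: float, system_mtbf_s: float) -> float:
            """Young/Daly first-order optimal checkpoint interval (seconds)."""
            return math.sqrt(2.0 * checkpoint_cost_s * system_mtbf_s)

        # 100,000-node machine, per-node MTBF of 5 years, 10-minute checkpoints.
        node_mtbf_s = 5 * 365.25 * 24 * 3600
        system_mtbf_s = node_mtbf_s / 100_000      # failure rates combine ~linearly
        tau = young_daly_interval(600.0, system_mtbf_s)
        print(f"system MTBF ~ {system_mtbf_s/60:.0f} min, "
              f"checkpoint every ~{tau/60:.0f} min")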

  4. Scalable Failure Masking for Stencil Computations using Ghost Region Expansion and Cell to Rank Remapping

    International Nuclear Information System (INIS)

    Gamell, Marc; Kolla, Hemanth; Mayo, Jackson; Heroux, Michael A.

    2017-01-01

    In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graphs) architectures, currently embed resilience features whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still required to take advantage of application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower costs in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems as well as presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. In addition, this demonstration also displays the failure masking probability increase resulting from the combination of both ghost region expansion and cell-to-rank remapping.

  5. Multi-state systems with selective propagated failures and imperfect individual and group protections

    International Nuclear Information System (INIS)

    Levitin, Gregory; Xing Liudong; Ben-Haim, Hanoch; Da, Yuanshun

    2011-01-01

    The paper presents an algorithm for evaluating performance distribution of complex series–parallel multi-state systems with propagated failures and imperfect protections. The failure propagation can have a selective effect, which means that the failures originated from different system elements can cause failures of different subsets of elements. Individual elements or some disjoint groups of elements can be protected from propagation of failures originated outside the group. The protections can fail with given probabilities. The suggested algorithm is based on the universal generating function approach and a generalized reliability block diagram method. The performance distribution evaluation procedure is repeated for each combination of propagated failures and protection failures. Both an analytical example and a numerical example are provided to illustrate the suggested algorithm. - Highlights: ► Systems with propagated failures and imperfect protections are considered. ► Failures originated from different elements can affect different subsets of elements. ► Protections of individual elements or groups of elements can fail with given probabilities. ► An algorithm for evaluating multi-state system performance distribution is suggested.
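
    The universal generating function technique at the core of the algorithm is easy to sketch: a performance distribution is a set of (performance, probability) pairs, and series or parallel composition combines pairs with min or sum, respectively. A toy sketch with invented element data:

        from itertools import product

        # A UGF is a dict {performance_level: probability}.
        def combine(u1, u2, op):
            """Compose two UGFs with a structure function op (e.g. min, sum)."""
            out = {}
            for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
                g = op(g1, g2)
                out[g] = out.get(g, 0.0) + p1 * p2
            return out

        series = lambda a, b: combine(a, b, min)                   # capacity bottleneck
        parallel = lambda a, b: combine(a, b, lambda x, y: x + y)  # capacities add

        # Two 3-state elements: nominal capacity 10, degraded 5, failed 0.
        e1 = {10: 0.80, 5: 0.15, 0: 0.05}
        e2 = {10: 0.70, 5: 0.20, 0: 0.10}
        print(series(e1, e2))    # performance distribution of the series pair
        print(parallel(e1, e2))  # ... and of the parallel pair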

  6. Dynamic Forecasting Conditional Probability of Bombing Attacks Based on Time-Series and Intervention Analysis.

    Science.gov (United States)

    Li, Shuying; Zhuang, Jun; Shen, Shifei

    2017-07-01

    In recent years, various types of terrorist attacks have occurred, causing worldwide catastrophes. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happened most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBAs) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011. During that time, the CPBA increased by 16.0% in a two-month period to reach the peak value, but still stays 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation can be found in the conditional probability process of armed assault. It can be inferred that some social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014. © 2016 Society for Risk Analysis.
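
    Intervention analysis of the kind used here is commonly implemented as an ARIMA model with a step regressor marking the intervention date; a minimal sketch with statsmodels, using a synthetic series (not the GTD data) and assumed model orders:

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        idx = pd.date_range("2004-01", periods=132, freq="MS")  # monthly, 11 years

        # Synthetic conditional-probability series with a level shift in late 2011.
        y = 0.3 + 0.02 * rng.standard_normal(132)
        y[95:] += 0.1                                  # intervention effect
        step = (idx >= "2011-12-01").astype(float)     # step intervention regressor

        model = ARIMA(pd.Series(y, index=idx), exog=step, order=(1, 0, 1))
        res = model.fit()
        print(res.params)                # the exog coefficient estimates the shift
        future_step = np.ones((12, 1))   # the step stays on during the forecast
        print(res.forecast(steps=12, exog=future_step))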

  7. Dependent failure analysis of NPP data bases

    International Nuclear Information System (INIS)

    Cooper, S.E.; Lofgren, E.V.; Samanta, P.K.; Wong Seemeng

    1993-01-01

    A technical approach for analyzing plant-specific data bases for vulnerabilities to dependent failures has been developed and applied. Since the focus of this work is to aid in the formulation of defenses to dependent failures, rather than to quantify dependent failure probabilities, the approach of this analysis is fundamentally different. For instance, the determination of component failure dependencies has been based upon identical failure mechanisms related to component piece-part failures, rather than failure modes. Also, component failures involving all types of component function loss (e.g., catastrophic, degraded, incipient) are equally important to the predictive purposes of dependent failure defense development. Consequently, dependent component failures are identified in this study with a revised dependent failure definition that uses a component failure mechanism categorization scheme. In this context, clusters of component failures which satisfy the revised dependent failure definition are termed common failure mechanism (CFM) events. Motor-operated valves (MOVs) in two nuclear power plant data bases have been analyzed with this approach. The analysis results include seven different failure mechanism categories; identified potential CFM events; an assessment of the risk-significance of the potential CFM events using existing probabilistic risk assessments (PRAs); and postulated defenses to the identified potential CFM events. (orig.)

  8. The curation of genetic variants: difficulties and possible solutions.

    Science.gov (United States)

    Pandey, Kapil Raj; Maden, Narendra; Poudel, Barsha; Pradhananga, Sailendra; Sharma, Amit Kumar

    2012-12-01

    The curation of genetic variants from biomedical articles is required for various clinical and research purposes. Nowadays, establishment of variant databases that include overall information about variants is becoming quite popular. These databases have immense utility, serving as a user-friendly information storehouse of variants for information seekers. While manual curation is the gold standard method for curation of variants, it can turn out to be time-consuming on a large scale, necessitating automation. Curation of variants described in the biomedical literature may not be straightforward, mainly due to various nomenclature and expression issues. Though current trends in paper writing on variants incline toward standard nomenclature, such that variants can easily be retrieved, we have a massive store of variants in the literature under non-standard names, and the online search engines that are predominantly used may not be capable of finding them. For effective curation of variants, knowledge about the overall process of curation, the nature and types of difficulties in curation, and ways to tackle the difficulties during the task are crucial. Only through effective curation can variants be correctly interpreted. This paper presents the process and difficulties of curation of genetic variants, with possible solutions and suggestions from our work experience in the field, including literature support. The paper also highlights aspects of interpretation of genetic variants and the importance of writing papers on variants following standard and retrievable methods. Copyright © 2012. Published by Elsevier Ltd.

  9. Time-to-Furosemide Treatment and Mortality in Patients Hospitalized With Acute Heart Failure

    NARCIS (Netherlands)

    Matsue, Yuya; Damman, Kevin; Voors, Adriaan A.; Kagiyama, Nobuyuki; Yamaguchi, Tetsuo; Kuroda, Shunsuke; Okumura, Takahiro; Kida, Keisuke; Mizuno, Atsushi; Oishi, Shogo; Inuzuka, Yasutaka; Akiyama, Eiichi; Matsukawa, Ryuichi; Kato, Kota; Suzuki, Satoshi; Naruke, Takashi; Yoshioka, Kenji; Miyoshi, Tatsuya; Baba, Yuichi; Yamamoto, Masayoshi; Murai, Koji; Mizutani, Kazuo; Yoshida, Kazuki; Kitai, Takeshi

    2017-01-01

    BACKGROUND Acute heart failure (AHF) is a life-threatening disease requiring urgent treatment, including a recommendation for immediate initiation of loop diuretics. OBJECTIVES The authors prospectively evaluated the association between time-to-diuretic treatment and clinical outcome. METHODS

  10. A pragmatic approach to estimate alpha factors for common cause failure analysis

    International Nuclear Information System (INIS)

    Hassija, Varun; Senthil Kumar, C.; Velusamy, K.

    2014-01-01

    Highlights: • Estimation of coefficients in the alpha factor model for common cause analysis. • A derivation of plant-specific alpha factors is demonstrated. • We examine the sensitivity of the common cause contribution to total system failure. • We compare the beta factor and alpha factor models for various redundant configurations. • The use of alpha factors is preferable, especially for large redundant systems. - Abstract: Most modern technological systems are deployed with high redundancy, but they still fail mainly on account of common cause failures (CCF). Various models, such as Beta Factor, Multiple Greek Letter, Binomial Failure Rate and Alpha Factor, exist for estimating the risk from common cause failures. Among these, the alpha factor model is considered most suitable for highly redundant systems, as it arrives at common cause failure probabilities from a set of ratios of failures and the total component failure probability Q_T. In the present study, the alpha factor model is applied to the assessment of CCF of safety systems deployed at two nuclear power plants. A method to overcome the difficulties in estimating the coefficients (viz., the alpha factors in the model), the importance of deriving plant-specific alpha factors, and the sensitivity of the common cause contribution to the total system failure probability with respect to the hazard imposed by various CCF events are highlighted. An approach described in NUREG/CR-5500 is extended in this study to provide more explicit guidance for a statistical approach to derive plant-specific coefficients for CCF analysis, especially for highly redundant systems. The procedure is expected to aid regulators in independent safety assessment.
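
    The estimation step can be sketched directly: with n_k the number of observed events that fail exactly k of m redundant components, the point estimates are alpha_k = n_k / sum_j n_j. The conversion to CCF probabilities below uses one common (non-staggered-testing) form of the model; the event counts are invented:

        from math import comb

        def alpha_factors(event_counts):
            """alpha_k = n_k / sum(n_j); event_counts[k-1] = events failing k components."""
            total = sum(event_counts)
            return [n / total for n in event_counts]

        def ccf_probability(k, m, alphas, q_total):
            """Q_k for a common cause group of size m (non-staggered-testing form)."""
            alpha_t = sum((j + 1) * a for j, a in enumerate(alphas))
            return (k / comb(m - 1, k - 1)) * (alphas[k - 1] / alpha_t) * q_total

        counts = [120, 8, 3, 1]          # hypothetical n_1..n_4 for a 4-train system
        alphas = alpha_factors(counts)
        print(alphas)
        print(ccf_probability(4, 4, alphas, q_total=1e-3))  # all-four CCF probability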

  11. A recursive framework for time-dependent characteristics of tested and maintained standby units with arbitrary distributions for failures and repairs

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2015-01-01

    The time-dependent unavailability and the failure and repair intensities of periodically tested aging standby system components are solved with recursive equations under three categories of testing and repair policies. In these policies, tests or repairs or both can be minimal or perfect renewals. Arbitrary distributions are allowed to times to failure as well as to repair and renewal durations. Major preventive maintenance is done periodically or at random times, e.g. when a true demand occurs. In the third option process renewal is done if a true demand occurs or when a certain mission time has expired since the previous maintenance, whichever occurs first. A practical feature is that even if a repair can renew the unit, it does not generally renew the alternating process. The formalism updates and extends earlier results by using a special backward-renewal equation method, by allowing scheduled tests not limited to equal intervals and accepting arbitrary distributions and multiple failure types and causes, including failures caused by tests, human errors and true demands. Explicit solutions are produced to integral equations associated with an age-renewal maintenance policy. - Highlights: • Time-dependent unavailability, failure count and repair count for a standby system. • Free testing schedule and distributions for times to failure, repair and maintenance. • Multiple failure modes; tests or repairs or both can be minimal or perfect renewals. • Process renewals periodically, randomly or based on the process age or an initiator. • Backward renewal equations as explicit solutions to Volterra-type integral equations

  12. Prion infectivity in the spleen of a PRNP heterozygous individual with subclinical variant Creutzfeldt-Jakob disease.

    Science.gov (United States)

    Bishop, Matthew T; Diack, Abigail B; Ritchie, Diane L; Ironside, James W; Will, Robert G; Manson, Jean C

    2013-04-01

    Blood transfusion has been identified as a source of human-to-human transmission of variant Creutzfeldt-Jakob disease. Three cases of variant Creutzfeldt-Jakob disease have been identified following red cell transfusions from donors who subsequently developed variant Creutzfeldt-Jakob disease and an asymptomatic red cell transfusion recipient, who did not die of variant Creutzfeldt-Jakob disease, has been identified with prion protein deposition in the spleen and a lymph node, but not the brain. This individual was heterozygous (MV) at codon 129 of the prion protein gene (PRNP), whereas all previous definite and probable cases of variant Creutzfeldt-Jakob disease have been methionine homozygotes (MM). A critical question for public health is whether the prion protein deposition reported in peripheral tissues from this MV individual correlates with infectivity. Additionally it is important to establish whether the PRNP codon 129 genotype has influenced the transmission characteristics of the infectious agent. Brain and spleen from the MV blood recipient were inoculated into murine strains that have consistently demonstrated transmission of the variant Creutzfeldt-Jakob disease agent. Mice were assessed for clinical and pathological signs of disease and transmission data were compared with other transmission studies in variant Creutzfeldt-Jakob disease, including those on the spleen and brain of the donor to the index case. Transmission of variant Creutzfeldt-Jakob disease was observed from the MV blood recipient spleen, but not from the brain, whereas there was transmission from both spleen and brain tissues from the red blood cell donor. Longer incubation times were observed for the blood donor spleen inoculum compared with the blood donor brain inoculum, suggesting lower titres of infectivity in the spleen. The distribution of vacuolar pathology and abnormal prion protein in infected mice were similar following inoculation with both donor and recipient spleen

  13. Prion infectivity in the spleen of a PRNP heterozygous individual with subclinical variant Creutzfeldt–Jakob disease

    Science.gov (United States)

    Bishop, Matthew T.; Diack, Abigail B.; Ritchie, Diane L.; Ironside, James W.; Will, Robert G.

    2013-01-01

    Blood transfusion has been identified as a source of human-to-human transmission of variant Creutzfeldt–Jakob disease. Three cases of variant Creutzfeldt–Jakob disease have been identified following red cell transfusions from donors who subsequently developed variant Creutzfeldt–Jakob disease and an asymptomatic red cell transfusion recipient, who did not die of variant Creutzfeldt–Jakob disease, has been identified with prion protein deposition in the spleen and a lymph node, but not the brain. This individual was heterozygous (MV) at codon 129 of the prion protein gene (PRNP), whereas all previous definite and probable cases of variant Creutzfeldt–Jakob disease have been methionine homozygotes (MM). A critical question for public health is whether the prion protein deposition reported in peripheral tissues from this MV individual correlates with infectivity. Additionally it is important to establish whether the PRNP codon 129 genotype has influenced the transmission characteristics of the infectious agent. Brain and spleen from the MV blood recipient were inoculated into murine strains that have consistently demonstrated transmission of the variant Creutzfeldt–Jakob disease agent. Mice were assessed for clinical and pathological signs of disease and transmission data were compared with other transmission studies in variant Creutzfeldt–Jakob disease, including those on the spleen and brain of the donor to the index case. Transmission of variant Creutzfeldt–Jakob disease was observed from the MV blood recipient spleen, but not from the brain, whereas there was transmission from both spleen and brain tissues from the red blood cell donor. Longer incubation times were observed for the blood donor spleen inoculum compared with the blood donor brain inoculum, suggesting lower titres of infectivity in the spleen. The distribution of vacuolar pathology and abnormal prion protein in infected mice were similar following inoculation with both donor and

  14. Complex phenotype of dyskeratosis congenita and mood dysregulation with novel homozygous RTEL1 and TPH1 variants.

    Science.gov (United States)

    Ungar, Rachel A; Giri, Neelam; Pao, Maryland; Khincha, Payal P; Zhou, Weiyin; Alter, Blanche P; Savage, Sharon A

    2018-06-01

    Dyskeratosis congenita (DC) is an inherited bone marrow failure syndrome caused by germline mutations in telomere biology genes. Patients have extremely short telomeres for their age and a complex phenotype including oral leukoplakia, abnormal skin pigmentation, and dysplastic nails, in addition to bone marrow failure, pulmonary fibrosis, stenosis of the esophagus, lacrimal ducts and urethra, developmental anomalies, and high risk of cancer. We evaluated a patient with features of DC, mood dysregulation, diabetes, and lack of pubertal development. Family history was not available, but genome-wide genotyping was consistent with consanguinity. Whole exome sequencing identified 82 variants of interest in 80 genes based on the following criteria: homozygous, <0.1% minor allele frequency in public and in-house databases, nonsynonymous, and predicted deleterious by multiple in silico prediction programs. Six genes were identified as likely contributing to the clinical presentation. The cause of DC is likely homozygous splice site variants in regulator of telomere elongation helicase 1, a known DC and telomere biology gene. A homozygous missense variant in tryptophan hydroxylase 1 may be clinically important, as this gene encodes the rate-limiting step in serotonin biosynthesis, a biologic pathway connected with mood disorders. Four additional genes (SCN4A, LRP4, GDAP1L1, and SPTBN5) had rare, missense homozygous variants that we speculate may contribute to portions of the clinical phenotype. This case illustrates the value of conducting detailed clinical and genomic evaluations of rare patients in order to identify new areas of research into the functional consequences of rare variants and their contribution to human disease. © 2018 Wiley Periodicals, Inc.

  15. Fault Detection Variants of the CloudBus Protocol for IoT Distributed Embedded Systems

    Directory of Open Access Journals (Sweden)

    BARKALOV, A.

    2017-05-01

    Distributed embedded systems have become larger, more complex and complicated. Increasingly, such systems operate according to the IoT or Industry 4.0 concept. However, the large number of end modules operating in the system leads to a significant load and, consequently, to an overload of the communication interfaces. The CloudBus protocol is one of the methods used for data exchange and concurrent process synchronization in distributed systems. It allows significant savings in the amount of data transmitted between end modules, especially when compared with the other protocols used in industry. Nevertheless, the basic version of the protocol does not protect against system failure in the event of the failure of one of the nodes. This paper proposes four novel variants of the CloudBus protocol which allow fault detection. A comparison and performance analysis were executed for all proposed CloudBus variants. The verification and behavior analysis of the distributed systems were performed on an SoC hardware research platform. Furthermore, a simple test application was proposed.

  16. Single shell tank sluicing history and failure frequency

    International Nuclear Information System (INIS)

    HERTZEL, J.S.

    1998-01-01

    This document assesses the potential for failure of the single-shell tanks (SSTs) that are presumably sound and helps to establish the retrieval priorities for these and the assumed leakers. Furthermore, this report examines probabilities of SST failure as a function of age and operational history, and provides a simple statistical summary of historical leak volumes, leak rates, and corrosion factors.

  17. Temporal-varying failures of nodes in networks

    Science.gov (United States)

    Knight, Georgie; Cristadoro, Giampaolo; Altmann, Eduardo G.

    2015-08-01

    We consider networks in which random walkers are removed because of the failure of specific nodes. We interpret the rate of loss as a measure of the importance of nodes, a notion we denote as failure centrality. We show that the degree of the node is not sufficient to determine this measure and that, in a first approximation, the shortest loops through the node have to be taken into account. We propose approximations of the failure centrality which are valid for temporal-varying failures, and we dwell on the possibility of externally changing the relative importance of nodes in a given network by exploiting the interference between the loops of a node and the cycles of the temporal pattern of failures. In the limit of long failure cycles we show analytically that the escape in a node is larger than the one estimated from a stochastic failure with the same failure probability. We test our general formalism in two real-world networks (air-transportation and e-mail users) and show how communities lead to deviations from predictions for failures in hubs.

  18. Determining Component Probability using Problem Report Data for Ground Systems used in Manned Space Flight

    Science.gov (United States)

    Monaghan, Mark W.; Gillespie, Amanda M.

    2013-01-01

    During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively simple way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does capture all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty is mining the data and then utilizing them to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. A heuristic developed for review of the PRACA data is then used to determine which reports identify a credible failure. These data are then used to determine inter-arrival times in order to estimate repairable-component or LRU reliability. This analysis is used to determine failure modes of the equipment, determine the probability of each component failure mode, and support various quantitative techniques for repairable system analysis. The result is an effective and concise reliability estimate for components used in manned space flight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.

  19. Use of non-conjugate prior distributions in compound failure models. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.

    1981-12-01

    Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. Methods considered for estimating values of the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariances of the parameter estimators. It is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed, together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report.
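
    For the failure-on-demand case, the first method (matching data moments to those of the prior) has a closed form when the prior is assumed to be a beta distribution; a small sketch under that assumption, with invented data:

        import statistics

        def beta_prior_from_moments(mean: float, var: float):
            """Method-of-moments beta(a, b) prior for a failure-on-demand
            probability, from the sample mean and variance of plant-level
            demand-failure fractions. Requires var < mean * (1 - mean)."""
            common = mean * (1.0 - mean) / var - 1.0
            return mean * common, (1.0 - mean) * common  # (a, b)

        # Hypothetical failure-on-demand fractions across a component class.
        p = [0.010, 0.004, 0.022, 0.007, 0.015]
        a, b = beta_prior_from_moments(statistics.mean(p), statistics.variance(p))
        print(a, b)   # beta prior whose mean equals mean(p)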

  20. Beyond reliability, multi-state failure analysis of satellite subsystems: A statistical approach

    International Nuclear Information System (INIS)

    Castet, Jean-Francois; Saleh, Joseph H.

    2010-01-01

    Reliability is widely recognized as a critical design attribute for space systems. In recent articles, we conducted nonparametric analyses and Weibull fits of satellite and satellite subsystems reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we extend our investigation of failures of satellites and satellite subsystems beyond the binary concept of reliability to the analysis of their anomalies and multi-state failures. In reliability analysis, the system or subsystem under study is considered to be either in an operational or failed state; multi-state failure analysis introduces 'degraded states' or partial failures, and thus provides more insights through finer resolution into the degradation behavior of an item and its progression towards complete failure. The database used for the statistical analysis in the present work identifies five states for each satellite subsystem: three degraded states, one fully operational state, and one failed state (complete failure). Because our dataset is right-censored, we calculate the nonparametric probability of transitioning between states for each satellite subsystem with the Kaplan-Meier estimator, and we derive confidence intervals for each probability of transitioning between states. We then conduct parametric Weibull fits of these probabilities using the Maximum Likelihood Estimation (MLE) approach. After validating the results, we compare the reliability versus multi-state failure analyses of three satellite subsystems: the thruster/fuel; the telemetry, tracking, and control (TTC); and the gyro/sensor/reaction wheel subsystems. The results are particularly revealing of the insights that can be gleaned from multi-state failure analysis and the deficiencies, or blind spots, of the traditional reliability analysis. In addition to the specific results provided here, which should prove particularly useful to the space industry, this work highlights the importance
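
    The first estimation step described here, a nonparametric estimate from right-censored data, can be sketched with the Kaplan-Meier estimator in the lifelines library; durations and censoring flags below are invented placeholders:

        from lifelines import KaplanMeierFitter

        # Years from launch until a subsystem left the fully operational state;
        # event=0 marks satellites still fully operational at the data cutoff
        # (right-censored observations).
        years = [0.5, 1.2, 2.0, 3.1, 4.0, 6.5, 7.0, 8.2, 9.9, 11.0]
        event = [1,   0,   1,   0,   1,   0,   1,   0,   0,   0]

        kmf = KaplanMeierFitter()
        kmf.fit(years, event_observed=event)
        print(kmf.survival_function_)      # P(still fully operational at t)
        print(kmf.confidence_interval_)    # Greenwood-based confidence bands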

  1. CDKL5 variants

    Science.gov (United States)

    Kalscheuer, Vera M.; Hennig, Friederike; Leonard, Helen; Downs, Jenny; Clarke, Angus; Benke, Tim A.; Armstrong, Judith; Pineda, Mercedes; Bailey, Mark E.S.; Cobb, Stuart R.

    2017-01-01

    Objective: To provide new insights into the interpretation of genetic variants in a rare neurologic disorder, CDKL5 deficiency, in the contexts of population sequencing data and an updated characterization of the CDKL5 gene. Methods: We analyzed all known potentially pathogenic CDKL5 variants by combining data from large-scale population sequencing studies with CDKL5 variants from new and all available clinical cohorts and combined this with computational methods to predict pathogenicity. Results: The study has identified several variants that can be reclassified as benign or likely benign. With the addition of novel CDKL5 variants, we confirm that pathogenic missense variants cluster in the catalytic domain of CDKL5 and reclassify a purported missense variant as having a splicing consequence. We provide further evidence that missense variants in the final 3 exons are likely to be benign and not important to disease pathology. We also describe benign splicing and nonsense variants within these exons, suggesting that isoform hCDKL5_5 is likely to have little or no neurologic significance. We also use the available data to make a preliminary estimate of minimum incidence of CDKL5 deficiency. Conclusions: These findings have implications for genetic diagnosis, providing evidence for the reclassification of specific variants previously thought to result in CDKL5 deficiency. Together, these analyses support the view that the predominant brain isoform in humans (hCDKL5_1) is crucial for normal neurodevelopment and that the catalytic domain is the primary functional domain. PMID:29264392

  2. Time-dependent earthquake probability calculations for southern Kanto after the 2011 M9.0 Tohoku earthquake

    Science.gov (United States)

    Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.

    2013-05-01

    Seismicity in southern Kanto activated with the 2011 March 11 Tohoku earthquake of magnitude M9.0, but does this cause a significant difference in the probability of more earthquakes at present or in the future? To answer this question, we examine the effect of a change in the seismicity rate on the probability of earthquakes. Our data set is from the Japan Meteorological Agency earthquake catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent earthquake probability calculations, often used for aftershock hazard assessment, which rest on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well modelled by the GR law with b ~ 1. Then, there is good agreement with the OU law with p ~ 0.5, which indicates that the slow decay was notably significant. Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of earthquakes is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku earthquake. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.
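
    Calculations of this type combine the two laws into an aftershock rate lambda(t, M>=m) = 10^(a + b(M_main - m)) / (t + c)^p and convert the integrated rate into a probability via P = 1 - exp(-Lambda), assuming Poisson occurrence. A sketch with invented parameter values (not those estimated in the study):

        import numpy as np

        def expected_events(t1, t2, m, a=-5.0, b=1.0, p=0.5, c=0.05, m_main=9.0):
            """Integrated GR/OU (Reasenberg-Jones form) rate of M>=m events
            in [t1, t2] days after the mainshock; parameters are invented."""
            k = 10.0 ** (a + b * (m_main - m))
            if abs(p - 1.0) < 1e-9:
                return k * np.log((t2 + c) / (t1 + c))
            return k * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

        def probability(t1, t2, m, **kw):
            """P(at least one M>=m event in [t1, t2]) under Poisson occurrence."""
            return 1.0 - np.exp(-expected_events(t1, t2, m, **kw))

        # M>=6 event within 3 years of 2012 May 30 (roughly days 445-1540 after
        # the 2011 March 11 mainshock): ~0.3 with these toy parameters.
        print(probability(445.0, 1540.0, 6.0))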

  3. Visibility graph analysis of heart rate time series and bio-marker of congestive heart failure

    Science.gov (United States)

    Bhaduri, Anirban; Bhaduri, Susmita; Ghosh, Dipak

    2017-09-01

    RR interval time series in congestive heart failure have been studied with a variety of methods, including non-linear ones. In this article the cardiac dynamics of the heart beat are explored in the light of complex network analysis, viz. the visibility graph method. Heart beat (RR interval) time series data taken from the Physionet database [46, 47], belonging to two groups of subjects, diseased (congestive heart failure; 29 in number) and normal (54 in number), are analyzed with the technique. The overall results show that a quantitative parameter can significantly differentiate between the diseased subjects and the normal subjects, as well as between different stages of the disease. Further, when the data are split into periods of around 1 hour each and analyzed separately, the same consistent differences appear. This quantitative parameter obtained using visibility graph analysis can thereby be used as a potential bio-marker, as well as a subsequent alarm generation mechanism, for predicting the onset of congestive heart failure.
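
    The natural visibility graph underlying this analysis links two samples whenever every intermediate sample lies below the straight line joining them; a compact sketch, using mean degree as a stand-in summary statistic (the paper's exact quantitative parameter is not specified here):

        import numpy as np

        def visibility_edges(y):
            """Natural visibility graph edges for a 1D series y."""
            y = np.asarray(y, dtype=float)
            n = len(y)
            edges = []
            for a in range(n - 1):
                for b in range(a + 1, n):
                    between = np.arange(a + 1, b)
                    # Visibility: y_c < y_a + (y_b - y_a) * (c - a) / (b - a)
                    line = y[a] + (y[b] - y[a]) * (between - a) / (b - a)
                    if np.all(y[between] < line):
                        edges.append((a, b))
            return edges

        # Toy RR-interval series (seconds); real input would be hour-long records.
        rr = [0.82, 0.79, 0.91, 0.77, 0.85, 0.88, 0.74, 0.90]
        edges = visibility_edges(rr)
        degree = np.bincount(np.ravel(edges), minlength=len(rr))
        print(edges)
        print("mean degree:", degree.mean())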

  4. The Concepts of Pseudo Compound Poisson and Partition Representations in Discrete Probability

    Directory of Open Access Journals (Sweden)

    Werner Hürlimann

    2015-01-01

    The mathematical/statistical concepts of pseudo compound Poisson and partition representations in discrete probability are reviewed and clarified. A combinatorial interpretation of the convolution of geometric distributions in terms of a variant of Newton's identities is obtained. The practical use of the twofold convolution leads to an improved goodness-of-fit for a data set from automobile insurance that was up to now not fitted satisfactorily.

  5. A case of multiple organ failure induced by postoperative radiation therapy probably evoking oxidative stress

    International Nuclear Information System (INIS)

    Soejima, Akinori; Ishizuka, Shynji; Suzuki, Michihiko; Minoshima, Shinobu; Nakabayashi, Kimimasa; Kitamoto, Kiyoshi; Nagasawa, Toshihiko

    1995-01-01

    In recent years, several laboratories have suggested that serum levels of antioxidant activity and redox balance are reduced in patients with chronic renal failure. Some clinical reports have also proposed that defective serum antioxidative enzymes may contribute to a certain uremic toxicity through peroxidative cell damage. A 48-year-old woman was referred to us from the surgical department of our hospital because of consciousness disturbance, pancytopenia and acute acceleration of chronic azotemia after postoperative radiation therapy. We diagnosed acute acceleration of chronic renal failure with severe acidemia and started hemodialysis therapy immediately. Two days after admission to our department, she developed sharp upper abdominal pain and bradyarrhythmia. Serum amylase activity was markedly elevated, and the ECG findings showed myocardial ischemia. On the 24th hospital day these complications were treated successfully with conservative therapy and hemodialysis. We considered that radiation therapy in this patient with chronic renal failure evoked marked oxidative stress and that deficiency of transferrin played an important role in the peroxidative cell damage. (author)

  6. Non-linear time variant model intended for polypyrrole-based actuators

    Science.gov (United States)

    Farajollahi, Meisam; Madden, John D. W.; Sassani, Farrokh

    2014-03-01

    Polypyrrole-based actuators are of interest due to their biocompatibility, low operation voltage and relatively high strain and force. Modeling and simulation are very important to predict the behaviour of each actuator. To develop an accurate model, we need to know the electro-chemo-mechanical specifications of the Polypyrrole. In this paper, the non-linear time-variant model of Polypyrrole film is derived and proposed using a combination of an RC transmission line model and a state space representation. The model incorporates the potential dependent ionic conductivity. A function of ionic conductivity of Polypyrrole vs. local charge is proposed and implemented in the non-linear model. Matching of the measured and simulated electrical response suggests that ionic conductivity of Polypyrrole decreases significantly at negative potential vs. silver/silver chloride and leads to reduced current in the cyclic voltammetry (CV) tests. The next stage is to relate the distributed charging of the polymer to actuation via the strain to charge ratio. Further work is also needed to identify ionic and electronic conductivities as well as capacitance as a function of oxidation state so that a fully predictive model can be created.

  7. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Qing Ye

    2015-01-01

    This research proposes a novel framework for simultaneous failure diagnosis of final drives, comprising feature extraction, training of paired diagnostic models, decision threshold generation, and recognition of simultaneous failure modes. In the feature extraction module, the wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on a paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability output obtained from the classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with the grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches.

  8. A comparative study of failure criteria in probabilistic fields and stochastic failure envelopes of composite materials

    International Nuclear Information System (INIS)

    Nakayasu, Hidetoshi; Maekawa, Zen'ichiro

    1997-01-01

    One of the major objectives of this paper is to offer a practical tool for materials design of unidirectional composite laminates under in-plane multiaxial load. Design-oriented failure criteria of composite materials are applied to construct an evaluation model of probabilistic safety based on extended structural reliability theory. Typical failure criteria, such as the maximum stress, maximum strain and quadratic polynomial failure criteria, are compared from the viewpoint of reliability-oriented materials design of composite materials. A new design diagram is also proposed, which shows the feasible region in in-plane strain space and corresponds to a safety index or failure probability. These stochastic failure envelope diagrams, drawn in in-plane strain space, enable one to evaluate the stochastic behavior of a composite laminate with any lamination angle under multi-axial stress or strain conditions. Numerical analysis for a graphite/epoxy laminate of T300/5208 is shown for comparative verification of the failure criteria under various combinations of multi-axial load conditions and lamination angles. The stochastic failure envelopes of T300/5208 are also described in in-plane strain space.

  9. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    International Nuclear Information System (INIS)

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Many times, reliability studies rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to an erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach and sometimes an inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before selecting a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The framework discriminates between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
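
    For illustration, one of the standard trend tests reviewed in such frameworks is the Laplace test; a minimal sketch with toy failure times:

```python
import numpy as np
from scipy.stats import norm

def laplace_trend_test(times, T):
    """Laplace trend test for a point process observed on (0, T].
    Under the null (no trend, homogeneous Poisson process) U is roughly
    standard normal; U >> 0 suggests ageing, U << 0 reliability growth."""
    times = np.asarray(times, dtype=float)
    n = times.size
    U = (times.mean() - T/2.0) / (T * np.sqrt(1.0/(12.0*n)))
    p = 2.0 * (1.0 - norm.cdf(abs(U)))        # two-sided p-value
    return U, p

# toy cumulative failure times with shortening inter-arrival gaps: a hint
# of ageing, pointing toward a repairable-systems (point process) model
U, p = laplace_trend_test([110, 250, 370, 460, 530, 590], T=600)
print(U, p)
```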

  10. Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre

    Science.gov (United States)

    Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip

    2013-04-01

    Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate rock physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, including data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in

  11. Processor tradeoffs in distributed real-time systems

    Science.gov (United States)

    Krishna, C. M.; Shin, Kang G.; Bhandari, Inderpal S.

    1987-01-01

    The problem of the optimization of the design of real-time distributed systems is examined with reference to a class of computer architectures similar to the continuously reconfigurable multiprocessor flight control system structure, CM2FCS. Particular attention is given to the impact of processor replacement and the burn-in time on the probability of dynamic failure and mean cost. The solution is obtained numerically and interpreted in the context of real-time applications.

  12. The Improved Adaptive Silence Period Algorithm over Time-Variant Channels in the Cognitive Radio System

    Directory of Open Access Journals (Sweden)

    Jingbo Zhang

    2018-01-01

    Full Text Available In the field of cognitive radio spectrum sensing, the adaptive silence period management mechanism (ASPM) has improved on the low time-resource utilization rate of the traditional silence period management mechanism (TSPM). However, at low signal-to-noise ratio (SNR), the ASPM algorithm increases the probability of missed detection of the primary user (PU). To address this problem, this paper proposes an improved adaptive silence period management (IA-SPM) algorithm which can adaptively adjust the sensing parameters of the current period by combining feedback information from the data communication with the sensing results of the previous period. The feedback information in the channel is carried on frequency resources rather than time resources in order to adapt to parameter changes in the time-varying channel. Monte Carlo simulation results show that the detection probability of the IA-SPM is 10–15% higher than that of the ASPM under low-SNR conditions.

  13. Natural history of β-cell adaptation and failure in type 2 diabetes

    Science.gov (United States)

    Alejandro, Emilyn U.; Gregg, Brigid; Blandino-Rosano, Manuel; Cras-Méneur, Corentin; Bernal-Mizrachi, Ernesto

    2014-01-01

    Type 2 diabetes mellitus (T2D) is a complex disease characterized by β-cell failure in the setting of insulin resistance. The current evidence suggests that genetic predisposition and environmental factors can impair the capacity of the β-cells to respond to insulin resistance and ultimately lead to their failure. However, genetic studies have demonstrated that known variants account for less than 10% of the overall estimated T2D risk, suggesting that additional unidentified factors contribute to susceptibility to this disease. In this review, we discuss the different stages that contribute to the development of β-cell failure in T2D. We divide the natural history of this process into three major stages (susceptibility, β-cell adaptation and β-cell failure) and provide an overview of the molecular mechanisms involved. Further research into these mechanisms will reveal key modulators of β-cell failure and thus identify possible novel therapeutic targets and potential interventions to protect against β-cell failure. PMID:25542976

  14. Novel microcephalic primordial dwarfism disorder associated with variants in the centrosomal protein ninein.

    Science.gov (United States)

    Dauber, Andrew; Lafranchi, Stephen H; Maliga, Zoltan; Lui, Julian C; Moon, Jennifer E; McDeed, Cailin; Henke, Katrin; Zonana, Jonathan; Kingman, Garrett A; Pers, Tune H; Baron, Jeffrey; Rosenfeld, Ron G; Hirschhorn, Joel N; Harris, Matthew P; Hwa, Vivian

    2012-11-01

    Microcephalic primordial dwarfism (MPD) is a rare, severe form of human growth failure in which growth restriction is evident in utero and continues into postnatal life. Single causative gene defects have been identified in a number of patients with MPD, and all involve genes fundamental to cellular processes, including centrosome functions. The objective of the study was to find the genetic etiology of a novel presentation of MPD. The design of the study was whole-exome sequencing performed on two affected sisters in a single family. Molecular and functional studies of a candidate gene were performed using patient-derived primary fibroblasts and a zebrafish morpholino oligonucleotide knockdown model. Two sisters presented with a novel subtype of MPD, including severe intellectual disabilities. NIN, encoding ninein, a centrosomal protein critically involved in asymmetric cell division, was identified as a candidate gene, and functional impacts in fibroblasts and zebrafish were studied. From 34,606 genomic variants, two very rare missense variants in NIN were identified. Both probands were compound heterozygotes. In the zebrafish, ninein knockdown led to specific and novel defects in the specification and morphogenesis of the anterior neuroectoderm, resulting in a deformity of the developing cranium with a small, squared skull highly reminiscent of the human phenotype. We identified a novel clinical subtype of MPD in two sisters who have rare variants in NIN. We show, for the first time, that reduction of ninein function in the developing zebrafish leads to specific deficiencies of brain and skull development, offering a developmental basis for the myriad phenotypes in our patients.

  15. Cueing spatial attention through timing and probability.

    Science.gov (United States)

    Girardi, Giovanna; Antonucci, Gabriella; Nico, Daniele

    2013-01-01

    Even when focused on an effortful task we retain the ability to detect salient environmental information, and even irrelevant visual stimuli can be automatically detected. However, the extent to which unattended information affects attentional control is not fully understood. Here we provide evidence of how the brain spontaneously organizes its cognitive resources by shifting attention between a selective-attending and a stimulus-driven modality within a single task. Using a spatial cueing paradigm, we investigated the effect of cue-target asynchronies as a function of their probabilities of occurrence (i.e., relative frequency). Results show that this accessory information modulates attentional shifts. A valid spatial cue improved participants' performance, compared with an invalid one, only in trials in which target onset was highly predictable because of its more frequent occurrence. Conversely, cueing proved ineffective when the spatial cue and target were associated according to a less frequent asynchrony. These patterns of response depended on the asynchronies' probability and not on their duration. Our findings demonstrate that, through fine-grained decision-making performed trial by trial, the brain uses implicit information to decide whether or not to voluntarily shift spatial attention. As if following a cost-planning strategy, the cognitive effort of shifting attention depending on the cue is made only when the expected advantages are higher. In a trade-off competition for cognitive resources, voluntary/automatic attending may thus be a more complex process than expected. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. ERG review of containment failure probability and repository functional design criteria

    International Nuclear Information System (INIS)

    Gopal, S.

    1986-06-01

    The Engineering Review Group (ERG) was established by the Office of Nuclear Waste Isolation (ONWI) to help evaluate engineering-related issues in the US Department of Energy's nuclear waste repository program. The June 1984 meeting of the ERG considered two topics: (1) statistical probability for containment of nuclides within the waste package and (2) repository design criteria. This report documents the ERG's comments and recommendations on these two subjects and the ONWI response to the specific points raised by ERG

  17. Performance Based Failure Criteria of the Base Isolation System for Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Jung Han; Kim, Min Kyu; Choi, In Kil

    2013-01-01

    A realistic approach to evaluating the failure state of the base isolation system is necessary; from this point of view, several concerns are reviewed and discussed in this study. This is a preliminary study for the performance-based risk assessment of a base-isolated nuclear power plant. The items needed to evaluate the capacity and response of an individual base isolator and of a base isolation system are briefly outlined; however, the methodology for evaluating the realistic fragility of a base isolation system still needs to be specified. To quantify the seismic risk for a nuclear power plant structure, the failure probabilities of the structural components must be calculated for various seismic intensity levels. The failure probability is evaluated as the probability that the seismic response of a structure exceeds the failure criteria. Accordingly, the failure mode of the structural system caused by earthquake vibration should be defined first. The type of base isolator regarded as appropriate for a nuclear power plant structure is an elastomeric rubber bearing with a lead core. The failure limit of the lead-rubber bearing (LRB) is not easy to predict because of its high nonlinearity and the complex loading conditions imposed by earthquake excitation. Furthermore, the failure mode of an LRB system installed below the nuclear island cannot be simply determined, because the basemat can remain sufficiently supported if the number of damaged isolators is not large.
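
    A common way to express such component fragilities, i.e. failure probability as a function of seismic intensity, is a lognormal curve; a minimal sketch with illustrative (non-LRB) parameters:

```python
import numpy as np
from scipy.stats import norm

def fragility(pga, median_capacity=1.2, beta=0.4):
    """Lognormal fragility: probability that seismic demand on the
    isolation system exceeds its capacity at a given intensity (PGA, g).
    median_capacity and beta are illustrative placeholders, not LRB data."""
    return norm.cdf(np.log(np.asarray(pga) / median_capacity) / beta)

for a in (0.3, 0.6, 0.9, 1.2, 1.5):
    print(f"PGA {a:.1f} g -> P(failure) = {fragility(a):.3f}")
```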

  18. Single-variant and multi-variant trend tests for genetic association with next-generation sequencing that are robust to sequencing error.

    Science.gov (United States)

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek

    2012-01-01

    lower power than the corresponding single-variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p value, no matter how many loci. Copyright © 2013 S. Karger AG, Basel.

  19. Genetic Variants Involved in Mitochondrial Oxidative Metabolism are associated with Type 2 Diabetes Mellitus in studies of 8,441 Danes

    DEFF Research Database (Denmark)

    Snogdal, Lena Sønder; Henriksen, Jan Erik; Beck-Nielsen, Henning

      Aims: Type 2 Diabetes (T2D) is characterized by insulin resistance and failure of the pancreatic beta cells to compensate for this defect. Several studies have demonstrated a link between insulin resistance and impaired mitochondrial oxidative phosphorylation (OxPhos) in skeletal muscle. Recently...... by the Diabetes Genetics Replication And Meta-analysis Consortium (DIAGRAM), we found that among 1284 SNPs in 119 OxPhos genes, 39 SNPs in 7 genes showed potential association with T2D (p0.8). One SNP...... a surrogate marker (BIG-AIR) for insulin secretion and variants in COX5B (rs11904110) and COX10 (rs10521253), and between fasting p-glucose and a variant in COX5B (rs11904110) and 2-h post-OGTT plasma glucose and a variant in NDUFV3 (rs8134542) (pgenetic variants...

  20. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that breakdown duration can be characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of each breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
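
    A minimal sketch of the two ingredients, with a logistic breakdown-probability link and Weibull-distributed durations standing in for the calibrated models (all parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(7)

def breakdown_probability(flow, q0=1800.0, k=0.004):
    """Illustrative logistic link: per-interval breakdown probability
    rising with flow rate (veh/h/lane); q0 and k are made-up values."""
    return 1.0 / (1.0 + np.exp(-k * (flow - q0)))

def sample_duration(shape=1.4, scale=12.0):
    """Breakdown duration (minutes) from a Weibull hazard model;
    shape/scale are placeholders for a calibrated duration model."""
    return scale * rng.weibull(shape)

flow = 1950.0                       # current flow rate, veh/h/lane
if rng.random() < breakdown_probability(flow):
    print(f"breakdown sampled, lasting {sample_duration():.1f} min")
else:
    print("no breakdown this interval")
```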

  1. A short walk in quantum probability

    Science.gov (United States)

    Hudson, Robin

    2018-04-01

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue `Hilbert's sixth problem'.

  2. A short walk in quantum probability.

    Science.gov (United States)

    Hudson, Robin

    2018-04-28

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  3. Effect of Preconditioning and Soldering on Failures of Chip Tantalum Capacitors

    Science.gov (United States)

    Teverovsky, Alexander A.

    2014-01-01

    Soldering of molded-case tantalum capacitors can result in damage to the Ta2O5 dielectric and first-turn-on failures due to thermo-mechanical stresses caused by the CTE mismatch between materials used in the capacitors. It is also known that the presence of moisture can damage plastic cases through the pop-corning effect. However, there are only scarce literature data on the effect of moisture content on the probability of post-soldering electrical failures. In this work, which is based on a case history, different groups of similar types of CWR tantalum capacitors from two lots were prepared for soldering by baking, moisture saturation, or long-term storage at room conditions. Test results showed that both factors, the initial quality of the lot and the preconditioning, affect the probability of failures. Baking before soldering was shown to be effective in preventing failures even in lots susceptible to pop-corning damage. The failure mechanism is discussed, and recommendations for pre-soldering bake are suggested based on an analysis of the moisture characteristics of the materials used in the capacitors' design.

  4. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets whose probability is readily computable, they enable the calculation of arbitrarily tight upper and lower bounds on the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Among the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty), as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.

  5. High-output cardiac failure secondary to multiple vascular malformations in the liver: case report

    International Nuclear Information System (INIS)

    Spaner, S.; Demeter, S.; Lien, D.; Shapiro, J.; McCarthy, M.; Raymond, G.

    2001-01-01

    High-output cardiac failure is associated with several systemic illnesses, including hyperthyroidism, thiamine deficiency, severe anemia, multiple myeloma, Paget's disease of bone and Osler-Weber-Rendu syndrome. We present an unusual case of a woman with high-output cardiac failure as a result of multiple arteriovenous fistulas in the liver, most likely representing an unusual variant of Osler-Weber-Rendu syndrome (i.e., no other telangiectasias or a family history of vascular malformations was demonstrated). (author)

  6. The prediction problems of VVER fuel element cladding failure theory

    International Nuclear Information System (INIS)

    Pelykh, S.N.; Maksimov, M.V.; Ryabchikov, S.D.

    2016-01-01

    Highlights: • Fuel cladding failure forecasting is based on the fuel load history and the damage distribution. • The limit damage parameter is exceeded, though limit stresses are not reached. • The damage parameter plays a significant role in predicting cladding failure. • The proposed failure probability criterion can be used to control cladding tightness. - Abstract: A method for forecasting VVER fuel element (FE) cladding failure due to accumulation of the deformation damage parameter, taking into account the fuel assembly (FA) loading history and the distribution of the damage parameter among the FEs in the FA, has been developed. Using the concept of conservative FE groups, it is shown that the safety limit for the damage parameter is exceeded for some FA rearrangements, even though the limits for circumferential and equivalent stresses are not reached. This new result contradicts the widespread idea that the damage parameter plays a minor role in estimating the limiting state of the cladding. The necessary condition for the admissibility of a rearrangement algorithm and the criterion for minimizing the probability of cladding failure due to damage parameter accumulation have been derived, for use in automated systems controlling cladding tightness.

  7. Effect of a certain class of potential common mode failures on the reliability of redundant systems

    International Nuclear Information System (INIS)

    Apostolakis, G.E.

    1975-11-01

    This is a theoretical investigation of the importance of common mode failures on the reliability of redundant systems. These failures are assumed to be the result of fatal shocks (e.g., from earthquakes, explosions, etc.) which occur at a constant rate. This formulation makes it possible to predict analytically results obtained in the past which showed that the probability of a common mode failure of the redundant channels of the protection system of a typical nuclear power plant was orders of magnitude larger than the probability of failure from chance failures alone. Furthermore, since most reliability analyses of redundant systems do not include potential common mode failures in the probabilistic calculations, criteria are established which can be used to decide either that the common-mode-failure effects are indeed insignificant or that such calculations are meaningless, and more sophisticated methods of analysis are required, because common mode failures cannot be ignored
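
    A minimal numerical sketch of the effect described, assuming n identical channels with independent failure rate lam and lethal shocks arriving at constant rate lam_shock (illustrative values only):

```python
import numpy as np

def p_system_failure(t, lam=1e-5, lam_shock=1e-6, n=3):
    """Probability that an n-channel redundant system has failed by time
    t (hours) when channels fail independently at rate lam and a lethal
    common-cause shock (earthquake, explosion, ...) arrives at constant
    rate lam_shock, taking out all channels at once."""
    p_chance = (1.0 - np.exp(-lam * t))**n    # all n channels fail by chance
    return 1.0 - np.exp(-lam_shock * t) * (1.0 - p_chance)

t = 8760.0                                    # one year, in hours
print(p_system_failure(t))                    # ~9.3e-3, dominated by shocks
print((1.0 - np.exp(-1e-5 * t))**3)           # ~5.9e-4, chance failures alone
```

    Even a shock rate ten times smaller than the per-channel rate dominates the redundant system's failure probability, matching the qualitative conclusion of the record.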

  8. Influenza infection and heart failure-vaccination may change heart failure prognosis?

    Science.gov (United States)

    Kadoglou, Nikolaos P E; Bracke, Frank; Simmers, Tim; Tsiodras, Sotirios; Parissis, John

    2017-05-01

    The interaction of influenza infection with the pathogenesis of acute heart failure (AHF) and the worsening of chronic heart failure (CHF) is rather complex. The deleterious effects of influenza infection on AHF/CHF can be attenuated by specific immunization. Our review aimed to summarize the efficacy, effectiveness, safety, and dosage of anti-influenza vaccination in HF. In this literature review, we searched MEDLINE and EMBASE from January 1st 1966 to December 31st, 2016, for studies examining the association between AHF/CHF, influenza infections, and anti-influenza immunizations. We used broad criteria to increase the sensitivity of the search. HF was a prerequisite for our search. The search fields used included "heart failure," "vaccination," "influenza," "immunization" along with variants of these terms. No restrictions on the type of study design were applied. The most common clinical scenario is exacerbation of pre-existing CHF by influenza infection. Scarce evidence supports a potential positive association of influenza infection with AHF. Vaccinated patients with pre-existing CHF have reduced all-cause morbidity and mortality, but effects are not consistently documented. Immunization with higher antigen quantity may confer additional protection, but such aggressive approach has not been generally advocated. Further studies are needed to delineate the role of influenza infection on AHF/CHF pathogenesis and maintenance. Annual anti-influenza vaccination appears to be an effective measure for secondary prevention in HF. Better immunization strategies and more efficacious vaccines are urgently necessary.

  9. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile)]; Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Many times, reliability studies rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to an erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach and sometimes an inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before selecting a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The framework discriminates between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.

  10. Contrasting roles of the ABCG2 Q141K variant in prostate cancer

    Energy Technology Data Exchange (ETDEWEB)

    Sobek, Kathryn M. [Department of Urology, University of Pittsburgh School of Medicine, Pittsburgh, PA (United States); Cummings, Jessica L. [Department of Urology, University of Pittsburgh School of Medicine, Pittsburgh, PA (United States); Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA (United States); Bacich, Dean J. [Department of Urology, University of Pittsburgh School of Medicine, Pittsburgh, PA (United States); Department of Urology, University of Texas Health Science Center, San Antonio, TX (United States); O’Keefe, Denise S., E-mail: OKeefeD@uthscsa.edu [Department of Urology, University of Pittsburgh School of Medicine, Pittsburgh, PA (United States); Department of Urology, University of Texas Health Science Center, San Antonio, TX (United States)

    2017-05-01

    ABCG2 is a membrane transport protein that effluxes growth-promoting molecules, such as folates and dihydrotestosterone, as well as chemotherapeutic agents. It is therefore important to determine how variants of ABCG2 affect transporter function, and whether modified treatment regimens may be necessary for patients harboring such variants. Previous studies have demonstrated an association between the ABCG2 Q141K variant and overall survival after a prostate cancer diagnosis. We report here that in patients with recurrent prostate cancer, those who carry the ABCG2 Q141K variant had a significantly shorter time to PSA recurrence post-prostatectomy than patients homozygous for wild-type ABCG2 (P=0.01). Transport studies showed that wild-type ABCG2 was able to efflux more folic acid than the Q141K variant (P<0.002), suggesting that retained tumoral folate contributes to the decreased time to PSA recurrence in Q141K variant patients. In a seemingly conflicting study, it was previously reported that docetaxel-treated Q141K variant prostate cancer patients have a longer survival time. We found this may be due to less efficient docetaxel efflux in cells with the Q141K variant versus wild-type ABCG2. In human prostate cancer tissues, confocal microscopy revealed that all genotypes had a mixture of cytoplasmic and plasma membrane staining, with noticeably less staining in the two homozygous KK patients. In conclusion, the Q141K variant plays contrasting roles in prostate cancer: 1) by decreasing folate efflux, it increases intracellular folate levels, enhancing tumor cell proliferation and decreasing the time to recurrence; and 2) in patients treated with docetaxel, by decreasing docetaxel efflux, it increases intratumoral docetaxel levels and tumor cell drug sensitivity, thereby increasing patient survival time. Taken together, these data suggest that a patient's ABCG2 genotype may be important when determining a personalized treatment

  11. European external quality control study on the competence of laboratories to recognize rare sequence variants resulting in unusual genotyping results.

    Science.gov (United States)

    Márki-Zay, János; Klein, Christoph L; Gancberg, David; Schimmel, Heinz G; Dux, László

    2009-04-01

    Depending on the method used, rare sequence variants adjacent to the single nucleotide polymorphism (SNP) of interest may cause unusual or erroneous genotyping results. Because such rare variants are known for many genes commonly tested in diagnostic laboratories, we organized a proficiency study to assess their influence on the accuracy of reported laboratory results. Four external quality control materials were processed and sent to 283 laboratories through 3 EQA organizers for analysis of the prothrombin 20210G>A mutation. Two of these quality control materials contained sequence variants introduced by site-directed mutagenesis. One hundred eighty-nine laboratories participated in the study. When samples gave a usual result with the method applied, the error rate was 5.1%. Detailed analysis showed that more than 70% of the failures were reported from only 9 laboratories. Allele-specific amplification-based PCR had a much higher error rate than other methods (18.3% vs 2.9%). The variants 20209C>T and [20175T>G; 20179_20180delAC] resulted in unusual genotyping results in 67 and 85 laboratories, respectively. Eighty-three (54.6%) of these unusual results were not recognized, 32 (21.1%) were attributed to technical issues, and only 37 (24.3%) were recognized as another sequence variant. Our findings revealed that some of the participating laboratories were not able to recognize and correctly interpret unusual genotyping results caused by rare SNPs. Our study indicates that the majority of the failures could be avoided by improved training and careful selection and validation of the methods applied.

  12. Time-variant coherence between heart rate variability and EEG activity in epileptic patients: an advanced coupling analysis between physiological networks

    International Nuclear Information System (INIS)

    Piper, D; Schiecke, K; Pester, B; Witte, H; Benninger, F; Feucht, M

    2014-01-01

    Time-variant coherence analysis between heart rate variability (HRV) and the channel-related envelopes of adaptively selected EEG components was used as an indicator of (correlative) couplings between the central autonomic network (CAN) and the epileptic network before, during and after epileptic seizures. Two groups of patients were investigated: a group with left and a group with right hemispheric temporal lobe epilepsy. The individual EEG components were extracted by a signal-adaptive approach, the multivariate empirical mode decomposition, and the envelope of each resulting intrinsic mode function (IMF) was computed using the Hilbert transform. Two IMFs whose envelopes were strongly correlated with the HRV's low-frequency oscillation (HRV-LF; ≈0.1 Hz) before and after the seizure were identified; the frequency ranges of these IMFs correspond to the EEG delta band. The time-variant coherence was statistically quantified, and tensor decomposition of the time-frequency coherence maps was applied to explore the topography-time-frequency characteristics of the coherence analysis. The results support the hypothesis that couplings exist between the CAN, which controls the cardiovascular-cardiorespiratory system, and the 'epileptic neural network'. Additionally, our results confirm the hypothesis of a right hemispheric lateralization of sympathetic cardiac control of the HRV-LF. (paper)
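
    A minimal sketch of the envelope-coherence idea on synthetic data (a hand-made amplitude-modulated 'delta-band IMF' stands in for the EMD output; the sampling rate and all parameters are assumptions):

```python
import numpy as np
from scipy.signal import hilbert, coherence

rng = np.random.default_rng(0)
fs = 32.0                                   # assumed common sampling rate, Hz
t = np.arange(0.0, 600.0, 1.0/fs)           # 10 minutes of data

# toy HRV with a low-frequency (LF, ~0.1 Hz) oscillation plus noise
hrv = np.sin(2*np.pi*0.1*t) + 0.3*rng.standard_normal(t.size)
# hand-made "delta-band IMF": 2 Hz carrier, amplitude-modulated at 0.1 Hz
imf = (1.0 + 0.5*np.sin(2*np.pi*0.1*t)) * np.sin(2*np.pi*2.0*t)

envelope = np.abs(hilbert(imf))             # Hilbert envelope of the component
f, Cxy = coherence(hrv, envelope, fs=fs, nperseg=2048)
idx = np.argmin(np.abs(f - 0.1))            # bin nearest the HRV-LF frequency
print(f"coherence at {f[idx]:.3f} Hz: {Cxy[idx]:.2f}")   # should be high
```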

  13. Robust Guaranteed Cost Observer Design for Singular Markovian Jump Time-Delay Systems with Generally Incomplete Transition Probability

    Directory of Open Access Journals (Sweden)

    Yanbo Li

    2014-01-01

    Full Text Available This paper is devoted to the investigation of the design of robust guaranteed cost observer for a class of linear singular Markovian jump time-delay systems with generally incomplete transition probability. In this singular model, each transition rate can be completely unknown or only its estimate value is known. Based on stability theory of stochastic differential equations and linear matrix inequality (LMI technique, we design an observer to ensure that, for all uncertainties, the resulting augmented system is regular, impulse free, and robust stochastically stable with the proposed guaranteed cost performance. Finally, a convex optimization problem with LMI constraints is formulated to design the suboptimal guaranteed cost filters for linear singular Markovian jump time-delay systems with generally incomplete transition probability.

  14. Dynamic SEP event probability forecasts

    Science.gov (United States)

    Kahler, S. W.; Ling, A.

    2015-10-01

    The forecasting of solar energetic particle (SEP) event probabilities at Earth has been based primarily on the estimates of magnetic free energy in active regions and on the observations of peak fluxes and fluences of large (≥ M2) solar X-ray flares. These forecasts are typically issued for the next 24 h or with no definite expiration time, which can be deficient for time-critical operations when no SEP event appears following a large X-ray flare. It is therefore important to decrease the event probability forecast with time as a SEP event fails to appear. We use the NOAA listing of major (≥10 pfu) SEP events from 1976 to 2014 to plot the delay times from X-ray peaks to SEP threshold onsets as a function of solar source longitude. An algorithm is derived to decrease the SEP event probabilities with time when no event is observed to reach the 10 pfu threshold. In addition, we use known SEP event size distributions to modify probability forecasts when SEP intensity increases occur below the 10 pfu event threshold. An algorithm to provide a dynamic SEP event forecast, Pd, for both situations of SEP intensities following a large flare is derived.
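
    A generic sketch of discounting an event probability as time passes without an onset, using Bayes' rule with an empirical delay-time sample (the numbers are hypothetical, not the NOAA-derived algorithm):

```python
import numpy as np

def dynamic_sep_probability(p0, t_hours, delays):
    """Discount an SEP event probability as time passes without onset:
    p0 is the initial flare-based forecast; `delays` is an empirical
    sample of flare-to-onset delay times for comparable source longitudes.
    Bayes: P(event | quiet to t) = p0*(1 - F(t)) / (1 - p0*F(t))."""
    F = np.mean(np.asarray(delays, dtype=float) <= t_hours)   # empirical CDF
    return p0 * (1.0 - F) / (1.0 - p0 * F)

delays = [2, 3, 4, 6, 8, 12, 18, 24, 30, 48]   # hypothetical sample, hours
for t in (0, 6, 12, 24, 48):
    print(t, round(dynamic_sep_probability(0.4, t, delays), 3))
```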

  15. Some possible causes and probability of leakages in LMFBR steam generators

    International Nuclear Information System (INIS)

    Bolt, P.R.

    1984-01-01

    Relevant operational experience with steam generators for process and conventional plant and thermal and fast reactors is reviewed. Possible causes of water/steam leakages into sodium/gas are identified and data is given on the conditions necessary for failure, leakage probability and type of leakage path. (author)

  16. DETERMINATION OF THE RESIDUAL OPERATING TIME OF AN UNRESTORABLE ELEMENT OF THE ELECTRIC POWER OBJECT UNDER THE WEIBULL DISTRIBUTION

    International Nuclear Information System (INIS)

    Namgaladze, D.; Gurgenidze, D.

    2007-01-01

    In practice, it is often essential to determine the residual operating time of an unrestorable element of an electric power object that has operated without failure for a certain time. The probability density of the residual operating time can be determined from the initial probability distribution of the operating time. In this work, relations for determining the residual operating time of an unrestorable element are derived analytically for the exponential and Weibull distributions. (author)
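
    A minimal sketch of the Weibull case, using the conditional reliability R(x | t) = exp((t/eta)^beta - ((t+x)/eta)^beta) and integrating it numerically for the mean residual life (parameters are illustrative):

```python
import numpy as np

def residual_reliability(x, t, beta=1.8, eta=10000.0):
    """P(survive a further x hours | survived t hours) for a Weibull
    element: R(x | t) = exp((t/eta)**beta - ((t + x)/eta)**beta)."""
    return np.exp((t/eta)**beta - ((t + x)/eta)**beta)

def mean_residual_life(t, beta=1.8, eta=10000.0, x_max=2e5, n=20001):
    """Trapezoid integral of R(x | t) over x, the mean residual life."""
    x = np.linspace(0.0, x_max, n)
    r = residual_reliability(x, t, beta, eta)
    return float(np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(x)))

print(residual_reliability(1000.0, 5000.0))   # survive 1000 h more after 5000 h
print(mean_residual_life(5000.0))             # expected residual operating time, h
```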

  17. High-Temperature Graphitization Failure of Primary Superheater Tube

    Science.gov (United States)

    Ghosh, D.; Ray, S.; Roy, H.; Mandal, N.; Shukla, A. K.

    2015-12-01

    Failure of boiler tubes is the main cause of unit outages in a plant, affecting the reliability, availability and safety of the unit. Failure analysis of boiler tubes is therefore essential to identify the root cause of a failure so that remedial steps can be taken to prevent recurrence. This paper investigates the probable cause or causes of failure of a primary superheater tube in a thermal power plant boiler. Visual inspection, dimensional measurement, chemical analysis, metallographic examination and hardness measurement were conducted as part of the investigative studies, supplemented by mechanical testing and fractographic analysis. It is concluded that the superheater tube failed due to graphitization resulting from prolonged exposure of the tube to elevated temperature.

  18. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    The human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are in use, and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events and applies it to estimate the human error probability for the filtered containment venting system (FCVS). The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action; the distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for human error estimation and can be applied to any kind of operator action, including severe accident management strategies.
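
    A minimal sketch of the underlying estimate, P(error) = P(T_required > T_available), with illustrative lognormals standing in for the MAAP/LHS-derived distributions:

```python
import numpy as np

rng = np.random.default_rng(42)

def human_error_probability(n=100_000):
    """HEP = P(time required > time available) for the venting action.
    Illustrative lognormals stand in for the MAAP/LHS-derived inputs."""
    t_available = rng.lognormal(mean=np.log(120.0), sigma=0.3, size=n)  # min
    t_required = rng.lognormal(mean=np.log(70.0), sigma=0.5, size=n)    # min
    return float(np.mean(t_required > t_available))

print(human_error_probability())   # roughly 0.18 with these toy inputs
```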

  19. Machine learning in heart failure: ready for prime time.

    Science.gov (United States)

    Awan, Saqib Ejaz; Sohel, Ferdous; Sanfilippo, Frank Mario; Bennamoun, Mohammed; Dwivedi, Girish

    2018-03-01

    The aim of this review is to present an up-to-date overview of the application of machine learning methods in heart failure including diagnosis, classification, readmissions and medication adherence. Recent studies have shown that the application of machine learning techniques may have the potential to improve heart failure outcomes and management, including cost savings by improving existing diagnostic and treatment support systems. Recently developed deep learning methods are expected to yield even better performance than traditional machine learning techniques in performing complex tasks by learning the intricate patterns hidden in big medical data. The review summarizes the recent developments in the application of machine and deep learning methods in heart failure management.

  20. A new method for explicit modelling of single failure event within different common cause failure groups

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Redundancy and diversity are the main principles of safety systems in the nuclear industry. Implementing redundancy in safety components has been acknowledged as an effective approach for assuring high levels of system reliability. The existence of redundant components, identical in most cases, implies a probability of their simultaneous failure due to a shared cause: a common cause failure. This paper presents a new method for explicitly modelling a single component failure event within multiple common cause failure groups simultaneously. The method is based on a modification of the frequently utilised Beta Factor parametric model. The motivation for developing this method lies in the fact that one of the most widespread software tools for fault tree and event tree modelling within probabilistic safety assessment does not offer the option of simultaneously assigning a single failure event to multiple common cause failure groups. In that sense, the proposed method can be seen as an advantage of the explicit modelling of common cause failures. A standard standby safety system is selected as a case study for the application and study of the proposed methodology. The results and insights imply improved, more transparent and more comprehensive models within probabilistic safety assessment.
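
    A minimal sketch of the bookkeeping involved, splitting a component's total failure probability between an independent part and several CCF groups via per-group beta fractions (an illustration of the idea, not the authors' exact formulation):

```python
def split_failure_probability(q_total, betas):
    """Partition a component's total failure probability between an
    independent part and several CCF groups using per-group beta
    fractions, in the spirit of a modified Beta Factor model."""
    assert sum(betas.values()) < 1.0, "betas must leave an independent share"
    ccf_parts = {group: q_total * b for group, b in betas.items()}
    q_independent = q_total * (1.0 - sum(betas.values()))
    return q_independent, ccf_parts

# hypothetical component belonging to two CCF groups at once
q_ind, ccf = split_failure_probability(1e-3, {"ccf_pumps": 0.05, "ccf_valves": 0.02})
print(q_ind, ccf)   # 9.3e-4 independent, plus the two group contributions
```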