Reliability analysis of software based safety functions
Pulkkinen, U.
1993-05-01
The methods applicable to the reliability analysis of software-based safety functions are described in the report. Although the safety functions also include other components, the main emphasis in the report is on the reliability analysis of software. Checklist-type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as software fault tree analysis. Safety analysis based on Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that recent software reliability models do not directly produce the estimates needed in PSA. As a result of the study, some recommendations and conclusions are drawn. Among them are the need for formal methods in the analysis and development of software-based systems, the applicability of qualitative reliability engineering methods in connection with PSA, and the need to make the requirements for software-based systems, and for their analyses in the regulatory guides, more precise. (orig.). (46 refs., 13 figs., 1 tab.)
The effect of loss functions on empirical Bayes reliability analysis
Camara Vincent A. R.
1998-01-01
Full Text Available The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function with respect to the choice of loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the use of the "popular" squared error loss function, we derive expressions for empirical Bayes reliability estimates obtained with the Higgins-Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, is used as a criterion to numerically compare our results. It is shown that empirical Bayes reliability functions are in general sensitive to the choice of the loss function, and that the squared error loss does not always yield the best empirical Bayes reliability estimate.
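The loss-function sensitivity described above can be sketched for the simplest conjugate case. Assuming an exponential lifetime with a gamma posterior for the failure rate (all numbers hypothetical, and a plug-in estimator standing in for the alternative loss functions named in the abstract), the squared-error-optimal estimate already differs from the plug-in one:

```python
import math

# Gamma posterior for the exponential failure rate: shape a, rate b (hypothetical)
a, b = 5.0, 100.0
t = 50.0            # mission time

# Squared-error loss -> Bayes estimate is the posterior mean of R(t) = E[exp(-lam*t)]
# For lam ~ Gamma(a, b): E[exp(-lam*t)] = (b / (b + t))**a
r_squared_error = (b / (b + t)) ** a

# Plug-in estimate using the posterior mean rate: exp(-E[lam] * t)
r_plug_in = math.exp(-(a / b) * t)

print(r_squared_error, r_plug_in)
# By Jensen's inequality the squared-error estimate always exceeds the plug-in one,
# illustrating that the estimator depends on the loss criterion chosen.
```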
LIF: A new Kriging based learning function and its application to structural reliability analysis
Sun, Zhili; Wang, Jian; Li, Rui; Tong, Cao
2017-01-01
The main task of structural reliability analysis is to estimate the failure probability of a studied structure, taking the randomness of input variables into account. To represent structural behavior realistically, numerical models have become more and more complicated and time-consuming, which increases the difficulty of reliability analysis. Therefore, sequential design-of-experiment (DoE) strategies have been proposed. In this research, a new learning function, named the least improvement function (LIF), is proposed to update the DoE of Kriging-based reliability analysis methods. LIF quantifies how much the accuracy of the estimated failure probability will improve if a given point is added to the DoE. It takes both the statistical information provided by the Kriging model and the joint probability density function of the input variables into account, which is the most important difference from existing learning functions. The maximum point of LIF is approximately determined with Markov chain Monte Carlo (MCMC) simulation. A new reliability analysis method is developed based on the Kriging model, in which LIF, MCMC and Monte Carlo (MC) simulation are employed. Three examples are analyzed. Results show that LIF and the new method proposed in this research are very efficient when dealing with nonlinear performance functions, small failure probabilities, complicated limit states and engineering problems of high dimension. - Highlights: • Least improvement function (LIF) is proposed for structural reliability analysis. • LIF takes both Kriging based statistical information and joint PDF into account. • A reliability analysis method is constructed based on Kriging, MCS and LIF.
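A minimal sketch of the sequential-DoE idea, substituting the widely used U learning function for LIF (LIF additionally weights the Kriging uncertainty by the joint input PDF) and using a made-up one-dimensional performance function; length-scale, sample sizes and the initial design are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical performance function: failure when g(x) < 0, i.e. |x| > 2."""
    return 4.0 - x**2

def kriging(X, y, Xs, ls=1.0, nug=1e-8):
    """Simple noise-free Kriging (Gaussian process) predictor with an RBF kernel."""
    K = np.exp(-0.5 * (X[:, None] - X[None, :])**2 / ls**2) + nug * np.eye(len(X))
    Ks = np.exp(-0.5 * (Xs[:, None] - X[None, :])**2 / ls**2)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = np.maximum(1.0 - np.einsum('ij,ji->i', Ks, v), 1e-12)
    return mu, var

cand = rng.normal(0.0, 1.5, 10_000)          # Monte Carlo population of the input
X = np.array([-4.0, -2.5, 0.0, 2.5, 4.0])    # initial design of experiments (DoE)
y = g(X)

for _ in range(8):                           # sequential DoE enrichment
    mu, var = kriging(X, y, cand)
    u = np.abs(mu) / np.sqrt(var)            # U function: low value = uncertain sign
    xnew = cand[np.argmin(u)]
    X, y = np.append(X, xnew), np.append(y, g(xnew))

mu, _ = kriging(X, y, cand)
pf = float(np.mean(mu < 0.0))                # failure probability from surrogate signs
print(f"estimated failure probability: {pf:.3f}")
```

With only 13 evaluations of g, the surrogate classifies the Monte Carlo population closely enough to approach the exact value P(|x| > 2) ≈ 0.182 for x ~ N(0, 1.5²).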
Power electronics reliability analysis.
Smith, Mark A.; Atcitty, Stanley
2009-12-01
This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
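The fault-tree approach mentioned above reduces, for independent basic events, to combining component probabilities through AND/OR gates. A sketch with hypothetical unavailabilities for a fictitious converter (the gate structure and numbers are invented, not taken from the report):

```python
def p_and(*ps):
    """Probability that all independent basic events occur (AND gate)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """Probability that at least one independent basic event occurs (OR gate)."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# hypothetical component unavailabilities
p_fan, p_cap_a, p_cap_b = 1e-3, 5e-4, 5e-4

# top event: converter fails if the fan fails OR both redundant capacitor banks fail
p_top = p_or(p_fan, p_and(p_cap_a, p_cap_b))
print(p_top)
```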
Mechanical system reliability analysis using a combination of graph theory and Boolean function
Tang, J.
2001-01-01
A new method based on graph theory and Boolean functions for assessing the reliability of mechanical systems is proposed. The procedure for this approach consists of two parts. Using graph theory, a formula for the reliability of a mechanical system that considers the interrelations of subsystems or components is generated. Using Boolean functions to examine the failure interactions of two particular elements of the system, and demonstrating how to incorporate such failure dependencies into the analysis of larger systems, a constructive algorithm for quantifying the genuine interconnections between the subsystems or components is provided. The combination of graph theory and Boolean functions provides an effective way to evaluate the reliability of a large, complex mechanical system. A numerical example demonstrates that this method is an effective approach to system reliability analysis
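The Boolean structure-function idea can be illustrated by exact enumeration over component states. The five-component system, its path sets and the component reliabilities below are all invented for illustration:

```python
from itertools import product

def phi(x):
    """Boolean structure function of a hypothetical system:
    works if (1 and 2) or (3 and 4) or (1 and 5 and 4)."""
    return (x[0] & x[1]) | (x[2] & x[3]) | (x[0] & x[4] & x[3])

p = [0.9, 0.8, 0.85, 0.95, 0.7]   # independent component reliabilities

# exact system reliability by enumerating all 2^5 component state vectors
r = 0.0
for states in product([0, 1], repeat=5):
    if phi(states):
        w = 1.0
        for xi, pi in zip(states, p):
            w *= pi if xi else (1.0 - pi)
        r += w
print(round(r, 6))
```

For larger systems the same structure function would be evaluated by Monte Carlo or by minimal path/cut set bounds rather than full enumeration.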
Analysis and Application of Reliability
Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju
1999-05-01
This book covers the analysis and application of reliability, including the definition, importance and historical background of reliability; the reliability function and failure rate; life distributions and reliability assumptions; reliability of unrepaired systems; reliability of repairable systems; reliability sampling tests; failure analysis, such as FMEA and FTA, with cases; accelerated life testing, including basic concepts, acceleration and acceleration factors, and analysis of accelerated life testing data; and maintenance policies concerning replacement and inspection.
Reliability analysis and component functional allocations for the ESF multi-loop controller design
Hur, Seop; Kim, D.H.; Choi, J.K.; Park, J.C.; Seong, S.H.; Lee, D.Y.
2006-01-01
This paper deals with reliability analysis and component functional allocation to ensure enhanced system reliability and availability. In the Engineered Safety Features, functionally dependent components are controlled by a multi-loop controller. The system reliability of the Engineered Safety Features-Component Control System, especially of the multi-loop controller, which differs from conventional controllers, is an important factor for Probabilistic Safety Assessment in the nuclear field. To evaluate the multi-loop controller's failure rate as a k-out-of-m redundant system, the binomial process is used. In addition, the component functional allocation is performed to tolerate a single multi-loop controller failure without loss of vital operation within the constraints of the piping and component configuration, and to ensure that mechanically redundant components remain functional. (author)
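The binomial evaluation of a k-out-of-m redundant group mentioned above can be sketched as follows; the channel reliability is a made-up figure, not taken from the paper:

```python
from math import comb

def k_out_of_m_reliability(k, m, p):
    """Probability that at least k of m identical, independent
    channels (each with reliability p) are working."""
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

# hypothetical channel reliability for a 2-out-of-3 controller group
p = 0.99
print(k_out_of_m_reliability(2, 3, p))
```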
Dougherty, E.M.; Fragola, J.R.
1988-01-01
The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. The book draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It provides a history of human reliability analysis and includes examples of the application of the systems approach
Multinomial-exponential reliability function: a software reliability model
Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara
2003-01-01
The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction process. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP) and the repair (correction) process as a multinomial distribution. The model assumes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis
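MERF itself is not reproduced here, but the NHPP view of the failure process can be sketched with the standard Goel-Okumoto mean value function (the parameters are hypothetical, as if fitted to failure data):

```python
import math

def m(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative failures by time t,
    with a = expected total failures and b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def reliability(t, x, a, b):
    """Probability of no failure in (t, t+x] for an NHPP:
    R = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(m(t + x, a, b) - m(t, a, b)))

a, b = 120.0, 0.02     # hypothetical fitted parameters
print(reliability(100.0, 10.0, a, b))
```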
Reliability analysis of shutdown system
Kumar, C. Senthil; John Arul, A.; Pal Singh, Om; Suryaprakasa Rao, K.
2005-01-01
This paper presents the results of the reliability analysis of the Shutdown System (SDS) of the Indian Prototype Fast Breeder Reactor. Reliability analysis carried out using fault tree analysis predicts a value of 3.5 x 10⁻⁸ per demand for failure of the shutdown function in the case of global faults and 4.4 x 10⁻⁸ per demand for local faults. Based on 20 demands per year, the frequency of shutdown function failure is 0.7 x 10⁻⁶ per reactor-year, which meets the reliability target set by the Indian Atomic Energy Regulatory Board. The reliability is limited by common cause failure (CCF) of the actuation part of the SDS and, to a lesser extent, CCF of electronic components. The failure frequency of the individual systems is of the order of 10⁻³ per reactor-year, which also meets the safety criteria. Uncertainty analysis indicates a maximum error factor of 5 for the top event unavailability
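The per-reactor-year figure quoted above follows directly from the per-demand probability and the assumed 20 demands per year:

```python
p_global = 3.5e-8        # failure probability per demand, global faults (from the paper)
demands_per_year = 20

freq = p_global * demands_per_year   # 0.7e-6 per reactor-year, the figure quoted above
print(freq)
```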
Regression analysis of the structure function for reliability evaluation of continuous-state system
Gamiz, M.L.; Martinez Miranda, M.D.
2010-01-01
Technical systems are designed to perform an intended task with an admissible range of efficiency. Accordingly, it is permissible that the system runs among different levels of performance, in addition to complete failure and perfect functioning. As a consequence, reliability theory has evolved from binary-state systems to the most general case of continuous-state systems, in which the state of the system changes over time through some interval on the real number line. In this context, obtaining an expression for the structure function becomes difficult compared to the discrete case, with the difficulty increasing as the number of components of the system increases. In this work, we propose a method to build the structure function of a continuous-state system by using multivariate nonparametric regression techniques, in which certain analytical restrictions on the variable of interest must be taken into account. Once the structure function is obtained, some reliability indices of the system are estimated. We illustrate our method via several numerical examples.
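The regression idea can be sketched with a Nadaraya-Watson kernel estimator; the two-component "true" structure function, noise level and bandwidth below are all invented for illustration:

```python
import math
import random

random.seed(1)

def true_structure(x1, x2):
    """Hypothetical continuous structure function: system performance as a
    smooth, increasing function of two component states in [0, 1]."""
    return 0.5 * x1 + 0.3 * x2 + 0.2 * x1 * x2

# noisy observations of system performance at random component states
data = []
for _ in range(400):
    x1, x2 = random.random(), random.random()
    data.append((x1, x2, true_structure(x1, x2) + random.gauss(0.0, 0.02)))

def nw_estimate(x1, x2, h=0.15):
    """Nadaraya-Watson kernel regression estimate of the structure function."""
    num = den = 0.0
    for u1, u2, y in data:
        w = math.exp(-((x1 - u1)**2 + (x2 - u2)**2) / (2 * h * h))
        num += w * y
        den += w
    return num / den

print(nw_estimate(0.5, 0.5))   # true value is 0.45
```

A real application would additionally enforce the monotonicity restrictions mentioned in the abstract, which plain kernel regression does not guarantee.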
Multidisciplinary System Reliability Analysis
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin
2015-03-01
Reliability allocation for computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high failure rate components rather than moderate failure rate components, which is not applicable in some conditions. Aiming to solve the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponentially transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with those of traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
Waste package reliability analysis
Pescatore, C.; Sastre, C.
1983-01-01
Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table
Lee, Kangjoo; Lina, Jean-Marc; Gotman, Jean; Grova, Christophe
2016-07-01
Functional hubs are defined as the specific brain regions with dense connections to other regions in a functional brain network. Among them, connector hubs are of great interest, as they are assumed to promote global and hierarchical communication between functionally specialized networks. Damage to connector hubs may have a more crucial effect on the system than damage to other hubs. Hubs in graph theory are often identified from a correlation matrix, and classified as connector hubs when the hubs are more connected to regions in other networks than within the networks to which they belong. However, the identification of hubs from functional data is more complex than from structural data, notably because of the inherent problem of multicollinearity between temporal dynamics within a functional network. In this context, we developed and validated a method to reliably identify connectors and the corresponding overlapping network structure from resting-state fMRI. This new method actually handles the multicollinearity issue, since it does not rely on counting the number of connections from a thresholded correlation matrix. The novelty of the proposed method is that besides counting the number of networks involved in each voxel, it allows us to identify which networks are actually involved in each voxel, using a data-driven sparse general linear model in order to identify brain regions involved in more than one network. Moreover, we added a bootstrap resampling strategy to assess statistically the reproducibility of our results at the single-subject level. The unified framework is called SPARK, i.e. SParsity-based Analysis of Reliable k-hubness, where k-hubness denotes the number of networks overlapping in each voxel. The accuracy and robustness of SPARK were evaluated using two-dimensional box simulations and realistic simulations that examined detection of artificial hubs generated on real data. Then, the test/retest reliability of the method was assessed
Integrated system reliability analysis
Gintautas, Tomas; Sørensen, John Dalsgaard
Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...
Structural systems reliability analysis
Frangopol, D.
1975-01-01
For an exact evaluation of the reliability of a structure, it is necessary to determine the distribution densities of the loads and resistances and to calculate the correlation coefficients between loads and between resistances. These statistical characteristics can be obtained only on the basis of a long activity period. Where such studies are missing, the statistical properties formulated here give upper and lower bounds on the reliability. (orig./HP) [de]
Improvement of standards on functional reliability of electric power systems
Barinov, V.A.; Volkov, G.A.; Kalita, V.V.; Kogan, F.L.; Makarov, S.F.; Manevich, A.S.; Mogirev, V.V.; Sin'chugov, F.I.; Skopintsev, V.A.; Khvoshchinskaya, Z.G.
1993-01-01
An analysis of the principal aspects of the existing standards and requirements for assuring the safety and stability of electric power systems (EPS) and effective (reliable and economical) power supply of consumers is given. Reliability is defined as the ability to accomplish the assigned functions. Basic recommendations for improving the standards regulating the safety and reliability of NPP functioning are formulated
A reliability simulation language for reliability analysis
Deans, N.D.; Miller, A.J.; Mann, D.P.
1986-01-01
The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)
Scyllac equipment reliability analysis
Gutscher, W.D.; Johnson, K.J.
1975-01-01
Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional down time. The cable-tip problem may not be easy or even desirable to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The balance of the failures have such a low occurrence rate that they do not cause much down time and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace
Integrating reliability analysis and design
Rasmuson, D.M.
1980-10-01
This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems
Uppuluri, V.R.R.
1979-01-01
Mathematical foundations of risk analysis are addressed. The importance of having the same probability space in order to compare different experiments is pointed out. Then the following topics are discussed: consequences as random variables with infinite expectations; the phenomenon of rare events; series-parallel systems and different kinds of randomness that could be imposed on such systems; and the problem of consensus of estimates of expert opinion
System Reliability Analysis Considering Correlation of Performances
Kim, Saekyeol; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Lim, Woochul [Mando Corporation, Seongnam (Korea, Republic of)
2017-04-15
Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may arise from tolerances of design variables, uncertainties of noise, environmental factors, and material properties. In most previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system, which may lead to a difference between the reliability of the entire system and the reliability of the individual performances. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula, which takes into account the correlation between the performances of the system. The system reliability is defined as the integral of the joint PDF of the performances and is compared with the individual reliability of each performance through mathematical examples and a two-bar truss example.
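The effect of performance correlation on series-system reliability can be sketched with the Gaussian-copula special case, here generated directly from a correlated bivariate normal (the margins, failure threshold and correlation are made up):

```python
import random

random.seed(7)

rho = 0.8          # correlation between the two standardized performance margins
n = 200_000
f1 = f2 = f12 = 0
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    # correlated second margin via the Cholesky factor of a 2x2 correlation matrix
    z2 = rho * z1 + (1.0 - rho**2) ** 0.5 * random.gauss(0.0, 1.0)
    a = z1 < -2.0   # performance 1 fails
    b = z2 < -2.0   # performance 2 fails
    f1 += a
    f2 += b
    f12 += a or b   # series system fails if either performance fails

r1, r2 = 1.0 - f1 / n, 1.0 - f2 / n
r_indep = r1 * r2            # product rule: ignores the correlation
r_joint = 1.0 - f12 / n      # reliability from the joint (copula) model
print(r_indep, r_joint)
```

Positive correlation makes joint failures more likely to coincide, so the true system reliability exceeds the independence approximation, which is exactly the discrepancy the abstract points out.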
Reliability analysis and operator modelling
Hollnagel, Erik
1996-01-01
The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed
On Bayesian System Reliability Analysis
Soerensen Ringi, M
1995-05-01
The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs.
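The Bayesian updating of reliability as the state of knowledge changes can be sketched with the standard gamma-exponential conjugate pair (prior and data are hypothetical):

```python
# Conjugate Bayesian update for an exponential failure rate:
# prior lam ~ Gamma(a0, b0); after observing n failures in total operating
# time T, the posterior is lam ~ Gamma(a0 + n, b0 + T).
a0, b0 = 2.0, 1000.0      # hypothetical prior (shape, rate in hours)
n, T = 3, 5000.0          # observed failure data

a1, b1 = a0 + n, b0 + T
posterior_mean_rate = a1 / b1

# posterior predictive reliability over a mission of t hours:
# R(t) = E[exp(-lam*t)] = (b1 / (b1 + t))**a1
t = 100.0
reliability = (b1 / (b1 + t)) ** a1
print(posterior_mean_rate, reliability)
```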
Reliability Analysis of Wind Turbines
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2008-01-01
In order to minimise the total expected life-cycle costs of a wind turbine, it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS), the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered, in which the design is determined according...
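For a single linear limit state with independent normal load and resistance, the reliability level reduces to the classical reliability index; the tower-bending numbers below are purely illustrative, not taken from the paper:

```python
import math
from statistics import NormalDist

# Linear limit state g = R - S with independent normal resistance R and load effect S
mu_r, sd_r = 420.0, 35.0    # hypothetical tower bending resistance (kNm)
mu_s, sd_s = 260.0, 45.0    # hypothetical extreme load effect (kNm)

beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)   # reliability index
pf = NormalDist().cdf(-beta)                    # failure probability P(g < 0)
print(beta, pf)
```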
Reliability analysis in intelligent machines
Mcinroy, John E.; Saridis, George N.
1990-01-01
Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.
RELIABILITY ANALYSIS OF BENDING ...
eobe
Reliability analysis of the safety levels of the criteria slabs has been .... was also noted [2] that if the risk level β < 3.1 ... reliability analysis. A study [6] has shown that all geometric variables ...
Reliability analysis under epistemic uncertainty
Nannapaneni, Saideep; Mahadevan, Sankaran
2016-01-01
This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
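The single-loop treatment of aleatory and epistemic uncertainty described above can be sketched with a toy limit state in which the mean of the load is itself uncertain (epistemic) while the load and capacity vary randomly (aleatory). The distributions below are invented placeholders; the paper's likelihood-based representation and Gaussian-process model-error terms are omitted.

```python
import random

random.seed(0)

def failure_probability(n=100_000):
    """Single-loop Monte Carlo: each sample draws the epistemic parameter
    (the uncertain mean of the load) together with the aleatory variables,
    so both uncertainty types are propagated in one pass."""
    failures = 0
    for _ in range(n):
        mu_load = random.gauss(10.0, 0.5)   # epistemic: uncertain mean load
        load = random.gauss(mu_load, 2.0)   # aleatory load variability
        capacity = random.gauss(20.0, 1.5)  # aleatory resistance
        if capacity - load < 0.0:           # limit state g < 0 means failure
            failures += 1
    return failures / n

pf = failure_probability()
```

The auxiliary-variable idea in the paper plays the same role as sampling `mu_load` here: it flattens the nested epistemic/aleatory loops into one.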
Reliability analysis techniques in power plant design
Chang, N.E.
1981-01-01
An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)
Reliability Analysis of a Steel Frame
M. Sýkora
2002-01-01
A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee on Structural Safety (JCSS) are used for basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time-invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time-variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The IPE 330 cross-section designed according to Eurocodes therefore seems to be adequate. It appears that the time-invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time-variant analysis.
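The reliability index β reported above maps to a failure probability through the standard normal distribution, β = -Φ⁻¹(p_f). A small sketch of the conversion:

```python
from statistics import NormalDist

def beta_from_pf(pf):
    """Reliability index from failure probability: beta = -Phi^{-1}(pf)."""
    return -NormalDist().inv_cdf(pf)

def pf_from_beta(beta):
    """Failure probability from reliability index: pf = Phi(-beta)."""
    return NormalDist().cdf(-beta)

pf_low = pf_from_beta(3.95)   # lower end of the beta range reported above
pf_high = pf_from_beta(5.56)  # upper end
```

So the reported β range of 3.95 to 5.56 corresponds to lifetime failure probabilities of roughly 4e-5 down to about 1e-8.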
A reliability analysis tool for SpaceWire network
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and a basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of SpaceWire networks. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the function division of a distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to a system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in the matrix. With this method, we develop a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path task reliability are also implemented. Using this tool, we analyze several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, the dual redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool should have a direct influence on both task division and topology selection in the design phase of SpaceWire network systems.
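For simple topologies, the task-based composition of reliabilities described above reduces to series and parallel combinations, which is also why the redundant architecture outperforms the basic one. A minimal sketch with hypothetical per-link values:

```python
def series(*rs):
    """All units on the path must work: product of reliabilities."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """Redundant units: the group fails only if every unit fails."""
    q = 1.0
    for r in rs:
        q *= 1.0 - r
    return 1.0 - q

# Hypothetical three-link task path, each link with reliability 0.99.
r_basic = series(0.99, 0.99, 0.99)
# Same path with the first link duplicated (dual redundancy).
r_redundant = series(parallel(0.99, 0.99), 0.99, 0.99)
```

Duplicating the weakest links raises the task reliability, mirroring the dual-redundancy conclusion in the abstract.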
Human Reliability Analysis: session summary
Hall, R.E.
1985-01-01
The use of Human Reliability Analysis (HRA) to identify and resolve human factors issues has significantly increased over the past two years. Today, utilities, research institutions, consulting firms, and the regulatory agency have found a common application of HRA tools and Probabilistic Risk Assessment (PRA). The ''1985 IEEE Third Conference on Human Factors and Power Plants'' devoted three sessions to the discussion of these applications and a review of the insights so gained. This paper summarizes the three sessions and presents the common conclusions that were discussed during the meeting. The paper concludes that session participants supported the use of an adequately documented ''living PRA'' to address human factors issues in design and procedural changes, regulatory compliance, and training, and that the techniques can produce cost-effective qualitative results that are complementary to more classical human factors methods.
Fundamentals and applications of systems reliability analysis
Boesebeck, K.; Heuser, F.W.; Kotthoff, K.
1976-01-01
The lecture gives a survey on the application of methods of reliability analysis to assess the safety of nuclear power plants. Possible statements of reliability analysis in connection with specifications of the atomic licensing procedure are especially dealt with. Existing specifications of safety criteria are additionally discussed with the help of reliability analysis by the example of the reliability analysis of a reactor protection system. Beyond the limited application to single safety systems, the significance of reliability analysis for a closed risk concept is explained in the last part of the lecture. (orig./LH) [de
Sensitivity analysis in optimization and reliability problems
Castillo, Enrique; Minguez, Roberto; Castillo, Carmen
2008-01-01
The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods
Functional Analysis
Kantorovich, L V
1982-01-01
Functional Analysis examines trends in functional analysis as a mathematical discipline and the ever-increasing role played by its techniques in applications. The theory of topological vector spaces is emphasized, along with the applications of functional analysis to applied analysis. Some topics of functional analysis connected with applications to mathematical economics and control theory are also discussed. Comprised of 18 chapters, this book begins with an introduction to the elements of the theory of topological spaces, the theory of metric spaces, and the theory of abstract measure space
Developing safety performance functions incorporating reliability-based risk measures.
Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek
2011-11-01
Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions.
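The probability of non-compliance used above is the probability that the demand sight distance exceeds the supply sight distance for the limit state. A crude Monte Carlo sketch of P(nc); the paper itself uses FORM, and the distributions below are invented placeholders, not the study's data:

```python
import random

random.seed(1)

def prob_noncompliance(n=50_000):
    """P(nc) = P(available sight distance < stopping sight distance),
    estimated by direct Monte Carlo with illustrative normal models."""
    count = 0
    for _ in range(n):
        supply = random.gauss(160.0, 15.0)  # available sight distance (m)
        demand = random.gauss(130.0, 12.0)  # stopping sight distance (m)
        if supply < demand:
            count += 1
    return count / n

pnc = prob_noncompliance()
```

In the paper this quantity then enters the negative binomial safety performance function as a covariate alongside the usual geometric variables.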
Reliability analysis in interdependent smart grid systems
Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong
2018-06-01
Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, studying the underlying network model, their interactions and relationships, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. Based on percolation theory, we also study the effect of cascading failures and provide a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of the proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse. We also determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
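The percolation-style analysis described above, measuring the giant functioning component after random node failures, can be sketched on a single (non-interdependent) random graph; the threshold behaviour is visible even in this simplified setting. Graph size and parameters are illustrative.

```python
import random
from collections import deque

random.seed(2)

def giant_fraction(n, k_avg, p_remove):
    """Largest connected component (as a fraction of n) of an
    Erdos-Renyi graph with mean degree k_avg, after removing a
    random fraction p_remove of the nodes."""
    p_edge = k_avg / (n - 1)
    alive = [random.random() >= p_remove for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if alive[i] and alive[j] and random.random() < p_edge:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = [False] * n, 0
    for s in range(n):
        if alive[s] and not seen[s]:
            seen[s], size, q = True, 0, deque([s])
            while q:                       # BFS over one component
                u = q.popleft()
                size += 1
                for v in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        q.append(v)
            best = max(best, size)
    return best / n

low = giant_fraction(1000, 4.0, 0.1)   # light attack: giant component survives
high = giant_fraction(1000, 4.0, 0.9)  # heavy attack: network fragments
```

Interdependent networks, as in the paper, fragment even faster because failures cascade between the coupled layers.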
Stochastic reliability analysis using Fokker Planck equations
Hari Prasad, M.; Rami Reddy, G.; Srividya, A.; Verma, A.K.
2011-01-01
The Fokker-Planck equation describes the time evolution of the probability density function of the velocity of a particle, and can be generalized to other observables as well. It is also known as the Kolmogorov forward equation (diffusion). Hence, for any process which evolves with time, the probability density function as a function of time can be represented with a Fokker-Planck equation. In stochastic reliability analysis one is more interested in finding the reliability or failure probability of components or structures as a function of time rather than instantaneous failure probabilities. In this analysis the variables are represented with random processes instead of random variables. A random process can be either stationary or non-stationary. If the random process is stationary, the failure probability does not change with time, whereas in the case of non-stationary processes the failure probability changes with time. In the present paper Fokker-Planck equations have been used to find the probability density function of non-stationary random processes. A flow chart is provided which describes the step-by-step process for carrying out stochastic reliability analysis using Fokker-Planck equations. As a first step one has to identify the failure function as a function of random processes. Then one has to solve the Fokker-Planck equation for each random process. In this paper the Fokker-Planck equation has been solved using the finite difference method. As a result one gets the probability density values of the random process in the sample space as well as the time space. Later, at each time step, an appropriate probability distribution has to be identified based on the available probability density values. For checking the fitness of the data, the Kolmogorov-Smirnov goodness-of-fit test has been performed. In this way one can find the distribution of the random process at each time step. Once one has the probability distribution...
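The finite-difference solution step mentioned above can be sketched for the simplest constant-drift, constant-diffusion Fokker-Planck equation, dp/dt = -mu * dp/dx + D * d2p/dx2, using an explicit scheme on a 1-D grid. Grid spacing, coefficients and time step are illustrative and chosen to satisfy the explicit-scheme stability limits.

```python
# Explicit finite differences for dp/dt = -mu*dp/dx + D*d2p/dx2
# on x in [-5, 5], starting from a spike at x = 0.
nx, dx, dt = 201, 0.05, 0.0005
mu, D = 1.0, 0.1

p = [0.0] * nx
p[nx // 2] = 1.0 / dx                 # initial density: unit mass in one cell

for _ in range(2000):                 # advance to t = 1.0
    new = p[:]
    for i in range(1, nx - 1):
        adv = -mu * (p[i + 1] - p[i - 1]) / (2 * dx)          # drift term
        dif = D * (p[i + 1] - 2 * p[i] + p[i - 1]) / dx ** 2  # diffusion term
        new[i] = p[i] + dt * (adv + dif)
    p = new

mass = sum(p) * dx                    # total probability, should stay near 1
mean = sum((i - nx // 2) * dx * p[i] for i in range(nx)) * dx
```

At t = 1 the density should have drifted to a mean of mu * t = 1.0 while conserving probability, which gives a quick sanity check on the discretization.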
Cost analysis of reliability investigations
Schmidt, F.
1981-01-01
Taking Epsteins testing theory as a basis, premisses are formulated for the selection of cost-optimized reliability inspection plans. Using an example, the expected testing costs and inspection time periods of various inspection plan types, standardized on the basis of the exponential distribution, are compared. It can be shown that sequential reliability tests usually involve lower costs than failure or time-fixed tests. The most 'costly' test is to be expected with the inspection plan type NOt. (orig.) [de
Reliability analysis of containment isolation systems
Pelto, P.J.; Counts, C.A.
1984-06-01
The Pacific Northwest Laboratory (PNL) is reviewing available information on containment systems design, operating experience, and related research as part of a project being conducted by the Division of Systems Integration, US Nuclear Regulatory Commission. The basic objective of this work is to collect and consolidate data relevant to assessing the functional performance of containment isolation systems and to use this data to the extent possible to characterize containment isolation system reliability for selected reference designs. This paper summarizes the results from initial efforts which focused on collection of data from available documents and briefly describes detailed review and analysis efforts which commenced recently. 5 references
Integrated Reliability and Risk Analysis System (IRRAS)
Russell, K.D.; McKay, M.K.; Sattison, M.B.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.
1992-01-01
The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance
Reliability Analysis of Money Habitudes
Delgadillo, Lucy M.; Bushman, Brittani S.
2015-01-01
Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey was administered to young adults at a western state university, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…
Power system reliability analysis using fault trees
Volkanovski, A.; Cepin, M.; Mavko, B.
2006-01-01
The power system reliability analysis method is developed from the aspect of reliable delivery of electrical energy to customers. The method is based on fault tree analysis, which is widely applied in Probabilistic Safety Assessment (PSA), and is adapted for power system reliability analysis. The method is developed in such a way that only the basic reliability parameters of the analysed power system are necessary as input for the calculation of the reliability indices of the system. The modeling and analysis were performed on an example power system consisting of eight substations. The results include the level of reliability of the current power system configuration, the combinations of component failures resulting in a failed power delivery to loads, and the importance factors for components and subsystems. (author)
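Quantification from minimal cut sets, the combinations of component failures that fail power delivery, can be sketched with the common rare-event (min cut set upper bound) approximation. The cut sets and probabilities below are invented for illustration:

```python
def cutset_probability(cutset, p):
    """Probability a minimal cut set occurs: product of its
    independent basic-event probabilities."""
    out = 1.0
    for event in cutset:
        out *= p[event]
    return out

def top_event_probability(cutsets, p):
    """Min cut set upper bound: P(top) ~ 1 - prod(1 - P(cutset_i))."""
    q = 1.0
    for cs in cutsets:
        q *= 1.0 - cutset_probability(cs, p)
    return 1.0 - q

# Hypothetical load point: delivery fails if line L1 fails,
# or if both transformers T1 and T2 fail.
p = {"L1": 1e-3, "T1": 5e-3, "T2": 5e-3}
pf = top_event_probability([{"L1"}, {"T1", "T2"}], p)
```

The single-event cut set dominates here, which is exactly the kind of insight the importance factors in the abstract formalize.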
Advanced Functionalities for Highly Reliable Optical Networks
An, Yi
This thesis covers two research topics concerning optical solutions for networks, e.g. avionic systems. One is to identify applications for silicon photonic devices for cost-effective solutions in short-range optical networks. The other is to realise advanced functionalities in order to increase the availability of highly reliable optical networks. A cost-effective transmitter based on a directly modulated laser (DML) using a silicon micro-ring resonator (MRR) to enhance its modulation speed is proposed, analysed and experimentally demonstrated. A modulation speed enhancement from 10 Gbit... interconnects and network-on-chips. A novel concept of all-optical protection switching scheme is proposed, where fault detection and protection trigger are all implemented in the optical domain. This scheme can provide ultra-fast establishment of the protection path resulting in a minimum loss of data...
Finite element reliability analysis of fatigue life
Harkness, H.H.; Belytschko, T.; Liu, W.K.
1992-01-01
Fatigue reliability is addressed by the first-order reliability method combined with a finite element method. Two-dimensional finite element models of components with cracks in mode I are considered with crack growth treated by the Paris law. Probability density functions of the variables affecting fatigue are proposed to reflect a setting where nondestructive evaluation is used, and the Rosenblatt transformation is employed to treat non-Gaussian random variables. Comparisons of the first-order reliability results and Monte Carlo simulations suggest that the accuracy of the first-order reliability method is quite good in this setting. Results show that the upper portion of the initial crack length probability density function is crucial to reliability, which suggests that if nondestructive evaluation is used, the probability of detection curve plays a key role in reliability. (orig.)
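The Paris-law crack-growth model named above admits a closed-form cycle count for constant-amplitude loading, which makes a Monte Carlo estimate over an uncertain initial crack size straightforward. All parameter values below are illustrative placeholders, not the paper's; the lognormal initial crack size stands in for the POD-informed density the abstract describes.

```python
import math
import random

random.seed(3)

def cycles_to_failure(a0, ac, C, m, dstress, Y=1.0):
    """Closed-form Paris-law life: integrate da/dN = C*(dK)^m with
    dK = Y * dstress * sqrt(pi * a), valid for m != 2."""
    k = C * (Y * dstress * math.sqrt(math.pi)) ** m
    e = 1.0 - m / 2.0
    return (ac ** e - a0 ** e) / (k * e)

def prob_failure_before(n_cycles, n_samples=20_000):
    """Monte Carlo over a lognormal initial crack size (illustrative
    stand-in for a probability-of-detection-based density)."""
    fails = 0
    for _ in range(n_samples):
        a0 = random.lognormvariate(math.log(1e-3), 0.4)  # metres
        if cycles_to_failure(a0, 0.02, 1e-11, 3.0, 100.0) < n_cycles:
            fails += 1
    return fails / n_samples

n_med = cycles_to_failure(1e-3, 0.02, 1e-11, 3.0, 100.0)  # life at median a0
pf = prob_failure_before(500_000)
```

Only the upper tail of the initial crack size distribution produces early failures, which illustrates the abstract's point that the probability-of-detection curve dominates the computed reliability.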
Reliability analysis of reactor pressure vessel intensity
Zheng Liangang; Lu Yongbo
2012-01-01
This paper performs a reliability analysis of the reactor pressure vessel (RPV) with ANSYS. The analysis methods include the direct Monte Carlo simulation method, Latin hypercube sampling, central composite design and Box-Behnken matrix design. The RPV integrity reliability under the given input conditions is presented. The results show that the factors affecting the RPV base material reliability are internal pressure, allowable basic stress and elasticity modulus of the base material, in descending order, and the factors affecting the bolt reliability are allowable basic stress of the bolt material, preload of the bolt and internal pressure, in descending order. (authors)
Reliability Analysis of Adhesive Bonded Scarf Joints
Kimiaeifar, Amin; Toft, Henrik Stensgaard; Lund, Erik
2012-01-01
A probabilistic model for the reliability analysis of adhesive bonded scarfed lap joints subjected to static loading is developed. It is representative for the main laminate in a wind turbine blade subjected to flapwise bending. The structural analysis is based on a three-dimensional (3D) finite element analysis (FEA). For the reliability analysis a design equation is considered which is related to a deterministic code-based design equation where reliability is secured by partial safety factors together with characteristic values for the material properties and loads. The failure criteria are formulated using a von Mises, a modified von Mises and a maximum stress failure criterion. The reliability level is estimated for the scarfed lap joint and compared with the target reliability level implicitly used in the wind turbine standard IEC 61400-1. A convergence study is performed to validate...
Time-dependent reliability sensitivity analysis of motion mechanisms
Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng
2016-01-01
Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure and quantifying the effects of each random source or their distribution parameters on the failure probability or reliability. In this paper, time-dependent parametric reliability sensitivity (PRS) analysis as well as global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change of each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with a first-order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods, as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices, is demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
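A parametric reliability sensitivity index of the kind defined above, the derivative of reliability with respect to a distribution parameter, can be sketched for a static normal resistance-load case where the reliability is available in closed form. This is only an illustration of the index itself; the paper's time-dependent envelope method is not reproduced here, and the model values are assumptions.

```python
from statistics import NormalDist

def reliability(mu_r, sigma_r=1.0, mu_s=5.0, sigma_s=1.0):
    """Closed-form reliability P(R > S) for independent normal
    resistance R and load S (illustrative model)."""
    beta = (mu_r - mu_s) / (sigma_r ** 2 + sigma_s ** 2) ** 0.5
    return NormalDist().cdf(beta)

def prs_index(mu_r, h=1e-5):
    """PRS index: derivative of the reliability w.r.t. the
    distribution parameter mu_r, by central finite difference."""
    return (reliability(mu_r + h) - reliability(mu_r - h)) / (2.0 * h)
```

A positive index for the mean resistance, as here, says that strengthening the component improves reliability; ranking such indices across parameters is what guides reliability-based design.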
Research on Connection and Function Reliability of the Oil＆Gas Pipeline System
Xu Bo
2017-01-01
Pipeline transportation is the optimal way to deliver energy in terms of safety, efficiency and environmental protection. Because of the complexity of the pipeline's external environment, including geological hazards and social and cultural influences, operating a pipeline safely and reliably is a great challenge, and pipeline reliability therefore becomes an important issue. Based on classical reliability theory, an analysis of the pipeline system is carried out, a reliability model of the pipeline system is built, and the calculation is addressed thereafter. The connection and function reliability model is then applied to a practical active pipeline system; using the proposed methodology, the connection reliability and function reliability are obtained. This paper is the first to consider connection and function reliability separately, which is a significant contribution to establishing a mathematical reliability model of a pipeline system, and hence provides fundamental groundwork for future pipeline reliability research.
Reliability analysis using network simulation
Engi, D.
1985-01-01
The models that can be used to provide estimates of the reliability of nuclear power systems operate at many different levels of sophistication. The least-sophisticated models treat failure processes that entail only time-independent phenomena (such as demand failure). More advanced models treat processes that also include time-dependent phenomena such as run failure and possibly repair. However, many of these dynamic models are deficient in some respects because they either disregard the time-dependent phenomena that cannot be expressed in closed-form analytic terms or because they treat these phenomena in quasi-static terms. The next level of modeling requires a dynamic approach that incorporates not only procedures for treating all significant time-dependent phenomena but also procedures for treating these phenomena when they are conditionally linked or characterized by arbitrarily selected probability distributions. The level of sophistication that is required is provided by a dynamic, Monte Carlo modeling approach. A computer code that uses a dynamic, Monte Carlo modeling approach is Q-GERT (Graphical Evaluation and Review Technique - with Queueing), and the present study has demonstrated the feasibility of using Q-GERT for modeling time-dependent, unconditionally and conditionally linked phenomena that are characterized by arbitrarily selected probability distributions
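The dynamic Monte Carlo approach described above can be sketched for the simplest repairable unit with exponential failure and repair times, where the simulated availability can be checked against the steady-state formula MTTF/(MTTF+MTTR). The numbers are illustrative; richer models (conditional links, arbitrary distributions) extend the same event-driven loop.

```python
import random

random.seed(4)

def simulated_availability(mttf, mttr, horizon, n_runs=2000):
    """Event-driven Monte Carlo of one repairable unit: alternate
    exponential up-times and repair times, and return the mean
    fraction of up-time over the mission horizon."""
    total_up = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < horizon:
            run = random.expovariate(1.0 / mttf)   # time to next failure
            up += min(run, horizon - t)
            t += run
            if t >= horizon:
                break
            t += random.expovariate(1.0 / mttr)    # repair duration
        total_up += up / horizon
    return total_up / n_runs

a_sim = simulated_availability(100.0, 5.0, 10_000.0)
a_theory = 100.0 / (100.0 + 5.0)   # steady-state MTTF / (MTTF + MTTR)
```

Replacing `expovariate` with arbitrary distributions, or conditioning one unit's rates on another's state, is exactly where simulation outperforms the closed-form models the abstract criticizes.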
Analysis of information security reliability: A tutorial
Kondakci, Suleyman
2015-01-01
This article presents a concise reliability analysis of network security abstracted from stochastic modeling, reliability, and queuing theories. Network security analysis is composed of threats, their impacts, and recovery of the failed systems. A unique framework with a collection of the key reliability models is presented here to guide the determination of the system reliability based on the strength of malicious acts and performance of the recovery processes. A unique model, called the Attack-obstacle model, is also proposed here for analyzing systems with immunity growth features. Most computer science curricula do not contain courses in reliability modeling applicable to different areas of computer engineering. Hence, the topic of reliability analysis is often too diffuse for most computer engineers and researchers dealing with network security. This work is thus aimed at shedding some light on this issue, which can be useful in identifying models, their assumptions and practical parameters for estimating the reliability of threatened systems and for assessing the performance of recovery facilities. It can also be useful for the classification of processes and states regarding the reliability of information systems. Systems with stochastic behaviors undergoing queue operations and random state transitions can also benefit from the approaches presented here. - Highlights: • A concise survey and tutorial in model-based reliability analysis applicable to information security. • A framework of key modeling approaches for assessing reliability of networked systems. • The framework facilitates quantitative risk assessment tasks guided by stochastic modeling and queuing theory. • Evaluation of approaches and models for modeling threats, failures, impacts, and recovery analysis of information systems
Space Mission Human Reliability Analysis (HRA) Project
National Aeronautics and Space Administration — The purpose of this project is to extend current ground-based Human Reliability Analysis (HRA) techniques to a long-duration, space-based tool to more effectively...
Challenges Regarding IP Core Functional Reliability
Berg, Melanie D.; LaBel, Kenneth A.
2017-01-01
For many years, intellectual property (IP) cores have been incorporated into field programmable gate array (FPGA) and application specific integrated circuit (ASIC) design flows. However, the usage of large, complex IP cores was limited within products that required a high level of reliability. This is no longer the case: IP core insertion has become mainstream, including use in highly reliable products. Due to limited visibility and control, challenges exist when using IP cores that can subsequently compromise product reliability. We discuss these challenges and suggest potential solutions for IP insertion in critical applications.
Reliability analysis of stiff versus flexible piping
Lu, S.C.
1985-01-01
The overall objective of this research project is to develop a technical basis for flexible piping designs which will improve piping reliability and minimize the use of pipe supports, snubbers, and pipe whip restraints. The current study was conducted to establish the necessary groundwork based on the piping reliability analysis. A confirmatory piping reliability assessment indicated that removing rigid supports and snubbers tends to either improve or affect very little the piping reliability. The authors then investigated a couple of changes to be implemented in Regulatory Guide (RG) 1.61 and RG 1.122 aimed at more flexible piping design. They concluded that these changes substantially reduce calculated piping responses and allow piping redesigns with significant reduction in number of supports and snubbers without violating ASME code requirements. Furthermore, the more flexible piping redesigns are capable of exhibiting reliability levels equal to or higher than the original stiffer design. An investigation of the malfunction of pipe whip restraints confirmed that the malfunction introduced higher thermal stresses and tended to reduce the overall piping reliability. Finally, support and component reliabilities were evaluated based on available fragility data. Results indicated that the support reliability usually exhibits a moderate decrease as the piping flexibility increases. Most on-line pumps and valves showed an insignificant reduction in reliability for a more flexible piping design
López-Vallejo, Fabian; Fragoso-Serrano, Mabel; Suárez-Ortiz, Gloria Alejandra; Hernández-Rojas, Adriana C; Cerda-García-Rojas, Carlos M; Pereda-Miranda, Rogelio
2011-08-05
A protocol for stereochemical analysis, based on the systematic comparison between theoretical and experimental vicinal (1)H-(1)H NMR coupling constants, was developed and applied to a series of flexible compounds (1-8) derived from the 6-heptenyl-5,6-dihydro-2H-pyran-2-one framework. The method included a broad conformational search, followed by geometry optimization at the DFT B3LYP/DGDZVP level, calculation of the vibrational frequencies, thermochemical parameters, magnetic shielding tensors, and the total NMR spin-spin coupling constants. Three scaling factors, depending on the carbon atom hybridizations, were found for the (1)H-C-C-(1)H vicinal coupling constants: f((sp3)-(sp3)) = 0.910, f((sp3)-(sp2)) = 0.929, and f((sp2)-(sp2))= 0.977. A remarkable correlation between the theoretical (J(pre)) and experimental (1)H-(1)H NMR (J(exp)) coupling constants for spicigerolide (1), a cytotoxic natural product, and some of its synthetic stereoisomers (2-4) demonstrated the predictive value of this approach for the stereochemical assignment of highly flexible compounds containing multiple chiral centers. The stereochemistry of two natural 6-heptenyl-5,6-dihydro-2H-pyran-2-ones (14 and 15) containing diverse functional groups in the heptenyl side chain was also analyzed by application of this combined theoretical and experimental approach, confirming its reliability. Additionally, a geometrical analysis for the conformations of 1-8 revealed that weak hydrogen bonds substantially guide the conformational behavior of the tetraacyloxy-6-heptenyl-2H-pyran-2-ones.
Reliability and validity of risk analysis
Aven, Terje; Heide, Bjornar
2009-01-01
In this paper we investigate to what extent risk analysis meets the scientific quality requirements of reliability and validity. We distinguish between two types of approaches within risk analysis: relative frequency-based approaches and Bayesian approaches. The former category includes both traditional statistical inference methods and the so-called probability of frequency approach. Depending on the risk analysis approach, the aim of the analysis is different, the results are presented in different ways and consequently the meanings of the concepts of reliability and validity are not the same.
Multi-Disciplinary System Reliability Analysis
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics and electrical circuits, without considerable programming effort specific to each discipline. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow and electrical circuit problems using the structural analysis routines within NESSUS, by exploiting the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.
Reliability analysis techniques for the design engineer
Corran, E.R.; Witt, H.H.
1982-01-01
This paper describes a fault tree analysis package that eliminates most of the housekeeping tasks involved in proceeding from the initial construction of a fault tree to the final stage of presenting a reliability analysis in a safety report. It is suitable for designers with relatively little training in reliability analysis and computer operation. Users can rapidly investigate the reliability implications of various options at the design stage and evolve a system which meets specified reliability objectives. Later independent review is thus unlikely to reveal major shortcomings necessitating modification and project delays. The package operates interactively, allowing the user to concentrate on the creative task of developing the system fault tree, which may be modified and displayed graphically. For preliminary analysis, system data can be derived automatically from a generic data bank. As the analysis proceeds, improved estimates of critical failure rates and test and maintenance schedules can be inserted. The technique is applied to the reliability analysis of the recently upgraded HIFAR Containment Isolation System. (author)
The Reliability of a Functional Agility Test for Water Polo
Tucher Guilherme
2014-07-01
Few functional agility tests for water polo take its specific characteristics into consideration. The preliminary objective of this study was to evaluate the reliability of an agility test for water polo players. Fifteen players (16.3 ± 1.8 years old) with a minimum of two years of competitive experience were evaluated. A Functional Test for Agility Performance (FTAP) was designed to represent the context of this sport. Several trials were performed to familiarize the athletes with the movement. Two experienced coaches measured three repetitions of the FTAP. Descriptive statistics, repeated measures analysis of variance (ANOVA), the 95% limit of agreement (LOA), the intraclass correlation coefficient (ICC) and the standard error of measurement (SEM) were used for data analysis. Several criteria for reliable measurement were met. There was no significant difference between the repetitions (p > 0.05), which may be explained by an effect of the evaluator, the ability of the players or fatigue. The average ICC across evaluators was high (0.88). The SEM varied between 0.13 s and 0.49 s. The average coefficient of variation (CV) for each individual was near 6-7%. These values depended on the measurement conditions. As the FTAP contains some characteristics that create a degree of unpredictability, the same athlete may reach different performance results, increasing variability. Adjustments in the sample, familiarization and careful selection of subjects help to improve this situation and enhance the reliability of the indicators.
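The reliability statistics reported above can be reproduced with a few lines of code. The sketch below computes ICC(3,1) from a two-way repeated-measures ANOVA decomposition and derives the SEM from it; the trial times are invented for illustration and do not come from the study.

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1), two-way mixed model, consistency: data shape (subjects, trials)."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-trial means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# three repetitions of a timed agility trial for five athletes (made-up data, s)
times = np.array([
    [10.2, 10.4, 10.3],
    [11.5, 11.3, 11.6],
    [ 9.8,  9.9, 10.0],
    [12.1, 12.4, 12.2],
    [10.9, 11.0, 10.8],
])
icc = icc_3_1(times)
sem = times.std(ddof=1) * np.sqrt(1 - icc)  # standard error of measurement
print(round(icc, 3), round(sem, 3))
```

With real data the choice of ICC form (agreement vs. consistency, single vs. average measures) should match the study design.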
Benedikt Sundermann
2017-06-01
Background: The approach of applying multivariate pattern analyses based on neuroimaging data for outcome prediction holds out the prospect of improving therapeutic decisions in mental disorders. Patients suffering from panic disorder with agoraphobia (PD/AG) often exhibit an increased perception of bodily sensations. The purpose of this investigation was to assess whether multivariate classification applied to a functional magnetic resonance imaging (fMRI) interoception paradigm can predict individual responses to cognitive behavioral therapy (CBT) in PD/AG. Methods: This analysis is based on pretreatment fMRI data during an interoceptive challenge from a multicenter trial of the German PANIC-NET. Patients with DSM-IV PD/AG were dichotomized as responders (n = 30) or non-responders (n = 29) based on the primary outcome (Hamilton Anxiety Scale reduction ≥50%) after 6 weeks of CBT (2 h/week). fMRI parametric maps were used as features for response classification with linear support vector machines (SVM), with or without automated feature selection. Predictive accuracies were assessed using cross-validation and permutation testing. The influence of methodological parameters and the predictive ability for specific interoception-related symptom reduction were further evaluated. Results: SVM did not reach sufficient overall predictive accuracies (38.0–54.2%) for anxiety reduction in the primary outcome. In the exploratory analyses, better accuracies (66.7%) were achieved for predicting interoception-specific symptom relief as an alternative outcome domain. Subtle information regarding this alternative response criterion, but not the primary outcome, was revealed by post hoc univariate comparisons. Conclusion: In contrast to reports on other neurofunctional probes, SVM based on an interoception paradigm was not able to reliably predict individual response to CBT. Results speak against the clinical applicability of this technique.
Reliability analysis of Angra I safety systems
Oliveira, L.F.S. de; Soto, J.B.; Maciel, C.C.; Gibelli, S.M.O.; Fleming, P.V.; Arrieta, L.A.
1980-07-01
An extensive reliability analysis of some safety systems of Angra I is presented. The fault tree technique, which has been successfully used in most reliability studies of nuclear safety systems performed to date, is employed. Results of a quantitative determination of the unavailability of the accumulator and containment spray injection systems are presented. These results are also compared to those reported in WASH-1400. (E.G.)
Swimming pool reactor reliability and safety analysis
Li Zhaohuan
1997-01-01
A reliability and safety analysis of the Swimming Pool Reactor at the China Institute of Atomic Energy was performed using event tree and fault tree techniques. The paper briefly describes the analysis model, the analysis code and the main results. It also discusses the impact of unassigned operation states on safety, an estimate of the effectiveness of maintenance defenses against common cause failure, the effect of recovery actions on system reliability, and a comparison of core damage frequencies obtained with generic and plant-specific data.
Lee, Sang Yong
1992-07-01
This book is about reliability engineering. It covers the definition and importance of reliability and the development of reliability engineering; the failure rate and the failure probability density function and their types; CFR (constant failure rate) and the exponential distribution; IFR (increasing failure rate) with the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation for the exponential, normal and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and failure analysis by FTA.
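The distribution families listed above are tied together by the hazard (failure rate) function. A minimal sketch with illustrative parameter values: for the Weibull distribution the shape parameter beta selects between a decreasing failure rate (beta < 1, early failures), a constant rate (beta = 1, the CFR/exponential case) and an increasing rate (beta > 1, the IFR/wear-out case).

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta); beta = 1 reduces to the exponential (CFR) case."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """h(t) = (beta/eta) * (t/eta)^(beta-1): decreasing for beta < 1,
    constant for beta = 1, increasing for beta > 1."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# with beta = 1 the hazard is constant at 1/eta, as in the exponential model
print(weibull_hazard(100.0, 1.0, 1000.0))  # 0.001
```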
Reliability analysis techniques for the design engineer
Corran, E.R.; Witt, H.H.
1980-01-01
A fault tree analysis package is described that eliminates most of the housekeeping tasks involved in proceeding from the initial construction of a fault tree to the final stage of presenting a reliability analysis in a safety report. It is suitable for designers with relatively little training in reliability analysis and computer operation. Users can rapidly investigate the reliability implications of various options at the design stage and evolve a system which meets specified reliability objectives. Later independent review is thus unlikely to reveal major shortcomings necessitating modification and project delays. The package operates interactively, allowing the user to concentrate on the creative task of developing the system fault tree, which may be modified and displayed graphically. For preliminary analysis, system data can be derived automatically from a generic data bank. As the analysis proceeds, improved estimates of critical failure rates and test and maintenance schedules can be inserted. The computations are standard: identification of minimal cut sets, estimation of reliability parameters, and ranking of the effects of individual component failure modes and system failure modes on these parameters. The user can vary the fault trees and data on-line, and print selected data for preferred systems in a form suitable for inclusion in safety reports. A case history is given: that of the HIFAR containment isolation system. (author)
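The standard computations such a package performs, minimal cut set identification and a top-event probability estimate, can be sketched for a toy fault tree; the gate structure, event names and failure probabilities below are hypothetical, not taken from the HIFAR analysis.

```python
import math
from itertools import product

# Tiny fault tree as nested tuples: ('OR'|'AND', child, child, ...);
# leaves are basic-event names. Top = C OR (A AND B).
tree = ('OR', 'C', ('AND', 'A', 'B'))

def cut_sets(node):
    """Expand a gate into a list of cut sets (each a frozenset of events)."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, *children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == 'OR':
        return [cs for sets in child_sets for cs in sets]
    # AND gate: cross-product of children, union each combination
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(sets):
    """Keep only cut sets that have no proper subset also present."""
    unique = set(sets)
    return sorted((s for s in unique if not any(o < s for o in unique)),
                  key=sorted)

mcs = minimal(cut_sets(tree))
print([sorted(s) for s in mcs])  # [['A', 'B'], ['C']]

# rare-event approximation for top-event unavailability
q = {'A': 1e-3, 'B': 2e-3, 'C': 1e-5}  # assumed basic-event unavailabilities
p_top = sum(math.prod(q[e] for e in s) for s in mcs)
print(p_top)  # ≈ 1.2e-05
```

Production tools use more scalable algorithms (MOCUS, BDDs), but the expansion above is the underlying idea.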
Reliability analysis of an offshore structure
Sorensen, J. D.; Faber, M. H.; Thoft-Christensen, P.
1992-01-01
A jacket type offshore structure from the North Sea is considered. The time variant reliability is estimated for failure defined as brittle fracture and crack through the tubular member walls. The stochastic modelling is described. The hot spot stress spectral moments as function of the stochasti...
Reliability Analysis of Elasto-Plastic Structures
Thoft-Christensen, Palle; Sørensen, John Dalsgaard
1984-01-01
. Failure of this type of system is defined either as formation of a mechanism or by failure of a prescribed number of elements. In the first case failure is independent of the order in which the elements fail, but this is not so by the second definition. The reliability analysis consists of two parts...... are described and the two definitions of failure can be used by the first formulation, but only the failure definition based on formation of a mechanism by the second formulation. The second part of the reliability analysis is an estimate of the failure probability for the structure on the basis...
Reliability analysis and assessment of structural systems
Yao, J.T.P.; Anderson, C.A.
1977-01-01
The study of structural reliability deals with the probability of having satisfactory performance of the structure under consideration within any specific time period. To pursue this study, it is necessary to apply available knowledge and methodology in structural analysis (including dynamics) and design, behavior of materials and structures, experimental mechanics, and the theory of probability and statistics. In addition, various severe loading phenomena such as strong motion earthquakes and wind storms are important considerations. For three decades now, much work has been done on reliability analysis of structures, and during this past decade, certain so-called 'Level I' reliability-based design codes have been proposed and are in various stages of implementation. These contributions will be critically reviewed and summarized in this paper. Because of the undesirable consequences resulting from the failure of nuclear structures, it is important and desirable to consider the structural reliability in the analysis and design of these structures. Moreover, after these nuclear structures are constructed, it is desirable for engineers to be able to assess the structural reliability periodically as well as immediately following the occurrence of severe loading conditions such as a strong-motion earthquake. During this past decade, increasing use has been made of techniques of system identification in structural engineering. On the basis of non-destructive test results, various methods have been developed to obtain an adequate mathematical model (such as the equations of motion with more realistic parameters) to represent the structural system
Culture Representation in Human Reliability Analysis
David Gertman; Julie Marble; Steven Novack
2006-12-01
Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.
Meng, Qiong; Yang, Zheng; Wu, Yang; Xiao, Yuanyuan; Gu, Xuezhong; Zhang, Meixia; Wan, Chonghua; Li, Xiaosong
2017-05-04
The Functional Assessment of Cancer Therapy-Leukemia (FACT-Leu) scale, a leukemia-specific instrument for determining the health-related quality of life (HRQOL) of patients with leukemia, had been developed and validated, but there have been no reports on the development of a simplified Chinese version of this scale. This is a new exploration of analyzing the reliability of HRQOL measurement using multivariate generalizability theory (MGT). This study aimed to develop a Chinese version of the FACT-Leu scale and evaluate its reliability using MGT to provide evidence to support the revision and improvement of this scale. The Chinese version of the FACT-Leu scale was developed in four steps: forward translation, backward translation, cultural adaptation and pilot-testing. HRQOL was measured for eligible inpatients with leukemia using this scale to provide data. A single-facet multivariate Generalizability Study (G-study) design was used to estimate the variance-covariance components, and then several Decision Studies (D-studies) with varying numbers of items were analyzed to obtain reliability coefficients and to understand how much the measurement reliability could vary as the number of items changes. One hundred and one eligible inpatients diagnosed with leukemia were recruited and completed the HRQOL measurement at the time of admission to the hospital. In the G-study, the variance component of the patient-item interaction was largest while that of the item was smallest for four of the five domains, the exception being the leukemia-specific (LEUS) domain. In the D-study, at the domain level, the generalizability coefficients (G) and the indexes of dependability (Ф) for four of the five domains were approximately equal to or greater than 0.80, the exception being the Emotional Well-being (EWB) domain (>0.70 but <0.80); two versions with different numbers of items were obtained: one is a 37-item version while the other is a 45-item version. The Chinese version of the FACT
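The D-study coefficients reported above follow directly from the G-study variance components. A minimal sketch for a single-facet p × i design; the variance components below are hypothetical, not the study's values.

```python
def g_coefficient(var_p, var_i, var_pi, n_items):
    """Generalizability coefficient G and dependability index Phi for a
    single-facet person-by-item design in a D-study with n_items items.
    var_p: person variance; var_i: item variance; var_pi: person-by-item
    interaction (confounded with residual error)."""
    g = var_p / (var_p + var_pi / n_items)               # relative decisions
    phi = var_p / (var_p + (var_i + var_pi) / n_items)   # absolute decisions
    return g, phi

# hypothetical variance components for one domain
g, phi = g_coefficient(var_p=0.50, var_i=0.05, var_pi=0.80, n_items=10)
print(round(g, 3), round(phi, 3))  # 0.862 0.855
```

Increasing `n_items` shows how many items a domain needs before G and Φ clear a 0.80 threshold, which is exactly the trade-off the D-studies explore.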
Representative Sampling for reliable data analysis
Petersen, Lars; Esbensen, Kim Harry
2005-01-01
regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...
An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments
Michael A. Guthrie
2013-01-01
A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
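For the special case of a linear limit state with independent normal variables, the Hasofer-Lind index reduces to a closed form, which illustrates why its computation costs little beyond the underlying modal quantities. The capacity and demand statistics below are invented for illustration, not taken from the paper.

```python
import math

# Hasofer-Lind index for a linear limit state g = C - D with independent
# normal capacity C and demand D:
#   beta = (mu_C - mu_D) / sqrt(sigma_C^2 + sigma_D^2),  P_f = Phi(-beta)
mu_c, sd_c = 12.0, 1.5   # e.g. modal strain energy at failure (assumed)
mu_d, sd_d = 7.0, 2.0    # e.g. peak modal strain energy in service (assumed)
beta = (mu_c - mu_d) / math.sqrt(sd_c**2 + sd_d**2)
p_f = 0.5 * math.erfc(beta / math.sqrt(2))  # standard normal CDF at -beta
print(beta, round(p_f, 5))  # 2.0 0.02275
```

For nonlinear limit states the index is found iteratively (e.g. the HL-RF algorithm) as the minimum distance from the origin to the limit state surface in standard normal space.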
Reliability analysis of the reactor protection system with fault diagnosis
Lee, D.Y.; Han, J.B.; Lyou, J.
2004-01-01
The main function of a reactor protection system (RPS) is to maintain the reactor core integrity and the reactor coolant system pressure boundary. The RPS uses a 2-out-of-m redundant architecture to assure reliable operation. The system reliability of the RPS is a very important factor in probabilistic safety assessment (PSA) in the nuclear field. Evaluating the system failure rate of a k-out-of-m redundant system is not easy with deterministic methods. In this paper, a reliability analysis method using the binomial process is suggested to calculate the failure rate of an RPS with a fault diagnosis function. The suggested method is compared with the results of a Markov process analysis to verify its validity, and is applied to several kinds of RPS architectures for a comparative evaluation of reliability. (orig.)
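The binomial model for a k-out-of-m redundant architecture can be written down directly. A minimal sketch; the 2-out-of-4 configuration and the 0.99 per-channel availability are illustrative assumptions, not figures from the paper.

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n identical, independent channels
    function, each with success probability p (binomial model)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 2-out-of-4 trip logic with an assumed channel availability of 0.99
print(k_out_of_n_reliability(2, 4, 0.99))  # ≈ 0.999996
```

This simple model assumes independent, identical channels; the Markov treatment in the paper is needed once repair, diagnosis coverage and common cause failures enter the picture.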
Reliability analysis of prestressed concrete containment structures
Jiang, J.; Zhao, Y.; Sun, J.
1993-01-01
The reliability analysis of prestressed concrete containment structures subjected to combinations of static and dynamic loads, with consideration of uncertainties in structural and load parameters, is presented. Limit state probabilities for given parameters are calculated using the procedure developed at BNL, while those accounting for parameter uncertainties are calculated by a fast integration for time-variant structural reliability. The limit state surface of the prestressed concrete containment is constructed directly, incorporating the prestress. The sensitivities of the Cholesky decomposition matrix and the natural vibration characteristics are calculated by simplified procedures. (author)
Prime implicants in dynamic reliability analysis
Tyrväinen, Tero
2016-01-01
This paper develops an improved definition of a prime implicant for the needs of dynamic reliability analysis. Reliability analyses often aim to identify minimal cut sets or prime implicants, which are minimal conditions that cause an undesired top event, such as a system's failure. Dynamic reliability analysis methods take the time-dependent behaviour of a system into account. This means that the state of a component can change in the analysed time frame and prime implicants can include the failure of a component at different time points. There can also be dynamic constraints on a component's behaviour. For example, a component can be non-repairable in the given time frame. If a non-repairable component needs to be failed at a certain time point to cause the top event, we consider that the condition that it is failed at the latest possible time point is minimal, and the condition in which it fails earlier non-minimal. The traditional definition of a prime implicant does not account for this type of time-related minimality. In this paper, a new definition is introduced and illustrated using a dynamic flowgraph methodology model. - Highlights: • A new definition of a prime implicant is developed for dynamic reliability analysis. • The new definition takes time-related minimality into account. • The new definition is needed in dynamic flowgraph methodology. • Results can be represented by a smaller number of prime implicants.
Reliability and risk analysis methods research plan
1984-10-01
This document presents a plan for reliability and risk analysis methods research to be performed mainly by the Reactor Risk Branch (RRB), Division of Risk Analysis and Operations (DRAO), Office of Nuclear Regulatory Research. It includes those activities of other DRAO branches which are very closely related to those of the RRB. Related or interfacing programs of other divisions, offices and organizations are merely indicated. The primary use of this document is envisioned as an NRC working document, covering about a 3-year period, to foster better coordination in reliability and risk analysis methods development between the offices of Nuclear Regulatory Research and Nuclear Reactor Regulation. It will also serve as an information source for contractors and others to more clearly understand the objectives, needs, programmatic activities and interfaces together with the overall logical structure of the program
Human reliability analysis of control room operators
Santos, Isaac J.A.L.; Carvalho, Paulo Victor R.; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)
2005-07-01
Human reliability is the probability that a person correctly performs a system-required action in a required time period and performs no extraneous action that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. Significant progress has been made in the HRA field in recent years, mainly in the nuclear area. Some first-generation HRA methods were developed, such as THERP (Technique for Human Error Rate Prediction). Now an array of so-called second-generation methods is emerging as alternatives, for instance ATHEANA (A Technique for Human Event Analysis). The ergonomics approach has as its tool the ergonomic work analysis. It focuses on the study of operators' physical and mental activities, considering at the same time the observed characteristics of the operators and the elements of the work environment as they are presented to and perceived by the operators. The aim of this paper is to propose a methodology for analyzing the human reliability of operators of industrial plant control rooms, using a framework that includes the approaches used by ATHEANA and THERP and the ergonomic work analysis. (author)
Reliability analysis and initial requirements for FC systems and stacks
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
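The 5 × 5 example configuration can be approximated with a binary (working/failed) series-parallel model, a simplification of the three-state stack model described above; the per-stack reliability of 0.95 is an assumed figure, not a Wärtsilä value.

```python
def parallel(rels):
    """Reliability of a parallel group: fails only if every member fails."""
    p_fail = 1.0
    for r in rels:
        p_fail *= (1.0 - r)
    return 1.0 - p_fail

def series(rels):
    """Reliability of a series chain: works only if every member works."""
    p = 1.0
    for r in rels:
        p *= r
    return p

# 5 sets in series, each set being 5 parallel stacks (assumed R = 0.95 each)
r_stack = 0.95
r_set = parallel([r_stack] * 5)     # 1 - 0.05**5
r_system = series([r_set] * 5)
print(r_system)  # ≈ 0.9999984
```

The abstract's point survives the simplification: parallel redundancy within each set makes the system reliability far less sensitive to individual stack failures, provided failures are independent and predictable enough to act on.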
Sensitivity analysis in a structural reliability context
Lemaitre, Paul
2014-01-01
This thesis addresses sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output is of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not within the reliability scope. This thesis aims to test existing sensitivity analysis methods and to propose more efficient original methods. A bibliographical review of sensitivity analysis on the one hand and of the estimation of small failure probabilities on the other hand is first presented. This review raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one makes use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density conditional on the subset reached. A more general and original methodology reflecting the impact of input density modification on the failure probability is then explored. The proposed methods are applied to the CWNR case, which motivated this thesis. (author)
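The quantity such a study estimates is a failure probability P(g(X) < 0) for a limit state function g. A crude Monte Carlo sketch on a toy linear limit state (the distributions and threshold are invented) shows the baseline estimator that subset methods and other variance-reduction techniques improve upon for small probabilities:

```python
import random

# Crude Monte Carlo estimate of P(g(X1, X2) < 0) for a toy limit state.
random.seed(1)  # fixed seed so the run is repeatable

def g(x1, x2):
    return 8.0 - x1 - x2        # failure when x1 + x2 > 8

n = 200_000
fails = sum(1 for _ in range(n)
            if g(random.gauss(3, 1), random.gauss(3, 1)) < 0)
p_f = fails / n
# analytic check: x1 + x2 ~ N(6, sqrt(2)), so P_f = Phi(-sqrt(2)) ≈ 0.0786
print(p_f)
```

The estimator's coefficient of variation scales like 1/sqrt(n · P_f), which is why probabilities of 1e-6 and below motivate the subset and density-modification methods the thesis studies.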
Human reliability analysis using event trees
Heslinga, G.
1983-01-01
The shut-down procedure of a technologically complex installation such as a nuclear power plant consists of many human actions, some of which have to be performed several times. The procedure is regarded as a chain of modules of specific actions, some of which are analyzed separately. The analysis is carried out by making a Human Reliability Analysis event tree (HRA event tree) of each action, breaking down each action into small elementary steps. The application of event trees in human reliability analysis entails more difficulties than in the case of technical systems, where event trees have mainly been used until now. The most important reason is that the operator is able to recover from a wrong performance; memory influences play a significant role. In this study these difficulties are dealt with theoretically. The following conclusions can be drawn: (1) in principle, event trees may be used in human reliability analysis; (2) although in practice the operator will only partly recover from his fault, theoretically this can be described as starting the whole event tree again; (3) compact formulas have been derived by which the probability of reaching a specific failure consequence on passing through the HRA event tree after several recoveries can be calculated. (orig.)
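The compact formulas mentioned in conclusion (3) can be illustrated with a simple three-outcome model of one pass through the tree, where a recovered failure restarts the whole tree; the pass probabilities below are hypothetical, not from the study.

```python
def failure_prob_with_recovery(s, f, c, max_recoveries):
    """One pass through the HRA event tree ends in success (prob s),
    unrecovered failure (prob f) or recovered failure (prob c = 1 - s - f).
    A recovery restarts the whole tree, up to max_recoveries times; a
    recoverable failure on the final allowed pass counts as failure."""
    assert abs(s + f + c - 1.0) < 1e-12
    p = sum(f * c**i for i in range(max_recoveries + 1))
    return p + c**(max_recoveries + 1)

# hypothetical pass probabilities: 90% success, 1% unrecoverable failure,
# 9% recoverable failure, at most two restarts
print(failure_prob_with_recovery(s=0.90, f=0.01, c=0.09, max_recoveries=2))
# ≈ 0.01171
```

With zero allowed recoveries the result reduces to f + c, and as the number of allowed recoveries grows it approaches f / (f + s), showing how recovery opportunities drive the residual failure probability toward the unrecoverable fraction.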
Analysis of the Reliability of the "Alternator- Alternator Belt" System
Ivan Mavrin
2012-10-01
Before starting and also during the exploitation of various systems, it is very important to know how the system and its parts will behave during operation regarding breakdowns, i.e. failures. It is possible to predict the service behaviour of a system by determining the functions of reliability, as well as frequency and intensity of failures. The paper considers the theoretical basics of the functions of reliability, frequency and intensity of failures for two main approaches. One includes 6 equal intervals and the other 13 unequal intervals for the concrete case taken from practice. The reliability of the "alternator - alternator belt" system installed in buses has been analysed according to the empirical data on failures. The empirical data on failures provide empirical functions of reliability and frequency and intensity of failures, which are presented in tables and graphically. The first analysis, performed by dividing the mean time between failures into 6 equal time intervals, has given forms of empirical functions of failure frequency and intensity that approximately correspond to typical functions. By dividing the failure phase into 13 unequal intervals with two failures in each interval, these functions indicate explicit transitions from the early failure interval into the random failure interval, i.e. into the ageing interval. Functions thus obtained are more accurate and represent a better solution for the given case. In order to estimate the reliability of these systems with greater accuracy, a greater number of failures needs to be analysed.
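The empirical functions described above can be computed directly from grouped failure data. A sketch with invented failure times, using 6 equal intervals as in the paper's first analysis:

```python
import numpy as np

# Empirical reliability, failure frequency and failure intensity from
# failure times grouped into 6 equal intervals (invented data, hours).
failure_times = np.array([12, 35, 58, 90, 130, 175, 230, 260, 310, 420,
                          520, 640], dtype=float)
n0 = len(failure_times)
n_intervals = 6
edges = np.linspace(0, failure_times.max(), n_intervals + 1)
counts, _ = np.histogram(failure_times, bins=edges)

survivors = n0 - np.cumsum(counts)          # units still working at interval end
reliability = survivors / n0                # empirical R(t)
width = edges[1] - edges[0]
frequency = counts / (n0 * width)           # empirical failure frequency f(t)
at_start = np.concatenate(([n0], survivors[:-1]))
intensity = counts / (at_start * width)     # empirical failure intensity lambda(t)

for i in range(n_intervals):
    print(f"{edges[i]:6.1f}-{edges[i+1]:6.1f} h  R={reliability[i]:.3f}  "
          f"lambda={intensity[i]:.5f}/h")
```

The intensity column is what reveals the bathtub shape: a decreasing run marks early failures, a flat run the random-failure phase, and an increasing run the ageing phase, the transitions the 13-interval analysis resolves more sharply.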
A G-function-based reliability-based design methodology applied to a cam roller system
Wang, W.; Sui, P.; Wu, Y.T.
1996-01-01
Conventional reliability-based design optimization methods treat the reliability function as an ordinary function and apply existing mathematical programming techniques to solve the design problem. As a result, the conventional approach requires nested loops with respect to the g-function and is very time consuming. A new reliability-based design method is proposed in this paper that deals with the g-function directly instead of the reliability function. This approach has the potential of significantly reducing the number of g-function calculations, since it requires only one full reliability analysis per design iteration. A cam roller system in a typical high-pressure fuel injection diesel engine is designed using both the proposed and the conventional approach. The proposed method is much more efficient for this application.
Structural reliability analysis and seismic risk assessment
Hwang, H.; Reich, M.; Shinozuka, M.
1984-01-01
This paper presents a reliability analysis method for the safety evaluation of nuclear structures. By utilizing this method, it is possible to estimate the limit state probability in the lifetime of structures and to generate analytically the fragility curves for PRA studies. The earthquake ground acceleration, in this approach, is represented by a segment of a stationary Gaussian process with zero mean and a Kanai-Tajimi spectrum. The seismic hazard at a site, represented by a hazard curve, is also taken into consideration. Furthermore, the limit state of a structure is analytically defined and the corresponding limit state surface is then established. Finally, the fragility curve is generated and the limit state probability is evaluated. In this paper, using a realistic reinforced concrete containment as an example, results of the reliability analysis of the containment subjected to dead load, live load and ground earthquake acceleration are presented, and a fragility curve for PRA studies is also constructed.
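A fragility curve of the kind used in such PRA studies is commonly taken as lognormal in the ground acceleration, and the limit state probability follows by convolving it with the hazard curve. The median capacity, log-standard deviation, and hazard values below are assumed placeholders, not this paper's results.

```python
from math import erf, log, sqrt

def fragility(a, a_m=0.8, beta=0.4):
    """P(failure | peak ground acceleration = a), lognormal model.
    a_m = median capacity in g, beta = log-standard deviation (assumed values)."""
    return 0.5 * (1.0 + erf(log(a / a_m) / (beta * sqrt(2.0))))

# Discrete convolution with an assumed annual hazard: P(PGA in each bin)
bins = [0.2, 0.4, 0.8, 1.6]           # bin-centre accelerations (g), assumed
p_bin = [1e-2, 1e-3, 1e-4, 1e-5]      # assumed annual probabilities per bin
p_limit = sum(fragility(a) * p for a, p in zip(bins, p_bin))
```

By construction the fragility passes through 0.5 at the median capacity and rises monotonically with acceleration.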
Guyomarc'h, Pierre; Bruzek, Jaroslav
2011-05-20
Identification in forensic anthropology and the definition of a biological profile in bioarchaeology are essential to each of those fields and use the same methodologies. Sex, age, stature and ancestry can be conclusive or dispensable, depending on the field. The Fordisc(®) 3.0 computer program was developed to aid in the identification of the sex, stature and ancestry of skeletal remains by exploiting the Forensic Data Bank (FDB) and computing discriminant function analyses (DFAs). Although widely used, this tool has been recently criticised, principally when used to determine ancestry. Two sub-samples of individuals of known sex were drawn from French (n=50) and Thai (n=91) osteological collections and used to assess the reliability of sex determination using Fordisc(®) 3.0 with 12 cranial measurements. Comparisons were made using the whole FDB as well as using select groups, taking into account the posterior and typicality probabilities. The results of Fordisc(®) 3.0 vary between 52.2% and 77.8% depending on the options and groups selected. Tests of published discriminant functions and the computation of specific DFA were performed in order to discuss the applicability of this software and, overall, to question the pertinence of the use of DFA and linear distances in sex determination, in light of the huge cranial morphological variability. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Structural reliability analysis based on the cokriging technique
Zhao Wei; Wang Wei; Dai Hongzhe; Xue Guofeng
2010-01-01
Approximation methods are widely used in structural reliability analysis because they are simple to create and provide explicit functional relationships between the responses and variables instead of the implicit limit state function. Recently, the kriging method, a semi-parametric interpolation technique that can be used for deterministic optimization and structural reliability, has gained popularity. However, to fully exploit the kriging method, especially in high-dimensional problems, a large number of sample points should be generated to fill the design space, and this can be very expensive and even impractical in practical engineering analysis. Therefore, in this paper a new method, the cokriging method, which is an extension of kriging, is proposed to calculate the structural reliability. The cokriging approximation incorporates secondary information such as the values of the gradients of the function being approximated. This paper explores the use of the cokriging method for structural reliability problems by comparing it with the kriging method on some numerical examples. The results indicate that the cokriging procedure described in this work can generate approximation models that improve the accuracy and efficiency of structural reliability analysis and is a viable alternative to kriging.
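The kriging baseline the paper compares against can be sketched minimally as a simple kriging interpolator with a Gaussian correlation function; the training function, sites, and hyperparameter below are illustrative assumptions, and the gradient-augmented (cokriging) extension is omitted.

```python
import numpy as np

def gauss_corr(a, b, theta):
    # Gaussian correlation between 1-D point sets a and b
    d = a[:, None] - b[None, :]
    return np.exp(-theta * d ** 2)

# Train a surrogate of an assumed limit state function g(x) = sin(x)
x_train = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y_train = np.sin(x_train)
theta = 10.0
K = gauss_corr(x_train, x_train, theta) + 1e-10 * np.eye(len(x_train))
weights = np.linalg.solve(K, y_train)   # solve correlation system once

def predict(x_new):
    # Kriging predictor: correlations with training sites times solved weights
    return gauss_corr(np.asarray(x_new, float), x_train, theta) @ weights
```

At the training sites the predictor interpolates exactly (up to the small nugget on the diagonal); cokriging would add gradient observations to the same system, which is what lets it use fewer sample points.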
DATMAN: A reliability data analysis program using Bayesian updating
Becker, M.; Feltus, M.A.
1996-01-01
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately
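The core Bayesian update such a code automates can be sketched with the conjugate gamma-Poisson pair for a component failure rate; the prior parameters and observed evidence below are assumed, not DATMAN defaults.

```python
# Gamma prior on failure rate lambda: shape a0, rate b0 (assumed prior knowledge)
a0, b0 = 2.0, 1000.0          # prior mean = a0/b0 = 2e-3 failures/hour
r, T = 3, 5000.0              # new evidence: r failures in T operating hours

# Poisson likelihood + gamma prior -> gamma posterior (conjugacy)
a1, b1 = a0 + r, b0 + T
posterior_mean = a1 / b1      # updated point estimate of the failure rate
```

As failures and operating time accumulate, the posterior is driven by the data and the prior's influence fades, which is the updating behaviour RCM programs rely on.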
Qualitative analysis in reliability and safety studies
Worrell, R.B.; Burdick, G.R.
1976-01-01
The qualitative evaluation of system logic models is described as it pertains to assessing the reliability and safety characteristics of nuclear systems. Qualitative analysis of system logic models, i.e., models couched in an event (Boolean) algebra, is defined, and the advantages inherent in qualitative analysis are explained. Certain qualitative procedures that were developed as a part of fault-tree analysis are presented for illustration. Five fault-tree analysis computer-programs that contain a qualitative procedure for determining minimal cut sets are surveyed. For each program the minimal cut-set algorithm and limitations on its use are described. The recently developed common-cause analysis for studying the effect of common-causes of failure on system behavior is explained. This qualitative procedure does not require altering the fault tree, but does use minimal cut sets from the fault tree as part of its input. The method is applied using two different computer programs. 25 refs
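The top-down expansion these programs implement for minimal cut sets can be sketched in the spirit of MOCUS: expand gates until only basic events remain, then discard non-minimal supersets. The gate structure below is a made-up example.

```python
def minimal_cut_sets(gates, top):
    """gates: name -> ("AND"|"OR", [children]); names not in gates are basic events."""
    cuts = [frozenset([top])]
    changed = True
    while changed:
        changed = False
        new = []
        for cut in cuts:
            gate = next((g for g in cut if g in gates), None)
            if gate is None:
                new.append(cut)        # fully expanded: basic events only
                continue
            changed = True
            kind, kids = gates[gate]
            rest = cut - {gate}
            if kind == "AND":
                new.append(rest | set(kids))          # all children together
            else:
                new.extend(rest | {k} for k in kids)  # OR: one cut per child
        cuts = new
    minimal = []
    for c in sorted(set(cuts), key=len):  # drop supersets of shorter cuts
        if not any(m <= c for m in minimal):
            minimal.append(c)
    return minimal

# Hypothetical fault tree: TOP = G1 AND A, with G1 = A OR B
gates = {"TOP": ("AND", ["G1", "A"]), "G1": ("OR", ["A", "B"])}
cut_sets = minimal_cut_sets(gates, "TOP")   # -> [frozenset({'A'})]
```

The surveyed production codes add the limitations the abstract mentions (tree size, complementation, NOT gates), which this sketch ignores.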
The quantitative failure of human reliability analysis
Bennett, C.T.
1995-07-01
This philosophical treatise argues the merits of Human Reliability Analysis (HRA) in the context of the nuclear power industry. The author attacks historic and current HRA as having failed to inform policy makers who make risk-based decisions about the contribution of humans to systems performance. He argues for an HRA based on Bayesian (fact-based) inferential statistics, which advocates a systems analysis process that employs cogent heuristics when using opinion, and tempers itself with a rational debate over the weight given to subjective and empirical probabilities.
Infusing Reliability Techniques into Software Safety Analysis
Shi, Ying
2015-01-01
Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.
The reliability and validity of a sexual functioning questionnaire.
Corty, E W; Althof, S E; Kurit, D M
1996-01-01
The present study assessed the reliability and validity of a measure of sexual functioning, the CMSH-SFQ, for male patients and their partners. The CMSH-SFQ measures erectile and orgasmic functioning, sexual drive, frequency of sexual behavior, and sexual satisfaction. Test-retest reliability was assessed with 19 males and 19 females for the baseline CMSH-SFQ. Criterion validity was measured by comparing the answers of 25 male patients to those of their partners at baseline and follow-up. The majority of items had acceptable levels of reliability and validity. The CMSH-SFQ provides a reliable and valid device that can be used to measure global sexual functioning in men and their partners and may be used to evaluate the efficacy of treatments for sexual dysfunctions. Limitations and suggestions for use of the CMSH-SFQ are addressed.
Reliability analysis of pipe whip impacts
Alzbutas, R.; Dundulis, G.; Kulak, R.F.; Marchertas, P.V.
2003-01-01
A probabilistic analysis of a group distribution header (GDH) guillotine break and the damage resulting from the failed GDH impacting against a neighbouring wall was carried out for the Ignalina RBMK-1500 reactor. The NEPTUNE software system was used for the deterministic transient analysis of a GDH guillotine break. Many deterministic analyses were performed using different values of the random variables that were specified by ProFES software. All the deterministic results were transferred to the ProFES system, which then performed probabilistic analyses of piping failure and wall damage. The Monte Carlo Simulation (MCS) method was used to study the sensitivity of the response variables and the effect of uncertainties in material properties and geometry parameters on the probability of limit states. The First Order Reliability Method (FORM) was used to study the probability of failure of the impacted wall and the support wall. The Response Surface (RS/MCS) method was used to express the failure probability as a function of the impact load and to investigate the dependence between impact load and failure probability. The results of the probability analyses for a whipping GDH impacting onto an adjacent wall show that: (i) there is a 0.982 probability that after a GDH guillotine break contact between the GDH and the wall will occur; (ii) there is a probability of 0.013 that the ultimate tensile strength of concrete at the impact location will be reached, and a through-crack may open; (iii) there is a probability of 0.0126 that the ultimate compressive strength of concrete at the GDH support location will be reached, and the concrete may fail; (iv) at the impact location in the adjacent wall, there is a probability of 0.327 that the ultimate tensile strength of the rebars in the first layer will be reached and the rebars will fail; (v) at the GDH support location, there is a probability of 0.11 that the ultimate stress of the rebars in the first layer will be reached and the rebars will fail.
Subset simulation for structural reliability sensitivity analysis
Song Shufang; Lu Zhenzhou; Qiao Hongwei
2009-01-01
Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of Subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of the basic variable is transformed into a set of RS of conditional failure probabilities with respect to that distribution parameter. By use of the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae of the RS estimator, its variance and its coefficient of variation are derived in detail. The results of the illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and for structural systems with single and multiple failure modes.
Recent advances in computational structural reliability analysis methods
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-10-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
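For the single-mode case, the first-order reliability index and failure probability can be sketched for an assumed normal capacity R and demand S, with a Monte Carlo cross-check; all numeric values are illustrative.

```python
import numpy as np
from math import erf, sqrt

# Assumed independent normal capacity R and demand S; limit state g = R - S
mu_R, sig_R = 300.0, 30.0
mu_S, sig_S = 200.0, 40.0

beta = (mu_R - mu_S) / sqrt(sig_R**2 + sig_S**2)   # first-order reliability index
pf = 0.5 * (1.0 - erf(beta / sqrt(2.0)))           # P(g < 0) = Phi(-beta), exact here

# Monte Carlo cross-check of the same failure probability
rng = np.random.default_rng(0)
g = rng.normal(mu_R, sig_R, 200_000) - rng.normal(mu_S, sig_S, 200_000)
pf_mc = (g < 0).mean()
```

Because g is itself normal in this linear case, the first-order result is exact; the advanced methods the abstract surveys are about recovering comparable accuracy when g is nonlinear or the system has multiple interacting modes.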
Reliability Correction for Functional Connectivity: Theory and Implementation
Mueller, Sophia; Wang, Danhong; Fox, Michael D.; Pan, Ruiqi; Lu, Jie; Li, Kuncheng; Sun, Wei; Buckner, Randy L.; Liu, Hesheng
2016-01-01
Network properties can be estimated using functional connectivity MRI (fcMRI). However, regional variation of the fMRI signal causes systematic biases in network estimates including correlation attenuation in regions of low measurement reliability. Here we computed the spatial distribution of fcMRI reliability using longitudinal fcMRI datasets and demonstrated how pre-estimated reliability maps can correct for correlation attenuation. As a test case of reliability-based attenuation correction we estimated properties of the default network, where reliability was significantly lower than average in the medial temporal lobe and higher in the posterior medial cortex, heterogeneity that impacts estimation of the network. Accounting for this bias using attenuation correction revealed that the medial temporal lobe’s contribution to the default network is typically underestimated. To render this approach useful to a greater number of datasets, we demonstrate that test-retest reliability maps derived from repeated runs within a single scanning session can be used as a surrogate for multi-session reliability mapping. Using data segments with different scan lengths between 1 and 30 min, we found that test-retest reliability of connectivity estimates increases with scan length while the spatial distribution of reliability is relatively stable even at short scan lengths. Finally, analyses of tertiary data revealed that reliability distribution is influenced by age, neuropsychiatric status and scanner type, suggesting that reliability correction may be especially important when studying between-group differences. Collectively, these results illustrate that reliability-based attenuation correction is an easily implemented strategy that mitigates certain features of fMRI signal nonuniformity. PMID:26493163
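The attenuation correction underlying this approach is the classical Spearman disattenuation formula; the observed correlation and reliability values below are illustrative, not figures from the study.

```python
from math import sqrt

def disattenuate(r_obs, rel_x, rel_y):
    # Corrected correlation = observed correlation / sqrt(product of reliabilities)
    return r_obs / sqrt(rel_x * rel_y)

# e.g. an observed r = 0.30 between two regions whose test-retest
# reliabilities are 0.50 and 0.72 (assumed values)
r_true = disattenuate(0.30, 0.50, 0.72)
```

A region with low measurement reliability (here 0.50, as the paper reports for the medial temporal lobe relative to average) has its connectivity systematically underestimated, and the correction scales it back up.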
Reliability Analysis of Free Jet Scour Below Dams
Chuanqi Li
2012-12-01
Full Text Available Current formulas for calculating scour depth below a free overfall are mostly deterministic in nature and do not adequately consider the uncertainties of the various scouring parameters. A reliability-based assessment of scour, taking into account the uncertainties of the parameters and coefficients involved, should be performed. This paper studies the reliability of a dam foundation under the threat of scour. A model for calculating the reliability of scour and estimating the probability of failure of the dam foundation subjected to scour is presented. The Maximum Entropy Method is applied to construct the probability density function (PDF) of the performance function subject to the moment constraints. Monte Carlo simulation (MCS) is applied for uncertainty analysis. An example is considered, the reliability of its scour is computed, and the influence of the various random variables on the probability of failure is analyzed.
A taxonomy for human reliability analysis
Beattie, J.D.; Iwasa-Madge, K.M.
1984-01-01
A human interaction taxonomy (classification scheme) was developed to facilitate human reliability analysis in a probabilistic safety evaluation of a nuclear power plant, being performed at Ontario Hydro. A human interaction occurs, by definition, when operators or maintainers manipulate, or respond to indication from, a plant component or system. The taxonomy aids the fault tree analyst by acting as a heuristic device. It helps define the range and type of human errors to be identified in the construction of fault trees, while keeping the identification by different analysts consistent. It decreases the workload associated with preliminary quantification of the large number of identified interactions by including a category called 'simple interactions'. Fault tree analysts quantify these according to a procedure developed by a team of human reliability specialists. The interactions which do not fit into this category are called 'complex' and are quantified by the human reliability team. The taxonomy is currently being used in fault tree construction in a probabilistic safety evaluation. As far as can be determined at this early stage, the potential benefits of consistency and completeness in identifying human interactions and streamlining the initial quantification are being realized
System reliability analysis with natural language and expert's subjectivity
Onisawa, T.
1996-01-01
This paper introduces natural language expressions and expert's subjectivity to system reliability analysis. To this end, the paper defines a subjective measure of reliability and presents a method of system reliability analysis using the measure. The subjective measure of reliability corresponds to natural language expressions of reliability estimation, which is represented by a fuzzy set defined on [0,1]. The presented method deals with the dependence among subsystems and employs parametrized operations on subjective measures of reliability which can reflect the expert's subjectivity towards the analyzed system. The analysis results are also expressed by linguistic terms. Finally the paper gives an example of system reliability analysis by the presented method.
Reliability analysis of containment isolation systems
Pelto, P.J.; Ames, K.R.; Gallucci, R.H.
1985-06-01
This report summarizes the results of the Reliability Analysis of Containment Isolation System Project. Work was performed in five basic areas: design review, operating experience review, related research review, generic analysis and plant specific analysis. Licensee Event Reports (LERs) and Integrated Leak Rate Test (ILRT) reports provided the major sources of containment performance information used in this study. Data extracted from LERs were assembled into a computer data base. Qualitative and quantitative information developed for containment performance under normal operating conditions and design basis accidents indicate that there is room for improvement. A rough estimate of overall containment unavailability for relatively small leaks which violate plant technical specifications is 0.3. An estimate of containment unavailability due to large leakage events is in the range of 0.001 to 0.01. These estimates are dependent on several assumptions (particularly on event duration times) which are documented in the report
N A Kovyazina; N A Alhutova; N N Zybina; N M Kalinina
2014-01-01
The goal of the study was to demonstrate the multilevel laboratory quality management system and point at the methods of estimating the reliability and increasing the amount of information content of the laboratory results (on the example of a laboratory case). Results. The article examines the stages of laboratory quality management which have helped to estimate the reliability of the results of determining Free T3, Free T4 and TSH. The measurement results are presented by the expanded uncertainty and the evaluation of the dynamics. Conclusion. Compliance with mandatory measures for the laboratory quality management system enables laboratories to obtain reliable results and to calculate parameters that are able to increase the amount of information content of laboratory tests in clinical decision making.
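The expanded-uncertainty reporting mentioned here follows the GUM-style combination of uncertainty components; the component values and coverage factor below are assumed for illustration, not the study's figures.

```python
from math import sqrt

# Assumed standard uncertainty components for one analyte, e.g. repeatability,
# calibration, and between-run bias (illustrative values only)
u_components = [0.05, 0.12, 0.08]

u_c = sqrt(sum(u * u for u in u_components))  # combined standard uncertainty (RSS)
k = 2.0                                       # coverage factor, ~95 % coverage
U = k * u_c                                   # expanded uncertainty to report
```

A result would then be reported as value ± U, which is what lets a clinician judge whether an apparent change between two measurements exceeds the measurement uncertainty.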
Advancing Usability Evaluation through Human Reliability Analysis
Ronald L. Boring; David I. Gertman
2005-01-01
This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues
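The quantification step can be sketched in SPAR-H style: a nominal error probability scaled by performance-shaping-factor multipliers, with an adjustment that keeps the result a valid probability. The nominal value and multipliers below are hypothetical, not the published SPAR-H tables.

```python
nominal_hep = 0.01          # nominal human error probability (assumed)
# Hypothetical PSF multipliers derived from usability heuristics:
# >1 degrades performance, <1 improves it
psf = {"complexity": 2.0, "experience": 0.5, "interface_quality": 5.0}

composite = 1.0
for multiplier in psf.values():
    composite *= multiplier

uep = nominal_hep * composite   # naive product; can exceed 1 for bad PSFs
# SPAR-H style adjustment bounding the result to [0, 1]
uep_adj = nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)
```

As the abstract notes, the resulting UEP is not a literal error probability, but the multiplicative structure gives heuristic findings a common quantitative scale for prioritization.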
Diakoptical reliability analysis of transistorized systems
Kontoleon, J.M.; Lynn, J.W.; Green, A.E.
1975-01-01
Limitations on both high-speed core availability and the computation time required for assessing the reliability of large and complex electronic systems, such as those used for the protection of nuclear reactors, are very serious restrictions which continuously confront the reliability analyst. Diakoptic methods simplify the solution of the electrical-network problem by subdividing a given network into a number of independent subnetworks and then interconnecting the solutions of these smaller parts by a systematic process involving transformations based on connection-matrix elements associated with the interconnecting links. However, the interconnection process is very complicated and may be used only if the original system has been cut in such a manner that a relation can be established between the constraints appearing at both sides of the cut. Also, in dealing with transistorized systems, one of the difficulties encountered is that of modelling their performance adequately under various operating conditions, since their parameters are strongly affected by the imposed voltage and current levels. In this paper a new interconnection approach is presented which may be of use in the reliability analysis of large transistorized systems. It is based on the partial optimization of the subdivisions of the torn network as well as on the optimization of the torn paths. The solution of the subdivisions is based on the principles of algebraic topology, with an algebraic structure relating the physical variables in a topological structure which defines the interconnection of the discrete elements. Transistors, and other nonlinear devices, are modelled using their actual characteristics under normal and abnormal operating conditions. So-called k factors are used to facilitate accounting for electrical stresses. The approach is demonstrated by way of an example. (author)
A note on reliability estimation of functionally diverse systems
Littlewood, B.; Popov, P.; Strigini, L.
1999-01-01
It has been argued that functional diversity might be a plausible means of claiming independence of failures between two versions of a system. We present a model of functional diversity, in the spirit of earlier models of diversity such as those of Eckhardt and Lee, and Hughes. In terms of the model, we show that the claims for independence between functionally diverse systems seem rather unrealistic. Instead, it seems likely that functionally diverse systems will exhibit positively correlated failures, and thus will be less reliable than an assumption of independence would suggest. The result does not, of course, suggest that functional diversity is not worthwhile; instead, it places upon the evaluator of such a system the onus to estimate the degree of dependence so as to evaluate the reliability of the system
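The effect the model predicts, positively correlated failures inflating the joint failure probability over the independence assumption, can be illustrated numerically; the marginal failure probability and correlation below are assumed values.

```python
# Two functionally diverse versions, each failing a random demand with
# probability p; failures positively correlated with coefficient rho (assumed)
p = 0.01
rho = 0.3

var = p * (1.0 - p)                 # Bernoulli variance of each failure indicator
p_both_indep = p * p                # joint failure probability under independence
# E[XY] = Cov(X, Y) + E[X]E[Y]; for identical margins Cov = rho * var
p_both_corr = p * p + rho * var
ratio = p_both_corr / p_both_indep  # inflation factor over independence
```

Even a modest correlation makes the two-version system more than an order of magnitude less reliable than the independence claim would suggest, which is the evaluator's burden the abstract describes.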
Reliability Analysis Techniques for Communication Networks in Nuclear Power Plant
Lim, T. J.; Jang, S. C.; Kang, H. G.; Kim, M. C.; Eom, H. S.; Lee, H. J.
2006-09-01
The objective of this project is to investigate and study existing reliability analysis techniques for communication networks in order to develop reliability analysis models for a nuclear power plant's safety-critical networks. It is necessary to make a comprehensive survey of current methodologies for communication network reliability. Major outputs of this study are design characteristics of safety-critical communication networks, efficient algorithms for quantifying the reliability of communication networks, and preliminary models for assessing the reliability of safety-critical communication networks.
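For small networks, two-terminal reliability can be quantified exactly by state enumeration, one of the baseline techniques such a survey covers. The bridge topology and uniform link availability below are assumed for illustration.

```python
from itertools import product

# Assumed 5-link "bridge" network between source "s" and target "t"
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
p = [0.9] * 5                       # availability of each link (assumed)

def connected(up):
    # Depth-first search from "s" over the working links only
    adj = {}
    for (u, v), ok in zip(edges, up):
        if ok:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, stack = {"s"}, ["s"]
    while stack:
        n = stack.pop()
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return "t" in seen

# Sum the probability of every link state in which s still reaches t
R = 0.0
for state in product([0, 1], repeat=len(edges)):
    prob = 1.0
    for ok, pi in zip(state, p):
        prob *= pi if ok else 1 - pi
    if connected(state):
        R += prob
```

Enumeration is exponential in the number of links, which is why the efficient algorithms the project targets (factoring, sum-of-disjoint-products, bounds) matter for realistic plant networks.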
Ueda, Y. [Osaka University, Osaka (Japan). Welding Research Institute; Masaoka, K.; Okada, H. [University of Osaka Prefecture, Osaka (Japan). Faculty of Engineering
1996-12-31
A reliability analysis was performed on the ultimate strength of a hull by introducing reliability engineering into the idealized structural unit method. Elements developed in the present study were applied to a model of an actual structure to show that even an analysis requiring much time under the finite element method can be performed in a short time and at high accuracy when this method is used. An analysis with bending moment and shear force acting simultaneously was performed on a model used as a structure in experiments carried out by Nishihara, assuming pure bending moment and longitudinal strength during slamming. Then, a reliability analysis was conducted on the same model based on this analysis method to investigate the ultimate strength. In the analysis of ultimate strength when bending and shearing assumed for slamming act simultaneously, the axial force in the hull side decreases as loading increases, whereby how the shearing force increases can be identified clearly. Although the existence of initial bends reduces the strength, the effect of variance in the vicinity of the average value on the reliability is rather small, while the effect due to variance in yield stress is greater. 27 refs., 14 figs., 4 tabs.
Reliability analysis for new technology-based transmitters
Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Charpentier, Dominique [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France)
2011-02-15
The reliability analysis of new technology-based transmitters has to deal with specific issues: various interactions between both material elements and functions, undefined behaviours under faulty conditions, several transmitted data, and little reliability feedback. To handle these particularities, a '3-step' model is proposed, based on goal tree-success tree (GTST) approaches to represent both the functional and material aspects, and includes the faults and failures as a third part for supporting reliability analyses. The behavioural aspects are provided by relationship matrices, also denoted master logic diagrams (MLD), with stochastic values which represent direct relationships between system elements. Relationship analyses are then proposed to assess the effect of any fault or failure on any material element or function. Taking these relationships into account, the probabilities of malfunction and failure modes are evaluated according to time. Furthermore, uncertainty analyses tend to show that even if the input data and system behaviour are not well known, these previous results can be obtained in a relatively precise way. An illustration is provided by a case study on an infrared gas transmitter. These properties make the proposed model and corresponding reliability analyses especially suitable for intelligent transmitters (or 'smart sensors').
Structural Reliability Analysis of Wind Turbines: A Review
Zhiyu Jiang
2017-12-01
Full Text Available The paper presents a detailed review of the state-of-the-art research activities on structural reliability analysis of wind turbines between the 1990s and 2017. We describe the reliability methods including the first- and second-order reliability methods and the simulation reliability methods and show the procedure for and application areas of structural reliability analysis of wind turbines. Further, we critically review the various structural reliability studies on rotor blades, bottom-fixed support structures, floating systems and mechanical and electrical components. Finally, future applications of structural reliability methods to wind turbine designs are discussed.
Research review and development trends of human reliability analysis techniques
Li Pengcheng; Chen Guohua; Zhang Li; Dai Licao
2011-01-01
Human reliability analysis (HRA) methods are reviewed. The theoretical basis of human reliability analysis, human error mechanisms, the key elements of HRA methods and the existing HRA methods are respectively introduced and assessed. Their shortcomings, the current research hotspots and difficult problems are identified. Finally, it takes a close look at the trends in human reliability analysis methods. (authors)
Reliability Analysis of Tubular Joints in Offshore Structures
Thoft-Christensen, Palle; Sørensen, John Dalsgaard
1987-01-01
Reliability analysis of single tubular joints and offshore platforms with tubular joints is presented. The failure modes considered are yielding, punching, buckling and fatigue failure. Element reliability as well as systems reliability approaches are used and illustrated by several examples... Finally, optimal design of tubular joints with reliability constraints is discussed and illustrated by an example...
Reliability analysis of service water system under earthquake
Yu Yu; Qian Xiaoming; Lu Xuefeng; Wang Shengfei; Niu Fenglei
2013-01-01
The service water system is one of the important safety systems in a nuclear power plant, and its failure probability is usually obtained by system reliability analysis. The probability of equipment failure under earthquake is a function of the peak acceleration of the earthquake motion, while the occurrence of an earthquake is random, so the traditional fault tree method used in current probabilistic safety assessment is not powerful enough to deal with this kind of conditional probability problem. An analysis framework for system reliability evaluation under seismic conditions is put forward in this paper, in which Monte Carlo simulation is used to handle the conditional probability problem. The annual failure probability of the service water system was calculated, and a failure probability of 1.46×10⁻⁴ per year was obtained. The analysis result is in accordance with data on equipment seismic resistance capability, and the rationality of the model is validated. (authors)
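The Monte Carlo treatment of the seismic conditional-probability problem can be sketched as follows. The occurrence rate, ground-motion distribution and fragility parameters below are hypothetical placeholders, so the result will not reproduce the paper's 1.46×10⁻⁴ per year; the point is only the structure: sample earthquakes per year, sample a peak ground acceleration per event, then fail the system with the fragility-curve probability.

```python
import math
import random

random.seed(1)

# All parameters are hypothetical stand-ins, not the paper's data.
LAM = 0.05                 # earthquake occurrence rate, events per year
MED_A, BETA_A = 0.15, 0.6  # lognormal peak ground acceleration (g): median, log-std
MED_C, BETA_C = 0.50, 0.4  # lognormal fragility: median capacity (g), log-std

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def poisson_sample(lam):
    # Knuth's multiplication method, adequate for small rates
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def annual_failure_probability(n_years=100_000):
    failures = 0
    for _ in range(n_years):
        for _ in range(poisson_sample(LAM)):      # earthquakes this year
            a = MED_A * math.exp(BETA_A * random.gauss(0.0, 1.0))
            p_fail = std_normal_cdf(math.log(a / MED_C) / BETA_C)  # fragility
            if random.random() < p_fail:
                failures += 1
                break                              # count the year as failed once
    return failures / n_years

print(f"annual failure probability ~ {annual_failure_probability():.1e}")
```

A plain fault tree would need the unconditional basic-event probabilities up front; here the conditioning on the sampled acceleration happens naturally inside the simulation loop.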
Human Reliability Analysis For Computerized Procedures
Boring, Ronald L.; Gertman, David I.; Le Blanc, Katya
2011-01-01
This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.
Reliability Analysis of Structural Timber Systems
Sørensen, John Dalsgaard; Hoffmeyer, P.
2000-01-01
Structural systems like timber trussed rafters and roof elements made of timber can be expected to have some degree of redundancy and nonlinear/plastic behaviour when the loading consists of, for example, snow or imposed load. In this paper this system effect is modelled and the statistic... of variation. In the paper a stochastic model is described for the strength of a single piece of timber, taking into account the stochastic variation of the strength and stiffness with length. Also, stochastic models for different types of loads are formulated. First, simple representative systems with different... types of redundancy and non-linearity are considered. The statistical characteristics of the load bearing capacity are determined by reliability analysis. Next, more complex systems are considered, modelling the mechanical behaviour of timber roof elements/stressed skin panels made of timber. Using...
Human reliability analysis of dependent events
Swain, A.D.; Guttmann, H.E.
1977-01-01
In the human reliability analysis in WASH-1400, the continuous variable of degree of interaction among human events was approximated by selecting four points on this continuum to represent the entire continuum. The four points selected were identified as zero coupling (i.e., zero dependence), complete coupling (i.e., complete dependence), and two intermediate points--loose coupling (a moderate level of dependence) and tight coupling (a high level of dependence). The paper expands the WASH-1400 treatment of common mode failure due to the interaction of human activities. Mathematical expressions for the above four levels of dependence are derived for parallel and series systems. The psychological meaning of each level of dependence is illustrated by examples, with probability tree diagrams to illustrate the use of conditional probabilities resulting from the interaction of human actions in nuclear power plant tasks
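The dependence treatment sketched above was later codified in the THERP handbook (NUREG/CR-1278) as five discrete levels with simple conditional-probability equations, where n is the nominal (unconditional) human error probability of the dependent task. A sketch:

```python
# THERP conditional human error probability (HEP) equations for the five
# dependence levels of NUREG/CR-1278; n is the nominal HEP of the task.

def conditional_hep(n, level):
    return {
        "ZD": n,                  # zero dependence
        "LD": (1 + 19 * n) / 20,  # low dependence
        "MD": (1 + 6 * n) / 7,    # moderate dependence
        "HD": (1 + n) / 2,        # high dependence
        "CD": 1.0,                # complete dependence
    }[level]

for level in ("ZD", "LD", "MD", "HD", "CD"):
    print(level, round(conditional_hep(0.01, level), 4))
# ZD 0.01, LD 0.0595, MD 0.1514, HD 0.505, CD 1.0
```

Note how even "low" dependence dominates a small nominal HEP: the conditional probability floors at roughly 1/20, 1/7 and 1/2 for the three intermediate levels regardless of how small n is.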
Reliability analysis of steel-containment strength
Greimann, L.G.; Fanous, F.; Wold-Tinsae, A.; Ketalaar, D.; Lin, T.; Bluhm, D.
1982-06-01
A best estimate and uncertainty assessment of the resistance of the St. Lucie, Cherokee, Perry, WPPSS and Browns Ferry containment vessels was performed. The Monte Carlo simulation technique and second moment approach were compared as a means of calculating the probability distribution of the containment resistance. A uniform static internal pressure was used and strain ductility was taken as the failure criterion. Approximate methods were developed and calibrated with finite element analysis. Both approximate and finite element analyses were performed on the axisymmetric containment structure. An uncertainty assessment of the containment strength was then performed by the second moment reliability method. Based upon the approximate methods, the cumulative distribution for the resistance of each of the five containments (shell modes only) is presented
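The two techniques compared above, the second moment approach and Monte Carlo simulation, can be illustrated for a linear limit state g = R - S with normal resistance and load. All numbers are illustrative placeholders, not the St. Lucie results; for this normal-normal case the second-moment answer is exact, which is why the two agree.

```python
import math
import random

# Hypothetical limit state g = R - S: containment resistance R versus
# internal pressure demand S, both normal (illustrative numbers only).
mu_r, sd_r = 100.0, 10.0
mu_s, sd_s = 60.0, 12.0

# Second-moment (FOSM) reliability index and failure probability
beta = (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)
pf_fosm = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # Phi(-beta)

# Cross-check with crude Monte Carlo simulation
random.seed(0)
n = 200_000
fails = sum(random.gauss(mu_r, sd_r) <= random.gauss(mu_s, sd_s) for _ in range(n))

print(f"beta = {beta:.2f}, Pf(FOSM) = {pf_fosm:.2e}, Pf(MC) ~ {fails / n:.2e}")
```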
Standardizing the practice of human reliability analysis
Hallbert, B.P.
1993-01-01
The practice of human reliability analysis (HRA) within the nuclear industry varies greatly in terms of posited mechanisms that shape human performance, methods of characterizing and analytically modeling human behavior, and the techniques that are employed to estimate the frequency with which human error occurs. This variation has been a source of contention among HRA practitioners regarding the validity of results obtained from different HRA methods. It has also resulted in attempts to develop standard methods and procedures for conducting HRAs. For many of the same reasons, the practice of HRA has not been standardized or has been standardized only to the extent that individual analysts have developed heuristics and consistent approaches in their practice of HRA. From the standpoint of consumers and regulators, this has resulted in a lack of clear acceptance criteria for the assumptions, modeling, and quantification of human errors in probabilistic risk assessments
Human Reliability Analysis for Small Modular Reactors
Ronald L. Boring; David I. Gertman
2012-06-01
Because no human reliability analysis (HRA) method was specifically developed for small modular reactors (SMRs), the application of any current HRA method to SMRs represents tradeoffs. A first-generation HRA method like THERP provides clearly defined activity types, but these activity types do not map to the human-system interface or concept of operations confronting SMR operators. A second-generation HRA method like ATHEANA is flexible enough to be used for SMR applications, but there is currently insufficient guidance for the analyst, requiring considerably more first-of-a-kind analyses and extensive SMR expertise in order to complete a quality HRA. Although no current HRA method is optimized to SMRs, it is possible to use existing HRA methods to identify errors, incorporate them as human failure events in the probabilistic risk assessment (PRA), and quantify them. In this paper, we provide preliminary guidance to assist the human reliability analyst and reviewer in understanding how to apply current HRA methods to the domain of SMRs. While it is possible to perform a satisfactory HRA using existing HRA methods, ultimately it is desirable to formally incorporate SMR considerations into the methods. This may require the development of new HRA methods. More practicably, existing methods need to be adapted to incorporate SMRs. Such adaptations may take the form of guidance on the complex mapping between conventional light water reactors and small modular reactors. While many behaviors and activities are shared between current plants and SMRs, the methods must adapt if they are to perform a valid and accurate analysis of plant personnel performance in SMRs.
Task Decomposition in Human Reliability Analysis
Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory
2014-06-01
In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down— defined as a subset of the PSA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
Reliability of the Hazelbaker Assessment Tool for Lingual Frenulum Function
James Jennifer P
2006-03-01
Full Text Available Abstract Background About 3% of infants are born with a tongue-tie, which may lead to breastfeeding problems such as ineffective latch, painful attachment or poor weight gain. The Hazelbaker Assessment Tool for Lingual Frenulum Function (HATLFF) has been developed to give a quantitative assessment of the tongue-tie and a recommendation about frenotomy (release of the frenulum). The aim of this study was to assess the inter-rater reliability of the HATLFF. Methods Fifty-eight infants referred to the Breastfeeding Education and Support Services (BESS) at The Royal Women's Hospital for assessment of tongue-tie and 25 control infants were assessed by two clinicians independently. Results The Appearance items received kappa values between about 0.4 and 0.6, which represents "moderate" reliability. The first three Function items (lateralization, lift and extension of tongue) had kappa values over 0.65, which indicates "substantial" agreement. The four Function items relating to infant sucking (spread, cupping, peristalsis and snapback) received low kappa values with insignificant p values. There was 96% agreement between the two assessors on the recommendation for frenotomy (kappa 0.92, excellent agreement). The study found that the Function Score can be more simply assessed using only the first three function items (i.e., not scoring the sucking items), with a cut-off of ≤4 for recommendation of frenotomy. Conclusion We found that the HATLFF has a high reliability in a study of infants with tongue-tie and control infants.
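The agreement statistic used throughout this study, Cohen's kappa, corrects observed agreement for the agreement expected by chance. A minimal sketch with hypothetical frenotomy recommendations (1 = recommend) from two assessors:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores on the same cases."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n    # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical ratings; the two assessors disagree on one of ten infants.
a = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
b = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
print(round(cohens_kappa(a, b), 2))  # 0.8
```

Here 90% raw agreement becomes kappa = 0.8 once the 50% chance agreement is removed, which is why the study reports kappa rather than simple percent agreement.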
Modeling human reliability analysis using MIDAS
Boring, R. L.
2006-01-01
This paper documents current efforts to infuse human reliability analysis (HRA) into human performance simulation. The Idaho National Laboratory is teamed with NASA Ames Research Center to bridge the SPAR-H HRA method with NASA's Man-machine Integration Design and Analysis System (MIDAS) for use in simulating and modeling the human contribution to risk in nuclear power plant control room operations. It is anticipated that the union of MIDAS and SPAR-H will pave the path for cost-effective, timely, and valid simulated control room operators for studying current and next generation control room configurations. This paper highlights considerations for creating the dynamic HRA framework necessary for simulation, including event dependency and granularity. This paper also highlights how the SPAR-H performance shaping factors can be modeled in MIDAS across static, dynamic, and initiator conditions common to control room scenarios. This paper concludes with a discussion of the relationship of the workload factors currently in MIDAS and the performance shaping factors in SPAR-H. (authors)
How reliable are Functional Movement Screening scores? A systematic review of rater reliability.
Moran, Robert W; Schneiders, Anthony G; Major, Katherine M; Sullivan, S John
2016-05-01
Several physical assessment protocols to identify intrinsic risk factors for injury aetiology related to movement quality have been described. The Functional Movement Screen (FMS) is a standardised, field-expedient test battery intended to assess movement quality and has been used clinically in preparticipation screening and in sports injury research. To critically appraise and summarise research investigating the reliability of scores obtained using the FMS battery. Systematic literature review. Systematic search of Google Scholar, Scopus (including ScienceDirect and PubMed), EBSCO (including Academic Search Complete, AMED, CINAHL, Health Source: Nursing/Academic Edition), MEDLINE and SPORTDiscus. Studies meeting eligibility criteria were assessed by 2 reviewers for risk of bias using the Quality Appraisal of Reliability Studies checklist. Overall quality of evidence was determined using van Tulder's levels of evidence approach. 12 studies were appraised. Overall, there was a 'moderate' level of evidence in favour of 'acceptable' (intraclass correlation coefficient ≥0.6) inter-rater and intra-rater reliability for composite scores derived from live scoring. For inter-rater reliability of composite scores derived from video recordings there was 'conflicting' evidence, and 'limited' evidence for intra-rater reliability. For inter-rater reliability based on live scoring of individual subtests there was 'moderate' evidence of 'acceptable' reliability (κ≥0.4) for 4 subtests (Deep Squat, Shoulder Mobility, Active Straight-leg Raise, Trunk Stability Push-up) and 'conflicting' evidence for the remaining 3 (Hurdle Step, In-line Lunge, Rotary Stability). This review found 'moderate' evidence that raters can achieve acceptable levels of inter-rater and intra-rater reliability of composite FMS scores when using live ratings. Overall, there were few high-quality studies, and the quality of several studies was impacted by poor study reporting particularly in relation to
On Improving Reliability of SRAM-Based Physically Unclonable Functions
Arunkumar Vijayakumar
2017-01-01
Full Text Available Physically unclonable functions (PUFs) have been touted for their inherent resistance to invasive attacks and low cost in providing a hardware root of trust for various security applications. SRAM PUFs in particular are popular in industry for key/ID generation. Due to intrinsic process variations, each SRAM cell ideally tends to exhibit the same start-up behavior every time it is powered up. SRAM PUFs exploit this start-up behavior. Unfortunately, not all SRAM cells exhibit reliable start-up behavior due to noise susceptibility. Hence, design enhancements are needed for improving reliability. Some of the proposed enhancements in the literature include fuzzy extraction, error-correcting codes and voting mechanisms. All enhancements involve a trade-off between area/power/performance overhead and PUF reliability. This paper presents a design enhancement technique for reliability that improves upon previous solutions. We present simulation results to quantify the improvement in SRAM PUF reliability and efficiency. The proposed technique is shown to generate a 128-bit key in ≤0.2 μs at an area estimate of 4538 μm² with an error rate as low as 10⁻⁶ for an intrinsic error probability of 15%.
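One of the enhancement families mentioned above, majority voting, trades repeated evaluations for reliability. A sketch of the binomial arithmetic, using the paper's 15% intrinsic error probability but hypothetical vote counts (this is not the paper's own technique, which improves on such schemes):

```python
from math import comb

def voted_error_rate(p, n):
    """Probability that an n-sample majority vote of one PUF bit is wrong,
    given an independent per-evaluation bit-error probability p (n odd)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range((n + 1) // 2, n + 1))

# Intrinsic error probability of 15%, as in the paper; n values hypothetical.
for n in (1, 7, 15, 31):
    print(n, f"{voted_error_rate(0.15, n):.1e}")
```

The voted error rate falls quickly with n, but each extra vote costs time and energy, which is exactly the area/power/performance versus reliability trade-off the abstract describes.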
Reliability Analysis and Optimal Design of Monolithic Vertical Wall Breakwaters
Sørensen, John Dalsgaard; Burcharth, Hans F.; Christiani, E.
1994-01-01
Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of the most important failure modes, sliding failure, failure of the foundation and overturning failure are described . Relevant design variables are identified...
Lofgren, E.V.
1985-08-01
This course in System Reliability and Analysis Techniques focuses on the quantitative estimation of reliability at the systems level. Various methods are reviewed, but the structure provided by the fault tree method is used as the basis for system reliability estimates. The principles of fault tree analysis are briefly reviewed. Contributors to system unreliability and unavailability are reviewed, models are given for quantitative evaluation, and the requirements for both generic and plant-specific data are discussed. Also covered are issues of quantifying component faults that relate to the systems context in which the components are embedded. All reliability terms are carefully defined. 44 figs., 22 tabs
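The fault-tree quantification step described above is commonly carried out over minimal cut sets. A sketch of the standard rare-event approximation and the min-cut upper bound, with hypothetical basic events and probabilities:

```python
# Fault tree quantification from minimal cut sets, assuming independent
# basic events. Events, probabilities and cut sets are hypothetical.

basic = {"pump_a": 3e-3, "pump_b": 3e-3, "valve": 1e-4, "power": 5e-5}
cut_sets = [("pump_a", "pump_b"), ("valve",), ("power",)]

def cut_set_prob(cs):
    p = 1.0
    for e in cs:
        p *= basic[e]
    return p

# Rare-event approximation: sum of cut-set probabilities
rare_event = sum(cut_set_prob(cs) for cs in cut_sets)

# Min-cut upper bound: 1 - prod(1 - P(cut set))
upper = 1.0
for cs in cut_sets:
    upper *= 1.0 - cut_set_prob(cs)
upper = 1.0 - upper

print(f"rare-event approx = {rare_event:.3e}, min-cut upper bound = {upper:.3e}")
```

For small probabilities the two agree closely; the redundant pump pair contributes far less than either single-point failure, which is the kind of insight a quantitative fault tree evaluation is meant to surface.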
Human Reliability Analysis Using the Cognitive Reliability and Error Analysis Method (CREAM)
Zahirah Alifia Maulida
2015-01-01
Full Text Available Workplace accidents in grinding and welding have ranked highest over the last five years at PT. X. These accidents are caused by human error, which arises from the influence of both the physical and non-physical work environment. This study uses scenarios to predict and reduce the likelihood of human error with the CREAM (Cognitive Reliability and Error Analysis Method) approach. CREAM is a human reliability analysis method for obtaining the Cognitive Failure Probability (CFP), which can be determined in two ways: the basic method and the extended method. The basic method yields only a general failure probability, while the extended method yields a CFP for each task. The results show that the factors influencing errors in grinding and welding work are the adequacy of the organisation, the adequacy of the man-machine interface (MMI) and operational support, the availability of procedures/plans, and the adequacy of training and experience. The cognitive aspect with the highest error value in grinding work is planning, with a CFP of 0.3; in welding work it is the cognitive aspect of execution, with a CFP of 0.18. To reduce the cognitive error values in grinding and welding work, the recommendations are to provide regular training, more detailed work instructions, and familiarisation with the tools. Keywords: CREAM (Cognitive Reliability and Error Analysis Method), HRA (human reliability analysis), cognitive error
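CREAM's basic method mentioned above rates nine Common Performance Conditions (CPCs) and maps the counts of reducing/improving effects to a control mode with an associated failure-probability interval. The interval values below follow Hollnagel (1998); the boundary logic is a simplified reading of the control-mode diagram, and the example ratings are hypothetical:

```python
# Sketch of CREAM's basic method. The nine CPCs are each judged as
# reducing, insignificant, or improving for reliability; the counts of
# reducing/improving effects select a control mode. Interval values per
# Hollnagel (1998); the mode-selection boundaries below are a simplified
# reading of the diagram, not the exact published region borders.

FAILURE_INTERVALS = {          # (lower, upper) action failure probability
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3, 1e-1),
    "opportunistic": (1e-2, 0.5),
    "scrambled":     (1e-1, 1.0),
}

def control_mode(n_reducing, n_improving):
    if n_reducing > 5:
        return "scrambled"
    if n_reducing > 2:
        return "opportunistic"
    if n_reducing == 0 and n_improving > 3:
        return "strategic"
    return "tactical"

mode = control_mode(n_reducing=4, n_improving=1)
print(mode, FAILURE_INTERVALS[mode])
```

The extended method then refines this by assigning a CFP to each cognitive activity (observation, interpretation, planning, execution), which is how the study arrives at per-task values such as 0.3 for planning.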
STARS software tool for analysis of reliability and safety
Poucet, A.; Guagnini, E.
1989-01-01
This paper reports on the STARS (Software Tool for the Analysis of Reliability and Safety) project, which aims at developing an integrated set of computer-aided reliability analysis tools for the various tasks involved in systems safety and reliability analysis, including hazard identification, qualitative analysis, and logic model construction and evaluation. Expert system technology offers the most promising perspective for developing a computer-aided reliability analysis tool. Combined with graphics and analysis capabilities, it can provide a natural, engineering-oriented environment for computer-assisted reliability and safety modelling and analysis. For hazard identification and fault tree construction, a frame/rule-based expert system is used, in which the deductive (goal-driven) reasoning and the heuristics applied during manual fault tree construction are modelled. Expert systems can explain their reasoning, so that the analyst can become aware of why and how results are being obtained. Hence, the learning aspect involved in manual reliability and safety analysis can be maintained and improved.
Space Mission Human Reliability Analysis (HRA) Project
Boyer, Roger
2014-01-01
The purpose of the Space Mission Human Reliability Analysis (HRA) Project is to extend current ground-based HRA risk prediction techniques to a long-duration, space-based tool. Ground-based HRA methodology has been shown to be a reasonable tool for short-duration space missions, such as Space Shuttle and lunar fly-bys. However, longer-duration deep-space missions, such as asteroid and Mars missions, will require the crew to be in space for 400- to 900-day missions with periods of extended autonomy and self-sufficiency. Current indications show that higher risk due to fatigue, physiological effects of extended low-gravity environments, and other factors may impact HRA predictions. For this project, Safety & Mission Assurance (S&MA) will work with Human Health & Performance (HH&P) to establish what is currently used to assess human reliability for human space programs, identify human performance factors that may be sensitive to long-duration space flight, collect available historical data, and update current tools to account for performance shaping factors believed to be important to such missions. This effort will also contribute data to the Human Performance Data Repository and influence the Space Human Factors Engineering research risks and gaps (part of the HRP Program). An accurate risk predictor mitigates Loss of Crew (LOC) and Loss of Mission (LOM). The end result will be an updated HRA model that can effectively predict risk on long-duration missions.
Individual Differences in Human Reliability Analysis
Jeffrey C. Joe; Ronald L. Boring
2014-06-01
While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.
Failure and Reliability Analysis for the Master Pump Shutdown System
BEVINS, R.R.
2000-01-01
The Master Pump Shutdown System (MPSS) will be installed in the 200 Areas of the Hanford Site to monitor and control the transfer of liquid waste between tank farms and between the 200 West and 200 East areas through the Cross-Site Transfer Line. The Safety Function provided by the MPSS is to shut down any waste transfer process within or between tank farms if a waste leak should occur along the selected transfer route. The MPSS, which provides this Safety Class Function, is composed of Programmable Logic Controllers (PLCs), interconnecting wires, relays, Human to Machine Interfaces (HMI), and software. These components are defined as providing a Safety Class Function and will be designated in this report as MPSS/PLC. Input signals to the MPSS/PLC are provided by leak detection systems from each of the tank farm leak detector locations along the waste transfer route. The combination of the MPSS/PLC, leak detection system, and transfer pump controller system will be referred to as MPSS/SYS. The components addressed in this analysis are associated with the MPSS/SYS. The purpose of this failure and reliability analysis is to address the following design issues of the Project Development Specification (PDS) for the MPSS/SYS (HNF 2000a): (1) Single Component Failure Criterion, (2) System Status Upon Loss of Electrical Power, (3) Physical Separation of Safety Class cables, (4) Physical Isolation of Safety Class Wiring from General Service Wiring, and (5) Meeting the MPSS/PLC Option 1b (RPP 1999) Reliability estimate. The failure and reliability analysis examined the system on a component level basis and identified any hardware or software elements that could fail and/or prevent the system from performing its intended safety function.
Advances in human reliability analysis in Mexico
Nelson, Pamela F.; Gonzalez C, M.; Ruiz S, T.; Guillen M, D.; Contreras V, A.
2010-10-01
Human Reliability Analysis (HRA) is a very important part of Probabilistic Risk Analysis (PRA), and constant work is dedicated to improving methods, guidance and data in order to approach realism in the results, as well as looking for ways to use these to reduce accident frequency at plants. Further, in order to advance in these areas, several HRA studies are being performed globally. Mexico has participated in the International HRA Empirical Study, with the objective of "benchmarking" HRA methods by comparing HRA predictions to actual crew performance in a simulator, as well as in the empirical study on a US nuclear power plant currently in progress. The focus of the first study was the development of an understanding of how methods are applied by various analysts, and to characterize the methods for their capability to guide the analysts to identify potential human failures and associated causes and performance shaping factors. The HRA benchmarking study has been performed using the Halden simulator, 14 European crews, and 15 HRA teams (NRC, EPRI, and foreign teams using different HRA methods). This effort in Mexico is reflected through the work being performed on updating the Laguna Verde PRA to comply with the ASME PRA standard. In order to be considered an HRA with technical adequacy, that is, capability category II for risk-informed applications, the methodology used for the HRA in the original PRA is not considered sufficiently detailed, and the methodology had to be upgraded. The HCR/CBDT/THERP method was chosen, since it is used in many nuclear plants of similar design. The HRA update includes identification and evaluation of human errors that can occur during testing and maintenance, as well as human errors that can occur during an accident using the Emergency Operating Procedures. The review of procedures for maintenance, surveillance and operation is a necessary step in HRA and provides insight into the possible
Weibull distribution in reliability data analysis in nuclear power plant
Ma Yingfei; Zhang Zhijian; Zhang Min; Zheng Gangyang
2015-01-01
Reliability is an important issue affecting each stage of the life cycle, ranging from birth to death of a product or system. Reliability engineering includes equipment failure data processing, quantitative assessment of system reliability and maintenance, etc. Reliability data refers to the variety of data that describe the reliability of a system or component during its operation. These data may be in the form of numbers, graphics, symbols, texts and curves. Quantitative reliability assessment is the task of reliability data analysis. It provides the information needed to prevent, detect, and correct defects in the reliability design. Reliability data analysis proceeds through the various stages of the product life cycle and its reliability activities. Reliability data of systems, structures and components (SSCs) in nuclear power plants is a key factor in probabilistic safety assessment (PSA), reliability-centered maintenance and life cycle management. The Weibull distribution is widely used in reliability engineering, failure analysis, and industrial engineering, for example to represent manufacturing and delivery times. It is commonly used to model time to fail, time to repair and material strength. In this paper, an improved Weibull distribution is introduced to analyze the reliability data of SSCs in nuclear power plants. An example is given in the paper to present the result of the new method. The Weibull distribution fits the reliability data of mechanical equipment in nuclear power plants very well, and it is a widely used mathematical model for reliability analysis. The forms in common use are the two-parameter and three-parameter Weibull distributions. Through comparison and analysis, the three-parameter Weibull distribution fits the data better: it can reflect the reliability characteristics of the equipment and is closer to the actual situation. (author)
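The three-parameter Weibull reliability and hazard functions discussed above are short enough to sketch directly; the shape, scale and example time below are hypothetical, not the paper's fitted values.

```python
import math

def weibull_reliability(t, beta, eta, gamma=0.0):
    """Three-parameter Weibull reliability R(t) = exp(-((t - gamma)/eta)**beta);
    beta = shape, eta = scale, gamma = location (failure-free period)."""
    if t <= gamma:
        return 1.0
    return math.exp(-(((t - gamma) / eta) ** beta))

def weibull_hazard(t, beta, eta, gamma=0.0):
    """Hazard rate h(t) = (beta/eta) * ((t - gamma)/eta)**(beta - 1)."""
    return (beta / eta) * (((t - gamma) / eta) ** (beta - 1))

# Hypothetical pump: beta > 1 models wear-out (increasing hazard),
# beta < 1 infant mortality, beta = 1 the constant-rate exponential case.
print(round(weibull_reliability(10_000, beta=1.8, eta=25_000), 3))
```

The two-parameter form is the special case gamma = 0; the third parameter shifts the whole distribution to allow a failure-free operating period, which is why it can fit equipment data more closely.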
Investigation of MLE in nonparametric estimation methods of reliability function
Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo
2001-01-01
There have been many attempts to estimate a reliability function. At the 20th ESReDA seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the reduced sample method, the actuarial method and the product-limit (PL) method. These three methods have some limits, so we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. Thus, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator. The difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not.
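The product-limit (Kaplan-Meier) estimator that the new method is compared against can be sketched as follows; the censored observations are hypothetical, and ties in failure times are ignored for brevity.

```python
# Product-limit (Kaplan-Meier) estimate of the reliability function from
# right-censored data. Each observation is (time, failed?); censored
# observations only reduce the risk set. Distinct failure times assumed.

def product_limit(data):
    """Return [(t, R(t))] at each observed failure time."""
    data = sorted(data)
    n_at_risk = len(data)
    r, curve = 1.0, []
    for t, failed in data:
        if failed:
            r *= (n_at_risk - 1) / n_at_risk
            curve.append((t, r))
        n_at_risk -= 1
    return curve

obs = [(2, True), (3, False), (5, True), (7, True), (8, False), (10, True)]
for t, r in product_limit(obs):
    print(t, round(r, 3))
```

Each censored time drops out of the risk set without stepping the curve down, which is exactly the "using censored data" behavior the abstract says the new method refines.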
RELIABILITY ANALYSIS OF POWER DISTRIBUTION SYSTEMS
Popescu V.S.
2012-04-01
Power distribution systems are basic parts of power systems, and the reliability of these systems is at present a key issue for power engineering development that requires special attention. Operation of distribution systems is accompanied by a number of factors that produce, as random events, a large number of unplanned interruptions. Research has shown that the predominant factors with a significant influence on the reliability of distribution systems are: weather conditions (39.7%), defects in equipment (25%) and unknown random factors (20.1%). The article studies the influence of this random behavior and presents estimations of the reliability of predominantly rural electrical distribution systems.
Condon, David; Revelle, William
2017-01-01
Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.
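As one concrete example of the internal-consistency approaches that open source software computes, here is Cronbach's alpha from scratch (the abstract does not commit to a specific coefficient, so this is an assumed illustration with made-up scores):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha, a common internal-consistency estimate of
    test reliability. `item_scores` is a list of items, each a list
    of scores with one entry per person."""
    k = len(item_scores)          # number of items
    n = len(item_scores[0])       # number of persons
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical 3-item test answered by 5 people:
scores = [
    [2, 4, 3, 5, 4],
    [3, 5, 3, 4, 4],
    [2, 4, 2, 5, 3],
]
alpha = cronbach_alpha(scores)
```

Low alpha signals that item-level noise dominates the total score, which is exactly the attenuation problem the abstract describes.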
Reliability Analysis of the CERN Radiation Monitoring Electronic System CROME
AUTHOR|(CDS)2126870
For the new in-house developed CERN Radiation Monitoring Electronic System (CROME), a reliability analysis is necessary to ensure compliance with the statutory requirements regarding the Safety Integrity Level. The Safety Integrity Level required by the IEC 60532 standard is SIL 2 (for the Safety Integrated Functions Measurement, Alarm Triggering and Interlock Triggering). The first step of the reliability analysis was a system and functional analysis, which served as the basis for the implementation of the CROME system in the software "Isograph". In the "Prediction" module of Isograph the failure rates of all components were calculated: failure rates for passive components were calculated according to Military Standard 217, and failure rates for active components were obtained from lifetime tests by the manufacturers. The FMEA was carried out together with the board designers and implemented in the "FMECA" module of Isograph. The FMEA served as the basis for the Fault Tree Analysis and the detection of weak points...
Reliability in perceptual analysis of voice quality.
Bele, Irene Velsvik
2005-12-01
This study focuses on speaking voice quality in male teachers (n = 35) and male actors (n = 36), who represent untrained and trained voice users, because we wanted to investigate normal and supranormal voices. Both substantive and methodologic aspects were considered. The study includes a method for perceptual voice evaluation, and a basic issue was rater reliability. A listening group of 10 (7 experienced speech-language therapists and 3 speech-language therapist students) evaluated the voices on 15 vocal characteristics using VA scales. Two sets of voice signals were investigated: text reading (2 loudness levels) and sustained vowel (3 levels). The results indicated high interrater reliability for most perceptual characteristics. Both types of voice signals were evaluated reliably, although the reliability for connected speech, especially at the normal loudness level, was somewhat higher than for vowels. Experienced listeners tended to be more consistent in their ratings than the student raters. Some vocal characteristics achieved acceptable reliability even with a smaller panel of listeners. The perceptual characteristics grouped into 4 factors reflecting perceptual dimensions.
User's manual of a support system for human reliability analysis
Yokobayashi, Masao; Tamura, Kazuo.
1995-10-01
Many kinds of human reliability analysis (HRA) methods have been developed. However, users must be skilled to apply them, and they also face complicated work such as drawing event trees (ET) and calculating uncertainty bounds. Moreover, no single method is complete enough on its own to evaluate human reliability. Therefore, a personal computer (PC) based support system for HRA has been developed to execute HRA practically and efficiently. The system offers two methods, a simple one and a detailed one: the former uses ASEP, a simplified THERP technique, while the latter uses a combined method of OAT and HRA-ET/DeBDA. Users can select the method suitable for their purpose. Human error probability (HEP) data were collected, and a database of them was built for use with the support system. This paper describes the outline of the HRA methods, the support functions and the user's guide of the system. (author)
Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components
Berzonskis, Arvydas; Sørensen, John Dalsgaard
2016-01-01
in the volume of the cast ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability analysis for operation and maintenance planning and quality control is discussed....
Component reliability analysis for development of component reliability DB of Korean standard NPPs
Choi, S. Y.; Han, S. H.; Kim, S. H.
2002-01-01
Reliability data for Korean NPPs that reflect the plant-specific characteristics are necessary for PSA and Risk-Informed Applications. We have performed a project to develop a component reliability DB and to calculate component reliability measures such as failure rate and unavailability. We collected component operation data and failure/repair data of Korean standard NPPs, and analyzed the failure data with a data analysis method that incorporates the domestic data situation. We then compared the reliability results with generic data for foreign NPPs
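The two quantities named above, failure rate and unavailability, can be estimated from pooled operating data with a simple constant-failure-rate model; the counts and repair time below are hypothetical, not Korean plant data:

```python
def failure_rate(failures, operating_hours):
    """Point estimate of a constant failure rate (per hour)."""
    return failures / operating_hours

def steady_state_unavailability(lam, mttr_hours):
    """Steady-state unavailability of a repairable component with
    failure rate `lam` (per hour) and mean time to repair `mttr_hours`:
    q = lam * MTTR / (1 + lam * MTTR)."""
    return lam * mttr_hours / (1.0 + lam * mttr_hours)

# Hypothetical pooled data for one component group:
# 3 failures over 438 000 component-hours, 24 h mean repair time.
lam = failure_rate(failures=3, operating_hours=438_000)
q = steady_state_unavailability(lam, mttr_hours=24.0)
```

For rare failures the unavailability is close to lam * MTTR, the familiar rare-event approximation.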
Mathematical Methods in Survival Analysis, Reliability and Quality of Life
Huber, Catherine; Mesbah, Mounir
2008-01-01
Reliability and survival analysis are important applications of stochastic mathematics (probability, statistics and stochastic processes) that are usually covered separately in spite of the similarity of the involved mathematical theory. This title aims to redress this situation: it includes 21 chapters divided into four parts: Survival analysis, Reliability, Quality of life, and Related topics. Many of these chapters were presented at the European Seminar on Mathematical Methods for Survival Analysis, Reliability and Quality of Life in 2006.
Reliability demonstration test planning using bayesian analysis
Chandran, Senthil Kumar; Arul, John A.
2003-01-01
In nuclear power plants, the reliability of all the safety systems is critical from the safety viewpoint, and it is essential that the required reliability targets be met while satisfying the design constraints. From practical experience, it is found that the reliability of complex systems such as the Safety Rod Drive Mechanism is of the order of 10^-4 with an uncertainty factor of 10. To demonstrate the reliability of such systems by testing alone is prohibitive in terms of cost and time, as the number of tests needed is very large. The purpose of this paper is to develop a Bayesian reliability demonstration testing procedure for exponentially distributed failure times with a gamma prior distribution on the failure rate, which can be easily and effectively used to demonstrate component/subsystem/system reliability conformance to stated requirements. The important questions addressed in this paper are: with zero failures, how long should one perform the tests, and how many components are required, to conclude with a given degree of confidence that the component under test meets the reliability requirement? The procedure is explained with an example, and it can also be extended to cases with a larger number of failures. The approach presented is applicable for deriving test plans for demonstrating component failure rates of nuclear power plants, as failure data for similar components are becoming available from existing plants elsewhere. The advantages of this procedure are that the criterion upon which it is based is simple and pertinent, that the fitting of the prior distribution is an integral part of the procedure and is based on information regarding two percentiles of this distribution, and that the procedure is straightforward and easy to apply in practice. (author)
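Because the gamma prior is conjugate to the exponential likelihood, the posterior after the test is again gamma, and the demonstrated confidence P(lambda &lt; requirement | data) can be estimated by sampling. A sketch with hypothetical prior parameters and test plan (not the paper's worked example):

```python
import random

def posterior_prob_meets_requirement(a, b, test_hours, failures,
                                     lam_req, n_samples=200_000, seed=1):
    """Monte Carlo estimate of P(lambda < lam_req | data) for an
    exponential lifetime model with a Gamma(shape=a, rate=b) prior on
    the failure rate. The gamma prior is conjugate, so the posterior
    is Gamma(shape = a + failures, rate = b + accumulated test hours).
    random.gammavariate takes (shape, scale), so scale = 1 / rate."""
    rng = random.Random(seed)
    shape = a + failures
    scale = 1.0 / (b + test_hours)
    hits = sum(rng.gammavariate(shape, scale) < lam_req
               for _ in range(n_samples))
    return hits / n_samples

# Hypothetical demonstration: weak prior Gamma(0.5, rate = 1000 h),
# 50 components each tested 1000 h with zero failures,
# requirement lambda < 1e-4 per hour.
confidence = posterior_prob_meets_requirement(
    a=0.5, b=1000.0, test_hours=50 * 1000.0, failures=0, lam_req=1e-4)
```

Increasing the accumulated test hours tightens the posterior toward zero, so a required confidence level translates directly into a required number of component-hours on test.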
Reliability Analysis for Safety Grade PLC(POSAFE-Q)
Choi, Kyung Chul; Song, Seung Whan; Park, Gang Min; Hwang, Sung Jae
2012-01-01
The safety grade PLC (Programmable Logic Controller) POSAFE-Q was developed recently in accordance with nuclear regulatory requirements. This paper describes the reliability analysis for a digital safety grade PLC (specifically POSAFE-Q). The scope of the reliability analysis covers failure rate prediction, calculation of MTBF (Mean Time Between Failure), FMEA (Failure Mode and Effects Analysis), and PFD (Probability of Failure on Demand). (author)
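For a low-demand safety function, the PFD named above is often approximated for a single periodically proof-tested channel as PFD_avg ≈ λ_DU · T / 2. This is the standard single-channel rare-event formula, not necessarily the one used for POSAFE-Q, and the rate and interval below are hypothetical:

```python
def pfd_avg(lam_dangerous, proof_test_interval_h):
    """Average probability of failure on demand of a single channel
    with dangerous undetected failure rate `lam_dangerous` (per hour),
    proof-tested every `proof_test_interval_h` hours (low-demand mode):
    PFD_avg ~ lambda_DU * T / 2."""
    return lam_dangerous * proof_test_interval_h / 2.0

# Hypothetical channel: lambda_DU = 2e-7 per hour, yearly proof test.
pfd = pfd_avg(2e-7, 8760.0)   # ~8.8e-4
```

A PFD_avg between 1e-4 and 1e-3 corresponds to one SIL band in the IEC 61508 low-demand table; halving the proof-test interval halves the PFD, which is why the test interval is a key design parameter.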
Probabilistic safety analysis and human reliability analysis. Proceedings. Working material
1996-01-01
An international meeting on Probabilistic Safety Assessment (PSA) and Human Reliability Analysis (HRA) was jointly organized by Electricite de France - Research and Development (EDF DER) and SRI International in co-ordination with the International Atomic Energy Agency. The meeting was held in Paris 21-23 November 1994. A group of international and French specialists in PSA and HRA participated at the meeting and discussed the state of the art and current trends in the following six topics: PSA Methodology; PSA Applications; From PSA to Dependability; Incident Analysis; Safety Indicators; Human Reliability. For each topic a background paper was prepared by EDF/DER and reviewed by the international group of specialists who attended the meeting. The results of this meeting provide a comprehensive overview of the most important questions related to the readiness of PSA for specific uses and areas where further research and development is required. Refs, figs, tabs
Solid Rocket Booster Large Main and Drogue Parachute Reliability Analysis
Clifford, Courtenay B.; Hengel, John E.
2009-01-01
The parachutes on the Space Transportation System (STS) Solid Rocket Booster (SRB) are the means for decelerating the SRB and allowing it to impact the water at a nominal vertical velocity of 75 feet per second. Each SRB has one pilot, one drogue, and three main parachutes. About four minutes after SRB separation, the SRB nose cap is jettisoned, deploying the pilot parachute. The pilot chute then deploys the drogue parachute. The drogue chute provides initial deceleration and proper SRB orientation prior to frustum separation. At frustum separation, the drogue pulls the frustum from the SRB and allows the main parachutes that are mounted in the frustum to unpack and inflate. These chutes are retrieved, inspected, cleaned, repaired as needed, and returned to the flight inventory and reused. Over the course of the Shuttle Program, several improvements have been introduced to the SRB main parachutes. A major change was the replacement of the small (115 ft. diameter) main parachutes with the larger (136 ft. diameter) main parachutes. Other modifications were made to the main parachutes, main parachute support structure, and SRB frustum to eliminate failure mechanisms, improve damage tolerance, and improve deployment and inflation characteristics. This reliability analysis is limited to the examination of the SRB Large Main Parachute (LMP) and drogue parachute failure history to assess the reliability of these chutes. From the inventory analysis, 68 Large Main Parachutes were used in 651 deployments, and 7 chute failures occurred in the 651 deployments. Logistic regression was used to analyze the LMP failure history, and it showed that reliability growth has occurred over the period of use resulting in a current chute reliability of R = .9983. This result was then used to determine the reliability of the 3 LMPs on the SRB, when all must function. There are 29 drogue parachutes that were used in 244 deployments, and no in-flight failures have occurred. Since there are no
Reliability and availability requirements analysis for DEMO: fuel cycle system
Pinna, T.; Borgognoni, F.
2015-01-01
The Demonstration Power Plant (DEMO) will be a fusion reactor prototype designed to demonstrate the capability to produce electrical power in a commercially acceptable way. Two of the key elements of the engineering development of the DEMO reactor are the definitions of reliability and availability requirements (or targets). The availability target for a hypothesized Fuel Cycle has been analysed as a test case. The analysis has been done on the basis of the experience gained in operating existing tokamak fusion reactors and developing the ITER design. The Plant Breakdown Structure (PBS) and Functional Breakdown Structure (FBS) related to the DEMO Fuel Cycle, and the correlations between PBS and FBS, have been identified. At first, a set of availability targets was allocated to the various systems on the basis of their operating, protection and safety functions: availabilities of 75% and 85% were allocated to the operating functions of the fuelling system and the tritium plant respectively, and an availability of 99% was allocated to the overall systems in executing their safety functions. The chances of the systems achieving the allocated targets were then investigated through a Failure Mode and Effects Analysis and a Reliability Block Diagram analysis. The following results have been obtained: 1) the target of 75% for the operations of the fuelling system looks reasonable, while the target of 85% for the operations of the whole tritium plant should be reduced to 80%, even though all the tritium plant systems can individually reach quite high availability targets, over 90%-95%; 2) all the DEMO Fuel Cycle systems can reach the target of 99% in accomplishing their safety functions. (authors)
Clifton, G.G.; Anderson, C.; McMahon, G.; Vargas, R.; Wallin, J.D.
1989-01-01
Glomerular filtration rate and effective renal plasma flow were measured in 170 subjects using monoexponential analysis of plasma disappearance curves for 99mTc-DTPA and 131I-iodohippurate after single injection. In the current study population, glomerular filtration rate and effective renal plasma flow decreased with increasing age, were less in females than in males, and were less in hypertensives than in normotensives. Differences in glomerular filtration rate according to age and sex in the current study were similar to those reported using traditional creatinine clearance methodology. Monoexponential treatment of plasma isotope disappearance gave reproducible values for both glomerular filtration rate and effective renal plasma flow when measured either during the day or on a daily basis. Intraindividual coefficient of variation was less than 10% for both 99mTc-DTPA and 131I-iodohippurate clearances derived from monoexponential analysis. These results demonstrate that monoexponential analysis of plasma disappearance curves for 99mTc-DTPA and 131I-iodohippurate after a single injection is a useful method for evaluating changes in renal hemodynamics either during chronic drug therapy or acutely after single dose administration
Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues
Ronald Laurids Boring
2010-11-01
This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.
TIGER reliability analysis in the DSN
Gunn, J. M.
1982-01-01
The TIGER algorithm, the inputs to the program and the output are described. TIGER is a computer program designed to simulate a system over a period of time to evaluate system reliability and availability. Results can be used in the Deep Space Network for initial spares provisioning and system evaluation.
Reliability analysis of reactor protection systems
Alsan, S.
1976-07-01
A theoretical mathematical study of reliability is presented and the concepts subsequently defined applied to the study of nuclear reactor safety systems. The theory is applied to investigations of the operational reliability of the Siloe reactor from the point of view of rod drop. A statistical study conducted between 1964 and 1971 demonstrated that most rod drop incidents arose from circumstances associated with experimental equipment (new set-ups). The reliability of the most suitable safety system for some recently developed experimental equipment is discussed. Calculations indicate that if all experimental equipment were equipped with these new systems, only 1.75 rod drop accidents would be expected to occur per year on average. It is suggested that all experimental equipment should be equipped with these new safety systems and tested every 21 days. The reliability of the new safety system currently being studied for the Siloe reactor was also investigated. The following results were obtained: definite failures must be detected immediately as a result of the disturbances produced; the repair time must not exceed a few hours; the equipment must be tested every week. Under such conditions, the rate of accidental rod drops is about 0.013 on average per year. The level of nondefinite failures is less than 10^-6 per hour and the level of nonprotection 1 hour per year. (author)
Bypassing BDD Construction for Reliability Analysis
Williams, Poul Frederick; Nikolskaia, Macha; Rauzy, Antoine
2000-01-01
In this note, we propose a Boolean Expression Diagram (BED)-based algorithm to compute the minimal p-cuts of boolean reliability models such as fault trees. BEDs make it possible to bypass the Binary Decision Diagram (BDD) construction, which is the main cost of fault tree assessment....
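As a toy illustration of the fault-tree models such algorithms assess, a coherent tree with independent, non-repeated basic events can be evaluated by direct recursion. Repeated events are exactly what makes BDD/BED-style methods necessary, so this sketch covers only the simplest case, and the event names and probabilities are hypothetical:

```python
def top_event_probability(gate, probs):
    """Evaluate a coherent fault tree assuming independent basic
    events that each appear only once in the tree (with repeated
    events, exact methods such as BDDs or BEDs are needed).

    gate: ('or', children) | ('and', children) | basic-event name
    """
    if isinstance(gate, str):
        return probs[gate]
    op, children = gate
    if op == 'and':
        p = 1.0
        for child in children:
            p *= top_event_probability(child, probs)
        return p
    # 'or' gate: complement of every child surviving
    q = 1.0
    for child in children:
        q *= 1.0 - top_event_probability(child, probs)
    return 1.0 - q

# Hypothetical tree: TOP = (pump_a AND pump_b) OR valve_stuck
tree = ('or', [('and', ['pump_a', 'pump_b']), 'valve_stuck'])
p_top = top_event_probability(tree, {'pump_a': 1e-2,
                                     'pump_b': 1e-2,
                                     'valve_stuck': 1e-4})
```

Here the minimal cut sets are {pump_a, pump_b} and {valve_stuck}; enumerating them and their probabilities is the assessment step the note's BED algorithm accelerates.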
Reliability analysis of self-actuated shutdown system
Itooka, S.; Kumasaka, K.; Okabe, A.; Satoh, K.; Tsukui, Y.
1991-01-01
An analytical study was performed for the reliability of a self-actuated shutdown system (SASS) under the unprotected loss of flow (ULOF) event in a typical loop-type liquid metal fast breeder reactor (LMFBR) by the use of the response surface Monte Carlo analysis method. Dominant parameters for the SASS, such as Curie point characteristics, subassembly outlet coolant temperature, electromagnetic surface condition, etc., were selected and their probability density functions (PDFs) were determined by the design study information and experimental data. To get the response surface function (RSF) for the maximum coolant temperature, transient analyses of ULOF were performed by utilizing the experimental design method in the determination of analytical cases. Then, the RSF was derived by the multi-variable regression analysis. The unreliability of the SASS was evaluated as a probability that the maximum coolant temperature exceeded an acceptable level, employing the Monte Carlo calculation using the above PDFs and RSF. In this study, sensitivities to the dominant parameters were compared. The dispersion of subassembly outlet coolant temperature near the SASS was found to be one of the most sensitive parameters. Fault tree analysis was performed using this value for the SASS in order to evaluate the shutdown system reliability. As a result of this study, the effectiveness of the SASS on the reliability improvement in the LMFBR shutdown system was analytically confirmed. This study has been performed as a part of joint research and development projects for DFBR under the sponsorship of the nine Japanese electric power companies, Electric Power Development Company and the Japan Atomic Power Company. (author)
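The response-surface Monte Carlo step described above can be sketched as follows; the RSF coefficients, parameter PDFs and temperature limit are entirely hypothetical stand-ins for the regression-fitted surface and design-study distributions:

```python
import random

def unreliability_mc(rsf, param_dists, limit, n=100_000, seed=7):
    """Response-surface Monte Carlo: sample the dominant parameters
    from normal PDFs, evaluate the response surface function (RSF)
    for the peak coolant temperature, and estimate the probability
    that the acceptable limit is exceeded."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n):
        x = {name: rng.gauss(mu, sigma)
             for name, (mu, sigma) in param_dists.items()}
        if rsf(x) > limit:
            exceed += 1
    return exceed / n

# Entirely hypothetical linear RSF and parameter PDFs (mean, sd):
def rsf(x):
    return 620.0 + 0.8 * x['outlet_dT'] + 1.5 * x['curie_dev']

dists = {'outlet_dT': (10.0, 5.0), 'curie_dev': (0.0, 4.0)}
p_fail = unreliability_mc(rsf, dists, limit=650.0)
```

Because the RSF is cheap to evaluate, the expensive transient code is run only to fit the surface, and the Monte Carlo loop can afford the many samples a small unreliability estimate requires.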
A methodology to incorporate organizational factors into human reliability analysis
Li Pengcheng; Chen Guohua; Zhang Li; Xiao Dongsheng
2010-01-01
A new holistic methodology for Human Reliability Analysis (HRA) is proposed to model the effects of organizational factors on human reliability. Firstly, a conceptual framework is built, which is used to analyze the causal relationships between the organizational factors and human reliability. Then, the inference model for Human Reliability Analysis is built by combining the conceptual framework with Bayesian networks, which is used to execute the causal inference and diagnostic inference of human reliability. Finally, a case example is presented to demonstrate the specific application of the proposed methodology. The results show that the proposed methodology of combining the conceptual model with Bayesian networks can not only easily model the causal relationship between organizational factors and human reliability, but also, in a given context, quantitatively measure human operational reliability and identify the most likely root causes of human error, or their prioritization. (authors)
Mechanical reliability analysis of tubes intended for hydrocarbons
Nahal, Mourad; Khelif, Rabia [Badji Mokhtar University, Annaba (Algeria)
2013-02-15
Reliability analysis constitutes an essential phase in any study concerning reliability. Many industrialists evaluate and improve the reliability of their products during the development cycle, from design to startup (design, manufacture, and exploitation), to develop their knowledge of the cost/reliability ratio and to control sources of failure. In this study, we obtain results for hardness, tensile, and hydrostatic tests carried out on steel tubes for transporting hydrocarbons, followed by statistical analysis. The results obtained allow us to conduct a reliability study based on the strength-demand (stress-strength) interference model. The reliability index is calculated and the importance of the variables related to the tube is presented. A reliability-based assessment of residual stress effects is applied to underground pipelines under a roadway, with and without active corrosion. Residual stress has been found to greatly increase the probability of failure, especially in the early stages of pipe lifetime.
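A reliability index of the kind computed in such studies can be illustrated with the classical stress-strength model for independent normal load and resistance; the material and load statistics below are hypothetical, not the tube test data:

```python
import math
from statistics import NormalDist

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Cornell reliability index for independent normal resistance R
    and load (demand) S:
    beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

# Hypothetical pipe: yield strength vs hoop-stress demand, in MPa.
beta = reliability_index(mu_r=410.0, sigma_r=25.0,
                         mu_s=300.0, sigma_s=30.0)
p_fail = NormalDist().cdf(-beta)   # failure probability Phi(-beta)
```

Residual stress effectively raises the mean demand (or widens its scatter), which lowers beta and raises Phi(-beta), matching the abstract's observation that residual stress greatly increases the probability of failure.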
Multisite Reliability of MR-Based Functional Connectivity
Noble, Stephanie; Scheinost, Dustin; Finn, Emily S.; Shen, Xilin; Papademetris, Xenophon; McEwen, Sarah C.; Bearden, Carrie E.; Addington, Jean; Goodyear, Bradley; Cadenhead, Kristin S.; Mirzakhanian, Heline; Cornblatt, Barbara A.; Olvet, Doreen M.; Mathalon, Daniel H.; McGlashan, Thomas H.; Perkins, Diana O.; Belger, Aysenil; Seidman, Larry J.; Thermenos, Heidi; Tsuang, Ming T.; van Erp, Theo G.M.; Walker, Elaine F.; Hamann, Stephan; Woods, Scott W.; Cannon, Tyrone D.; Constable, R. Todd
2016-01-01
Recent years have witnessed an increasing number of multisite MRI functional connectivity (fcMRI) studies. While multisite studies are an efficient way to speed up data collection and increase sample sizes, especially for rare clinical populations, any effects of site or MRI scanner could ultimately limit power and weaken results. Little data exists on the stability of functional connectivity measurements across sites and sessions. In this study, we assess the influence of site and session on resting state functional connectivity measurements in a healthy cohort of traveling subjects (8 subjects scanned twice at each of 8 sites) scanned as part of the North American Prodrome Longitudinal Study (NAPLS). Reliability was investigated in three types of connectivity analyses: (1) seed-based connectivity with posterior cingulate cortex (PCC), right motor cortex (RMC), and left thalamus (LT) as seeds; (2) the intrinsic connectivity distribution (ICD), a voxel-wise connectivity measure; and (3) matrix connectivity, a whole-brain, atlas-based approach assessing connectivity between nodes. Contributions to variability in connectivity due to subject, site, and day-of-scan were quantified and used to assess between-session (test-retest) reliability in accordance with Generalizability Theory. Overall, no major site, scanner manufacturer, or day-of-scan effects were found for the univariate connectivity analyses; instead, subject effects dominated relative to the other measured factors. However, summaries of voxel-wise connectivity were found to be sensitive to site and scanner manufacturer effects. For all connectivity measures, although subject variance was three times the site variance, the residual represented 60–80% of the variance, indicating that connectivity differed greatly from scan to scan independent of any of the measured factors (i.e., subject, site, and day-of-scan). Thus, for a single 5 min scan, reliability across connectivity measures was poor (ICC=0.07–0
Reliability analysis of RC containment structures under combined loads
Hwang, H.; Reich, M.; Kagami, S.
1984-01-01
This paper discusses a reliability analysis method and load combination design criteria for reinforced concrete containment structures under combined loads. The probability based reliability analysis method is briefly described. For load combination design criteria, derivations of the load factors for accidental pressure due to a design basis accident and safe shutdown earthquake (SSE) for three target limit state probabilities are presented
The reliability analysis of cutting tools in the HSM processes
W.S. Lin
2008-01-01
Purpose: This article mainly describes the reliability of cutting tools in high speed turning using a normal distribution model. Design/methodology/approach: A series of experimental tests was done to evaluate the reliability variation of the cutting tools. From the experimental results, the tool wear distribution and the tool life are determined, and the tool life distribution and the reliability function of cutting tools are derived. Further, the reliability of cutting tools at any time for h...
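Under the normal-distribution tool-life model named in the abstract, the reliability function is simply one minus the normal CDF; the mean and standard deviation below are hypothetical, not the article's fitted values:

```python
from statistics import NormalDist

def tool_reliability(t, mean_life, std_life):
    """Reliability of a cutting tool at time t when tool life is
    modelled as normally distributed: R(t) = 1 - Phi((t - mu)/sigma)."""
    return 1.0 - NormalDist(mean_life, std_life).cdf(t)

# Hypothetical high-speed-turning tool life: mean 30 min, sd 5 min.
r_20 = tool_reliability(20.0, 30.0, 5.0)   # two sigma early in life
r_30 = tool_reliability(30.0, 30.0, 5.0)   # exactly at the mean life
```

At the mean life the reliability is exactly 0.5, which is why a tool-change interval is usually set well below the mean, where R(t) is still close to 1.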
Reliability analysis of PLC safety equipment
Yu, J.; Kim, J. Y. [Chungnam Nat. Univ., Daejeon (Korea, Republic of)
2006-06-15
FMEA analysis for Nuclear Safety Grade PLC, failure rate prediction for nuclear safety grade PLC, sensitivity analysis for components failure rate of nuclear safety grade PLC, unavailability analysis support for nuclear safety system.
Application of reliability analysis methods to the comparison of two safety circuits
Signoret, J.-P.
1975-01-01
Two circuits of different design, intended to perform the "Low Pressure Safety Injection" function in PWR reactors, are analyzed using reliability methods. The reliability analysis of these circuits allows the fault trees to be established and the failure probability derived. The dependence of these results on testing and maintenance is emphasized, as well as the critical paths. The large number of results obtained may allow a well-informed choice, taking account of the reliability wanted for this type of circuit [fr]
Advances in methods and applications of reliability and safety analysis
Fieandt, J.; Hossi, H.; Laakso, K.; Lyytikaeinen, A.; Niemelae, I.; Pulkkinen, U.; Pulli, T.
1986-01-01
The know-how of VTT in reliability and safety design and analysis techniques has been established over several years of analyzing reliability in the Finnish nuclear power plants Loviisa and Olkiluoto. This experience has later been applied and developed for use in the process industry, conventional power industry, automation and electronics. VTT develops and transfers methods and tools for reliability and safety analysis to the private and public sectors; the technology transfer takes place in joint development projects with potential users. Several computer-aided methods, such as RELVEC for reliability modelling and analysis, have been developed. The tools developed are today used by major Finnish companies in the fields of automation, nuclear power, shipbuilding and electronics. Development of computer-aided and other methods needed in the analysis of operating experience, reliability or safety is continuing in a number of research and development projects
Digital Processor Module Reliability Analysis of Nuclear Power Plant
Lee, Sang Yong; Jung, Jae Hyun; Kim, Jae Ho; Kim, Sung Hun
2005-01-01
The systems used in plants, military equipment, satellites, etc. consist of many electronic parts as control modules, which require relatively high reliability compared with other commercial electronic products. In particular, nuclear power plants, because of radiation safety, require high safety and reliability, so most parts are qualified to Military-Standard level. Reliability prediction provides a rational basis for system designs and also indicates the safety significance of system operations. Various reliability prediction tools have been developed in recent decades; among them, the MIL-HDBK-217 method has been widely used as a powerful prediction tool. In this work, the reliability analysis for the Digital Processor Module (DPM, the control module of SMART) is performed by the Parts Stress Method based on MIL-HDBK-217F NOTICE 2. We use Relex 7.6 of Relex Software Corporation, because the reliability analysis process requires enormous part libraries and failure rate data
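The Parts Stress Method combines a base failure rate with multiplicative pi factors, and a module rate is the series sum of the part rates. The pi values below are illustrative placeholders, not actual MIL-HDBK-217F entries:

```python
def part_failure_rate(lambda_b, pi_factors):
    """Parts-stress model in the spirit of MIL-HDBK-217F:
    lambda_p = lambda_b * product(pi factors),
    expressed here in failures per 1e6 hours."""
    lam = lambda_b
    for pi in pi_factors.values():
        lam *= pi
    return lam

def module_failure_rate(part_rates):
    """Series combination: the module rate is the sum of part rates."""
    return sum(part_rates)

# Hypothetical parts (base rates and pi values are illustrative only):
resistor = part_failure_rate(0.0017, {'pi_T': 1.1, 'pi_Q': 3.0, 'pi_E': 1.0})
capacitor = part_failure_rate(0.0036, {'pi_T': 1.4, 'pi_Q': 3.0, 'pi_E': 1.0})
lam_module = module_failure_rate([resistor, capacitor])  # per 1e6 h
mtbf_hours = 1e6 / lam_module
```

Summing thousands of such part rates is why a tool like Relex with a large part library is used rather than hand calculation.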
Structural reliability analysis applied to pipeline risk analysis
Gardiner, M. [GL Industrial Services, Loughborough (United Kingdom); Mendes, Renato F.; Donato, Guilherme V.P. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)
2009-07-01
Quantitative Risk Assessment (QRA) of pipelines requires two main components to be provided. These are models of the consequences that follow from some loss of containment incident, and models for the likelihood of such incidents occurring. This paper describes how PETROBRAS have used Structural Reliability Analysis for the second of these, to provide pipeline- and location-specific predictions of failure frequency for a number of pipeline assets. This paper presents an approach to estimating failure rates for liquid and gas pipelines, using Structural Reliability Analysis (SRA) to analyze the credible basic mechanisms of failure such as corrosion and mechanical damage. SRA is a probabilistic limit state method: for a given failure mechanism it quantifies the uncertainty in parameters to mathematical models of the load-resistance state of a structure and then evaluates the probability of load exceeding resistance. SRA can be used to benefit the pipeline risk management process by optimizing in-line inspection schedules, and as part of the design process for new construction in pipeline rights of way that already contain multiple lines. A case study is presented to show how the SRA approach has recently been used on PETROBRAS pipelines and the benefits obtained from it. (author)
A methodology for strain-based fatigue reliability analysis
Zhao, Y.X.
2000-01-01
Significant scatter in the cyclic stress-strain (CSS) responses is observed for a nuclear reactor material, 1Cr18Ni9Ti pipe-weld metal. This scatter implies that a random applied cyclic strain history arises under any loading mode, even a deterministic loading history. Ignoring the scatter in practice might yield a non-conservative evaluation. A methodology for strain-based fatigue reliability analysis that takes the scatter into account is developed. The responses are approximately modeled by probability-based CSS curves of the Ramberg-Osgood relation, and the strain-life data are similarly modeled by probability-based strain-life curves of the Coffin-Manson law. The reliability assessment is constructed by considering the interference of the random applied fatigue strain and capacity histories, whose probability density functions are given analytically. The methodology can be conveniently reduced to the case of a deterministic CSS relation, as in existing methods. The non-conservatism of the deterministic CSS relation and the applicability of the present methodology are demonstrated by an analysis of the material test results
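For reference, the two deterministic curve families named above take the following forms; the material constants used here are generic textbook-style values, not the probability-based 1Cr18Ni9Ti parameters fitted in the study:

```python
# Deterministic forms of the Ramberg-Osgood and Coffin-Manson relations.
# All coefficients below are typical illustrative values (assumptions),
# not the study's fitted parameters.  Stresses in MPa.

def ramberg_osgood_strain(stress_amp, E=193e3, K=1200.0, n=0.15):
    """Total strain amplitude = elastic part + plastic part."""
    return stress_amp / E + (stress_amp / K) ** (1.0 / n)

def coffin_manson_strain(N_f, E=193e3, sf=900.0, b=-0.09, ef=0.5, c=-0.55):
    """Strain amplitude sustainable for N_f cycles (2*N_f reversals)."""
    rev = 2.0 * N_f
    return (sf / E) * rev ** b + ef * rev ** c
```

The probabilistic version of the methodology replaces fixed coefficients like `K`, `n`, `sf` and `ef` with distributions, so each percentile of the scatter band yields its own CSS and strain-life curve.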
Systems reliability analysis for the national ignition facility
Majumdar, K.C.; Annese, C.E.; MacIntyre, A.T.; Sicherman, A.
1996-01-01
A Reliability, Availability and Maintainability (RAM) analysis was initiated for the National Ignition Facility (NIF). The NIF is an inertial confinement fusion research facility designed to achieve a controlled thermonuclear reaction; the preferred site for the NIF is the Lawrence Livermore National Laboratory (LLNL). The NIF RAM analysis has three purposes: (1) to allocate top level reliability and availability goals for the systems, (2) to develop an operability model for optimum maintainability, and (3) to determine the achievability of the allocated goals of the RAM parameters for the NIF systems and the facility operation as a whole. An allocation model assigns the reliability and availability goals for front line and support systems by a top-down approach; the reliability analysis uses a bottom-up approach to determine the system reliability and availability from component level to system level
Interactive reliability analysis project. FY 80 progress report
Rasmuson, D.M.; Shepherd, J.C.
1981-03-01
This report summarizes the progress to date in the interactive reliability analysis project. Its purpose is to develop and demonstrate a reliability and safety technique that can be incorporated early in the design process. Details are illustrated with a simple example of a reactor safety system
Reliability analysis of grid connected small wind turbine power electronics
Arifujjaman, Md.; Iqbal, M.T.; Quaicoe, J.E.
2009-01-01
Grid connection of small permanent magnet generator (PMG) based wind turbines requires a power conditioning system comprising a bridge rectifier, a dc-dc converter and a grid-tie inverter. This work presents a reliability analysis and an identification of the least reliable component of the power conditioning system of such grid connection arrangements. Reliability of the configuration is analyzed for the worst case scenario of maximum conversion losses at a particular wind speed. The analysis reveals that the reliability of the power conditioning system of such PMG based wind turbines is fairly low, falling to 84% of its initial value within one year. The investigation further identifies the least reliable component within the power conditioning system: the inverter has the dominant effect on system reliability, while the dc-dc converter has the least significant effect. The reliability analysis demonstrates that a permanent magnet generator based wind energy conversion system is not the best option from the point of view of power conditioning system reliability. The analysis also reveals that new research is required to determine a robust power electronics configuration for small wind turbine conversion systems.
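The reported behavior, system reliability falling to about 84% of its initial value within one year, is consistent with a series model of the three power-conditioning stages under constant failure rates. The individual rates below are hypothetical, chosen only to reproduce that headline figure:

```python
# Series-system sketch of the power conditioning chain: rectifier,
# dc-dc converter and inverter, each with a constant (exponential)
# failure rate.  The rates are invented so that R(1 year) ~ 0.84,
# with the inverter dominating, as the abstract reports.
import math

HOURS_PER_YEAR = 8760.0

# Hypothetical failure rates (failures/hour)
rates = {"rectifier": 2.0e-6, "dc_dc_converter": 1.0e-6, "inverter": 1.7e-5}

def system_reliability(t_hours):
    """Series system of exponential components: R(t) = exp(-sum(lambda_i)*t)."""
    return math.exp(-sum(rates.values()) * t_hours)

r_one_year = system_reliability(HOURS_PER_YEAR)  # ~0.84
```

In a series chain the largest failure rate dominates the sum, which is why improving the inverter pays off far more than improving the dc-dc converter.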
Reliability analysis of wind embedded power generation system for ...
This paper presents a method for reliability analysis of wind energy embedded in a power generation system for the Indian scenario. This is done by evaluating the reliability index, loss of load expectation, for the power generation system with and without the integration of wind energy sources in the overall electric power system.
Analysis and assessment of water treatment plant reliability
Szpak Dawid
2017-03-01
The subject of the publication is the analysis and assessment of the reliability of a surface water treatment plant (WTP). In the study the one-parameter method of reliability assessment was used. Based on the flow sheet obtained from the water company, the reliability scheme of the analysed WTP was prepared. On the basis of the daily WTP work reports, the availability index Kg was determined for the individual elements included in the WTP. Then, based on the developed reliability scheme showing the interrelationships between elements, the availability index Kg for the whole WTP was determined. The obtained value of the availability index Kg was compared with the criteria values.
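A minimal sketch of this kind of computation: element availability indexes Kg combined through the reliability scheme, with the product rule for series elements and the complement-product rule for parallel ones. The scheme and values below are invented for illustration, not taken from the analysed WTP:

```python
# Combining element availability indexes Kg through a reliability scheme.
# Series elements: product of availabilities.
# Parallel (redundant) elements: 1 - product of unavailabilities.
# The example scheme and Kg values are hypothetical.

def kg_series(indices):
    prod = 1.0
    for k in indices:
        prod *= k
    return prod

def kg_parallel(indices):
    unavail = 1.0
    for k in indices:
        unavail *= (1.0 - k)
    return 1.0 - unavail

# Hypothetical WTP scheme: intake -> (two parallel filters) -> disinfection
kg_wtp = kg_series([0.998, kg_parallel([0.95, 0.95]), 0.999])
```

Note how redundancy lifts the weakest stage: each filter alone has Kg = 0.95, but the parallel pair contributes 0.9975 to the series product.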
An integrated approach to human reliability analysis -- decision analytic dynamic reliability model
Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.
1999-01-01
The reliability of human operators in process control is sensitive to the context. Many contemporary human reliability analysis (HRA) methods do not take this sufficiently into account. This article argues that probabilistic and psychological approaches to human reliability should be integrated. This is achieved, first, by adopting methods that adequately reflect the essential features of the process control activity and, second, by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The context is analysed first, with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated into the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints on the activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis
Reliability analysis applied to structural tests
Diamond, P.; Payne, A. O.
1972-01-01
The application of reliability theory to predict, from structural fatigue test data, the risk of failure of a structure under service conditions because its load-carrying capability is progressively reduced by the extension of a fatigue crack, is considered. The procedure is applicable to both safe-life and fail-safe structures and, for a prescribed safety level, it will enable an inspection procedure to be planned or, if inspection is not feasible, it will evaluate the life to replacement. The theory has been further developed to cope with the case of structures with initial cracks, such as can occur in modern high-strength materials which are susceptible to the formation of small flaws during the production process. The method has been applied to a structure of high-strength steel and the results are compared with those obtained by the current life estimation procedures. This has shown that the conventional methods can be unconservative in certain cases, depending on the characteristics of the structure and the design operating conditions. The suitability of the probabilistic approach to the interpretation of the results from full-scale fatigue testing of aircraft structures is discussed and the assumptions involved are examined.
Application of DFM in human reliability analysis
Yu Shaojie; Zhao Jun; Tong Jiejuan
2011-01-01
Combined with ATHEANA, the possibility of identifying error-forcing contexts (EFCs) and unsafe actions (UAs) using DFM is studied, and a Steam Generator Tube Rupture (SGTR) accident is then modeled and solved. Through inductive analysis, 26 prime implicants (PIs) are obtained and the meaning of the results is interpreted; one of the PIs is similar to the accident scenario of a human failure event at a nuclear power plant. Finally, this paper discusses methods of quantifying PIs, the analysis of errors of commission (EOCs), and so on. (authors)
Monte Carlo methods for the reliability analysis of Markov systems
Buslik, A.J.
1985-01-01
This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
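As a baseline for the weighted schemes discussed above, a plain (analog, unweighted) forward Monte Carlo estimate of mean unavailability over an interval for a single two-state Markov component might look like this sketch:

```python
# Analog forward Monte Carlo for a two-state (up/down) Markov component
# with failure rate lam and repair rate mu, estimating the mean
# unavailability over [0, T].  This is a generic illustration of the
# sampled process, not the paper's adjoint/weighted scheme.
import random

random.seed(7)

def mean_unavailability(lam, mu, T, trials=20_000):
    total_down = 0.0
    for _ in range(trials):
        t, up = 0.0, True          # start each history in the up state
        while t < T:
            rate = lam if up else mu
            dwell = random.expovariate(rate)   # exponential holding time
            if t + dwell > T:
                dwell = T - t                  # clip the last dwell at T
            if not up:
                total_down += dwell
            t += dwell
            up = not up
    return total_down / (trials * T)

u = mean_unavailability(lam=1e-3, mu=1e-2, T=10_000.0)
```

For this long interval the time-averaged estimate approaches the steady-state unavailability λ/(λ+μ) ≈ 0.091; adjoint-based weighting becomes valuable precisely when failures are rare and analog sampling wastes most histories.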
Analysis of operating reliability of WWER-1000 unit
Bortlik, J.
1985-01-01
The nuclear power unit was divided into 33 technological units. Input data for the reliability analysis were surveys of operating results obtained from the IAEA information system and certain reliability indexes of technological equipment determined using the Bayes formula. Missing reliability data for technological equipment were taken from the basic variant. The fault tree of the WWER-1000 unit was constructed for the top event defined as the inability to reach 100%, 75% and 50% of rated power. Periods of nuclear power plant operation at reduced output owing to defects, and the corresponding equipment repair times, were recorded. The availability of the WWER-1000 unit was calculated for different variant situations. Certain indexes of the operating reliability of the WWER-1000 unit resulting from the detailed reliability analysis are tabulated for selected variants. (E.S.)
Simulation Approach to Mission Risk and Reliability Analysis, Phase I
National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...
Reliability analysis of digital I and C systems at KAERI
Kim, Man Cheol
2013-01-01
This paper provides an overview of the ongoing research activities on a reliability analysis of digital instrumentation and control (I and C) systems of nuclear power plants (NPPs) performed by the Korea Atomic Energy Research Institute (KAERI). The research activities include the development of a new safety-critical software reliability analysis method by integrating the advantages of existing software reliability analysis methods, a fault coverage estimation method based on fault injection experiments, and a new human reliability analysis method for computer-based main control rooms (MCRs) based on human performance data from the APR-1400 full-scope simulator. The research results are expected to be used to address various issues such as the licensing issues related to digital I and C probabilistic safety assessment (PSA) for advanced digital-based NPPs. (author)
PSA applications and piping reliability analysis: where do we stand?
Lydell, B.O.Y.
1997-01-01
This paper reviews a recently proposed framework for piping reliability analysis. The framework was developed to promote critical interpretation of operational data on pipe failures and to support application-specific parameter estimation
Reliability analysis of digital safety systems at nuclear power plants
Sopira Vladimir; Kovacs, Zoltan
2015-01-01
Reliability analysis of digital reactor protection systems built on the basis of TELEPERM XS is described, and experience gained by the Slovak RELKO company during the past 20 years in this domain is highlighted. (orig.)
reliability analysis of a two span floor designed according
user
deterministic approach, considering both ultimate and serviceability limit states. Reliability analysis of the floor ... loading, strength and stiffness parameters, dimensions .... to show that there is a direct relation between the failure probability (Pf) ...
Raket, Lars Lau
We propose a direction in the field of statistics which we will call functional object analysis. This subfield considers the analysis of functional objects defined on continuous domains. In this setting we will focus on model-based statistics, with a particular emphasis on mixed-effect formulations, where the observed functional signal is assumed to consist of both fixed and random functional effects. This thesis takes the initial steps toward the development of likelihood-based methodology for functional objects. We first consider analysis of functional data defined on high...
Reliability Analysis Of Fire System On The Industry Facility By Use Fameca Method
Sony T, D.T.; Situmorang, Johnny; Ismu W, Puradwi; Demon H; Mulyanto, Dwijo; Kusmono, Slamet; Santa, Sigit Asmara
2000-01-01
FAMECA is one of the analysis methods for determining system reliability in an industrial facility. The analysis follows a procedure of identifying component functions and determining failure modes, severity levels and the effects of failure. The reliability value is determined from a combination of three factors: severity level, component failure value and component criticality. A reliability analysis has been carried out for the fire system of the industrial facility by the FAMECA method. The critical components identified are the pump, air release valve, check valve, manual test valve, isolation valve, control system, etc
Human reliability analysis in Loviisa probabilistic safety analysis
Illman, L.; Isaksson, J.; Makkonen, L.; Vaurio, J.K.; Vuorio, U.
1986-01-01
The human reliability analysis in the Loviisa PSA project is carried out for three major groups of errors in human actions: (A) errors made before an initiating event, (B) errors that initiate a transient and (C) errors made during transients. Recovery possibilities are also included in each group. The methods used or planned for each group are described. A simplified THERP approach is used for group A, with emphasis on test and maintenance error recovery aspects and dependencies between redundancies. For group B, task analyses and human factors assessments are made for startup, shutdown and operational transients, with emphasis on potential common cause initiators. For group C, both misdiagnosis and slow decision making are analyzed, as well as errors made in carrying out necessary or backup actions. New or advanced features of the methodology are described
Safety and reliability analysis based on nonprobabilistic methods
Kozin, I.O.; Petersen, K.E.
1996-01-01
Imprecise probabilities, developed over the last two decades, offer a considerably more general theory with many advantages that make it very promising for reliability and safety analysis. The objective of the paper is to argue that imprecise probabilities are a more appropriate tool for reliability and safety analysis, that they allow the behavior of nuclear industry objects to be modeled more comprehensively, and that they make it possible to solve some problems unsolved within the conventional approach. Furthermore, some specific examples are given that show the usefulness of the tool for solving reliability tasks
Furlong, Laura-Anne M; Harrison, Andrew J
2013-01-01
There are various limitations to existing methods of studying plantarflexor stretch-shortening cycle (SSC) function and muscle-tendon unit (MTU) mechanics, predominantly related to measurement validity and reliability. This study utilizes an innovative adaptation to a force sledge which isolates the plantarflexors and ankle for analysis. The aim of this study was to determine the sledge loading protocol to be used, most appropriate method of data analysis and measurement reliability in a group of healthy, non-injured subjects. Twenty subjects (11 males, 9 females; age: 23.5 ±2.3 years; height: 1.73 ±0.08 m; mass: 74.2 ±11.3 kg) completed 11 impacts at five different loadings rated on a scale of perceived exertion from 1 to 5, where 5 is a loading that the subject could only complete the 11 impacts using the adapted sledge. Analysis of impacts 4–8 or 5–7 using loading 2 provided consistent results that were highly reliable (single intra-class correlation, ICC > 0.85, average ICC > 0.95) and replicated kinematics found in hopping and running. Results support use of an adapted force sledge apparatus as an ecologically valid, reliable method of investigating plantarflexor SSC function and MTU mechanics in a dynamic controlled environment. (paper)
Static Program Analysis for Reliable, Trusted Apps
2017-02-01
and prevent errors in their Java programs. The Checker Framework includes compiler plug-ins (“checkers”) that find bugs or verify their absence. It...versions of the Java language. 4.8 DATAFLOW FRAMEWORK The dataflow framework enables more accurate analysis of source code. (Despite their similar...names, the dataflow framework is independent of the (Information) Flow Checker of chapter 2.) In Java code, a given operation may be permitted or
Functional analysis and applications
Siddiqi, Abul Hasan
2018-01-01
This self-contained textbook discusses all major topics in functional analysis. Combining classical materials with new methods, it supplies numerous relevant solved examples and problems and discusses the applications of functional analysis in diverse fields. The book is unique in its scope, and a variety of applications of functional analysis and operator-theoretic methods are devoted to each area of application. Each chapter includes a set of problems, some of which are routine and elementary, and some of which are more advanced. The book is primarily intended as a textbook for graduate and advanced undergraduate students in applied mathematics and engineering. It offers several attractive features making it ideally suited for courses on functional analysis intended to provide a basic introduction to the subject and the impact of functional analysis on applied and computational mathematics, nonlinear functional analysis and optimization. It introduces emerging topics like wavelets, Gabor system, inverse pro...
Reliability analysis of digital based I and C system
Kang, I. S.; Cho, B. S.; Choi, M. J. [KOPEC, Yongin (Korea, Republic of)
1999-10-01
Digital technology is rapidly being applied to control and monitoring systems, both to replace analog components installed in existing plants and in the design of new nuclear power plants, in Korea as well as abroad. Despite the many merits of digital technology, it faces a new problem of reliability assurance, and studies to solve this problem are being pursued vigorously in foreign countries. The reliability of the KNGR Engineered Safety Features Component Control System (ESF-CCS), a digital based I and C system, was analyzed to verify fulfillment of the ALWR EPRI-URD requirement for reliability analysis and to eliminate hazards in a design applying new technology. A qualitative analysis using FMEA and a quantitative analysis using a reliability block diagram were performed. The results of the analyses are presented in this paper.
Reliability analysis and utilization of PEMs in space application
Jiang, Xiujie; Wang, Zhihua; Sun, Huixian; Chen, Xiaomin; Zhao, Tianlin; Yu, Guanghua; Zhou, Changyi
2009-11-01
More and more plastic encapsulated microcircuits (PEMs) are used in space missions to achieve high performance. Since PEMs are designed for use in terrestrial operating conditions, their successful use in the harsh space environment is closely tied to reliability issues, which must be considered first. However, there is no ready-made methodology for PEMs in space applications. This paper discusses the reliability issues involved in using PEMs in space. The reliability analysis can be divided into five categories: radiation testing, radiation hardness, screening tests, reliability calculation and reliability assessment. A case study is presented to illustrate the details of the process, in which a PEM part is used in the Double-Star Project, a joint space program between the European Space Agency (ESA) and China. The influence of environmental constraints, including radiation, humidity, temperature and mechanics, on the PEM part has been considered. Both Double-Star Project satellites are still operating well in space.
Discrete event simulation versus conventional system reliability analysis approaches
Kozine, Igor
2010-01-01
Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author's experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches...
Reliability analysis of dispersion nuclear fuel elements
Ding, Shurong; Jiang, Xin; Huo, Yongzhong; Li, Linan
2008-03-01
Taking a dispersion fuel element as a special particle composite, the representative volume element is chosen to act as the research object. The fuel swelling is simulated through temperature increase. The large strain elastoplastic analysis is carried out for the mechanical behaviors using FEM. The results indicate that the fission swelling is simulated successfully; the thickness increments grow linearly with burnup; with increasing of burnup: (1) the first principal stresses at fuel particles change from tensile ones to compression ones, (2) the maximum Mises stresses at the particles transfer from the centers of fuel particles to the location close to the interfaces between the matrix and the particles, their values increase with burnup; the maximum Mises stresses at the matrix exist in the middle location between the two particles near the mid-plane along the length (or width) direction, and the maximum plastic strains are also at the above region.
Reliability analysis of dispersion nuclear fuel elements
Ding Shurong [Department of Mechanics and Engineering Science, Fudan University, Shanghai 200433 (China)], E-mail: dsr1971@163.com; Jiang Xin [Department of Mechanics and Engineering Science, Fudan University, Shanghai 200433 (China); Huo Yongzhong [Department of Mechanics and Engineering Science, Fudan University, Shanghai 200433 (China)], E-mail: yzhuo@fudan.edu.cn; Li Linan [Department of Mechanics, Tianjin University, Tianjin 300072 (China)
2008-03-15
Taking a dispersion fuel element as a special particle composite, the representative volume element is chosen to act as the research object. The fuel swelling is simulated through temperature increase. The large strain elastoplastic analysis is carried out for the mechanical behaviors using FEM. The results indicate that the fission swelling is simulated successfully; the thickness increments grow linearly with burnup; with increasing of burnup: (1) the first principal stresses at fuel particles change from tensile ones to compression ones, (2) the maximum Mises stresses at the particles transfer from the centers of fuel particles to the location close to the interfaces between the matrix and the particles, their values increase with burnup; the maximum Mises stresses at the matrix exist in the middle location between the two particles near the mid-plane along the length (or width) direction, and the maximum plastic strains are also at the above region.
Zanto, Theodore P; Pa, Judy; Gazzaley, Adam
2014-01-01
As the aging population grows, it has become increasingly important to carefully characterize amnestic mild cognitive impairment (aMCI), a preclinical stage of Alzheimer's disease (AD). Functional magnetic resonance imaging (fMRI) is a valuable tool for monitoring disease progression in selectively vulnerable brain regions associated with AD neuropathology. However, the reliability of fMRI data in longitudinal studies of older adults with aMCI is largely unexplored. To address this, aMCI participants completed two visual working memory tasks, a Delayed-Recognition task and a One-Back task, on three separate scanning sessions over a three-month period. Test-retest reliability of the fMRI blood oxygen level dependent (BOLD) activity was assessed using an intraclass correlation (ICC) analysis approach. Results indicated that brain regions engaged during the task displayed greater reliability across sessions compared to regions that were not utilized by the task. During task engagement, differential reliability scores were observed across the brain such that the frontal lobe, medial temporal lobe, and subcortical structures exhibited fair to moderate reliability (ICC=0.3-0.6), while temporal, parietal, and occipital regions exhibited moderate to good reliability (ICC=0.4-0.7). Additionally, reliability across brain regions was more stable when three fMRI sessions were used in the ICC calculation relative to two fMRI sessions. In conclusion, the fMRI BOLD signal is reliable across scanning sessions in this population and thus a useful tool for tracking longitudinal change in observational and interventional studies in aMCI. © 2013.
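The abstract does not state which ICC variant was used; as a generic sketch, a one-way random-effects ICC computed from between- and within-target mean squares looks like this:

```python
# One-way random-effects intraclass correlation, ICC(1), as a generic
# illustration of test-retest reliability grading.  Rows are targets
# (subjects or regions), columns are repeated sessions.  This is a
# textbook formula, not necessarily the study's exact ICC variant.

def icc_oneway(data):
    n = len(data)       # number of targets
    k = len(data[0])    # number of sessions per target
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    # Between-target and within-target mean squares (one-way ANOVA)
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(data, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfectly reproducible sessions give ICC = 1; when session-to-session noise swamps between-target differences the ICC falls toward (or below) zero, which is why bands like 0.3-0.6 are read as only "fair to moderate" reliability.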
Gilmanshin, I. R.; Kirpichnikov, A. P.
2017-09-01
A study of the functioning algorithm of the early-detection module for excessive losses proves that it can be modeled using absorbing Markov chains. Of particular interest is the study of the probabilistic characteristics of the module's functioning algorithm, in order to relate the reliability indicators of individual elements, or the probabilities of certain events, to the likelihood of transmitting reliable information. The relations identified in the analysis allow threshold reliability characteristics to be set for the system components.
Reliability analysis of protection system of advanced pressurized water reactor - APR 1400
Varde, P. V.; Choi, J. G.; Lee, D. Y.; Han, J. B.
2003-04-01
Reliability analysis was carried out for the protection system of the Korean Advanced Pressurized Water Reactor - APR 1400. The main focus of this study was the reliability analysis of the digital protection system; however, to give an integrated statement of overall protection reliability, an attempt has been made to include the shutdown devices and other related aspects based on the information available to date. A sensitivity analysis has been carried out for the critical components and functions in the system. Other aspects, such as importance analysis and human reliability for the critical human actions, form part of this work. The framework provided by this study and the results obtained show that the analysis has the potential to be used as part of a risk-informed approach for future design and regulatory applications
Durability reliability analysis for corroding concrete structures under uncertainty
Zhang, Hao
2018-02-01
This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties in durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
Reliability analysis of HVDC grid combined with power flow simulations
Yang, Yongtao; Langeland, Tore; Solvik, Johan [DNV AS, Hoevik (Norway); Stewart, Emma [DNV KEMA, Camino Ramon, CA (United States)
2012-07-01
Based on a DC grid power flow solver and the proposed GEIR, we carried out a reliability analysis for an HVDC grid test system proposed by CIGRE working group B4-58, with failure statistics collected from a literature survey. The proposed methodology is used to evaluate the impact of converter configuration on the overall reliability performance of the HVDC grid: the symmetrical monopole configuration is compared with the bipole configuration with metallic return wire. The results quantify the improvement in reliability obtained with the latter alternative. (orig.)
Reliability analysis of neutron transport simulation using Monte Carlo method
Souza, Bismarck A. de; Borges, Jose C.
1995-01-01
This work presents a statistical and reliability analysis of data obtained by computer simulation of the neutron transport process using the Monte Carlo method. A general description of the method and its applications is given. Several simulations, corresponding to slowing-down and shielding problems, have been carried out. The influence of the physical dimensions of the materials and of the sample size on the reliability of the results was investigated. The objective was to optimize the sample size so as to obtain reliable results while minimizing computation time. (author). 5 refs, 8 figs
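The sample-size trade-off studied here follows from the 1/√N behavior of Monte Carlo error: halving the statistical uncertainty costs four times the histories. A toy estimator (a purely absorbing slab, not a real transport code) makes the scaling visible:

```python
# Toy Monte Carlo "transport" estimate: the fraction of neutrons whose
# first free flight exceeds the slab thickness (purely absorbing slab,
# mono-directional beam).  True answer: exp(-thickness/mfp).
# The point is the 1/sqrt(N) shrinkage of the statistical error.
import math
import random

random.seed(3)

def transmission_estimate(n, mfp=1.0, thickness=3.0):
    hits = sum(1 for _ in range(n)
               if random.expovariate(1.0 / mfp) > thickness)
    p = hits / n
    se = math.sqrt(p * (1.0 - p) / n)  # binomial standard error
    return p, se

p_small, se_small = transmission_estimate(1_000)
p_large, se_large = transmission_estimate(100_000)
```

With 100x the histories the standard error drops by a factor of 10, which is exactly the computation-time versus reliability trade-off the abstract describes.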
Analysis of sodium valve reliability data at CREDO
Bott, T.F.; Haas, P.M.
1979-01-01
The Centralized Reliability Data Organization (CREDO) has been established at Oak Ridge National Laboratory (ORNL) by the Department of Energy to provide a centralized source of data for reliability/maintainability analysis of advanced reactor systems. The current schedule calls for development of the data system at a moderate pace, with the first major distribution of data in late FY-1980. Continuous long-term collection of engineering, operating, and event data has been initiated at EBR-II and FFTF
Reliability analysis and updating of deteriorating systems with subset simulation
Schneider, Ronald; Thöns, Sebastian; Straub, Daniel
2017-01-01
An efficient approach to reliability analysis of deteriorating structural systems is presented, which considers stochastic dependence among element deterioration. Information on a deteriorating structure obtained through inspection or monitoring is included in the reliability assessment through B...... is an efficient and robust sampling-based algorithm suitable for such analyses. The approach is demonstrated in two case studies considering a steel frame structure and a Daniels system subjected to high-cycle fatigue....
Use of COMCAN III in system design and reliability analysis
Rasmuson, D.M.; Shepherd, J.C.; Marshall, N.H.; Fitch, L.R.
1982-03-01
This manual describes the COMCAN III computer program and its use. COMCAN III is a tool that can be used by the reliability analyst performing a probabilistic risk assessment or by the designer of a system desiring improved performance and efficiency. COMCAN III can be used to determine minimal cut sets of a fault tree, to calculate system reliability characteristics, and to perform qualitative common cause failure analysis
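As a sketch of what "determining minimal cut sets of a fault tree" involves, the following MOCUS-style top-down expansion handles a small coherent AND/OR tree. This is an illustration of the general technique, not COMCAN III's actual algorithm or input format.

```python
def minimal_cut_sets(gate, gates):
    """Top-down expansion of a small acyclic coherent fault tree
    given as {name: ('AND'|'OR', [inputs])}; inputs not in `gates`
    are basic events. Returns the minimal cut sets."""
    op, inputs = gates[gate]
    child = [minimal_cut_sets(i, gates) if i in gates
             else {frozenset([i])} for i in inputs]
    if op == 'OR':
        cuts = set().union(*child)
    else:  # AND: cross-product of the children's cut sets
        cuts = child[0]
        for c in child[1:]:
            cuts = {a | b for a in cuts for b in c}
    # keep only minimal sets (drop proper supersets)
    return {c for c in cuts if not any(o < c for o in cuts)}
```

For the classic shared-event example, TOP = G1 AND G2 with G1 = A OR B and G2 = A OR C, the minimal cut sets are {A} and {B, C}.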
Applications of majorization and Schur functions in reliability and life testing
Proschan, F.
1975-01-01
This is an expository paper presenting basic definitions and properties of majorization and Schur functions, and displaying a variety of applications of these concepts in reliability prediction and modelling, and in reliability inference and life testing
Ramsay, J O
1997-01-01
Scientists today collect samples of curves and other functional observations. This monograph presents many ideas and techniques for such data. Included are expressions in the functional domain of such classics as linear regression, principal components analysis, linear modelling, and canonical correlation analysis, as well as specifically functional techniques such as curve registration and principal differential analysis. Data arising in real applications are used throughout for both motivation and illustration, showing how functional approaches allow us to see new things, especially by exploiting the smoothness of the processes generating the data. The data sets exemplify the wide scope of functional data analysis; they are drawn from growth analysis, meteorology, biomechanics, equine science, economics, and medicine. The book presents novel statistical technology while keeping the mathematical level widely accessible. It is designed to appeal to students, to applied data analysts, and to experienced researc...
Reliability analysis framework for computer-assisted medical decision systems
Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.
2007-01-01
We present a technique that enhances computer-assisted decision (CAD) systems with the ability to assess the reliability of each individual decision they make. Reliability assessment is achieved by measuring the accuracy of a CAD system with known cases similar to the one in question. The proposed technique analyzes the feature space neighborhood of the query case to dynamically select an input-dependent set of known cases relevant to the query. This set is used to assess the local (query-specific) accuracy of the CAD system. The estimated local accuracy is utilized as a reliability measure of the CAD response to the query case. The underlying hypothesis of the study is that CAD decisions with higher reliability are more accurate. The above hypothesis was tested using a mammographic database of 1337 regions of interest (ROIs) with biopsy-proven ground truth (681 with masses, 656 with normal parenchyma). Three types of decision models, (i) a back-propagation neural network (BPNN), (ii) a generalized regression neural network (GRNN), and (iii) a support vector machine (SVM), were developed to detect masses based on eight morphological features automatically extracted from each ROI. The performance of all decision models was evaluated using the Receiver Operating Characteristic (ROC) analysis. The study showed that the proposed reliability measure is a strong predictor of the CAD system's case-specific accuracy. Specifically, the ROC area index for CAD predictions with high reliability was significantly better than for those with low reliability values. This result was consistent across all decision models investigated in the study. The proposed case-specific reliability analysis technique could be used to alert the CAD user when an opinion that is unlikely to be reliable is offered. The technique can be easily deployed in the clinical environment because it is applicable with a wide range of classifiers regardless of their structure and it requires neither additional
Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong
1993-08-01
This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the uses of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions in reliability, estimation of MTBF, processes of probability distributions, down time, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
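Several of the quantities listed above (reliability function, failure rate, MTBF, availability) reduce to one-liners in the constant-failure-rate case; the sketch below is a minimal illustration under that exponential assumption.

```python
import math

def reliability_exp(lam, t):
    """R(t) = exp(-lambda * t) for a constant failure rate lambda;
    the MTBF is then simply 1 / lambda."""
    return math.exp(-lam * t)

def steady_state_availability(mtbf, mttr):
    """A = MTBF / (MTBF + MTTR): the long-run fraction of time
    the item is operable under breakdown maintenance."""
    return mtbf / (mtbf + mttr)
```

For example, with a failure rate of 0.01 per hour (MTBF = 100 h), the reliability at 100 h is exp(-1) ≈ 0.368, and an MTTR of 1 h gives a steady-state availability of 0.99.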
Reliability importance analysis of Markovian systems at steady state using perturbation analysis
Phuc Do Van [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France); Barros, Anne [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Berenguer, Christophe [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)
2008-11-15
Sensitivity analysis has been primarily defined for static systems, i.e. systems described by combinatorial reliability models (fault or event trees). Several structural and probabilistic measures have been proposed to assess component importance. For dynamic systems including inter-component and functional dependencies (cold spares, shared loads, shared resources, etc.), and described by Markov models or, more generally, by discrete-event dynamic system models, the problem of sensitivity analysis remains widely open. In this paper, the perturbation method is used to estimate an importance factor, called the multi-directional sensitivity measure, in the framework of Markovian systems. Some numerical examples are introduced to show why this method offers a promising tool for steady-state sensitivity analysis of Markov processes in reliability studies.
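For the simplest Markovian system, a single repairable component, the steady-state unavailability and its sensitivity to the failure rate have closed forms, which makes a convenient sanity check for any perturbation-based importance measure. The sketch below is ours, not the authors' multi-directional measure; it approximates the sensitivity by finite differences.

```python
def steady_state_unavailability(lam, mu):
    """Two-state Markov model with failure rate lam and repair
    rate mu; the steady-state down-state probability is
    lam / (lam + mu)."""
    return lam / (lam + mu)

def sensitivity_to_failure_rate(lam, mu, eps=1e-6):
    """Finite-difference stand-in for a perturbation-based
    sensitivity: d(unavailability)/d(lam), which analytically
    equals mu / (lam + mu)**2."""
    f = steady_state_unavailability
    return (f(lam + eps, mu) - f(lam, mu)) / eps
```

With lam = 0.01 and mu = 1, the analytic sensitivity is 1/1.01² ≈ 0.980, which the finite-difference estimate reproduces closely.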
Fundamentals of functional analysis
Farenick, Douglas
2016-01-01
This book provides a unique path for graduate or advanced undergraduate students to begin studying the rich subject of functional analysis with fewer prerequisites than is normally required. The text begins with a self-contained and highly efficient introduction to topology and measure theory, which focuses on the essential notions required for the study of functional analysis, and which are often buried within full-length overviews of the subjects. This is particularly useful for those in applied mathematics, engineering, or physics who need to have a firm grasp of functional analysis, but not necessarily some of the more abstruse aspects of topology and measure theory normally encountered. The reader is assumed to only have knowledge of basic real analysis, complex analysis, and algebra. The latter part of the text provides an outstanding treatment of Banach space theory and operator theory, covering topics not usually found together in other books on functional analysis. Written in a clear, concise manner,...
Test-retest reliability of trunk accelerometric gait analysis
Henriksen, Marius; Lund, Hans; Moe-Nilssen, R
2004-01-01
The purpose of this study was to determine the test-retest reliability of a trunk accelerometric gait analysis in healthy subjects. Accelerations were measured during walking using a triaxial accelerometer mounted on the lumbar spine of the subjects. Six men and 14 women (mean age 35.2; range 18...... a definite potential in clinical gait analysis....
Reliability Analysis of a Two Dissimilar Unit Cold Standby System ...
A previous study (2009), using a linear first-order differential equation, evaluated the reliability and availability characteristics of a two-dissimilar-unit cold standby system with three modes, for which no cost-benefit analysis was considered. El-Said (1994) contributed a stochastic analysis of a two-dissimilar-unit standby redundant system.
Summary of the preparation of methodology for digital system reliability analysis for PSA purposes
Hustak, S.; Babic, P.
2001-12-01
The report is structured as follows: Specific features of and requirements for the digital part of NPP Instrumentation and Control (I and C) systems (Computer-controlled digital technologies and systems of the NPP I and C system; Specific types of digital technology failures and preventive provisions; Reliability requirements for the digital parts of I and C systems; Safety requirements for the digital parts of I and C systems; Defence-in-depth). Qualitative analyses of NPP I and C system reliability and safety (Introductory system analysis; Qualitative requirements for and proof of NPP I and C system reliability and safety). Quantitative reliability analyses of the digital parts of I and C systems (Selection of a suitable quantitative measure of digital system reliability; Selected qualitative and quantitative findings regarding digital system reliability; Use of relations among the occurrences of the various types of failure). Mathematical section in support of the calculation of the various types of indices (Boolean reliability models, Markovian reliability models). Example of digital system analysis (Description of a selected protective function and the relevant digital part of the I and C system; Functional chain examined, its components and fault tree). (P.A.)
Reshaping the Science of Reliability with the Entropy Function
Paolo Rocchi
2015-01-01
The present paper revolves around two argument points. First, we observe a certain parallel between the reliability of systems and the progressive disorder of thermodynamical systems, and we import the notion of reversibility/irreversibility into the reliability domain. Second, we note that reliability theory is a very active area of research which has not yet become a mature discipline. This is because the majority of researchers adopt inductive logic instead of the deductive logic typical of mature scientific sectors. The deductive approach was inaugurated by Gnedenko in the reliability domain. We mean to continue Gnedenko's work, and we use the Boltzmann-like entropy to pursue this objective. This paper condenses the papers published in the past decade which illustrate the calculus of the Boltzmann-like entropy. It is demonstrated how every result complies with deductive logic and is consistent with Gnedenko's achievements.
Sekir, U; Yildiz, Y; Hazneci, B; Ors, F; Saka, T; Aydin, T
2008-12-01
In contrast to the single evaluation methods used in the past, the combination of multiple tests allows one to obtain a global assessment of the ankle joint. The aim of this study was to determine the reliability of the different tests in a functional test battery. Twenty-four male recreational athletes with unilateral functional ankle instability (FAI) were recruited for this study. One component of the test battery included five different functional ability tests. These tests included a single limb hopping course, single-legged and triple-legged hop for distance, and six and cross six meter hop for time. The ankle joint position sense and one leg standing test were used for evaluation of proprioception and sensorimotor control. The isokinetic strengths of the ankle invertor and evertor muscles were evaluated at a velocity of 120 degrees /s. The reliability of the test battery was assessed by calculating the intraclass correlation coefficient (ICC). Each subject was tested two times, with an interval of 3-5 days between the test sessions. The ICCs for ankle functional and proprioceptive ability showed high reliability (ICCs ranging from 0.94 to 0.98). Additionally, isokinetic ankle joint inversion and eversion strength measurements represented good to high reliability (ICCs between 0.82 and 0.98). The functional test battery investigated in this study proved to be a reliable tool for the assessment of athletes with functional ankle instability. Therefore, clinicians may obtain reliable information from the functional test battery during the assessment of ankle joint performance in patients with functional ankle instability.
Reliability measures in item response theory: manifest versus latent correlation functions.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Verbeke, Geert; De Boeck, Paul
2015-02-01
For item response theory (IRT) models, which belong to the class of generalized linear or non-linear mixed models, reliability at the scale of observed scores (i.e., manifest correlation) is more difficult to calculate than latent correlation based reliability, but usually of greater scientific interest. This is not least because it cannot be calculated explicitly when the logit link is used in conjunction with normal random effects. As such, approximations such as Fisher's information coefficient, Cronbach's α, or the latent correlation are calculated, allegedly because it is easy to do so. Cronbach's α has well-known and serious drawbacks, Fisher's information is not meaningful under certain circumstances, and there is an important but often overlooked difference between latent and manifest correlations. Here, manifest correlation refers to correlation between observed scores, while latent correlation refers to correlation between scores at the latent (e.g., logit or probit) scale. Thus, using one in place of the other can lead to erroneous conclusions. Taylor series based reliability measures, which are based on manifest correlation functions, are derived and a careful comparison of reliability measures based on latent correlations, Fisher's information, and exact reliability is carried out. The latent correlations are virtually always considerably higher than their manifest counterparts, Fisher's information measure shows no coherent behaviour (it is even negative in some cases), while the newly introduced Taylor series based approximations reflect the exact reliability very closely. Comparisons among the various types of correlations, for various IRT models, are made using algebraic expressions, Monte Carlo simulations, and data analysis. Given the light computational burden and the performance of Taylor series based reliability measures, their use is recommended. © 2014 The British Psychological Society.
Reliability analysis for Atucha II reactor protection system signals
Roca, Jose Luis
1996-01-01
Atucha II is a 745 MW Argentine nuclear power reactor constructed by ENACE SA, Nuclear Argentine Company for Electrical Power Generation, and SIEMENS AG KWU, Erlangen, Germany. A preliminary modular logic analysis of RPS (Reactor Protection System) signals was performed by means of the well-known Swedish professional risk and reliability software named Risk-Spectrum, taking as a basis a reference signal coded as JR17ER003, which commands the two moderator loop valves. From the reliability and behaviour knowledge of this reference signal follows an estimation of the reliability of the other 97 RPS signals. Because of the preliminary character of this analysis, main importance measures are not computed at this stage. Reliability is predicted through the statistic named unavailability. The scope of this analysis is restricted from the measurement elements to the RPS buffer outputs. In the present context only one redundancy is analyzed, so in the instrumentation and control area no CCF (common cause failures) are present for the signals. Finally, the resulting unavailability values could be introduced into the failure domain for the subsequent complete Atucha II reliability analysis, which includes all mechanical and electromechanical features. An estimation of the spurious frequency of RPS signals, defined as faulty by no trip, is also performed
Reliability analysis for Atucha II reactor protection system signals
Roca, Jose L.
2000-01-01
Atucha II is a 745 MW Argentine power nuclear reactor constructed by Nuclear Argentine Company for Electric Power Generation S.A. (ENACE S.A.) and SIEMENS AG KWU, Erlangen, Germany. A preliminary modular logic analysis of RPS (Reactor Protection System) signals was performed by means of the well-known Swedish professional risk and reliability software named Risk-Spectrum, taking as a basis a reference signal coded as JR17ER003, which commands the two moderator loop valves. From the reliability and behaviour knowledge of this reference signal follows an estimation of the reliability of the other 97 RPS signals. Because of the preliminary character of this analysis, main importance measures are not computed at this stage. Reliability is predicted through the statistic named unavailability. The scope of this analysis is restricted from the measurement elements to the RPS buffer outputs. In the present context only one redundancy is analyzed, so in the instrumentation and control area no CCF (common cause failures) are present for the signals. Finally, the resulting unavailability values could be introduced into the failure domain for the subsequent complete Atucha II reliability analysis, which includes all mechanical and electromechanical features. An estimation of the spurious frequency of RPS signals, defined as faulty by no trip, is also performed. (author)
Human reliability analysis methods for probabilistic safety assessment
Pyy, P.
2000-11-01
Human reliability analysis (HRA) for a probabilistic safety assessment (PSA) includes identifying human actions that are significant from the safety point of view, modelling the most important of them in PSA models, and assessing their probabilities. As manifested by many incidents and studies, human actions may have both positive and negative effects on safety and economy. Human reliability analysis is one of the areas of PSA that has direct applications outside the nuclear industry. The thesis focuses upon developments in human reliability analysis methods and data. The aim is to support PSA by extending the applicability of HRA. The thesis consists of six publications and a summary. The summary includes general considerations and a discussion of human actions in the nuclear power plant (NPP) environment. A condensed discussion of the results of the attached publications is then given, including new developments in methods and data. At the end of the summary part, the contribution of the publications to good practice in HRA is presented. In the publications, studies based on the collection of data on maintenance-related failures, simulator runs and expert judgement are presented in order to extend the human reliability analysis database. Furthermore, methodological frameworks are presented to perform a comprehensive HRA, including shutdown conditions, to study the reliability of decision making, and to study the effects of wrong human actions. In the last publication, an interdisciplinary approach to analysing human decision making is presented. The publications also include practical applications of the presented methodological frameworks. (orig.)
The development of a reliable amateur boxing performance analysis template.
Thomson, Edward; Lamb, Kevin; Nicholas, Ceri
2013-01-01
The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and assess its reliability using analysts of varying experience of the sport and performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%) regardless of whether exact or the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.
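The agreement criterion used above, the proportion of paired counts that fall within a predetermined limit of error, is straightforward to compute; the sketch below is a generic illustration with hypothetical counts, not the study's data.

```python
def proportion_agreement(counts_a, counts_b, tol=1):
    """Proportion of paired frequency counts from two observations
    that agree within +/- tol (tol=0 gives exact agreement)."""
    assert len(counts_a) == len(counts_b)
    hits = sum(abs(a - b) <= tol for a, b in zip(counts_a, counts_b))
    return hits / len(counts_a)
```

For example, counts of [10, 5, 3] against [11, 5, 7] agree on two of three indicators within the reference range of ±1.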
Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego
2017-12-01
Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) to provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except for ½ Tr (CV = >20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrate insufficient reliability, and thus should not be used in future studies.
Reliability analysis of cluster-based ad-hoc networks
Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel
2008-01-01
The mobile ad-hoc wireless network (MAWN) is a new and emerging network scheme that is being employed in a variety of applications. The MAWN varies from traditional networks because it is a self-forming and dynamic network. The MAWN is free of infrastructure and, as such, only the mobile nodes comprise the network. Pairs of nodes communicate either directly or through other nodes. To do so, each node acts, in turn, as a source, destination, and relay of messages. The virtue of a MAWN is the flexibility this provides; however, the challenge for reliability analyses is also brought about by this unique feature. The variability and volatility of the MAWN configuration makes typical reliability methods (e.g. reliability block diagram) inappropriate because no single structure or configuration represents all manifestations of a MAWN. For this reason, new methods are being developed to analyze the reliability of this new networking technology. New published methods adapt to this feature by treating the configuration probabilistically or by inclusion of embedded mobility models. This paper joins both methods together and expands upon these works by modifying the problem formulation to address the reliability analysis of a cluster-based MAWN. The cluster-based MAWN is deployed in applications with constraints on networking resources such as bandwidth and energy. This paper presents the problem's formulation, a discussion of applicable reliability metrics for the MAWN, and illustration of a Monte Carlo simulation method through the analysis of several example networks
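The probabilistic-configuration idea can be illustrated with a plain two-terminal Monte Carlo estimate over random link failures. This is a static snapshot sketch of the general approach; the paper's cluster-based formulation and embedded mobility models are considerably richer than this.

```python
import random

def two_terminal_reliability(nodes, links, p_fail, s, t, n=20000, seed=0):
    """Monte Carlo estimate of the probability that s and t remain
    connected when each link fails independently with probability
    p_fail (one fixed network snapshot per trial)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        alive = [e for e in links if rng.random() > p_fail]
        # depth-first search over the surviving links
        adj = {v: [] for v in nodes}
        for a, b in alive:
            adj[a].append(b)
            adj[b].append(a)
        seen, stack = {s}, [s]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        ok += t in seen
    return ok / n
```

For two links in series, each failing with probability 0.1, the exact two-terminal reliability is 0.9² = 0.81, which the estimate approaches as the trial count grows.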
Fatigue Reliability Analysis of a Mono-Tower Platform
Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune
1991-01-01
In this paper, a fatigue reliability analysis of a Mono-tower platform is presented. The failure mode, fatigue failure in the butt welds, is investigated with two different models: one with the fatigue strength expressed through SN relations, the other with the fatigue strength expressed through linear-elastic fracture mechanics (LEFM). In determining the cumulative fatigue damage, Palmgren-Miner's rule is applied. Element reliability, as well as systems reliability, is estimated using first-order reliability methods (FORM). The sensitivity of the systems reliability to various parameters is investigated, including the natural period, damping ratio, current, stress spectrum and parameters describing the fatigue strength. Further, soil damping is shown to be significant for the Mono-tower.
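The FORM step mentioned above reduces, for a limit state that is Gaussian (or has been linearized to be), to computing a reliability index and the corresponding failure probability; a minimal sketch under that simplifying assumption:

```python
import math

def form_linear(mean_g, std_g):
    """First-order reliability for a Gaussian limit state g:
    beta = E[g] / std[g], and Pf = Phi(-beta), evaluated via
    the complementary error function."""
    beta = mean_g / std_g
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))
    return beta, pf
```

A reliability index of beta = 3 corresponds to a failure probability of about 1.35e-3, a common benchmark value in structural reliability.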
Kim, Tae Woon; Choi, Seong Soo; Lee, Dong Gue; Kim, Young Il
1999-12-01
A reliability data management system (RDMS) for the safety systems of PHWR-type plants has been developed and utilized in the reliability analysis of the special safety systems of Wolsong Units 1 and 2, whose plant overhaul period has been lengthened. The RDMS was developed for efficient periodic reliability analysis of the safety systems of Wolsong Units 1 and 2. In addition, this system provides the function of analyzing the effect on safety system unavailability if the test period of a test procedure changes, as well as the function of optimizing the test periods of safety-related test procedures. The RDMS can be utilized to respond actively to requests from the regulatory body regarding the reliability validation of safety systems. (author)
State of the art report on aging reliability analysis
Choi, Sun Yeong; Yang, Joon Eon; Han, Sang Hoon; Ha, Jae Joo
2002-03-01
The goal of this report is to describe the state of the art of aging analysis methods that quantify the effects of component aging. We describe aging analysis methods that calculate the increase in core damage frequency (CDF) due to aging by including the influence of aging in PSA. We also describe several research topics required for the aging analysis of components of domestic NPPs. We have described a statistical model and a reliability-physics model that quantify the effect of aging using PSA methods. It is expected that practical use of the reliability-physics model will increase, though the process with the reliability-physics model is more complicated than that with the statistical model
IEEE guide for the analysis of human reliability
Dougherty, E.M. Jr.
1987-01-01
The Institute of Electrical and Electronics Engineers (IEEE) working group 7.4 of the Human Factors and Control Facilities Subcommittee of the Nuclear Power Engineering Committee (NPEC) has released its fifth draft of a Guide for General Principles of Human Action Reliability Analysis for Nuclear Power Generating Stations, for approval by NPEC. A guide is the least mandating document in the IEEE hierarchy of standards. The purpose is to enhance the performance of a human reliability analysis (HRA) as part of a probabilistic risk assessment (PRA), to assure reproducible results, and to standardize documentation. The guide does not recommend or even discuss specific techniques, which are evolving too rapidly today. Considerable maturation in the analysis of human reliability in a PRA context has taken place in recent years. The IEEE guide on this subject is an initial step toward bringing HRA out of the research and development arena and into the toolbox of standard engineering practices
Reliability Analysis of Wireless Sensor Networks Using Markovian Model
Jin Zhu
2012-01-01
This paper investigates the reliability analysis of wireless sensor networks whose topology switches among possible connections governed by a Markovian chain. We give the quantized relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying the Lyapunov method, sufficient conditions for network reliability are proposed for such topology-switching networks with constant or varying data acquisition rates. When these conditions are satisfied, the quantity of data transported over a wireless network node will not exceed the node capacity, so that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, and may find application in the fields of network design and topology control.
Principle of maximum entropy for reliability analysis in the design of machine components
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
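The "first four moments of the state function" that the method starts from can be estimated by plain sampling whenever the input distribution can be simulated; the sketch below is a generic helper of ours, not the paper's PME machinery.

```python
import random

def first_four_moments(g, sample_input, n=100000, seed=1):
    """Estimate the first four raw moments of a state function
    g(X) by Monte Carlo, given a sampler for the input X."""
    rng = random.Random(seed)
    values = [g(sample_input(rng)) for _ in range(n)]
    return [sum(v ** k for v in values) / n for k in (1, 2, 3, 4)]
```

As a check, for g(x) = x with X uniform on [0, 1], the exact raw moments are 1/2, 1/3, 1/4 and 1/5; these estimated moments are then the inputs from which a PME-based method would construct the probability density.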
Reliability of the Emergency Severity Index: Meta-analysis
Amir Mirhaghi
2015-01-01
Objectives: Although triage systems based on the Emergency Severity Index (ESI) have many advantages in terms of simplicity and clarity, previous research has questioned their reliability in practice. Therefore, the aim of this meta-analysis was to determine the reliability of ESI triage scales. Methods: This meta-analysis was performed in March 2014. Electronic research databases were searched and articles conforming to the Guidelines for Reporting Reliability and Agreement Studies were selected. Two researchers independently examined the selected abstracts. Data were extracted in the following categories: version of scale (latest/older), participants (adult/paediatric), raters (nurse, physician or expert), method of reliability (intra/inter-rater), reliability statistics (weighted/unweighted kappa), and the origin and publication year of the study. The effect size was obtained by the Z-transformation of reliability coefficients. Data were pooled with random-effects models and a meta-regression was performed based on the method-of-moments estimator. Results: A total of 19 studies from six countries were included in the analysis. The pooled coefficient for the ESI triage scales was substantial at 0.791 (95% confidence interval: 0.787-0.795). Agreement was higher with the latest and adult versions of the scale and among expert raters, compared to agreement with older and paediatric versions of the scales and with other groups of raters, respectively. Conclusion: ESI triage scales showed an acceptable level of overall reliability. However, ESI scales require more development in order to see full agreement from all rater groups. Further studies concentrating on other aspects of reliability assessment are needed.
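The Z-transformation pooling mentioned in the methods can be sketched as follows. This is a fixed-effect simplification that weights each study by n - 3; the meta-analysis itself used random-effects models, which additionally account for between-study variance.

```python
import math

def pool_reliability(coeffs, ns):
    """Pool reliability coefficients via Fisher's z-transform,
    weighting each study's z by n - 3, then back-transforming
    the weighted mean to the correlation scale."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in coeffs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)  # inverse of Fisher's z
```

Pooling identical coefficients returns the same value; with unequal coefficients the pooled estimate is pulled toward the larger study.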
Statistical models and methods for reliability and survival analysis
Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo
2013-01-01
Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, and stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical
A new approach for reliability analysis with time-variant performance characteristics
Wang, Zequn; Wang, Pingfeng
2013-01-01
Reliability represents the safety level in industry practice and may vary over time owing to time-variant operating conditions and component deterioration throughout a product life-cycle. Thus, the capability to perform time-variant reliability analysis is of vital importance in practical engineering applications. This paper presents a new approach, referred to as nested extreme response surface (NERS), that can efficiently tackle the time dependency issue in time-variant reliability analysis and enables such problems to be solved by easy integration with advanced time-independent tools. The key of the NERS approach is to build a nested response surface of time corresponding to the extreme value of the limit state function by employing a Kriging model. To obtain the data for the Kriging model, the efficient global optimization technique is integrated with NERS to extract the extreme time responses of the limit state function for any given system input. An adaptive response prediction and model maturation mechanism is developed based on mean square error (MSE) to concurrently improve the accuracy and computational efficiency of the proposed approach. With the nested response surface of time, time-variant reliability analysis can be converted into time-independent reliability analysis, and existing advanced reliability analysis methods can be used. Three case studies are used to demonstrate the efficiency and accuracy of the NERS approach.
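A minimal sketch of the NERS idea of converting a time-variant problem into a time-independent one: for each input, the extreme response over the time horizon is extracted (here by grid search, standing in for the Kriging surrogate plus efficient global optimization), and plain Monte Carlo is then run on the extreme responses. The limit-state function and distributions are assumptions for illustration.

```python
import math
import random

random.seed(1)

# Illustrative time-variant limit state g(x, t); failure when g < 0.
# The form (a decaying resistance minus a cyclic load) is an assumption.
def g(x, t):
    resistance = x * math.exp(-0.01 * t)
    load = 1.0 + 0.3 * math.sin(0.5 * t)
    return resistance - load

# Step 1 (in place of a Kriging surrogate + efficient global optimization):
# for each input x, locate the extreme (minimum) response over the time
# horizon by a simple grid search.
def extreme_response(x, horizon=20.0, steps=200):
    return min(g(x, horizon * i / steps) for i in range(steps + 1))

# Step 2: ordinary time-independent Monte Carlo on the extreme responses.
n = 5_000
failures = sum(1 for _ in range(n)
               if extreme_response(random.gauss(2.0, 0.4)) < 0)
pf = failures / n
```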
Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis
Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William
2009-01-01
This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision making environment that is being sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
Reliability analysis of reactor inspection robot (RIROB)
Eom, H. S.; Kim, J. H.; Lee, J. C.; Choi, Y. R.; Moon, S. S.
2002-05-01
This report describes the method and the results of the reliability analysis of RIROB, developed at the Korea Atomic Energy Research Institute. There are many classic techniques and models for reliability analysis. These techniques and models have been widely used and proven in other industries such as aviation and nuclear power. Although they have been proven in real fields, they are still insufficient for complicated systems such as RIROB, which are composed of computers, networks, electronic parts, mechanical parts, and software. In particular, the application of these analysis techniques to the digital and software parts of complicated systems is immature at this time, so expert judgement plays an important role in evaluating the reliability of such systems. In this report we propose a method which combines diverse evidence relevant to reliability in order to evaluate the reliability of complicated systems such as RIROB. The proposed method combines diverse evidence and performs inference in a formal and quantitative way by using the benefits of Bayesian Belief Nets (BBN).
Reliability analysis of nuclear containment without metallic liners against jet aircraft crash
Siddiqui, N.A.; Iqbal, M.A.; Abbas, H. E-mail: abbas_husain@hotmail.com; Paul, D.K
2003-09-01
The present study presents a methodology for detailed reliability analysis of nuclear containment without metallic liners against aircraft crash. For this purpose, a nonlinear limit state function has been derived using violation of the tolerable crack width as the failure criterion. This criterion has been adopted because radioactive releases may occur if the crack size exceeds the tolerable crack width. The derived limit state uses the response of the containment obtained from a detailed dynamic analysis of the nuclear containment under the impact of a large Boeing jet aircraft. Using this response in conjunction with the limit state function, the reliabilities and probabilities of failure are obtained at a number of vulnerable locations employing an efficient first-order reliability method (FORM). These values of reliability and probability of failure at various vulnerable locations are then used for the estimation of conditional and annual reliabilities of the nuclear containment as a function of its distance from the airport. To study the influence of the various random variables on containment reliability, a sensitivity analysis has been performed. Some parametric studies have also been included to obtain results of field and academic interest.
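A first-order reliability method of the kind used above can be sketched with the classic HL-RF iteration, which finds the most probable failure point in standard normal space. The two-variable limit state below (a crack-width capacity minus a demand) is a hypothetical stand-in for the paper's nonlinear limit state.

```python
import math

# FORM sketch: the HL-RF iteration finds the design point (most probable
# failure point) in standard normal space; beta is its distance from the
# origin and Phi(-beta) approximates the failure probability.
def g(u):
    r = 2.0 + 0.3 * u[0]          # tolerable crack width (normalized), assumed
    s = 1.0 + 0.4 * u[1]          # computed crack width under impact, assumed
    return r - s

def grad(u, h=1e-6):              # forward-difference gradient
    return [(g([u[0] + h, u[1]]) - g(u)) / h,
            (g([u[0], u[1] + h]) - g(u)) / h]

u = [0.0, 0.0]
for _ in range(50):               # HL-RF fixed-point iteration
    gv, gr = g(u), grad(u)
    norm2 = sum(c * c for c in gr)
    alpha = (sum(c * x for c, x in zip(gr, u)) - gv) / norm2
    u = [alpha * c for c in gr]

beta = math.sqrt(sum(c * c for c in u))
pf = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))   # Phi(-beta)
```

For this linear limit state the iteration converges in one step; a nonlinear limit state, as in the study, simply takes more iterations.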
Structural reliability analysis under evidence theory using the active learning kriging model
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
RELOSS, Reliability of Safety System by Fault Tree Analysis
Allan, R.N.; Rondiris, I.L.; Adraktas, A.
1981-01-01
1 - Description of problem or function: Program RELOSS is used in the reliability/safety assessment of any complex system with predetermined operational logic in qualitative and (if required) quantitative terms. The program calculates the possible system outcomes following an abnormal operating condition and the probability of occurrence, if required. Furthermore, the program deduces the minimal cut or tie sets of the system outcomes and identifies the potential common mode failures. 4. Method of solution: The reliability analysis performed by the program is based on the event tree methodology. Using this methodology, the program develops the event tree of a system or a module of that system and relates each path of this tree to its qualitative and/or quantitative impact on specified system or module outcomes. If the system being analysed is subdivided into modules the program assesses each module in turn as described previously and then combines the module information to obtain results for the overall system. Having developed the event tree of a module or a system, the program identifies which paths lead or do not lead to various outcomes depending on whether the cut or the tie sets of the outcomes are required and deduces the corresponding sets. Furthermore the program identifies for a specific system outcome, the potential common mode failures and the cut or tie sets containing potential dependent failures of some components. 5. Restrictions on the complexity of the problem: The present dimensions of the program are as follows. They can, however, be easily modified: Maximum number of modules (equivalent components): 25; Maximum number of components in a module: 15; Maximum number of levels of parentheses in a logical statement: 10; Maximum number of system outcomes: 3; Maximum number of module outcomes: 2; Maximum number of points in time for which quantitative analysis is required: 5; Maximum order of any cut or tie set: 10; Maximum order of a cut or tie of any
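The event-tree methodology that RELOSS automates can be illustrated with a tiny example: enumerate every success/failure combination of the safety functions after an initiating event, multiply the branch probabilities along each path, and group paths by outcome. The three functions, their failure probabilities, and the outcome logic are invented for illustration.

```python
from itertools import product

# Event-tree sketch: after an initiating event, each safety function either
# succeeds or fails with a given probability; enumerating the branches gives
# every path and the probability of each system outcome.
functions = [("detection", 1e-2), ("shutdown", 1e-3), ("cooling", 5e-3)]

outcomes = {}
for states in product([True, False], repeat=len(functions)):  # True = success
    p = 1.0
    for (name, q), ok in zip(functions, states):
        p *= (1.0 - q) if ok else q
    # Simple outcome logic (assumption): safe only if all functions succeed.
    outcome = "safe" if all(states) else "failure"
    outcomes[outcome] = outcomes.get(outcome, 0.0) + p

# The single-function failure paths here are the minimal cut sets of this logic.
```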
Griffel, DH
2002-01-01
A stimulating introductory text, this volume examines many important applications of functional analysis to mechanics, fluid mechanics, diffusive growth, and approximation. Detailed enough to impart a thorough understanding, the text is also sufficiently straightforward for those unfamiliar with abstract analysis. Its four-part treatment begins with distribution theory and discussions of Green's functions. Essentially independent of the preceding material, the second and third parts deal with Banach spaces, Hilbert space, spectral theory, and variational techniques. The final part outlines the
Reliability-Based Robustness Analysis for a Croatian Sports Hall
Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard
2011-01-01
This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness and a simplified mechanical system modelling of a timber truss system. A complex timber structure with a large number of failure modes is modelled with only a few dominant failure modes. First, a component based robustness analysis is performed based on the reliability indices of the remaining elements after the removal of selected critical elements. The robustness is expressed and evaluated by a robustness index. Next, the robustness is assessed using system reliability indices where the probabilistic failure model is modelled by a series system of parallel systems.
Distribution System Reliability Analysis for Smart Grid Applications
Aljohani, Tawfiq Masad
Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repair and loss. To address these reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort to analyze and study the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers, where most electricity problems occur. In this work, we examine the effect of smart grid applications on improving the reliability of power distribution networks. The test system used in this thesis is the IEEE 34-node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of the installation of distributed generators (DGs) on the utility's distribution system and measure the potential improvement in its reliability. The software used in this work is DISREL, an intelligent power distribution package developed by General Reliability Co.
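The standard distribution reliability indices mentioned above are straightforward to compute from interruption records. The outage data (customers interrupted, duration, unserved energy) below are illustrative, not from the IEEE 34-node study.

```python
# Distribution reliability indices from interruption records.
outages = [
    # (customers_interrupted, duration_hours, unserved_kWh) -- made-up data
    (120, 1.5, 300.0),
    (45, 0.5, 40.0),
    (300, 2.0, 1200.0),
]
total_customers = 1000

saifi = sum(c for c, _, _ in outages) / total_customers      # interruptions per customer
saidi = sum(c * d for c, d, _ in outages) / total_customers  # interruption hours per customer
caidi = saidi / saifi                                        # hours per interruption
eue = sum(e for _, _, e in outages)                          # kWh of energy not served
```

Placing an automatic switch or DG that restores part of a feeder shortens the duration or customer count of some outages, which shows up directly as lower SAIDI/SAIFI/EUE.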
Reliability assessment of embedded digital system using multi-state function
Choi, Jong Gyun; Seong, Poong Hyun
2006-01-01
This work describes a combinatorial model for estimating the reliability of an embedded digital system by means of a multi-state function. The model includes a coverage model for fault-handling techniques implemented in digital systems. These fault-handling techniques make it difficult for many types of components in a digital system to be treated as binary state, good or bad. The multi-state function provides a complete analysis of multi-state systems, as which digital systems can be regarded. By adapting the software operational profile to the multi-state function, the HW/SW interaction is also considered in estimating the reliability of the digital system. Using this model, we evaluate the reliability of one board controller in a digital system, the Interposing Logic System (ILS), which is installed in YGN nuclear power units 3 and 4. Since the proposed model is a generalized combinatorial model, its simplification becomes the conventional model that treats the system as binary state. This modeling method is particularly attractive for embedded systems in which small application software is implemented, since applying the method to systems with large software would require very laborious work.
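The role of a coverage model in moving beyond binary component states can be sketched as below: each component is good, failed-but-covered (the fault-handling mechanism masks the fault), or failed-uncovered. The probabilities and coverage factors are illustrative, not from the ILS evaluation.

```python
# Multi-state coverage sketch: the system survives only if no component is in
# the uncovered-failure state. With coverage c = 1 for every component this
# collapses to the conventional binary-state series model.
components = [
    # (failure_probability, coverage) -- illustrative values
    (1e-3, 0.95),
    (5e-4, 0.99),
    (2e-3, 0.90),
]

reliability = 1.0
for q, c in components:
    # P(good) + P(failed and covered) = (1 - q) + q * c
    reliability *= (1.0 - q) + q * c
```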
Reliability analysis - systematic approach based on limited data
Bourne, A.J.
1975-11-01
The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate, and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas and the maximum use of limited reliability data. (author)
Reliability of physical functioning tests in patients with low back pain: a systematic review.
Denteneer, Lenie; Van Daele, Ulrike; Truijen, Steven; De Hertogh, Willem; Meirte, Jill; Stassijns, Gaetane
2018-01-01
The aim of this study was to provide a comprehensive overview of physical functioning tests in patients with low back pain (LBP) and to investigate their reliability. A systematic computerized search was finalized in four different databases on June 24, 2017: PubMed, Web of Science, Embase, and MEDLINE. Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines were followed during all stages of this review. Clinical studies that investigate the reliability of physical functioning tests in patients with LBP were eligible. The methodological quality of the included studies was assessed with the use of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist. To come to final conclusions on the reliability of the identified clinical tests, the current review assessed three factors, namely, outcome assessment, methodological quality, and consistency of description. A total of 20 studies were found eligible and 38 clinical tests were identified. Good overall test-retest reliability was concluded for the extensor endurance test (intraclass correlation coefficient [ICC]=0.93-0.97), the flexor endurance test (ICC=0.90-0.97), the 5-minute walking test (ICC=0.89-0.99), the 50-ft walking test (ICC=0.76-0.96), the shuttle walk test (ICC=0.92-0.99), the sit-to-stand test (ICC=0.91-0.99), and the loaded forward reach test (ICC=0.74-0.98). For inter-rater reliability, only one test, namely, the Biering-Sörensen test (ICC=0.88-0.99), could be concluded to have an overall good inter-rater reliability. None of the identified clinical tests could be concluded to have a good intrarater reliability. Further investigation should focus on a better overall study methodology and the use of identical protocols for the description of clinical tests. The assessment of reliability is only a first step in the recommendation process for the use of clinical tests. In future research, the identified clinical tests in the
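The intraclass correlation coefficients quoted above can be illustrated with the one-way random model, ICC(1), the simplest member of the ICC family. The scores below are invented: 6 patients, 2 sessions of an endurance test; published studies typically report two-way models, so this is a sketch of the statistic, not a reproduction of any study.

```python
# One-way random-effects ICC from a subjects x sessions score table.
scores = [
    (110, 112), (95, 98), (130, 126), (80, 84), (150, 147), (60, 63),
]
n, k = len(scores), len(scores[0])

grand = sum(sum(row) for row in scores) / (n * k)
row_means = [sum(row) / k for row in scores]

# One-way ANOVA mean squares: between subjects and within subjects.
msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
msw = sum((x - m) ** 2 for row, m in zip(scores, row_means)
          for x in row) / (n * (k - 1))

icc = (msb - msw) / (msb + (k - 1) * msw)
```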
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, showing the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component presents a positive correlation with the failure influenced degree of that component, which provides a theoretical basis for reliability allocation of the machine center system.
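The PageRank step can be sketched on a toy failure-propagation graph: components are nodes, an edge i -> j means a failure of i can propagate to j, and power iteration ranks components by how much failure influence flows into them. The 4-component graph and damping factor are illustrative.

```python
# PageRank by power iteration over a small directed failure-propagation graph.
edges = {0: [1, 2], 1: [2], 2: [3], 3: []}
n = 4
d = 0.85                                   # damping factor

rank = [1.0 / n] * n
for _ in range(100):                       # power iteration
    new = [(1.0 - d) / n] * n
    for i, outs in edges.items():
        if outs:
            share = d * rank[i] / len(outs)
            for j in outs:                 # distribute rank along out-edges
                new[j] += share
        else:                              # dangling node: spread uniformly
            for j in range(n):
                new[j] += d * rank[i] / n
    rank = new
```

Node 3, which sits at the end of every propagation chain, ends up with the highest rank, i.e. the highest failure influenced degree in this toy model.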
Human reliability analysis of Lingao Nuclear Power Station
Zhang Li; Huang Shudong; Yang Hong; He Aiwu; Huang Xiangrui; Zheng Tao; Su Shengbing; Xi Haiying
2001-01-01
The necessity of human reliability analysis (HRA) for Lingao Nuclear Power Station is analyzed, and the method and operating procedures of HRA are briefly described. One of the human factors events (HFEs) is analyzed in detail and some questions of HRA are discussed. The authors present the analytical results of 61 HFEs and give a brief introduction to the contribution of HRA to Lingao Nuclear Power Station.
System Reliability Analysis Capability and Surrogate Model Application in RAVEN
Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-11-01
This report collects the efforts performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.
Factorial validation and reliability analysis of the brain fag syndrome ...
Results: Two valid factors emerged, with items 1-3 and items 4, 5 & 7 loading on them respectively, making the BFSS a two-dimensional (multidimensional) scale which measures 2 aspects of brain fag [labeled burning sensation and crawling sensation respectively]. The reliability analysis yielded a Cronbach Alpha coefficient of ...
Huh, Jae Sung; Kwak, Byung Man
2011-01-01
Robust optimization and reliability-based design optimization are some of the methodologies employed to take into account the uncertainties of a system at the design stage. For applying such methodologies to solve industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; further, the results of the sensitivity analysis, which is needed for the search direction during the optimization process, should also be accurate. The aim of this study is to incorporate the function approximation moment method into the sensitivity analysis formulation, which is expressed in an integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity because no additional function calculation is needed provided the failure probability or statistical moments have been calculated.
A study of operational and testing reliability in software reliability analysis
Yang, B.; Xie, M.
2000-01-01
Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. Software systems improve during the testing phase, whereas they normally do not change in the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up or even misused in some existing literature. Using different reliability concepts leads to different reliability values and, in turn, to different reliability-based decisions. The difference between the estimated reliabilities is studied and the effect on the optimal release time is investigated.
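The distinction between testing and operational reliability can be made concrete with a common NHPP model, Goel-Okumoto, with mean value function m(t) = a(1 - exp(-b t)). During testing the reliability over a mission of length x after test time t is exp(-(m(t+x) - m(t))); in operation the failure intensity is frozen at lambda(t), giving exp(-lambda(t) x). The parameter values are illustrative.

```python
import math

# Goel-Okumoto NHPP: a = total expected faults, b = detection rate (assumed).
a, b = 100.0, 0.05

def m(t):                                 # mean value function
    return a * (1.0 - math.exp(-b * t))

def intensity(t):                         # failure intensity lambda(t) = m'(t)
    return a * b * math.exp(-b * t)

t, x = 50.0, 1.0                          # end of testing, mission length
r_testing = math.exp(-(m(t + x) - m(t)))  # reliability while still debugging
r_operational = math.exp(-intensity(t) * x)  # reliability with code frozen
```

Because the intensity keeps decreasing during testing, the testing reliability is always at least the operational reliability here, which is exactly why the paper warns against mixing the two concepts.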
ZERBERUS - the code for reliability analysis of crack containing structures
Cizelj, L.; Riesch-Oppermann, H.
1992-04-01
A brief description of the First- and Second-Order Reliability Methods, which form the theoretical background of the code, is given. The code structure is described in detail, with special emphasis on the new application fields. A numerical example investigates the failure probability of steam generator tubing affected by stress corrosion cracking. The changes necessary to accommodate this analysis within the ZERBERUS code are explained. Analysis results are compared to different Monte Carlo techniques. (orig./HP)
Dietlmeier, W.; Gossner, S.; Gueldner, W.; Hoertner, H.; von Linden, J.; Preischl, W.; Reichart, G.; Spindler, H.; Volmer, G.; Zipf, G.
1981-01-01
Based on the event tree analysis as documented in the Appendix 1, the failure probabilities of the system functions required to control the initiating events are evaluated in this Appendix 2. The reliability investigations necessary for the evaluation of the event sequences are performed mostly by means of the fault tree analysis. The methods of the reliability analysis, the composition and function of the systems important to safety and the functional tests performed on these systems are dealt with in detail. The comprehensive documentation of the reliability analyses as performed for the internal events necessitated a division of this Appendix 2 into two volumes.
Accident Sequence Evaluation Program: Human reliability analysis procedure
Swain, A.D.
1987-02-01
This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.
Inter comparison of REPAS and APSRA methodologies for passive system reliability analysis
Solanki, R.B.; Krishnamurthy, P.R.; Singh, Suneet; Varde, P.V.; Verma, A.K.
2014-01-01
The increasing use of passive systems in innovative nuclear reactors puts demands on the reliability assessment of these passive systems. Passive systems operate on driving forces such as natural circulation, gravity, and internal stored energy, which are moderately weaker than those of active components. Hence, phenomenological failures (virtual components) are as important as equipment failures (real components) in the evaluation of passive system reliability. The contribution of the mechanical components to passive system reliability can be evaluated in a classical way using the available component reliability database and well-known methods. On the other hand, different methods are required to evaluate the reliability of processes like thermohydraulics, owing to the lack of adequate failure data. Research on the reliability assessment of passive systems and their integration into PSA is ongoing worldwide; however, consensus has not been reached. Two of the most widely used methods are Reliability Evaluation of Passive Systems (REPAS) and Assessment of Passive System Reliability (APSRA). Both these methods characterize the uncertainties involved in the design and process parameters governing the function of the passive system. However, they differ in the quantification of passive system reliability. Inter-comparison among the available methods provides useful insights into the strengths and weaknesses of each. This paper highlights the results of the thermal hydraulic analysis of a typical passive isolation condenser system carried out using the RELAP mod 3.2 computer code, applying the REPAS and APSRA methodologies. The failure surface is established for the passive system under consideration and the system reliability has also been evaluated using these methods. Challenges involved in passive system reliabilities are identified, which require further attention in order to overcome the shortcomings of these
Maintenance management of railway infrastructures based on reliability analysis
Macchi, Marco; Garetti, Marco; Centrone, Domenico; Fumagalli, Luca; Piero Pavirani, Gian
2012-01-01
Railway infrastructure maintenance plays a crucial role in rail transport. It aims at guaranteeing the safety of operations and the availability of railway tracks and related equipment for traffic regulation. Moreover, it is a major cost for rail transport operations. Thus, increased competition in the traffic market is calling for maintenance improvement, aiming at the reduction of maintenance expenditures while keeping the safety of operations. This issue is addressed by the methodology presented in the paper. The first step of the methodology consists of a family-based approach for the equipment reliability analysis; its purpose is the identification of families of railway items which can be given the same reliability targets. The second step builds the reliability model of the railway system for identifying the most critical items, given a required service level for the transportation system. The two methods have been implemented and tested in practical case studies, in the context of Rete Ferroviaria Italiana, the Italian public limited company for railway transportation.
An application of the fault tree analysis for the power system reliability estimation
Volkanovski, A.; Cepin, M.; Mavko, B.
2007-01-01
The power system is a complex system with its main function to produce, transfer and provide consumers with electrical energy. Combinations of failures of components in the system can result in a failure of power delivery to certain load points and in some cases in a full blackout of power system. The power system reliability directly affects safe and reliable operation of nuclear power plants because the loss of offsite power is a significant contributor to the core damage frequency in probabilistic safety assessments of nuclear power plants. The method, which is based on the integration of the fault tree analysis with the analysis of the power flows in the power system, was developed and implemented for power system reliability assessment. The main contributors to the power system reliability are identified, both quantitatively and qualitatively. (author)
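The cut-set quantification used in this kind of fault-tree integration can be sketched as below: the top event (loss of supply at a load point) is expressed through its minimal cut sets, and the rare-event approximation sums their probabilities as an upper bound. The components, failure probabilities, and cut sets are invented for illustration.

```python
# Fault-tree sketch: basic-event failure probabilities (illustrative).
basic = {"line1": 1e-3, "line2": 1e-3, "transformer": 5e-4, "generator": 2e-3}

# Minimal cut sets: either both redundant lines fail, or the transformer
# fails, or the generator fails (assumed system logic).
cut_sets = [{"line1", "line2"}, {"transformer"}, {"generator"}]

def cut_probability(cs):
    p = 1.0
    for comp in cs:               # basic events assumed independent
        p *= basic[comp]
    return p

# Rare-event (upper-bound) approximation to the top-event probability.
top_bound = sum(cut_probability(cs) for cs in cut_sets)
```

The cut-set probabilities themselves also give the qualitative ranking of contributors that the paper reports alongside the quantitative results.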
An exact method for solving logical loops in reliability analysis
Matsuoka, Takeshi
2009-01-01
This paper presents an exact method for solving logical loops in reliability analysis. Systems that include logical loops are usually described by simultaneous Boolean equations. First, we present a basic rule for solving simultaneous Boolean equations. Next, we show the analysis procedure for a three-component system with external supports. Third, more detailed discussions are given for the establishment of logical loop relations. Finally, we take up two typical structures that include more than one logical loop; their analysis results and corresponding GO-FLOW charts are given. The proposed analytical method is applicable to loop structures that can be described by simultaneous Boolean equations, and it is very useful in evaluating the reliability of complex engineering systems.
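The loop-solving idea can be illustrated by iterating a pair of simultaneous Boolean equations to their least fixed point. The two mutually supporting components below are an illustrative assumption, not the paper's example system:

```python
# Hedged sketch: resolving a logical loop by least-fixed-point iteration on
# simultaneous Boolean equations (component names are illustrative).
# Two components support each other: each works if its own local supply
# works OR the other component backs it up.

def solve_loop(local_a, local_b):
    a = b = False                  # start from the least (all-failed) state
    while True:
        new_a = local_a or b       # A works if its supply works or B backs it up
        new_b = local_b or a       # symmetric equation for B
        if (new_a, new_b) == (a, b):
            return a, b            # fixed point reached: loop resolved
        a, b = new_a, new_b

print(solve_loop(True, False))   # (True, True): A bootstraps B through the loop
print(solve_loop(False, False))  # (False, False): the loop alone cannot start
```

Starting from the all-failed state guarantees the least fixed point, which avoids the unphysical "the loop sustains itself with no external support" solution.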
A framework for intelligent reliability centered maintenance analysis
Cheng Zhonghua; Jia Xisheng; Gao Ping; Wu Su; Wang Jianzhao
2008-01-01
To improve the efficiency of reliability-centered maintenance (RCM) analysis, case-based reasoning (CBR), a kind of artificial intelligence (AI) technology, was successfully introduced into the RCM analysis process, and a framework for intelligent RCM analysis (IRCMA) was studied. The idea behind IRCMA is that the historical records of RCM analysis on similar items can be referenced and used for the current RCM analysis of a new item. Because many common or similar items may exist in the analyzed equipment, the repeated tasks of RCM analysis can be considerably simplified or avoided by revising similar cases when conducting RCM analysis. Based on previous theoretical studies, an intelligent RCM analysis system (IRCMAS) prototype was developed. This research has focused on the definition, basic principles, and framework of IRCMA, as well as a discussion of critical techniques in IRCMA. Finally, the IRCMAS prototype is presented based on a case study.
2010-08-18
... airplanes. 1. Function and Reliability Testing. Flight tests: In place of 14 CFR 21.35(b)(2), the following...; Function and Reliability Testing AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special... to CAR part 3 became effective January 15, 1951, and deleted the service test requirements in Section...
Reliability test and failure analysis of high power LED packages
Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng
2011-01-01
A new application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared; it is found that both test methods and failure criteria differ widely, so rapid reliability assessment standards are urgently needed for the LED industry. An 85 °C/85% RH test with 700 mA drive current was used on our LED modules and those of three other vendors for 1000 h: our modules showed no visible degradation in optical performance, while the modules of two other vendors showed significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT, and optical microscopy are applied to the LED packages, and failure mechanisms such as delaminations and cracks are detected after the accelerated reliability testing. The finite element simulation method is helpful for the failure analysis and reliability design of LED packaging; one example shows that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing. (semiconductor devices)
Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory
Cacuci, D. G.; Cacuci, D. G.; Ionescu-Bujor, M.
2008-01-01
The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)
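For the paradigm binary component above, the closed-form unreliability and its sensitivity to the failure rate can be checked directly. This sketch does not implement the ASAP itself; it only verifies the analytic derivative of the closed-form solution against a brute-force finite difference, with illustrative rate and time values:

```python
# Hedged sketch: for a binary component failing irreparably at constant
# rate lam, the unreliability is F(t) = 1 - exp(-lam * t), and the exact
# sensitivity is dF/dlam = t * exp(-lam * t). Adjoint methods compute such
# derivatives without repeated model reruns; here we just cross-check the
# analytic sensitivity against a central finite difference.
import math

def unreliability(lam, t):
    return 1.0 - math.exp(-lam * t)

def sensitivity_exact(lam, t):
    return t * math.exp(-lam * t)

def sensitivity_fd(lam, t, h=1e-6):
    return (unreliability(lam + h, t) - unreliability(lam - h, t)) / (2 * h)

lam, t = 1e-3, 100.0
print(sensitivity_exact(lam, t))
print(sensitivity_fd(lam, t))
```

The computational advantage claimed for the ASAP shows up when the model has many parameters: one adjoint solve yields all sensitivities, whereas finite differences need two model runs per parameter.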
A Review: Passive System Reliability Analysis – Accomplishments and Unresolved Issues
Nayak, Arun Kumar, E-mail: arunths@barc.gov.in [Reactor Engineering Division, Reactor Design and Development Group, Bhabha Atomic Research Centre, Mumbai (India); Chandrakar, Amit [Homi Bhabha National Institute, Mumbai (India); Vinod, Gopika [Reactor Safety Division, Reactor Design and Development Group, Bhabha Atomic Research Centre, Mumbai (India)
2014-10-10
Reliability assessment of passive safety systems is one of the important issues, since the safety of advanced nuclear reactors relies on several passive features. In this context, a few methodologies, such as reliability evaluation of passive safety system (REPAS), reliability methods for passive safety functions (RMPS), and analysis of passive systems reliability (APSRA), have been developed in the past. These methodologies have been used to assess the reliability of various passive safety systems. While these methodologies have certain features in common, they differ in their treatment of certain issues, for example, the treatment of model uncertainties and the deviation of geometric and process parameters from their nominal values. This paper presents the state of the art in passive system reliability assessment methodologies, the accomplishments, and the remaining issues. In this review, three critical issues pertaining to passive systems performance and reliability have been identified. The first issue is the applicability of best estimate codes and model uncertainty: best-estimate phenomenological simulations of natural-convection passive systems can involve significant uncertainties, and these uncertainties must be incorporated in an appropriate manner in the performance and reliability analysis of such systems. The second issue is the treatment of dynamic failure characteristics of components of passive systems. REPAS, RMPS, and APSRA do not consider dynamic failures of components or processes, which may have a strong influence on the failure of passive systems. The influence of dynamic failure characteristics of components on system failure probability is presented with the help of a dynamic reliability methodology based on Monte Carlo simulation. The analysis of a benchmark problem of a hold-up tank shows the error in failure probability estimation when the dynamism of components is not considered. It is thus suggested that dynamic reliability methodologies must be
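The Monte Carlo style of reliability estimation mentioned above can be sketched as follows: sample uncertain process parameters, run a performance model, and count failures. The stand-in "natural circulation" model, its parameters, and the failure threshold below are illustrative assumptions only, not the benchmark problem:

```python
# Hedged sketch of Monte Carlo passive-system reliability estimation.
# The flow model and all parameter values are illustrative assumptions.
import random

random.seed(0)

def passive_system_fails(heat, resistance, min_flow=1.0):
    flow = (heat / resistance) ** 0.5  # stand-in natural-circulation model
    return flow < min_flow             # failure: insufficient cooling flow

n, failures = 100_000, 0
for _ in range(n):
    heat = random.gauss(1.5, 0.3)        # nominal 1.5, uncertain
    resistance = random.gauss(1.0, 0.1)  # nominal 1.0, uncertain
    failures += passive_system_fails(max(heat, 1e-9), max(resistance, 1e-9))

print(failures / n)  # estimated failure probability
```

This captures parameter deviations from nominal values; the dynamic-failure issue raised in the review would additionally require sampling component state transitions over time inside each trial.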
Oden, J Tinsley
2010-01-01
The textbook is designed for a crash course for beginning graduate students majoring in something besides mathematics, introducing the mathematical foundations that lead to classical results in functional analysis. More specifically, Oden and Demkowicz want to prepare students to learn the variational theory of partial differential equations, distributions, and Sobolev spaces, and numerical analysis with an emphasis on finite element methods. The 1996 first edition has been used in a rather intensive two-semester course. - Book News, June 2010
Hollo, E.
1985-08-01
This Final Report summarizes the results of R and D work done within IAEA-VEIKI (Institute for Electrical Power Research, Budapest, Hungary) Research Contract No. 3210 during the 3-year period 01.08.1982 - 31.08.1985. Chapter 1 lists the main research objectives of the project; the main results obtained are summarized in Chapters 2 and 3. Outcomes from the development of failure modelling methodologies and their application to C/I components of WWER-440 units are as follows (Chapter 2): improvement of available ''failure mode and effect analysis'' methods and mini-fault tree structures usable for automatic disturbance (DAS) and reliability (RAS) analysis; general classification and determination of functional failure modes of WWER-440 NPP C/I components; and set-up of logic models for motor-operated control valves and the rod control/drive mechanism. Results of the development of methods and their application to reliability modelling of NPP components and systems cover (Chapter 3): development of an algorithm (computer code COMPREL) for component-related failure and reliability parameter calculation; reliability analysis of the PAKS II NPP diesel system; and definition of functional requirements for a reliability data bank (RDB) in WWER-440 units, including determination of the RDB input/output data structure and data manipulation services. The methods used are a-priori failure mode and effect analysis, combined fault tree/event tree modelling techniques, structured computer programming, and the application of probability theory to the nuclear field
Harro, Cathy C; Garascia, Chelsea
2018-01-10
Postural control declines with aging and is an independent risk factor for falls in older adults. Objective examination of balance function is warranted to direct fall prevention strategies. Force platform (FP) systems provide quantitative measures of postural control and analysis of different aspects of balance. The purpose of this study was to examine the reliability and validity of FP measures in healthy older adults. This study enrolled 46 healthy elderly adults, mean age 67.67 (5.1) years, who had no history of falls. They were assessed on 3 standardized tests on the NeuroCom Equitest FP system: limits of stability (LOS), motor control test (MCT), and sensory organization test (SOT). The test battery was administered twice within a 10-day period for test-retest reliability; intraclass correlation coefficients (ICCs), standard error of measurement (SEM), and minimal detectable change based on a 95% confidence interval (MDC95) were calculated. FP measures were compared with criterion clinical balance (Mini-BESTest and Functional Gait Assessment) and gait (10-m walk and 6-minute walk) measures to examine concurrent validity using Pearson correlation coefficients. Multiple linear regression analysis examined whether age and activity level were associated with FP performance. The α level was set at P < .05. End point excursion measures all demonstrated excellent test-retest reliability (ICC = 0.90, 0.85, and 0.77, respectively), whereas moderate to good reliability was found for the SOT vestibular ratio score (ICC = 0.71). There was large variability in performance in this healthy elderly cohort, resulting in relatively large MDC95 values for these measures, especially for the LOS test. Fair correlations were found between LOS end point excursion and clinical balance and gait measures (r = 0.31-0.49), and between MCT average latency and gait measures only (r = -0.32). No correlations were found between SOT measures and clinical balance and gait measures. Age was only marginally
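The agreement statistics used in this record follow conventional definitions: the standard error of measurement is SEM = SD * sqrt(1 - ICC), and the minimal detectable change at 95% confidence is MDC95 = 1.96 * sqrt(2) * SEM. A small sketch with an illustrative SD value (not taken from the study):

```python
# Hedged sketch of the conventional SEM and MDC95 formulas.
# The SD value is illustrative, not from the study's data.
import math

def sem(sd, icc):
    """Standard error of measurement from sample SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence (two measurements)."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

print(sem(10.0, 0.90))    # ≈ 3.16
print(mdc95(10.0, 0.90))  # ≈ 8.77
```

Note how MDC95 scales with the sample SD: this is why the large performance variability in the cohort yields large MDC95 values even for measures with excellent ICCs.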
Reliability analysis of maintenance operations for railway tracks
Rhayma, N.; Bressolette, Ph.; Breul, P.; Fogli, M.; Saussine, G.
2013-01-01
Railway engineering is confronted with problems caused by degradation of the railway network, which requires important and costly maintenance work. However, because of the lack of knowledge of the geometrical and mechanical parameters of the track, it is difficult to optimize maintenance management. In this context, this paper presents a new methodology to analyze the behavior of railway tracks. It combines new diagnostic devices, which make it possible to obtain a large amount of data and thus to compute statistics on the geometric and mechanical parameters, with a non-intrusive stochastic approach that can be coupled with any mechanical model. Numerical results show the possibilities of this methodology for reliability analysis of different maintenance operations. In the future, this approach will give railway managers important information for optimizing maintenance operations using reliability analysis
Yixiong Feng
2017-03-01
The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. Intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low-carbon-oriented product reliability optimal design model is proposed in this paper: (1) the related expert evaluation information was prepared in interval numbers; (2) an improved product failure analysis considering the uncertain carbon emissions of the subsystems was performed to obtain subsystem weights that take the carbon emissions into consideration, and an interval grey correlation analysis was conducted to obtain subsystem weights that take the uncertain correlations inside the product into consideration; using these two kinds of subsystem weights and different caution indicators of the decision maker, a series of product reliability design schemes is obtained; (3) interval-valued intuitionistic fuzzy sets (IVIFSs) were employed to select the optimal reliability design scheme based on three attributes, namely, low carbon, correlation and functions, and economic cost. The case study of a vertical CNC lathe proves the superiority and rationality of the proposed method.
On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods
Gallegos, A. C.; Xie, J.; Suarez Salas, L.
2017-12-01
The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration when the RSTF is retrieved and interpreted as the large event STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain based matrix deconvolution. We find when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017) which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the
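The bias discussed above comes from a basic property of convolution: if the small event's STF is finite rather than a delta function, the recorded seismogram carries the durations of both STFs, and naive EGF deconvolution attributes the extra width to the large event. A toy demonstration that durations add under convolution, with illustrative discrete series (not real STF data):

```python
# Hedged sketch: durations add under convolution, so deconvolving with a
# true delta preserves the large-event STF length, while a finite small-
# event STF broadens the apparent source duration. Series are toy values.

def convolve(a, b):
    """Direct discrete convolution; output length is len(a)+len(b)-1."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

big_stf = [0.0, 1.0, 2.0, 1.0, 0.0]  # finite-duration "large event" STF
small_stf = [0.5, 1.0, 0.5]          # finite small-event STF, *not* a delta
delta = [1.0]

print(len(convolve(big_stf, delta)))      # 5: duration preserved
print(len(convolve(big_stf, small_stf)))  # 7: apparent duration broadened
```

The moment ratio survives this broadening (the sum of a convolution equals the product of the sums), which is consistent with the study's finding that the duration difference and moment ratio are the reliably recoverable quantities.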
Reliability analysis for the creep rupture mode of failure
Vaidyanathan, S.
1975-01-01
An analytical study has been carried out to relate the factors of safety employed in the design of a component to the probability of failure in the thermal creep rupture mode. The analysis considers the statistical variations in the operating temperature, stress, and rupture time, and applies the life-fraction damage criterion as the indicator of failure. Typical results for solution-annealed type 304 stainless steel for the temperature and stress variations expected in an LMFBR environment have been obtained. The analytical problem was solved by considering the joint distribution of the independent variables and deriving the distribution of the function associated with the probability of failure by integrating over the proper regions as dictated by the deterministic design rule. This leads to a triple integral for the final probability of failure, where the coefficients of variation associated with the temperature, stress, and rupture time distributions can be specified by the user. The derivation is general and can be used for time-varying stress histories and for irradiated material whose rupture time varies with accumulated fluence. Example calculations for solution-annealed type 304 stainless steel have been carried out for an assumed coefficient of variation of 2% for temperature and 6% for stress. The results show that the probability of failure associated with the time-dependent stress intensity limits specified in ASME Boiler and Pressure Vessel Code Section III Code Case 1592 is less than 5x10^-8. Rupture under thermal creep conditions is a highly complicated phenomenon. It is believed that the present study will help in quantifying the reliability to be expected with deterministic design factors of safety
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
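The quoted result about mutually exclusive failure modes is easy to make concrete: the expected loss given failure is a linear combination of the per-mode expected losses, weighted by the conditional probabilities that each mode initiates the failure. A sketch with illustrative numbers:

```python
# Hedged sketch: expected loss given failure as a probability-weighted
# combination over mutually exclusive failure modes. Values are illustrative.

def expected_loss_given_failure(mode_probs, mode_losses):
    assert abs(sum(mode_probs) - 1.0) < 1e-9, "modes must be exhaustive"
    return sum(p * loss for p, loss in zip(mode_probs, mode_losses))

# Three mutually exclusive failure modes with very different consequences:
p_modes = [0.7, 0.2, 0.1]                # conditional P(mode | failure)
losses = [1_000.0, 20_000.0, 500_000.0]  # expected loss for each mode
print(expected_loss_given_failure(p_modes, losses))  # ≈ 54700.0
```

The example also illustrates the paper's central point: a design change that shifts probability from the first mode to the third can increase expected losses even if it lowers the overall failure probability.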
Inclusion of task dependence in human reliability analysis
Su, Xiaoyan; Mahadevan, Sankaran; Xu, Peida; Deng, Yong
2014-01-01
Dependence assessment among human errors in human reliability analysis (HRA) is an important issue, which includes the evaluation of the dependence among human tasks and the effect of the dependence on the final human error probability (HEP). This paper presents a computational model to handle dependence in human reliability analysis. The aim of the study is to automatically provide conclusions on the overall degree of dependence and calculate the conditional human error probability (CHEP) once the judgments of the input factors are given. The dependence influencing factors are first identified by the experts and the priorities of these factors are also taken into consideration. Anchors and qualitative labels are provided as guidance for the HRA analyst's judgment of the input factors. The overall degree of dependence between human failure events is calculated based on the input values and the weights of the input factors. Finally, the CHEP is obtained according to a computing formula derived from the technique for human error rate prediction (THERP) method. The proposed method is able to quantify the subjective judgment from the experts and improve the transparency in the HEP evaluation process. Two examples are illustrated to show the effectiveness and the flexibility of the proposed method. - Highlights: • We propose a computational model to handle dependence in human reliability analysis. • The priorities of the dependence influencing factors are taken into consideration. • The overall dependence degree is determined by input judgments and the weights of factors. • The CHEP is obtained according to a computing formula derived from THERP
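The THERP step this method ends with uses the standard conditional-HEP formulas for the five dependence levels (zero, low, moderate, high, complete). A minimal sketch; the nominal HEP value is illustrative, and the paper's contribution (deriving the dependence level from weighted factor judgments) is not reproduced here:

```python
# Hedged sketch of the standard THERP conditional HEP formulas by
# dependence level. The input HEP is an illustrative value.

def chep(hep, level):
    formulas = {
        "ZD": hep,                  # zero dependence: unchanged
        "LD": (1 + 19 * hep) / 20,  # low dependence
        "MD": (1 + 6 * hep) / 7,    # moderate dependence
        "HD": (1 + hep) / 2,        # high dependence
        "CD": 1.0,                  # complete dependence: certain error
    }
    return formulas[level]

print(chep(0.01, "LD"))  # ≈ 0.0595
print(chep(0.01, "HD"))  # ≈ 0.505
```

Even low dependence raises a 1% HEP to roughly 6%, which is why the assessed dependence level dominates the final CHEP far more than the nominal HEP does.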
High-Reliable PLC RTOS Development and RPS Structure Analysis
Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H. [Enersys Co., Daejeon (Korea, Republic of)
2008-04-15
One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports the development of POSAFE-Q through the development of a high-reliability real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety-critical software is essential work to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V, the reliability and safety of a system can be improved, and the development activities, such as software requirement specification, software design specification, component tests, integration tests, and system tests, shall be appropriately documented for V and V.
Reliability analysis with linguistic data: An evidential network approach
Zhang, Xiaoge; Mahadevan, Sankaran; Deng, Xinyang
2017-01-01
In practical applications of reliability assessment of a system in-service, information about the condition of a system and its components is often available in text form, e.g., inspection reports. Estimation of the system reliability from such text-based records becomes a challenging problem. In this paper, we propose a four-step framework to deal with this problem. In the first step, we construct an evidential network with the consideration of available knowledge and data. Secondly, we train a Naive Bayes text classification algorithm based on the past records. By using the trained Naive Bayes algorithm to classify the new records, we build interval basic probability assignments (BPA) for each new record available in text form. Thirdly, we combine the interval BPAs of multiple new records using an evidence combination approach based on evidence theory. Finally, we propagate the interval BPA through the evidential network constructed earlier to obtain the system reliability. Two numerical examples are used to demonstrate the efficiency of the proposed method. We illustrate the effectiveness of the proposed method by comparing with Monte Carlo Simulation (MCS) results. - Highlights: • We model reliability analysis with linguistic data using evidential network. • Two examples are used to demonstrate the efficiency of the proposed method. • We compare the results with Monte Carlo Simulation (MCS).
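The evidence-combination step in the third stage can be sketched with Dempster's rule of combination. The frame, focal elements, and mass values below are illustrative, and the paper's interval-valued BPAs are reduced here to ordinary point-valued BPAs for brevity:

```python
# Hedged sketch: combining the basic probability assignments (BPAs) of two
# inspection-report classifications with Dempster's rule. Masses are
# illustrative; interval BPAs are simplified to point-valued BPAs.

def dempster_combine(m1, m2):
    """Combine two BPAs whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    scale = 1.0 - conflict                   # renormalize away the conflict
    return {k: v / scale for k, v in combined.items()}

W, F = frozenset({"working"}), frozenset({"failed"})
m_report1 = {W: 0.8, W | F: 0.2}             # mostly "working", some ignorance
m_report2 = {W: 0.6, F: 0.3, W | F: 0.1}
print(dempster_combine(m_report1, m_report2))
```

The combined masses would then be propagated through the evidential network to bound the system reliability, as in the paper's fourth step.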
Reliability engineering analysis of ATLAS data reprocessing campaigns
Vaniachine, A; Golubkov, D; Karpenko, D
2014-01-01
During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing in the Grid, their benefits come at costs and result in delays making the performance prediction difficult. Reliability Engineering provides a framework for fundamental understanding of the Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability Engineering approach supported continuous improvements in data reprocessing throughput during LHC data taking. The throughput doubled in 2011 vs. 2010 reprocessing, then quadrupled in 2012 vs. 2011 reprocessing. We present the Reliability Engineering analysis of ATLAS data reprocessing campaigns providing the foundation needed to scale up the Big Data processing technologies beyond the petascale.
Development of the GO-FLOW reliability analysis methodology for nuclear reactor system
Matsuoka, Takeshi; Kobayashi, Michiyuki
1994-01-01
Probabilistic Safety Assessment (PSA) is important in the safety analysis of technological systems and processes such as nuclear plants, chemical and petroleum facilities, and aerospace systems. Event trees and fault trees are the basic analytical tools that have been most frequently used for PSAs. Several system analysis methods can be used in addition to, or in support of, event- and fault-tree analysis. The need for more advanced methods of system reliability analysis has grown with the increased complexity of engineered systems. The Ship Research Institute has been developing a new reliability analysis methodology, GO-FLOW, which is a success-oriented system analysis technique capable of evaluating a large system with complex operational sequences. The research was supported by the special research fund for Nuclear Technology, Science and Technology Agency, from 1989 to 1994. This paper describes the concept of Probabilistic Safety Assessment (PSA), an overview of various system analysis techniques, an overview of the GO-FLOW methodology, the GO-FLOW analysis support system, the procedure for treating a phased mission problem, a function for common cause failure analysis, a function for uncertainty analysis, a function for common cause failure analysis with uncertainty, and a facility for printing out the results of a GO-FLOW analysis in the form of figures or tables. The above functions are explained by analyzing sample systems, such as the PWR AFWS and the BWR ECCS. In the appendices, the structure of the GO-FLOW analysis programs and the meaning of the main variables defined in the GO-FLOW programs are described. The GO-FLOW methodology is a valuable and useful tool for system reliability analysis and has a wide range of applications. With the development of the total GO-FLOW system, this methodology has become a powerful tool in a living PSA. (author) 54 refs
Reliability Analysis of Safety Grade PLC(POSAFE-Q) for Nuclear Power Plants
Kim, J. Y.; Lyou, J.; Lee, D. Y.; Choi, J. G.; Park, W. M.
2006-01-01
The Parts Count Method of the military standard MIL-HDBK-217F has been used for reliability prediction in the nuclear field. This handbook determines the Programmable Logic Controller (PLC) failure rate by summing the failure rates of the individual components included in the PLC. It is normally easy to predict that the components added for fault detection improve the reliability of the PLC, but applying this handbook yields a poor reliability estimate because of the increased number of components for fault detection. To compensate for this discrepancy, a quantitative reliability analysis method using a functional separation model is suggested in this paper, and it is applied to the Reactor Protection System (RPS) being developed in Korea to identify any design weak points from a safety point of view
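The parts-count prediction criticized above is simply a sum of component failure rates, so any part added for fault detection raises the predicted failure rate even though it improves actual safety. A minimal sketch; the component list and failure rates are illustrative assumptions, not values from the handbook or the paper:

```python
# Hedged sketch of a parts-count reliability prediction: the PLC failure
# rate is the sum of its component failure rates. Rates are illustrative.

FIT = 1e-9  # one FIT = one failure per 1e9 device-hours

plc_components = {
    "cpu": 200 * FIT,
    "ram": 50 * FIT,
    "watchdog": 30 * FIT,          # added for fault detection
    "diagnostic_logic": 40 * FIT,  # added for fault detection
}

lambda_plc = sum(plc_components.values())   # total failure rate
mtbf_hours = 1.0 / lambda_plc               # mean time between failures
print(lambda_plc, mtbf_hours)
```

Removing the two fault-detection entries would lower the predicted failure rate, which is exactly the discrepancy that motivates the paper's functional-separation model.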
Some developments in human reliability analysis approaches and tools
Hannaman, G W; Worledge, D H
1988-01-01
Since human actions have been recognized as an important contributor to safety of operating plants in most industries, research has been performed to better understand and account for the way operators interact during accidents through the control room and equipment interface. This paper describes the integration of a series of research projects sponsored by the Electric Power Research Institute to strengthen the methods for performing the human reliability analysis portion of the probabilistic safety studies. It focuses on the analytical framework used to guide the analysis, the development of the models for quantifying time-dependent actions, and simulator experiments used to validate the models.
ANALYSIS OF AVAILABILITY AND RELIABILITY IN RHIC OPERATIONS
PILAT, F.; INGRASSIA, P.; MICHNOFF, R.
2006-01-01
RHIC has been successfully operated for 5 years as a collider for different species, ranging from heavy ions, including gold and copper, to polarized protons. We present a critical analysis of reliability data for RHIC that not only identifies the principal factors limiting availability but also evaluates critical choices made at design time and assesses their impact on present machine performance. RHIC availability data are typical when compared to similar high-energy colliders. The critical analysis of operations data is the basis for studies and plans to improve RHIC machine availability beyond the 50-60% typical of high-energy colliders.
A comparative reliability analysis of free-piston Stirling machines
Schreiber, Jeffrey G.
2001-02-01
A free-piston Stirling power convertor is being developed for use in an advanced radioisotope power system to provide electric power for NASA deep space missions. These missions are typically long-lived, lasting up to 14 years. The Department of Energy (DOE) is responsible for providing the radioisotope power system for the NASA missions, and has managed the development of the free-piston power convertor for this application. The NASA Glenn Research Center has been involved in the development of Stirling power conversion technology for over 25 years and is currently providing support to DOE. Due to the nature of the potential missions, long life and high reliability are important features for the power system. Substantial resources have been spent on the development of long-life Stirling cryocoolers for space applications. As a very general statement, free-piston Stirling power convertors have many features in common with free-piston Stirling cryocoolers; however, there are also significant differences. For example, designs exist for both power convertors and cryocoolers that use the flexure bearing support system to provide noncontacting operation of the close-clearance moving parts. This technology and the operating experience derived from one application may be readily applied to the other. This similarity does not hold in the case of outgassing and contamination: in the cryocooler, the contaminants normally condense in the critical heat exchangers and foul the performance, whereas in the Stirling power convertor the opposite is true, as contaminants condense on non-critical surfaces. A methodology was recently published that provides a relative comparison of reliability, and is applicable to such systems. The methodology has been applied to compare the reliability of a Stirling cryocooler relative to that of a free-piston Stirling power convertor. The reliability analysis indicates that the power convertor should be able to have superior reliability.
Modeling of seismic hazards for dynamic reliability analysis
Mizutani, M.; Fukushima, S.; Akao, Y.; Katukura, H.
1993-01-01
This paper investigates appropriate indices of seismic hazard curves (SHCs) for seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of SHCs of peak ground accelerations (PGAs). PGAs usually play a significant role in characterizing ground motions; however, the PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize the relationships among the indices which characterize ground motions, as well as the relationships between the indices and the magnitude M. In this consideration, the duration time plays an important role in relating two distinct classes, i.e. the energy class and the power class: Fourier and energy spectra belong to the energy class, while power and response spectra and PGAs belong to the power class. These relationships are also investigated using ground motion records. Through these investigations, we show the efficiency of employing the total energy as an index of SHCs, which can be determined in both the time and frequency domains and has less variance than the other indices. (author)
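The total-energy index favored above can indeed be computed in either domain. A small sketch, assuming a discretely sampled acceleration record and using Parseval's theorem to show that the time- and frequency-domain computations agree (the record here is synthetic, not real ground-motion data):

```python
import cmath, math

# E = sum a_t^2 * dt computed in the time domain must equal
# (dt / N) * sum |A_k|^2 computed from the DFT amplitudes (Parseval),
# so the same index is available in whichever domain the analysis works.
# The "record" below is a synthetic decaying sine, not measured data.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def energy_time(a, dt):
    return sum(v * v for v in a) * dt

def energy_freq(a, dt):
    n = len(a)
    return dt / n * sum(abs(c) ** 2 for c in dft(a))

dt = 0.01  # 100 Hz sampling
record = [math.exp(-0.5 * i * dt) * math.sin(2 * math.pi * 3.0 * i * dt)
          for i in range(256)]

print(energy_time(record, dt), energy_freq(record, dt))  # the two agree
```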
Interrater reliability of videotaped observational gait-analysis assessments.
Eastlack, M E; Arvidson, J; Snyder-Mackler, L; Danoff, J V; McGarvey, C L
1991-06-01
The purpose of this study was to determine the interrater reliability of videotaped observational gait-analysis (VOGA) assessments. Fifty-four licensed physical therapists with varying amounts of clinical experience served as raters. Three patients with rheumatoid arthritis who demonstrated an abnormal gait pattern served as subjects for the videotape. The raters analyzed each patient's most severely involved knee during the four subphases of stance for the kinematic variables of knee flexion and genu valgum. Raters were asked to determine whether these variables were inadequate, normal, or excessive. The temporospatial variables analyzed throughout the entire gait cycle were cadence, step length, stride length, stance time, and step width. Generalized kappa coefficients ranged from .11 to .52. Intraclass correlation coefficients (2,1) and (3,1) were slightly higher. Our results indicate that physical therapists' VOGA assessments are only slightly to moderately reliable and that improved interrater reliability of the assessments of physical therapists utilizing this technique is needed. Our data suggest that there is a need for greater standardization of gait-analysis training.
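A generalized kappa of the kind reported can be sketched as the Fleiss-type statistic for multiple raters; the rating counts below are illustrative, not the study's data:

```python
def fleiss_kappa(ratings):
    """Generalized (Fleiss-type) kappa for multiple raters.

    ratings[i][j] = number of raters assigning subject i to category j;
    every subject is assumed to be rated by the same number of raters.
    """
    n_sub = len(ratings)
    n_rat = sum(ratings[0])
    n_cat = len(ratings[0])
    # overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in ratings) / (n_sub * n_rat) for j in range(n_cat)]
    # observed per-subject agreement, averaged over subjects
    p_i = [(sum(c * c for c in row) - n_rat) / (n_rat * (n_rat - 1))
           for row in ratings]
    p_bar = sum(p_i) / n_sub
    p_exp = sum(p * p for p in p_j)  # agreement expected by chance
    return (p_bar - p_exp) / (1 - p_exp)

# three raters, two categories (e.g. "normal" vs "excessive")
perfect = [[3, 0], [0, 3]]   # all raters agree on every subject -> 1.0
mixed = [[2, 1], [1, 2]]     # below-chance agreement -> negative kappa
print(fleiss_kappa(perfect), fleiss_kappa(mixed))
```

Values in the study's observed range of .11 to .52 fall well short of 1.0, which is what "only slightly to moderately reliable" expresses.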
Modeling and Analysis of Component Faults and Reliability
Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter
2016-01-01
This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES
Szatmary, S. A.
1994-01-01
The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided
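The maximum likelihood estimation step CARES/PC performs can be sketched for the two-parameter Weibull case. This is an illustrative re-implementation (solving the profiled likelihood equation for the shape parameter by bisection), not the CARES/PC code, and the fracture stresses are invented:

```python
import math

def weibull_mle(stresses, lo=0.01, hi=100.0, tol=1e-10):
    """Two-parameter Weibull MLE from fracture stresses (illustrative).

    Solves the profiled likelihood equation for the shape parameter m
    by bisection, then recovers the scale parameter in closed form.
    """
    n = len(stresses)
    mean_log = sum(math.log(x) for x in stresses) / n

    def f(m):  # monotone increasing in m; the MLE shape is its root
        s0 = sum(x ** m for x in stresses)
        s1 = sum((x ** m) * math.log(x) for x in stresses)
        return s1 / s0 - 1.0 / m - mean_log

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    m = 0.5 * (lo + hi)
    scale = (sum(x ** m for x in stresses) / n) ** (1.0 / m)
    return m, scale

# invented MPa data; far fewer than the 30+ specimens CARES recommends
stresses = [350.0, 380.0, 400.0, 420.0, 450.0, 470.0, 500.0]
m, sigma0 = weibull_mle(stresses)
print(f"shape m = {m:.2f}, scale = {sigma0:.1f} MPa")
```

The scale parameter lands near the 63rd percentile of the fracture stresses, and a large shape parameter indicates low scatter in strength.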
Damage tolerance reliability analysis of automotive spot-welded joints
Mahadevan, Sankaran; Ni Kan
2003-01-01
This paper develops a damage tolerance reliability analysis methodology for automotive spot-welded joints under multi-axial and variable amplitude loading history. The total fatigue life of a spot weld is divided into two parts, crack initiation and crack propagation. The multi-axial loading history is obtained from transient response finite element analysis of a vehicle model. A three-dimensional finite element model of a simplified joint with four spot welds is developed for static stress/strain analysis. A probabilistic Miner's rule is combined with a randomized strain-life curve family and the stress/strain analysis result to develop a strain-based probabilistic fatigue crack initiation life prediction for spot welds. Afterwards, the fatigue crack inside the base material sheet is modeled as a surface crack. Then a probabilistic crack growth model is combined with the stress analysis result to develop a probabilistic fatigue crack growth life prediction for spot welds. Both methods are implemented with MSC/NASTRAN and MSC/FATIGUE software, and are useful for reliability assessment of automotive spot-welded joints against fatigue and fracture
Bhatia, Rajendra
2009-01-01
These notes are a record of a one-semester course on Functional Analysis given by the author to second-year Master of Statistics students at the Indian Statistical Institute, New Delhi. Students taking this course have a strong background in real analysis, linear algebra, measure theory and probability, and the course proceeds rapidly from the definition of a normed linear space to the spectral theorem for bounded self-adjoint operators in a Hilbert space. The book is organised as twenty-six lectures, each corresponding to a ninety-minute class session. This may be helpful to teachers planning a course on this topic. Well-prepared students can read it on their own.
Reliability analysis of offshore structures using OMA based fatigue stresses
Silva Nabuco, Bruna; Aissani, Amina; Glindtvad Tarpø, Marius
2017-01-01
The focus is on the uncertainty observed in the different stresses used to predict the damage. This uncertainty can be reduced by Modal Based Fatigue Monitoring, a technique based on continuously measuring the accelerations at a few points of the structure with accelerometers. From the response measured at a few points of the structure, the stress history can be calculated at any arbitrary point of the structure. The accuracy of the estimated actual stress is analyzed by experimental tests on a scale model, where the obtained stresses are compared to strain gauge measurements. After evaluating the fatigue stresses directly from the operational response of the structure, a reliability analysis is performed in order to estimate the reliability of using Modal Based Fatigue Monitoring for long-term fatigue studies.
Reliability analysis of neutron flux monitoring system for PFBR
Rajesh, M.G.; Bhatnagar, P.V.; Das, D.; Pithawa, C.K.; Vinod, Gopika; Rao, V.V.S.S.
2010-01-01
The Neutron Flux Monitoring System (NFMS) measures reactor power, rate of change of power and reactivity changes in the core in all states of operation and shutdown. The system consists of instrument channels that are designed and built to have high reliability; all channels are required to have a Mean Time Between Failures (MTBF) of at least 150000 hours. Failure Mode and Effects Analysis (FMEA) and failure rate estimation of the NFMS channels have been carried out. The FMEA is carried out in compliance with MIL-STD-338B, and the reliability estimation of the channels is done according to MIL-HDBK-217FN2. The paper discusses the methodology followed for the FMEA and the failure rate estimation of two safety channels, and the results. (author)
Issues in benchmarking human reliability analysis methods: A literature review
Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia
2010-01-01
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Photovoltaic module reliability improvement through application testing and failure analysis
Dumas, L. N.; Shumka, A.
1982-01-01
During the first four years of the U.S. Department of Energy (DOE) National Photovoltaic Program, the Jet Propulsion Laboratory Low-Cost Solar Array (LSA) Project purchased about 400 kW of photovoltaic modules for tests and experiments. In order to identify, report, and analyze test and operational problems with the Block Procurement modules, a problem/failure reporting and analysis system was implemented by the LSA Project with the main purpose of providing manufacturers with feedback from test and field experience needed for the improvement of product performance and reliability. A description of the more significant types of failures is presented, taking into account interconnects, cracked cells, dielectric breakdown, delamination, and corrosion. Current design practices and reliability evaluations are also discussed. The evaluation indicates that current module designs incorporate damage-resistant and fault-tolerant features which address the field failure mechanisms observed to date.
Mechanical Properties for Reliability Analysis of Structures in Glassy Carbon
Garion, Cédric
2014-01-01
Despite its good physical properties, the glassy carbon material is not widely used, especially for structural applications. Nevertheless, its transparency to particles and its temperature resistance are interesting properties for applications to vacuum chambers and components in high energy physics. For example, it has been proposed for a fast shutter valve in a particle accelerator [1] [2]. The mechanical properties have to be carefully determined to assess the reliability of structures in such a material. In this paper, mechanical tests have been carried out to determine the elastic parameters, strength and toughness of commercial grades. A statistical approach, based on the Weibull distribution, is used to characterize the material both in tension and compression. The results are compared to the literature and the difference of properties for these two loading cases is shown. Based on a Finite Element analysis, a statistical approach is applied to define the reliability of a structural component in glassy carbon.
Issues in benchmarking human reliability analysis methods : a literature review.
Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)
2008-04-01
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Review of the treat upgrade reactor scram system reliability analysis
Montague, D.F.; Fussell, J.B.; Krois, P.A.; Morelock, T.C.; Knee, H.E.; Manning, J.J.; Haas, P.M.; West, K.W.
1984-10-01
In order to resolve some key LMFBR safety issues, ANL personnel are modifying the TREAT reactor to handle much larger experiments. As a result of these modifications, the upgraded TREAT reactor will not always operate in a self-limited mode. During certain experiments in the upgraded TREAT reactor, it is possible that the fuel could be damaged by overheating if, once the computer systems fail, the reactor scram system (RSS) fails on demand. To help ensure that the upgraded TREAT reactor is shut down when required, ANL personnel have designed a triply redundant RSS for the facility. The RSS is designed to meet three reliability goals: (1) a loss-of-capability failure probability of 10⁻⁹/demand (independent failures only); (2) an inadvertent shutdown probability of 10⁻³/experiment; and (3) protection against any known potential common cause failures. According to ANL's reliability analysis of the RSS, this system substantially meets these goals.
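For intuition about why triple redundancy can approach such goals, here is a hedged sketch assuming a 2-out-of-3 voting arrangement with independent channels; the 2oo3 logic and the channel failure probability are assumptions for illustration, not figures from the TREAT RSS analysis:

```python
# P(system fails on demand) for 2-out-of-3 voting with independent,
# identical channels: at least two of the three channels must fail.
# The 2oo3 arrangement and the per-channel probability are assumed
# for illustration only.

def p_fail_2oo3(p):
    return 3.0 * p ** 2 - 2.0 * p ** 3

p_channel = 1e-3  # illustrative per-demand channel failure probability
print(p_fail_2oo3(p_channel))  # about 3e-6, far below a single channel
```

Common cause failures break the independence assumption, which is why the third design goal addresses them separately.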
Obtaining reliable Likelihood Ratio tests from simulated likelihood functions
Andersen, Laura Mørch
It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters...
Deimling, Klaus
1985-01-01
topics. However, only a modest preliminary knowledge is needed. In the first chapter, where we introduce an important topological concept, the so-called topological degree for continuous maps from subsets of Rn into Rn, you need not know anything about functional analysis. Starting with Chapter 2, where infinite dimensions first appear, one should be familiar with the essential step of considering a sequence or a function of some sort as a point in the corresponding vector space of all such sequences or functions, whenever this abstraction is worthwhile. One should also work out the things which are proved in § 7 and accept certain basic principles of linear functional analysis quoted there for easier reference, until they are applied in later chapters. In other words, even the 'completely linear' sections which we have included for your convenience serve only as a vehicle for progress in nonlinearity. Another point that makes the text introductory is the use of an essentially uniform mathematical language...
Reliability analysis of a complex standby redundant systems
Subramanian, R.; Anantharaman, V.
1995-01-01
In any redundant system, the state of the standby unit is usually taken to be hot, warm or cold. In this paper, we present a new model of a two unit standby system wherein the standby unit is put in cold state for a certain amount of time before it is allowed to become warm. Upon failure of the online unit, the standby unit, if in warm state, instantaneously starts operating online; if it is in cold state, an emergency switching is made which takes it to warm state (and hence online) either instantaneously or non-instantaneously--each with some probability; if it is under repair, the system breaks down. Assuming all the associated distributions to be general except that of the life time of the standby unit in the warm state, various reliability characteristics that are of interest to reliability engineers and system designers are derived. A comprehensive cost function is also constructed and is then optimized with respect to three different control parameters numerically. In addition numerical results are presented to illustrate the behaviour of the various reliability characteristics derived
Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin; Wang, Shuang
2015-01-01
To reduce the heavy computational cost of calculating failure probabilities with time-consuming numerical models, we propose an improved active learning reliability method, AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion for active learning is presented so that the number of iterations decreases dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active learning reliability calculation, and a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is demonstrated on two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and that the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth contact fatigue reliability analysis.
Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin [Northeastern University, Shenyang (China); Wang, Shuang [Jiangxi University of Science and Technology, Ganzhou (China)
2015-08-15
To reduce the heavy computational cost of calculating failure probabilities with time-consuming numerical models, we propose an improved active learning reliability method, AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion for active learning is presented so that the number of iterations decreases dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active learning reliability calculation, and a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is demonstrated on two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and that the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth contact fatigue reliability analysis.
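The importance-sampling ingredient of methods like AK-IS/AK-SSIS can be shown in miniature on a toy limit state (a standard normal variable exceeding a threshold); this is not the paper's algorithm or its gear model, only the reweighting idea such methods build on:

```python
import math, random

# Toy limit state: failure when X > beta for X ~ N(0, 1), so the exact
# failure probability is Phi(-beta).  Importance sampling draws from
# N(beta, 1), centred on the failure region, and reweights each sample
# by w(x) = phi(x) / phi(x - beta) = exp(beta^2/2 - beta*x).

def crude_mc(beta, n, rng):
    return sum(rng.gauss(0.0, 1.0) > beta for _ in range(n)) / n

def importance_sampling(beta, n, rng):
    total = 0.0
    for _ in range(n):
        x = rng.gauss(beta, 1.0)
        if x > beta:  # sample lies in the failure domain
            total += math.exp(beta * beta / 2.0 - beta * x)
    return total / n

rng = random.Random(0)
beta = 3.5
exact = 0.5 * math.erfc(beta / math.sqrt(2.0))  # about 2.3e-4
crude = crude_mc(beta, 20_000, rng)             # mostly zeros at this n
est = importance_sampling(beta, 20_000, rng)
print(exact, crude, est)
```

At this sample size crude Monte Carlo sees only a handful of failures, while the reweighted estimate is already accurate to a few percent; AK-type methods go further by replacing the expensive performance function with an adaptively trained surrogate.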
Reliability of candida skin test in the evaluation of T-cell function in ...
Background: Both standardized and non-standardized candida skin tests are used in clinical practice for functional in-vivo assessment of cellular immunity with variable results and are considered not reliable under the age of 1 year. We sought to investigate the reliability of using manually prepared candida intradermal test ...
2010-05-28
Fragment of a Federal Register notice. AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Notice of proposed special conditions on function and reliability testing for SF50 airplanes, with flight tests in place of the 14 CFR part 21.35(b)(2) requirements; the fragment also cites a 1951 amendment that deleted the service test requirements in Section 3.19 for airplanes of 6,000 pounds maximum...
Reliability analysis of component-level redundant topologies for solid-state fault current limiter
Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam
2018-04-01
Experience shows that semiconductor switches are the most vulnerable components in power electronics systems. One of the most common ways to address this reliability challenge is component-level redundant design, for which four configurations are possible. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter, with the aim of determining the more reliable configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the steady-state junction temperature of the semiconductor switches, which is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated, showing that different configurations have higher reliability under different conditions. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
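The MTTF comparison for redundant switch pairs can be sketched under the usual constant-failure-rate assumption; the failure rate below is illustrative, and in the article the rates themselves depend on junction temperature:

```python
# Constant-failure-rate (exponential) sketch of the MTTF comparison for
# component-level redundancy.  lam is an illustrative failure rate; in
# the article the switch failure rates depend on junction temperature,
# switch power loss and heat-sink thermal resistance.

def mttf_single(lam):
    return 1.0 / lam

def mttf_series_pair(lam):
    # a series pair fails at the *first* switch failure: rate doubles
    # (series redundancy protects against short-circuit faults)
    return 1.0 / (2.0 * lam)

def mttf_parallel_pair(lam):
    # a parallel pair fails when *both* have failed:
    # E[max of two iid exponentials] = 1/lam + 1/(2*lam)
    # (parallel redundancy protects against open-circuit faults)
    return 1.5 / lam

lam = 1e-6  # failures per hour, illustrative
print(mttf_single(lam), mttf_series_pair(lam), mttf_parallel_pair(lam))
```

Counting both fault types, as the article does, changes which arrangement wins: a configuration that tolerates open circuits is itself vulnerable to short circuits, and vice versa.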
Decision theory, the context for risk and reliability analysis
Kaplan, S.
1985-01-01
According to this model of the decision process, then, the optimum decision is the option having the largest expected utility. This is the fundamental model of a decision situation. It must be remarked that for the model to represent a real-life decision situation, it must include all the options present in that situation, including, for example, the option of not deciding--which is itself a decision, although usually not the optimum one. Similarly, it should include the option of delaying the decision while further information is gathered. Both of these options have probabilities, outcomes, impacts, and utilities like any other option and should be included explicitly in the decision diagram. The reason for doing a quantitative risk or reliability analysis is always that, somewhere underlying it, there is a decision to be made. The decision analysis therefore always forms the context for the risk or reliability analysis, and this context shapes the form and language of that analysis. This section therefore gives a brief review of the well-known decision theory diagram.
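The expected-utility rule described above takes only a few lines to state as code; the options and numbers are illustrative placeholders, including the "do nothing" and "gather information" options the text insists on modeling explicitly:

```python
# Each option maps to a list of (probability, utility) outcome pairs.
# All option names and numbers are illustrative placeholders.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

options = {
    "act now":     [(0.7, 100.0), (0.3, -50.0)],  # EU = 55.0
    "do nothing":  [(1.0, 0.0)],                  # EU = 0.0 (still a decision)
    "gather info": [(0.9, 60.0), (0.1, -10.0)],   # EU = 53.0 (delay costs too)
}

best = max(options, key=lambda name: expected_utility(options[name]))
print("optimum:", best)  # "act now" for these illustrative numbers
```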
Reliability analysis based on a novel density estimation method for structures with correlations
Baoyu LI
2017-06-01
Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can then be obtained by integration over the failure domain. However, efficiently estimating the PDF is still an open problem. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation, but its main shortcoming is that it restricts the reliability analysis to structures with independent inputs, whereas structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method by applying the Unscented Transformation (UT) technique, a very efficient moment estimation method for models with any inputs, to compute the fractional moments of the performance function for structures with correlations. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Besides, the number of function evaluations required by the proposed method in reliability analysis, which is determined by the UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
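The UT step can be sketched for a two-dimensional correlated Gaussian input. This illustrative implementation (with a hand-rolled 2x2 Cholesky factor and a simple scaling parameter, not the paper's examples) is exact for linear performance functions, which makes it easy to check:

```python
import math

def ut_moments(g, mean, cov, kappa=0.0):
    """Mean and variance of g(X) for a 2-D Gaussian X via sigma points.

    Restricted to n = 2 because the Cholesky factor is written out by
    hand; kappa is the usual UT scaling parameter.
    """
    n = len(mean)
    a = [[(n + kappa) * cov[i][j] for j in range(n)] for i in range(n)]
    # Cholesky factor of the scaled covariance (2x2, lower triangular)
    l11 = math.sqrt(a[0][0])
    l21 = a[1][0] / l11
    l22 = math.sqrt(a[1][1] - l21 * l21)
    cols = [[l11, l21], [0.0, l22]]
    # 2n + 1 sigma points: the mean, plus one pair per factor column
    pts = [list(mean)]
    for c in cols:
        pts.append([mean[i] + c[i] for i in range(n)])
        pts.append([mean[i] - c[i] for i in range(n)])
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0] + [wi] * (2 * n)
    vals = [g(p) for p in pts]
    m1 = sum(w * v for w, v in zip(weights, vals))
    m2 = sum(w * v * v for w, v in zip(weights, vals))
    return m1, m2 - m1 * m1

# Linear g makes the UT exact: Var[X1 + X2] = 1 + 2 + 2*0.5 = 4
mean_g, var_g = ut_moments(lambda p: p[0] + p[1],
                           [0.0, 0.0], [[1.0, 0.5], [0.5, 2.0]])
print(mean_g, var_g)
```

Only 2n + 1 evaluations of g are needed, which is the source of the small function-evaluation count the abstract highlights.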
Jung, Won Dea; Kim, Jae Whan; Park, Jin Kyun; Ha, Jae Joo [Korea Atomic Energy Research Institute, Taejeon (Korea)
2000-02-01
More than twenty HRA (Human Reliability Analysis) methodologies have been developed and used for safety analysis in the nuclear field during the past two decades. However, no methodology appears to have been universally accepted, as various limitations have been raised for the more widely used ones. One of the most important limitations of conventional HRA is insufficient analysis of the task structure and problem space. To resolve this problem, we suggest SIA (Structured Information Analysis) for HRA. The proposed SIA consists of three parts. The first part is the scenario analysis, which investigates the contextual information related to the given task on the basis of selected scenarios. The second is the goals-means analysis, which defines the relations between the cognitive goals and task steps. The third is the cognitive function analysis module, which identifies the cognitive patterns and information flows involved in the task. Through the three-part analysis, systematic investigation is made possible from the macroscopic information on the tasks to the microscopic information on the specific cognitive processes. It is expected that analysts can attain a structured set of information that helps to predict the types and likelihood of human error in the given task. 48 refs., 12 figs., 11 tabs. (Author)
A Research Roadmap for Computation-Based Human Reliability Analysis
Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.
A Research Roadmap for Computation-Based Human Reliability Analysis
Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina
2015-01-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.
Limitations in simulator time-based human reliability analysis methods
Wreathall, J.
1989-01-01
Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. First, errors reported in the data are normal deficiencies observed in human performance, whereas failures are events modeled in probabilistic risk assessments (PRAs); not all errors cause failures, and not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failing to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical.
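The time-reliability relationships criticized above can be sketched in a few lines. The Weibull-shaped non-response curve below is in the spirit of HCR-type correlations; the shape parameter, delay, and normalisation are illustrative assumptions, not values from any published study.

```python
import math

def nonresponse_prob(t, t_half, beta=1.2, gamma=0.0):
    """Time-reliability curve in the HCR spirit: probability that the crew
    has NOT yet responded at time t, given a median response time t_half.
    The Weibull shape beta and delay gamma are illustrative assumptions."""
    if t <= gamma:
        return 1.0
    # normalise so the curve passes through 0.5 at t = t_half
    eta = (t_half - gamma) / (math.log(2.0) ** (1.0 / beta))
    return math.exp(-(((t - gamma) / eta) ** beta))

print(round(nonresponse_prob(10.0, 10.0), 6))  # 0.5 by construction
print(nonresponse_prob(30.0, 10.0))            # much smaller at 3x the median
```

The curve maps available time to a failure probability; the abstract's point is precisely that such a mapping, by itself, says nothing about why a crew fails to act.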
The DYLAM approach for the dynamic reliability analysis of systems
Cojazzi, Giacomo
1996-01-01
In many real systems, component failures, control failures and human interventions often interact with the physical system evolution in such a way that a simple reliability analysis, de-coupled from the process dynamics, is very difficult or even impossible. In the last ten years many dynamic reliability approaches have been proposed to properly assess the reliability of such systems characterized by dynamic interactions. The DYLAM methodology, now implemented in its latest version, DYLAM-3, offers a powerful tool for integrating deterministic and failure events. This paper describes the main features of the DYLAM-3 code with reference to the classic fault-tree and event-tree techniques. Some aspects of the practical problems underlying dynamic event trees are also discussed. A simple system, already analyzed with other dynamic methods, is used as a reference for the numerical applications. The same system is also studied with a time-dependent fault-tree approach in order to contrast some features of dynamic methods with classical techniques. Examples including stochastic failures, with and without repair, failures on demand and time-dependent failure rates give an extensive overview of DYLAM-3 capabilities.
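The coupling of stochastic failures to deterministic process evolution that motivates DYLAM can be illustrated with a toy Monte Carlo model (entirely hypothetical, not from the paper): whether the top event occurs depends on when the failure happens relative to the physical dynamics.

```python
import random

def simulate_once(rng, t_end=100.0, dt=1.0, lam=0.01):
    """One history of a toy dynamic system: a tank with constant inflow is
    drained by a pump that can fail at constant rate lam; the top event
    (overflow) occurs only if the failure leaves enough time for the level
    to reach the threshold."""
    level, pump_ok = 5.0, True
    for _ in range(int(t_end / dt)):
        if pump_ok and rng.random() < lam * dt:   # stochastic failure event
            pump_ok = False
        inflow = 1.0
        outflow = 1.0 if pump_ok else 0.0
        level += (inflow - outflow) * dt          # deterministic dynamics
        if level >= 10.0:                         # dynamic top event
            return True
    return False

rng = random.Random(42)
n = 20000
p_fail = sum(simulate_once(rng) for _ in range(n)) / n
print(p_fail)  # for these parameters, analytically 1 - 0.99**96, about 0.62
```

A static fault tree would only see "pump fails"; the dynamic model distinguishes failures that occur too late to cause overflow, which is the point of the comparison in the paper.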
Fifty Years of THERP and Human Reliability Analysis
Ronald L. Boring
2012-06-01
In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the best-known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament to its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain’s pioneering work.
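Among the THERP elements listed above, the dependence model is concrete enough to sketch. The conditional human error probability (HEP) equations below are the standard five-level forms from NUREG/CR-1278; the nominal HEP value is illustrative.

```python
def conditional_hep(hep, level):
    """THERP dependence model (NUREG/CR-1278): conditional probability of
    failing task N given failure of the preceding task, for a given
    nominal HEP and dependence level."""
    formulas = {
        "zero":     lambda p: p,
        "low":      lambda p: (1 + 19 * p) / 20,
        "moderate": lambda p: (1 + 6 * p) / 7,
        "high":     lambda p: (1 + p) / 2,
        "complete": lambda p: 1.0,
    }
    return formulas[level](hep)

# Even a small nominal HEP is dominated by dependence at higher levels:
for lvl in ("zero", "low", "moderate", "high", "complete"):
    print(lvl, conditional_hep(1e-3, lvl))
```

Note how the nominal value (here 1e-3) matters less and less as the dependence level rises; at complete dependence the conditional HEP is 1 regardless of the nominal HEP.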
Russell, K.D.; Sattison, M.B.; Rasmuson, D.M.
1989-01-01
The Integrated Reliability and Risk Analysis System (IRRAS) is an integrated PRA software tool that gives the user the ability to create and analyze fault trees and accident sequences using an IBM-compatible microcomputer. This program provides functions that range from graphical fault tree and event tree construction to cut set generation and quantification. IRRAS contains all the capabilities and functions required to create, modify, reduce, and analyze event tree and fault tree models used in the analysis of complex systems and processes. IRRAS uses advanced graphic and analytical techniques to achieve the greatest possible realization of the potential of the microcomputer. When the needs of the user exceed this potential, IRRAS can call upon the power of the mainframe computer. The role of the Idaho National Engineering Laboratory in the IRRAS program is that of software developer and interface to the user community. Version 1.0 of the IRRAS program was released in February 1987 to prove the concept of performing this kind of analysis on microcomputers. This version contained many of the basic features needed for fault tree analysis and was received very well by the PRA community. Since the release of Version 1.0, many user comments and enhancements have been incorporated into the program, providing a much more powerful and user-friendly system. This version is designated ''IRRAS 2.0''. Version 3.0 will contain all of the features required for efficient event tree and fault tree construction and analysis. 5 refs., 26 figs
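The quantification step that follows cut-set generation in tools of the IRRAS family can be sketched with the usual min-cut-set upper bound; the cut sets and basic-event probabilities below are invented for illustration.

```python
def cutset_prob(cut_set, basic_probs):
    """Probability of one minimal cut set, assuming independent basic events."""
    p = 1.0
    for event in cut_set:
        p *= basic_probs[event]
    return p

def top_event_prob(cut_sets, basic_probs):
    """Min-cut-set upper bound 1 - prod(1 - P(C_i)): the standard
    quantification applied once the minimal cut sets are known."""
    q = 1.0
    for cs in cut_sets:
        q *= 1.0 - cutset_prob(cs, basic_probs)
    return 1.0 - q

# Invented example: TOP = A OR (B AND C)
probs = {"A": 1e-3, "B": 2e-3, "C": 5e-2}
cuts = [{"A"}, {"B", "C"}]
print(top_event_prob(cuts, probs))  # ~1.1e-3
```

For small probabilities this bound is close to the rare-event approximation (the plain sum of cut-set probabilities), which is why both are common in PRA codes.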
Reliability and Robustness Analysis of the Masinga Dam under Uncertainty
Hayden Postle-Floyd
2017-02-01
Kenya’s water abstraction must meet the projected growth in municipal and irrigation demand by the end of 2030 in order to achieve the country’s industrial and economic development plan. The Masinga dam, on the Tana River, is the key to meeting this goal to satisfy the growing demands whilst also continuing to provide hydroelectric power generation. This study quantitatively assesses the reliability and robustness of the Masinga dam system under uncertain future supply and demand using probabilistic climate and population projections, and examines how long-term planning may improve the longevity of the dam. River flow and demand projections are used alongside each other as inputs to the dam system simulation model linked to an optimisation engine to maximise water availability. Water availability after demand satisfaction is assessed for future years, and the projected reliability of the system is calculated for selected years. The analysis shows that maximising power generation on a short-term year-by-year basis achieves 80%, 50% and 1% reliability by 2020, 2025 and 2030 onwards, respectively. Longer-term optimal planning, however, increases system reliability to up to 95% in 2020, 80% in 2025, and more than 40% in 2030 onwards. In addition, increasing the capacity of the reservoir by around 25% can significantly improve the robustness of the system for all future time periods. This study provides a platform for analysing the implications of different planning and management of the Masinga dam, and suggests that careful consideration should be given to growing municipal needs and irrigation schemes both at the dam and in the associated Tana River basin.
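The time-based reliability figure reported in studies of this kind can be sketched with a minimal single-reservoir mass balance. The inflow series, demand, and capacity below are hypothetical, not Masinga data.

```python
def reservoir_reliability(inflows, demand, capacity, initial):
    """Fraction of periods in which demand is fully met (time-based
    reliability) for a single-reservoir mass balance with spill."""
    storage, met = initial, 0
    for q in inflows:
        storage = min(storage + q, capacity)   # spill anything above capacity
        supplied = min(storage, demand)
        storage -= supplied
        if supplied >= demand:
            met += 1
    return met / len(inflows)

# Hypothetical inflow series in arbitrary volume units
inflows = [8, 2, 1, 9, 3, 0, 7, 4]
print(reservoir_reliability(inflows, demand=5, capacity=10, initial=5))  # 0.75
```

Real studies, like the one above, wrap such a balance in an optimisation loop over release decisions; this sketch only shows the reliability accounting itself.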
An advanced human reliability analysis methodology: analysis of cognitive errors focused on
Kim, J. H.; Jeong, W. D.
2001-01-01
The conventional Human Reliability Analysis (HRA) methods such as THERP/ASEP, HCR and SLIM have been criticised for their deficiency in analysing the cognitive errors which occur during the operator's decision-making process. In order to overcome the limitations of the conventional methods, an advanced HRA method, the so-called second-generation HRA method, covering both qualitative analysis and quantitative assessment of cognitive errors, has been developed based on the state-of-the-art theory of cognitive systems engineering and error psychology. The method was developed on the basis of a human decision-making model and the relation between cognitive functions and performance influencing factors. The application of the proposed method to two emergency operation tasks is presented.
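Of the first-generation methods named above, SLIM is the easiest to sketch: performance-influencing-factor (PIF) ratings are combined into a success likelihood index (SLI) and mapped to an HEP through a log-linear calibration. The weights, ratings, and calibration constants below are invented for illustration, not from any published study.

```python
def slim_hep(ratings, weights, a=-3.0, b=-1.0):
    """SLIM sketch: SLI = sum of weighted PIF ratings (each in 0..1),
    then log10(HEP) = a * SLI + b.  The calibration constants a and b
    would normally be fixed by two tasks with known HEPs; the values
    here are illustrative assumptions."""
    sli = sum(w * r for w, r in zip(weights, ratings))
    return 10 ** (a * sli + b)

# A better PIF profile (higher SLI) should yield a lower HEP:
good = slim_hep([0.9, 0.8, 1.0], [0.5, 0.3, 0.2])
poor = slim_hep([0.2, 0.1, 0.3], [0.5, 0.3, 0.2])
print(good < poor)  # True
```

The second-generation criticism in the abstract is that such index models treat cognition as a black box; the quantification is simple, but it does not explain how a decision-making error arises.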
Structural system reliability calculation using a probabilistic fault tree analysis method
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
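The importance sampling step can be illustrated in its simplest, non-adaptive form: shift the sampling density toward the failure region and correct with the likelihood ratio. The scalar limit state here is a stand-in for a bottom-event approximation function; it is not the paper's procedure.

```python
import math
import random

def failure_prob_is(n=50000, beta=4.0, seed=1):
    """Estimate P(X > beta) for X ~ N(0,1) by sampling from N(beta, 1)
    and reweighting each failure sample by the density ratio; the same
    variance-reduction idea as adaptive importance sampling, minus the
    adaptation of the sampling density."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(beta, 1.0)   # sample concentrated near the failure domain
        if x > beta:
            # ratio of the N(0,1) density to the N(beta,1) density at x
            total += math.exp(-0.5 * x * x + 0.5 * (x - beta) ** 2)
    return total / n

est = failure_prob_is()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(X > 4), about 3.2e-5
print(est, exact)
```

Crude Monte Carlo would need on the order of 10**7 samples to see this event at all; the shifted sampler estimates it accurately with 5 * 10**4.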
Reliability analysis of structures under periodic proof tests in service
Yang, J.-N.
1976-01-01
A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.
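The interplay described above, strength degradation, random service loads, periodic proof tests, and renewal of failed structures, can be mimicked with a crude Monte Carlo sketch. Every distribution, rate, and threshold below is an illustrative assumption, not a value from the paper.

```python
import random

def service_failure_prob(n=20000, periods=40, proof_every=10, seed=7):
    """Monte Carlo sketch of reliability under periodic proof testing:
    strength degrades each period; a proof test every proof_every periods
    replaces structures below a proof threshold before they can fail in
    service.  All numbers are illustrative assumptions."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        strength = rng.gauss(10.0, 1.0)           # initial ultimate strength
        for t in range(1, periods + 1):
            strength -= 0.05                      # fatigue-type degradation
            if t % proof_every == 0:
                if strength < 8.0:                # proof test: replace weak unit
                    strength = rng.gauss(10.0, 1.0)
                continue                          # no service load this period
            load = rng.gauss(6.0, 1.0)            # random service load
            if load > strength:                   # in-service failure
                failures += 1
                break
    return failures / n

print(service_failure_prob())
```

Because residual strength decreases over time, the per-period failure rate grows, and the proof tests truncate the weak tail of the strength distribution, which is the mechanism the paper quantifies analytically.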
Creation and Reliability Analysis of Vehicle Dynamic Weighing Model
Zhi-Ling XU
2014-08-01
In this paper, a portable axle-load meter of a dynamic weighing system is modelled using ADAMS. The weighing process is simulated while controlling a single variable, yielding simulation weighing data for different speeds and weights; in parallel, a portable weighing system with the same parameters is used to obtain actual measurements. Comparative analysis of the simulated and measured results under the same conditions shows that, at 30 km/h or less, the simulated and measured values differ by no more than 5 %. This not only verifies the reliability of the dynamic weighing model, but also makes it possible to improve the efficiency of algorithm studies through dynamic weighing model simulation.
Human Performance Modeling for Dynamic Human Reliability Analysis
Boring, Ronald Laurids [Idaho National Laboratory]; Joe, Jeffrey Clark [Idaho National Laboratory]; Mandelli, Diego [Idaho National Laboratory]
2015-08-01
Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.
IDHEAS – A NEW APPROACH FOR HUMAN RELIABILITY ANALYSIS
G. W. Parry; J.A Forester; V.N. Dang; S. M. L. Hendrickson; M. Presley; E. Lois; J. Xing
2013-09-01
This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System) that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA) that is based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis that is documented in a crew response tree (CRT), and the development of the associated time-line to identify the critical tasks, i.e. those whose failure results in a human failure event (HFE), and an approach to quantification that is based on explanations of why the HFE might occur.
Guidelines for reliability analysis of digital systems in PSA context. Phase 1 status report
Authen, S.; Larsson, J.; Bjoerkman, K.; Holmberg, J.-E.
2010-12-01
Digital protection and control systems are appearing as upgrades in older nuclear power plants (NPPs) and are commonplace in new NPPs. To assess the risk of NPP operation and to determine the risk impact of digital system upgrades on NPPs, quantitative reliability models are needed for digital systems. Due to the many unique attributes of these systems, challenges exist in systems analysis, modeling and data collection. Currently there is no consensus on reliability analysis approaches. Traditional methods clearly have limitations, while more dynamic approaches are still at the trial stage and can be difficult to apply in full-scale probabilistic safety assessments (PSA). Few PSAs worldwide include reliability models of digital I and C systems. A comparison of Nordic experiences and a literature review of the main international references have been performed in this pre-study project. The study shows a wide range of approaches, and also indicates that no state of the art currently exists. The study shows areas where the different PSAs agree and gives the basis for development of a common taxonomy for reliability analysis of digital systems. It is still an open matter whether software reliability needs to be explicitly modelled in the PSA. The most important issue concerning software reliability is a proper description of the impact that software-based systems have on the dependence between the safety functions and the structure of accident sequences. In general, the conventional fault tree approach seems to be sufficient for modelling functions of the reactor protection system type. The following focus areas have been identified for further activities: (1) a common taxonomy of hardware and software failure modes of digital components for common use; (2) guidelines regarding the level of detail in system analysis and the screening of components, failure modes and dependencies; (3) an approach for modelling CCF between components (including software). (Author)
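Focus area 3, common-cause failure (CCF) between redundant components, is often handled with the beta-factor model. A minimal sketch for a 1-out-of-n redundant channel group follows; the parameter values are illustrative, not from the report.

```python
def system_unavailability(q, beta, n):
    """Beta-factor CCF model for a 1-out-of-n redundant channel group:
    each channel fails independently with probability (1 - beta) * q, and
    all channels fail together with probability beta * q."""
    q_ind = (1.0 - beta) * q
    q_ccf = beta * q
    # system fails if all n channels fail independently OR a CCF occurs
    return q_ind ** n + q_ccf - q_ind ** n * q_ccf

# Illustrative numbers: without CCF modelling, a 4-channel protection
# group looks unrealistically reliable
print(system_unavailability(1e-3, 0.0, 4))   # ~1e-12
print(system_unavailability(1e-3, 0.05, 4))  # ~5e-5, dominated by beta*q
```

The example shows why CCF treatment dominates the result for highly redundant digital systems: the common-cause term beta*q swamps the independent-failure term by many orders of magnitude.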
Guidelines for reliability analysis of digital systems in PSA context. Phase 1 status report
Authen, S.; Larsson, J. (Risk Pilot AB, Stockholm (Sweden)); Bjoerkman, K.; Holmberg, J.-E. (VTT, Helsingfors (Finland))
2010-12-15
Digital protection and control systems are appearing as upgrades in older nuclear power plants (NPPs) and are commonplace in new NPPs. To assess the risk of NPP operation and to determine the risk impact of digital system upgrades on NPPs, quantitative reliability models are needed for digital systems. Due to the many unique attributes of these systems, challenges exist in systems analysis, modeling and data collection. Currently there is no consensus on reliability analysis approaches. Traditional methods clearly have limitations, while more dynamic approaches are still at the trial stage and can be difficult to apply in full-scale probabilistic safety assessments (PSA). Few PSAs worldwide include reliability models of digital I and C systems. A comparison of Nordic experiences and a literature review of the main international references have been performed in this pre-study project. The study shows a wide range of approaches, and also indicates that no state of the art currently exists. The study shows areas where the different PSAs agree and gives the basis for development of a common taxonomy for reliability analysis of digital systems. It is still an open matter whether software reliability needs to be explicitly modelled in the PSA. The most important issue concerning software reliability is a proper description of the impact that software-based systems have on the dependence between the safety functions and the structure of accident sequences. In general, the conventional fault tree approach seems to be sufficient for modelling functions of the reactor protection system type. The following focus areas have been identified for further activities: (1) a common taxonomy of hardware and software failure modes of digital components for common use; (2) guidelines regarding the level of detail in system analysis and the screening of components, failure modes and dependencies; (3) an approach for modelling CCF between components (including software). (Author)
Ирина Александровна Гавриленко
2016-02-01
This article proposes an approach to the automated management of load flow in engineering networks that takes functional reliability into account. An improvement to the concept of operational and strategic management of load flow in engineering networks is considered. The verbal statement of the thesis research problem is defined: the development of an information technology for the exact calculation of the functional reliability of the network, or of the risk of short delivery of the purpose-oriented product to consumers.
Sanjay Kumar Singh
2011-06-01
In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and of its reliability function under the General Entropy loss function for Type II censored samples. The proposed estimators are compared with the corresponding Bayes estimators obtained under the Squared Error loss function, and with the maximum likelihood estimators, in terms of their simulated risks (average loss over the sample space).
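The effect of the loss function on a Bayes reliability estimate can be shown in closed form for a simpler analogue: exponential lifetimes with a gamma posterior on the failure rate (not the Exponentiated Exponential model of the paper). Both estimates below follow from the gamma moment generating function; the posterior parameters are illustrative.

```python
def bayes_reliability(t, a, b, loss="squared", q=1.0):
    """Bayes estimates of R(t) = exp(-theta * t) for exponential lifetimes
    with a Gamma(a, b) posterior on the rate theta (b = rate parameter).
    squared : posterior mean of R(t), the Squared Error loss estimate
    entropy : General Entropy loss estimate [E(R**-q)]**(-1/q)
    Both are closed forms from the Gamma moment generating function."""
    if loss == "squared":
        return (b / (b + t)) ** a
    if loss == "entropy":
        if q * t >= b:
            raise ValueError("entropy estimate requires q * t < b")
        return ((b - q * t) / b) ** (a / q)
    raise ValueError("unknown loss: " + loss)

# Illustrative posterior Gamma(5, 10): the two losses disagree at t = 1
print(bayes_reliability(1.0, a=5, b=10, loss="squared"))   # (10/11)**5
print(bayes_reliability(1.0, a=5, b=10, loss="entropy"))   # (9/10)**5
```

The gap between the two estimates from the same posterior is exactly the sensitivity to the loss function that the paper (and the earlier Camara study in this collection) investigates.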
Spiering, Barry A.; Lee, Stuart M. C.; Mulavara, Ajitkumar P.; Bentley, Jason, R.; Buxton, Roxanne E.; Lawrence, Emily L.; Sinka, Joseph; Guilliams, Mark E.; Ploutz-Snyder, Lori L.; Bloomberg, Jacob J.
2010-01-01
Spaceflight affects nearly every physiological system. Spaceflight-induced alterations in physiological function translate to decrements in functional performance. Purpose: To develop a test battery for quickly and safely assessing diverse indices of neuromuscular performance. I. Quickly: the battery of tests can be completed in approx. 30-40 min. II. Safely: (a) no eccentric muscle actions or impact forces; (b) tests present little challenge to postural stability. III. Diverse indices: (a) strength: excellent reliability (ICC = 0.99); (b) central activation: very good reliability (ICC = 0.87); (c) power: excellent reliability (ICC = 0.99); (d) endurance: total work has excellent reliability (ICC = 0.99); (e) force steadiness: poor reliability (ICC = 0.20-0.60).
Reliability analysis and prediction of mixed mode load using Markov Chain Model
Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.
2014-01-01
The aim of this paper is to present the reliability analysis and prediction of mixed-mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure in order to increase the design life, eliminate or reduce the likelihood of failures, and reduce safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentrations from high-cycle rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is more severe than torsional stress; therefore the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was made to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated by the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model is able to generate data closely matching the field data with a minimal percentage of error, and, for practical application, the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed-mode loading.
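A two-state Markov chain of the kind used above can be sketched directly; the transition probabilities below are invented for illustration, not the paper's fitted values.

```python
def state_probs(p_bend, p_tors, steps):
    """Two-state Markov chain sketch: state 0 = bending-dominated damage,
    state 1 = torsion-dominated damage.  p_bend / p_tors are the per-cycle
    probabilities of leaving state 0 / state 1 (illustrative values)."""
    probs = [1.0, 0.0]                       # start in the bending state
    for _ in range(steps):
        p0, p1 = probs
        probs = [p0 * (1 - p_bend) + p1 * p_tors,
                 p0 * p_bend + p1 * (1 - p_tors)]
    return probs

# The chain converges to the stationary distribution
# (p_tors, p_bend) / (p_bend + p_tors):
p = state_probs(0.3, 0.1, 200)
print([round(x, 4) for x in p])  # [0.25, 0.75]
```

In the paper's setting the per-cycle state probabilities feed a damage accumulation model, from which the Weibull reliability and hazard curves are then derived.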
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, in which FORM is applied to an explicit performance function developed from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function in representing the true limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of a 24% error in accuracy.
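Once the response surface supplies an explicit limit state, FORM reduces to a short computation; in the linear case it is exact. A minimal sketch with invented coefficients:

```python
import math

def form_linear(a, b):
    """FORM for a linear limit state g(u) = b + sum(a_i * u_i) in standard
    normal space: failure when g < 0, reliability index beta = b / ||a||,
    Pf = Phi(-beta).  For a linear g this Pf is exact, not an approximation."""
    norm_a = math.sqrt(sum(x * x for x in a))
    beta = b / norm_a
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))
    return beta, pf

# Hypothetical response-surface coefficients, not values from the study
beta, pf = form_linear([1.0, -2.0, 2.0], 9.0)
print(beta, pf)  # beta = 3.0
```

For a nonlinear fitted surface, FORM instead searches for the design point (the nearest point on g = 0 to the origin in standard normal space) and linearises there, which is where the 24% error relative to MCS reported above can enter.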
The reliability of WorkWell Systems Functional Capacity Evaluation: a systematic review
2014-01-01
Background Functional capacity evaluation (FCE) determines a person’s ability to perform work-related tasks and is a major component of the rehabilitation process. The WorkWell Systems (WWS) FCE (formerly known as Isernhagen Work Systems FCE) is currently the most commonly used FCE tool in German rehabilitation centres. Our systematic review investigated the inter-rater, intra-rater and test-retest reliability of the WWS FCE. Methods We performed a systematic literature search of studies on the reliability of the WWS FCE and extracted item-specific measures of inter-rater, intra-rater and test-retest reliability from the identified studies. Intraclass correlation coefficients ≥ 0.75, percentages of agreement ≥ 80%, and kappa coefficients ≥ 0.60 were categorised as acceptable, otherwise they were considered non-acceptable. The extracted values were summarised for the five performance categories of the WWS FCE, and the results were classified as either consistent or inconsistent. Results From 11 identified studies, 150 item-specific reliability measures were extracted. 89% of the extracted inter-rater reliability measures, all of the intra-rater reliability measures and 96% of the test-retest reliability measures of the weight handling and strength tests had an acceptable level of reliability, compared to only 67% of the test-retest reliability measures of the posture/mobility tests and 56% of the test-retest reliability measures of the locomotion tests. Both of the extracted test-retest reliability measures of the balance test were acceptable. Conclusions Weight handling and strength tests were found to have consistently acceptable reliability. Further research is needed to explore the reliability of the other tests as inconsistent findings or a lack of data prevented definitive conclusions. PMID:24674029
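The kappa threshold used in the review (kappa >= 0.60 acceptable) refers to Cohen's kappa for two raters; a minimal computation with invented ratings:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same categorical items:
    observed agreement corrected for agreement expected by chance."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Invented pass/fail ratings from two hypothetical FCE raters
r1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
r2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(r1, r2), 3))  # 0.667, just above the 0.60 cutoff
```

The chance correction is why kappa can sit well below the raw percentage of agreement (here 83%), which is the reason the review applies separate thresholds to the two statistics.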
An Intelligent Method for Structural Reliability Analysis Based on Response Surface
桂劲松; 刘红; 康海贵
2004-01-01
As water depth increases, the structural safety and reliability of a system become more and more important and challenging. Therefore, structural reliability methods must be applied in ocean engineering design, such as offshore platform design. If the performance function is known in a structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is usually used instead, because its rationale is clear and it is simple to program. However, the traditional response surface method fits a quadratic polynomial response surface, whose accuracy is limited because the true limit state surface can be fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed, applicable when the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface for the whole area is constructed first, and the structural reliability is then calculated by a genetic algorithm. Because all the sample points for training the network come from the whole area, the true limit state surface over the whole area can be fitted. Calculated examples and comparative analysis show that the proposed method is much better than the traditional quadratic-polynomial response surface method: the amount of finite element analysis computation is largely reduced, the accuracy of calculation is improved, and the true limit state surface can be fitted very well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
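The surrogate workflow described above, fit a cheap response surface to a few expensive model runs, then do the probabilistic analysis on the surrogate, can be shown in miniature. The sketch below replaces the fuzzy neural network and genetic algorithm with an ordinary quadratic fit and Monte Carlo purely to illustrate the workflow; the performance function and distribution are invented.

```python
import numpy as np

def true_g(x):
    """Stand-in for an expensive implicit performance function
    (e.g. one finite element run per evaluation); failure when g < 0."""
    return 3.0 - x - 0.1 * np.sin(5 * x)

rng = np.random.default_rng(0)

# 1) Fit a quadratic response surface from a handful of "expensive" runs
x_design = np.linspace(-1.0, 5.0, 9)
coef = np.polyfit(x_design, true_g(x_design), deg=2)
surrogate = np.poly1d(coef)

# 2) Cheap Monte Carlo on the surrogate instead of the true model
x = rng.normal(0.0, 1.0, 200_000)
pf_surrogate = np.mean(surrogate(x) < 0)
pf_true = np.mean(true_g(x) < 0)
print(pf_surrogate, pf_true)
```

Here 9 "expensive" evaluations replace 200,000, which is the cost reduction the paper targets; the paper's contribution is making the surrogate accurate over the whole domain rather than only near the checking point.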
Tailoring a Human Reliability Analysis to Your Industry Needs
DeMott, D. L.
2016-01-01
Accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: (1) how people act and react in different industries; (2) expectations based on industry standards; (3) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures; (4) the type and availability of data; (5) how the industry views risk and reliability; and (6) the types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is the determination of the primary risk factors contributing to potential human error, a more detailed analysis method may be employed.
Inclusion of fatigue effects in human reliability analysis
Griffith, Candice D. [Vanderbilt University, Nashville, TN (United States)]; Mahadevan, Sankaran, E-mail: sankaran.mahadevan@vanderbilt.edu [Vanderbilt University, Nashville, TN (United States)]
2011-11-15
The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. This creates difficulties in including fatigue effects in probabilistic risk assessments (PRA) of complex engineering systems that seek to include human reliability analysis (HRA). Thus the objectives of this paper are to discuss (1) the importance of the effects of fatigue on performance, (2) the difficulties associated with defining and measuring fatigue, (3) the current status of inclusion of fatigue in HRA methods, and (4) the future directions and challenges for the inclusion of fatigue, specifically sleep deprivation, in HRA. Highlights: We highlight the need for fatigue and sleep deprivation effects on performance to be included in human reliability analysis (HRA) methods; current methods do not explicitly include sleep deprivation effects. We discuss the difficulties in defining and measuring fatigue. We review sleep deprivation research, and discuss the limitations and future needs of the current HRA methods.
Dulce Romero-Ayuso
2018-03-01
The aim of this study was to determine the psychometric properties of the "Assessment of Sensory Processing and Executive Functions in Childhood" (EPYFEI), a questionnaire designed to assess the sensory processing and executive functions of children aged between 3 and 11 years. The EPYFEI was completed by a sample of 1,732 parents of children aged between 3 and 11 years who lived in Spain. An exploratory factor analysis was conducted and showed five main factors: (1) executive attention, working memory, and initiation of actions; (2) general sensory processing; (3) emotional and behavioral self-regulation; (4) supervision, correction of actions, and problem solving; and (5) inhibition. The reliability of the analysis was high, both for the whole questionnaire and for the factors it is composed of. Results provide evidence of the potential usefulness of the EPYFEI in clinical contexts for the early detection of neurodevelopmental disorders, in which there may be a deficit of executive functions and sensory processing.
Analysis of dependent failures in risk assessment and reliability evaluation
Fleming, K.N.; Mosleh, A.; Kelley, A.P. Jr. (Gas-Cooled Reactors Associates, La Jolla, CA)
1983-01-01
The ability to estimate the risk of potential reactor accidents is largely determined by the ability to analyze statistically dependent multiple failures. The importance of dependent failures has been indicated in recent probabilistic risk assessment (PRA) studies as well as in reports of reactor operating experiences. This article highlights the importance of several different types of dependent failures from the perspective of the risk and reliability analyst and provides references to the methods and data available for their analysis. In addition to describing the current state of the art, some recent advances, pitfalls, misconceptions, and limitations of some approaches to dependent failure analysis are addressed. A summary is included of the discourse on this subject, which is presented in the Institute of Electrical and Electronics Engineers/American Nuclear Society PRA Procedures Guide
Model-based human reliability analysis: prospects and requirements
Mosleh, A.; Chang, Y.H.
2004-01-01
Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for, and a basis for developing requirements for, the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; due to its complexity and input requirements it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and the computer implementation.
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
Kang, D. I.; Jung, W. D.
2002-01-01
This paper introduces the background and development activities of the domestic standardization of procedures and methods for Human Reliability Analysis (HRA), intended to avoid, as far as possible, the intervention of analyst subjectivity in Probabilistic Safety Assessment (PSA), together with a review of the HRA results for domestic nuclear power plants under design studied by the Korea Atomic Energy Research Institute. We identify the HRA methods used in PSAs for domestic NPPs and discuss the subjectivity of the HRA analyst shown in performing an HRA. We also introduce the PSA guidelines published in the USA and review the HRA results based on them. We propose a system of standard procedures and methods for HRA to be developed.
Paap, Kenneth R; Sawi, Oliver
2016-12-01
Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures, and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual differences in RT model recently developed by Miller and Ulrich was used to evaluate the trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. These reliability measures must also take into account possible day effects that classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual differences in RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity that is partly caused by deficiencies in test-retest reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
Locks, M.O.
1978-01-01
SPARCS-2 (Simulation Program for Assessing the Reliabilities of Complex Systems, Version 2) is a PL/1 computer program for assessing (establishing interval estimates for) the reliability and the MTBF of a large and complex s-coherent system of any modular configuration. The system can consist of a complex logical assembly of independently failing attribute (binomial-Bernoulli) and time-to-failure (Poisson-exponential) components, without regard to their placement. Alternatively, it can be a configuration of independently failing modules, where each module has either or both attribute and time-to-failure components. SPARCS-2 also has an improved super modularity feature. Modules with minimal-cut unreliability calculations can be mixed with those having minimal-path reliability calculations. All output has been standardized to system reliability or probability of success, regardless of the form in which the input data are presented, and whatever the configuration of modules or elements within modules.
Test-retest and interrater reliability of the functional lower extremity evaluation.
Haitz, Karyn; Shultz, Rebecca; Hodgins, Melissa; Matheson, Gordon O
2014-12-01
Repeated-measures clinical measurement reliability study. To establish the reliability and face validity of the Functional Lower Extremity Evaluation (FLEE). The FLEE is a 45-minute battery of 8 standardized functional performance tests that measures 3 components of lower extremity function: control, power, and endurance. The reliability and normative values for the FLEE in healthy athletes are unknown. A face validity survey for the FLEE was sent to sports medicine personnel to evaluate the level of importance and frequency of clinical usage of each test included in the FLEE. The FLEE was then administered and rated for 40 uninjured athletes. To assess test-retest reliability, each athlete was tested twice, 1 week apart, by the same rater. To assess interrater reliability, 3 raters scored each athlete during 1 of the testing sessions. Intraclass correlation coefficients were used to assess the test-retest and interrater reliability of each of the FLEE tests. In the face validity survey, the FLEE tests were rated as highly important by 58% to 71% of respondents but frequently used by only 26% to 45% of respondents. Interrater reliability intraclass correlation coefficients ranged from 0.83 to 1.00, and test-retest reliability ranged from 0.71 to 0.95. The FLEE tests are considered clinically important for assessing lower extremity function by sports medicine personnel but are underused. The FLEE also is a reliable assessment tool. Future studies are required to determine if use of the FLEE to make return-to-play decisions may reduce reinjury rates.
Reliability And Maintenance Analysis Of CCTV Systems Used In Rail Transport
Siergiejczyk Mirosław; Paś Jacek; Rosiński Adam
2015-01-01
CCTV systems are widely used across a plethora of industrial areas including transport, where their function is to support transport telematics systems. Among others, they are used to ensure travel safety. This paper presents a reliability and maintenance analysis of CCTV. It led to building a relationships graph, from which a Chapman–Kolmogorov system of equations was derived to describe it. Drawing on those equations, relationships were obtained for calculating the probability of the system staying in the state of full ability.
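In the simplest case, the Chapman–Kolmogorov approach described above reduces to a two-state availability model with one "full ability" and one "failed" state. A minimal sketch in Python; the failure and repair rates below are illustrative assumptions, not values from the paper:

```python
def steady_state_availability(lam, mu):
    """Stationary probability of the 'full ability' state for a two-state
    continuous-time Markov chain with failure rate `lam` (up -> down) and
    repair rate `mu` (down -> up). Solving the Chapman-Kolmogorov equations
    in steady state gives A = mu / (lam + mu).
    """
    return mu / (lam + mu)

# Illustrative rates (not the paper's data): one failure per 10,000 h on
# average, repaired in 24 h on average.
lam = 1 / 10_000.0  # failures per hour
mu = 1 / 24.0       # repairs per hour
print(round(steady_state_availability(lam, mu), 6))
```

For multi-state graphs of the kind built in the paper, the same idea generalizes to solving the full stationary linear system of the chain.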
Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz [Air Force Institute of Technology ul. Księcia Bolesława 6 01-494 Warsaw (Poland)
2016-06-08
The forecasting of reliability and life of aeronautical hardware requires recognition of many and various destructive processes that deteriorate the health/maintenance status thereof. The aging of technical components of aircraft as an armament system proves of outstanding significance to reliability and safety of the whole system. The aging process is usually induced by many and various factors, just to mention mechanical, biological, climatic, or chemical ones. The aging is an irreversible process and considerably affects (i.e. reduces) reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict reliability and lifetime of aeronautical hardware. An increment in values of diagnostic parameters is introduced to formulate then, using the characteristic function and after some rearrangements, the partial differential equation. An analytical dependence for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment’s reliability and lifetime. The in-service collected or the life tests delivered data are used to attain this goal. Coefficients in this relationship are found using the likelihood function.
Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz
2016-01-01
The forecasting of reliability and life of aeronautical hardware requires recognition of many and various destructive processes that deteriorate the health/maintenance status thereof. The aging of technical components of aircraft as an armament system proves of outstanding significance to reliability and safety of the whole system. The aging process is usually induced by many and various factors, just to mention mechanical, biological, climatic, or chemical ones. The aging is an irreversible process and considerably affects (i.e. reduces) reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict reliability and lifetime of aeronautical hardware. An increment in values of diagnostic parameters is introduced to formulate then, using the characteristic function and after some rearrangements, the partial differential equation. An analytical dependence for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment’s reliability and lifetime. The in-service collected or the life tests delivered data are used to attain this goal. Coefficients in this relationship are found using the likelihood function.
Forward lunge as a functional performance test in ACL deficient subjects: test-retest reliability
Alkjaer, Tine; Henriksen, Marius; Dyhre-Poulsen, Poul
2009-01-01
The forward lunge movement may be used as a functional performance test of anterior cruciate ligament (ACL) deficient and reconstructed subjects. The purposes were 1) to determine the test-retest reliability of a forward lunge in healthy subjects and 2) to determine the number of repetitions necessary to yield satisfactory reliability. Nineteen healthy subjects performed four trials of a forward lunge on two different days. The movement time, impulses of the ground reaction forces (IFz, IFy), and knee joint kinematics and dynamics during the forward lunge were calculated. The relative reliability was determined by calculation of intraclass correlation coefficients (ICC). The IFz, IFy and the positive work of the knee extensors showed excellent reliability (ICC > 0.75). All other variables demonstrated acceptable reliability (0.4 < ICC < 0.75). Reliability increased when more than one repetition was performed.
Kalaba Dragan V.
2014-01-01
The main subject of this paper is the presentation of a probabilistic technique for thermal power system reliability assessment. Exploitation research on the reliability of the fossil fuel power plant system has defined the function, or the probabilistic law, according to which the random variable behaves (occurrence of a complete unplanned standstill). Based on these data, and by applying reliability theory to this particular system, using the simple and complex Weibull distributions, the hypothesis has been confirmed that the distribution of the observed random variable fully describes the behaviour of such a system in terms of reliability. Establishing a comprehensive insight into the field of probabilistic power system reliability assessment techniques could serve as an input for further research and development in the area of power system planning and operation.
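The reliability function and MTBF of the two-parameter Weibull model used above can be sketched as follows; the shape and scale parameters in the example are illustrative, not the values fitted to the plant data in the study:

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability: R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_mtbf(beta, eta):
    """Mean time between failures: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# Illustrative shape and scale parameters (not the fitted plant values):
beta, eta = 1.8, 4000.0  # eta in hours
print(weibull_reliability(1000.0, beta, eta))  # survival probability at 1000 h
print(weibull_mtbf(beta, eta))                 # mean time between standstills, h
```

With beta = 1 the model reduces to the exponential distribution with constant failure rate; beta > 1 models the wear-out behaviour typical of aging plant components.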
Reliability analysis of large scaled structures by optimization technique
Ishikawa, N.; Mihara, T.; Iizuka, M.
1987-01-01
This paper presents a reliability analysis based on an optimization technique using the PNET (Probabilistic Network Evaluation Technique) method for highly redundant structures having a large number of collapse modes. This approach makes the best use of the optimization technique, into which the idea of the PNET method is incorporated. The analytical process involves the minimization of the safety index of the representative mode, subject to satisfaction of the mechanism condition and of positive external work. The procedure entails the sequential solution of a series of NLP (Nonlinear Programming) problems, where the correlation condition of the PNET method pertaining to the representative mode is taken as an additional constraint in the next analysis. Upon succeeding iterations, the final analysis is reached when the collapse probability of the subsequent mode is much smaller than the value for the first mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes, classified by the extent of correlation. Then, in order to confirm the validity of the proposed method, a conventional Monte Carlo simulation is also revised by using collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)
Application of human reliability analysis methodology of second generation
Ruiz S, T. de J.; Nelson E, P. F.
2009-10-01
Human reliability analysis (HRA) is a very important part of probabilistic safety analysis (PSA). The main contribution of HRA in nuclear power plants is the identification and characterization of the issues that combine to produce an error in human tasks performed under normal operating conditions and in those performed after an abnormal event. Additionally, analysis of various accidents in history has found the human component to be a contributing cause. The need to understand the forms and probability of human error led, in the 1960s, to the collection of generic data that resulted in the first generation of HRA methodologies. Methods were subsequently developed to include additional performance shaping factors, and the interactions between them, in their models. By the mid-1990s, what are considered the second-generation methodologies emerged. Among these is A Technique for Human Event Analysis (ATHEANA). The application of this method to a generic human failure event is interesting because its modeling includes errors of commission, the quantification of deviations from the nominal scenario considered in the accident sequence of the PSA and, for this event, the evaluation of dependency between actions. That is, the generic human failure event first required independent evaluation of the two related human failure events. Gathering the new human error probabilities thus involves quantification of the nominal scenario and of the significant deviation cases considered for their potential impact on the analyzed human failure events. As in probabilistic safety analysis, analysis of the sequences identified the more specific factors with the highest contribution to the human error probabilities. (Author)
Analogical reasoning for reliability analysis based on generic data
Kozin, Igor O
1996-10-01
The paper suggests using the systemic concept of 'analogy' as the foundation of an approach to analyzing system reliability on the basis of generic data. It describes a method of structuring the set that defines analogical models, an approach for the transition from the analogical model to a reliability model, and a way of obtaining reliability intervals for analogous objects.
Applicability of simplified human reliability analysis methods for severe accidents
Boring, R.; St Germain, S. [Idaho National Lab., Idaho Falls, Idaho (United States); Banaseanu, G.; Chatri, H.; Akl, Y. [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)
2016-03-15
Most contemporary human reliability analysis (HRA) methods were created to analyse design-basis accidents at nuclear power plants. As part of a comprehensive expansion of risk assessments at many plants internationally, HRAs will begin considering severe accident scenarios. Severe accidents, while extremely rare, constitute high consequence events that significantly challenge successful operations and recovery. Challenges during severe accidents include degraded and hazardous operating conditions at the plant, the shift in control from the main control room to the technical support center, the unavailability of plant instrumentation, and the need to use different types of operating procedures. Such shifts in operations may also test key assumptions in existing HRA methods. This paper discusses key differences between design basis and severe accidents, reviews efforts to date to create customized HRA methods suitable for severe accidents, and recommends practices for adapting existing HRA methods that are already being used for HRAs at the plants. (author)
Fatigue Reliability Analysis of Wind Turbine Cast Components
Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren
2017-01-01
The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability, and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for casted test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress) …
Basic aspects of stochastic reliability analysis for redundancy systems
Doerre, P.
1989-01-01
Much confusion has been created by trying to establish common cause failure (CCF) as an extra phenomenon which has to be treated with extra methods in reliability and data analysis. This paper takes another approach, which can be roughly described by the statement that dependent failure is the basic phenomenon, while 'independent failure' refers to a special limiting case, namely the perfectly homogeneous population. This approach is motivated by examples demonstrating that common causes do not lead to dependent failure as long as physical dependencies, like shared components, are excluded, and that stochastic dependencies are not related to common causes. The possibility of selecting more than one failure behaviour from an inhomogeneous population is identified as an additional random process which creates stochastic dependence. However, this source of randomness is usually treated in the deterministic limit, which destroys dependence and hence yields incorrect multiple failure frequencies for redundancy structures, thus creating the need for applying corrective CCF models. (author)
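The point that sampling from an inhomogeneous population by itself creates stochastic dependence can be illustrated with a two-train redundancy whose components come from a mixed batch; all probabilities below are illustrative assumptions, not data from the paper:

```python
def marginal_failure(p_good, p_bad, frac_bad):
    """Unconditional failure probability of one component from the mixed batch."""
    return frac_bad * p_bad + (1 - frac_bad) * p_good

def joint_failure(p_good, p_bad, frac_bad):
    """Probability that BOTH redundant components fail, when both were drawn
    from the same (good or bad) sub-population. Conditional on the
    sub-population the failures are independent; averaging over the mixture
    couples them."""
    return frac_bad * p_bad ** 2 + (1 - frac_bad) * p_good ** 2

# Illustrative numbers: 10% of batches are 'bad', with 50x the failure probability.
p_good, p_bad, frac_bad = 0.001, 0.05, 0.10
p1 = marginal_failure(p_good, p_bad, frac_bad)
p_both = joint_failure(p_good, p_bad, frac_bad)
print(p_both, p1 ** 2)  # p_both exceeds p1**2: dependence without any common cause
```

Treating the mixture in the deterministic limit, i.e. using a single averaged failure probability p1 and squaring it, is exactly the step the paper identifies as destroying the dependence and underestimating multiple-failure frequencies.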
An Application of Graph Theory in Markov Chains Reliability Analysis
Pavel Skalny
2014-01-01
The paper presents a reliability analysis realized for an industrial company. The aim of the paper is to present the combined usage of discrete-time Markov chains and the flow-in-network approach. Discrete Markov chains, a well-known method of stochastic modelling, describe the issue; the method is suitable for many systems occurring in practice where we can easily distinguish a number of states. Markov chains are used to describe transitions between the states of the process. The industrial process is described as a graph network, in which the maximal flow corresponds to the production. The Ford-Fulkerson algorithm is used to quantify the production for each state. The combination of both methods is utilized to quantify the expected value of the amount of manufactured products for the given time period.
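The Ford-Fulkerson step used above can be sketched with a small Edmonds-Karp (BFS-based) implementation; the capacity graph in the example is a toy network, not the industrial process from the study:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp (BFS-based Ford-Fulkerson) maximum flow.

    `capacity` is a dict-of-dicts: capacity[u][v] is the edge capacity.
    Returns the value of the maximum flow from `source` to `sink`.
    """
    # Build the residual graph, adding zero-capacity reverse edges.
    residual = {u: dict(edges) for u, edges in capacity.items()}
    nodes = set(residual)
    for edges in capacity.values():
        nodes.update(edges)
    for u in nodes:
        residual.setdefault(u, {})
    for u, edges in capacity.items():
        for v in edges:
            residual[v].setdefault(u, 0)

    flow = 0
    while True:
        # Breadth-first search for a shortest augmenting path.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left
        # Trace the path back and push the bottleneck capacity along it.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy production network from source 's' to sink 't' with edge capacities:
network = {'s': {'a': 10, 'b': 5}, 'a': {'t': 4, 'b': 15}, 'b': {'t': 8}}
print(max_flow(network, 's', 't'))  # -> 12, the throughput of this toy network
```

In the paper's scheme, one such max-flow computation per Markov state gives the production level of that state, and the chain's state probabilities weight them into an expected production.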
Time-dependent reliability analysis and condition assessment of structures
Ellingwood, B.R.
1997-01-01
Structures generally play a passive role in assurance of safety in nuclear plant operation, but are important if the plant is to withstand the effect of extreme environmental or abnormal events. Relative to mechanical and electrical components, structural systems and components would be difficult and costly to replace. While the performance of steel or reinforced concrete structures in service generally has been very good, their strengths may deteriorate during an extended service life as a result of changes brought on by an aggressive environment, excessive loading, or accidental loading. Quantitative tools for condition assessment of aging structures can be developed using time-dependent structural reliability analysis methods. Such methods provide a framework for addressing the uncertainties attendant to aging in the decision process
Reliability analysis for the quench detection in the LHC machine
Denz, R; Vergara-Fernández, A
2002-01-01
The Large Hadron Collider (LHC) will incorporate a large amount of superconducting elements that require protection in case of a quench. Key elements in the quench protection system are the electronic quench detectors. Their reliability will have an important impact on the down time as well as on the operational cost of the collider. The expected rates of both false and missed quenches have been computed for several redundant detection schemes. The developed model takes account of the maintainability of the system to optimise the frequency of foreseen checks and evaluate their influence on the performance of different detection topologies. Given the uncertainty of the failure rates of the components, combined with the LHC tunnel environment, the study has been completed with a sensitivity analysis of the results. The chosen detection scheme and the maintainability strategy for each detector family are given.
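The effect of redundancy on missed and false quench rates can be sketched with a k-out-of-n voting calculation; the voting scheme and per-channel probability below are illustrative assumptions, not the schemes or computed rates from the paper:

```python
from math import comb

def prob_at_least_k(p, k, n):
    """Probability that at least k of n independent channels are wrong,
    each being wrong (missed or spurious detection) with probability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# For a hypothetical 2-out-of-3 voting detector, a quench is missed only
# if at least two channels miss it (per-channel probability is illustrative):
p_miss = 1e-4
print(prob_at_least_k(p_miss, 2, 3))  # roughly 3 * p_miss**2
```

The same function gives the false-trip probability of the voter when `p` is the per-channel spurious-detection probability, which is why the choice of voting threshold trades missed quenches against false ones.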
A Primer on Functional Analysis
Yoman, Jerome
2008-01-01
This article presents principles and basic steps for practitioners to complete a functional analysis of client behavior. The emphasis is on application of functional analysis to adult mental health clients. The article includes a detailed flow chart containing all major functional diagnoses and behavioral interventions, with functional assessment…
Clayson, Peter E; Miller, Gregory A
2017-01-01
Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
Mokhtarinia, Hamid Reza; Hosseini, Azadeh; Maleki-Ghahfarokhi, Azam; Gabel, Charles Philip; Zohrabi, Majid
2018-05-15
There are various instruments and methods to evaluate spinal health and functional status. Whole-spine patient reported outcome (PRO) measures, such as the Spine Functional Index (SFI), assess the spine from the cervical to lumbo-sacral sections as a single kinetic chain. The aim of this study was to cross-culturally adapt the SFI for Persian speaking patients (SFI-Pr) and determine the psychometric properties of reliability and validity (convergent and construct) in a Persian patient population. The SFI (English) PRO was translated into Persian according to published guidelines. Consecutive symptomatic spine patients (104 female and 120 male aged between 18 and 60) were recruited from three Iranian physiotherapy centers. Test-retest reliability was performed in a sub-sample (n = 31) at baseline and repeated between days 3-7. Convergent validity was determined by calculating the Pearson's r correlation coefficient between the SFI-Pr and the Persian Roland Morris Questionnaire (RMQ) for back pain patients and the Neck Disability Index (NDI) for neck patients. Internal consistency was assessed using Cronbach's α. Exploratory Factor Analysis (EFA) used Maximum Likelihood Extraction followed by Confirmatory Factor Analysis (CFA). High levels of internal consistency (α = 0.81, item range = 0.78-0.82) and test-retest reliability (r = 0.96, item range = 0.83-0.98) were obtained. Convergent validity was very good between the SFI and RMQ (r = 0.69) and good between the SFI and NDI (r = 0.57). The EFA from the perspective of parsimony suggests a one-factor solution that explained 26.5% of total variance. The CFA was inconclusive of the one factor structure as the sample size was inadequate. There were no floor or ceiling effects. The SFI-Pr PRO can be applied as a specific whole-spine status assessment instrument for clinical and research studies in Persian language populations.
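Internal consistency of the kind reported above (Cronbach's α) can be computed directly from an item-score matrix; a minimal sketch using population variances, with made-up scores rather than the SFI-Pr sample:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of items, each a list of respondent scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using population (n-denominator) variances.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    n = len(items[0])
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Made-up scores for 3 items answered by 5 respondents:
scores = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
print(round(cronbach_alpha(scores), 3))
```

Values near the study's α = 0.81 are conventionally read as good internal consistency for a patient-reported outcome scale.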
Tucher Guilherme
2015-06-01
The reliability of the Functional Test for Agility Performance has only been evaluated in water polo players in a small group of novice athletes. Thus, the aim of this study was to evaluate the reliability of the Functional Test for Agility Performance in skilled water polo players. Forty-two athletes (17.81 ± 3.24 years old) with a minimum of 5 years of competitive experience (7.05 ± 2.84 years) and playing at the national or international level were evaluated. The Functional Test for Agility Performance is characterized as a specific open decision-making test where a tested player moves as quickly as possible in accordance with a pass made by another player. The time spent on the test was measured by two experienced coaches. Descriptive statistics, repeated-measures analysis of variance (ANOVA), 95% limits of agreement (LOA), intraclass correlation coefficients (ICC) and standard errors of measurement (SEM) were used for data analysis. Athletes completed the Functional Test for Agility Performance in 4.15 ± 0.47 s. The ICC value was 0.87 (95% CI = 0.80-0.92). The SEM varied between 0.24 and 0.38 s. The LOA was 1.20 s and the average CV considering each individual trial was 6%. The Functional Test for Agility Performance was shown to be a reliable quick decision-making test for skilled water polo players.
A reliability analysis of the revised competitiveness index.
Harris, Paul B; Houston, John M
2010-06-01
This study examined the reliability of the Revised Competitiveness Index by investigating the test-retest reliability, inter-item reliability, and factor structure of the measure based on a sample of 280 undergraduates (200 women, 80 men) ranging in age from 18 to 28 years (M = 20.1, SD = 2.1). The findings indicate that the Revised Competitiveness Index has high test-retest reliability, high inter-item reliability, and a stable factor structure. The results support the assertion that the Revised Competitiveness Index assesses competitiveness as a stable trait rather than a dynamic state.
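The two reliability statistics reported in studies like the ones above can be computed directly from raw scores. A minimal sketch (the item scores and test-retest totals below are invented illustrative data, not data from any of these studies):

```python
# Hedged sketch: test-retest Pearson r and Cronbach's alpha from raw scores.
# All data below are made-up illustrative values.

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)
    n = len(items[0])
    def var(v):
        m = sum(v) / len(v)
        return sum((a - m) ** 2 for a in v) / (len(v) - 1)
    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Example: 3 items scored by 5 respondents (hypothetical data)
items = [[3, 4, 4, 2, 5],
         [2, 4, 5, 2, 4],
         [3, 5, 4, 3, 5]]
alpha = cronbach_alpha(items)

# Hypothetical total scores at test and retest for the same 5 respondents
test = [10, 12, 15, 9, 14]
retest = [11, 12, 14, 9, 15]
r = pearson_r(test, retest)
```

With these invented scores, alpha lands above the conventional 0.8 threshold and r above 0.9, the range the abstracts above describe as "high" reliability.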
Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.
1994-07-01
The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool that addresses key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences on a microcomputer. The program provides functions that range from graphical fault tree construction to cut set generation and quantification to report generation. Version 1.0 of the IRRAS program was released in February 1987. Since then, many user comments and enhancements have been incorporated into the program, providing a much more powerful and user-friendly system. This version has been designated IRRAS 5.0 and is the subject of this Reference Manual. Version 5.0 of IRRAS provides the same capabilities as earlier versions, adds the ability to perform location transformations and seismic analysis, and provides enhancements to the user interface as well as improved algorithm performance. Additionally, version 5.0 contains new alphanumeric fault tree and event tree editors used for event tree rules, recovery rules, and end-state partitioning.
Hustak, S.; Patrik, M.; Babic, P.
2000-12-01
The report is structured as follows: (i) Introduction; (ii) Important notions relating to the safety and dependability of software systems for nuclear power plants (selected notions from IAEA Technical Report No. 397; safety aspects of software application; reliability/dependability aspects of digital systems); (iii) Peculiarities of digital systems and ways to a dependable performance of the required function (failures in the system and principles of defence against them; ensuring resistance of digital systems against failures at various hardware and software levels); (iv) The issue of analytical procedures to assess the safety and reliability of safety-related digital systems (safety and reliability assessment at an early stage of the project; general framework of reliability analysis of complex systems; choice of an appropriate quantitative measure of software reliability); (v) Selected qualitative and quantitative information about the reliability of digital systems (the use of relations between the incidence of various types of faults); and (vi) Conclusions and recommendations. (P.A.)
Development of the integrated system reliability analysis code MODULE
Han, S.H.; Yoo, K.J.; Kim, T.W.
1987-01-01
The major components of a system reliability analysis are the determination of cut sets, importance measures, and uncertainty analysis. Various computer codes have been used for these purposes: for example, SETS and FTAP to determine cut sets; Importance for importance calculations; and Sample, CONINT, and MOCUP for uncertainty analysis. Problems arise when these codes are run one after another with inputs and outputs that are not linked, which can introduce errors when preparing the input for each code. The code MODULE was developed to carry out the above calculations simultaneously without having to link inputs and outputs between other codes. MODULE can also prepare input for SETS in the case of a fault tree too large for MODULE to handle. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples were selected and the results and computation times compared with those of SETS, FTAP, CONINT, and MOCUP on both a Cyber 170-875 and an IBM PC/AT. The two examples are fault trees of the auxiliary feedwater system (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, and 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate cut sets, importances, and uncertainties in a single run with little increase in computing time over other codes, and that it can be used on personal computers.
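Two of the quantities such codes combine, cut-set quantification and an importance measure, can be sketched for a toy fault tree given as minimal cut sets. This is a generic illustration, not MODULE's algorithm; the events and probabilities are invented:

```python
# Hedged sketch: rare-event cut-set quantification and Fussell-Vesely
# importance for a toy fault tree. Event probabilities are invented.

def top_probability(cut_sets, p):
    """Rare-event approximation: P(top) ~= sum of minimal cut-set probabilities."""
    total = 0.0
    for cs in cut_sets:
        prob = 1.0
        for event in cs:
            prob *= p[event]
        total += prob
    return total

def fussell_vesely(event, cut_sets, p):
    """F-V importance: fraction of top probability from cut sets containing event."""
    top = top_probability(cut_sets, p)
    contrib = top_probability([cs for cs in cut_sets if event in cs], p)
    return contrib / top

# Toy tree: three minimal cut sets over four basic events
cut_sets = [{"A", "B"}, {"A", "C"}, {"D"}]
p = {"A": 1e-2, "B": 1e-3, "C": 5e-3, "D": 1e-4}
top = top_probability(cut_sets, p)
fv_a = fussell_vesely("A", cut_sets, p)
```

Here event A appears in two of the three cut sets and accounts for 37.5% of the top-event probability, which is the kind of single-run output the abstract describes.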
Human reliability analysis for advanced control room of KNGR
Kim, Myung-Ro; Park, Seong-Kyu
2000-01-01
There are two purposes of the Human Reliability Analysis (HRA) performed during the Korean Next Generation Reactor (KNGR) Phase 2 research project. One is to present the human error probability quantification results for Probabilistic Safety Assessment (PSA), and the other is to provide a list of critical operator actions for Human Factors Engineering (HFE). Critical operator actions were identified from the KNGR HRA/PSA based on selection criteria and incorporated in the MMI Task Analysis, where they receive additional treatment. The use of HRA/PSA results in design, procedure development, and training was ensured by their incorporation in the MMI task analysis and MCR design, such as fixed-position alarms, displays and controls. Any dominant PSA sequence that takes credit for human performance to achieve acceptable results was incorporated in MMIS validation activities through the PSA-based critical operator actions. The integration of KNGR HRA into the MMI design sufficiently addressed all applicable review criteria of NUREG-0800, Chapter 18, Section 2 F, and NUREG-0711. (S.Y.)
Models and data requirements for human reliability analysis
1989-03-01
It has been widely recognised for many years that the safety of nuclear power generation depends heavily on the human factors related to plant operation. This has been confirmed by the accidents at Three Mile Island and Chernobyl. Both cases revealed how human actions can defeat engineered safeguards, and the need for special operator training to cover the possibility of unexpected plant conditions. The importance of the human factor also stands out in the analysis of abnormal events and in insights from probabilistic safety assessments (PSAs), which reveal a large proportion of cases having their origin in faulty operator performance. A consultants' meeting, organized jointly by the International Atomic Energy Agency (IAEA) and the International Institute for Applied Systems Analysis (IIASA), was held at IIASA in Laxenburg, Austria, December 7-11, 1987, with the aim of reviewing existing models used in Probabilistic Safety Assessment (PSA) for Human Reliability Analysis (HRA) and of identifying the data required. The report collects both the contributions offered by the members of the Expert Task Force and the findings of the extensive discussions that took place during the meeting. Refs, figs and tabs
Transfer function analysis of radiographic imaging systems
Metz, C.E.; Doi, K.
1979-01-01
The theoretical and experimental aspects of the techniques of transfer function analysis used in radiographic imaging systems are reviewed. The mathematical principles of transfer function analysis are developed for linear, shift-invariant imaging systems, for the relation between object and image and for the image due to a sinusoidal plane wave object. The other basic mathematical principle discussed is 'Fourier analysis' and its application to an input function. Other aspects of transfer function analysis included are alternative expressions for the 'optical transfer function' of imaging systems and expressions are derived for both serial and parallel transfer image sub-systems. The applications of transfer function analysis to radiographic imaging systems are discussed in relation to the linearisation of the radiographic imaging system, the object, the geometrical unsharpness, the screen-film system unsharpness, other unsharpness effects and finally noise analysis. It is concluded that extensive theoretical, computer simulation and experimental studies have demonstrated that the techniques of transfer function analysis provide an accurate and reliable means for predicting and understanding the effects of various radiographic imaging system components in most practical diagnostic medical imaging situations. (U.K.)
Desseroit, M [INSERM, LaTIM UMR 1101, Brest (France); EE DACTIM, CHU de Poitiers, Poitiers (France); Tixier, F; Cheze Le Rest, C [EE DACTIM, CHU de Poitiers, Poitiers (France); Majdoub, M; Visvikis, D; Hatt, M [INSERM, LaTIM UMR 1101, Brest (France); Weber, W [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Siegel, B [Washington University School of Medicine, St. Louis, MO (United States)
2016-06-15
Purpose: The goal of this study was to evaluate the repeatability of radiomics features (intensity, shape and heterogeneity) in both the PET and low-dose CT components of test-retest FDG-PET/CT images in a prospective multicenter cohort of 74 NSCLC patients from ACRIN 6678 and a similar Merck trial. Methods: Seventy-four patients with stage III-IV NSCLC were prospectively included. The primary tumor and up to 3 additional lesions per patient were analyzed. The Fuzzy Locally Adaptive Bayesian algorithm was used to automatically delineate the metabolically active volume (MAV) in PET. The 3D Slicer™ software was used to delineate anatomical volumes (AV) in CT. Ten first-order intensity features, 26 textural features and four 3D shape descriptors were calculated from tumour volumes in both modalities. The repeatability of each metric was assessed by Bland-Altman analysis. Results: One hundred and five lesions (primary tumors and nodal or distant metastases) were delineated and characterized. The MAV and AV determinations had a repeatability of −1.4±11.0% and −1.2±18.7%, respectively. Several shape and heterogeneity features were found to be highly or moderately repeatable (e.g., sphericity, co-occurrence entropy or intensity size-zone matrix zone percentage), whereas others were confirmed as unreliable, with much higher variability (more than twice that of the corresponding volume determination). Conclusion: Our results in this large multicenter cohort with more than 100 measurements confirm the PET findings of previous studies (with <30 lesions). In addition, our study is the first to explore the repeatability of radiomics features in the low-dose CT component of PET/CT acquisitions (previous studies considered dosimetry CT, CE-CT or CBCT). Several features were identified as reliable in both the PET and CT components and could be used to build prognostic models. This work received French government support granted to the CominLabs excellence laboratory.
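The Bland-Altman repeatability figures quoted above (e.g., −1.4±11.0% for MAV) summarize paired test-retest measurements as the mean percentage difference ± 1.96 standard deviations. A minimal sketch with invented volumes, not the study's data:

```python
# Hedged sketch: Bland-Altman repeatability of test-retest measurements,
# expressed as mean percentage difference +/- 1.96 SD (limits of agreement).
# The paired volumes below are invented illustrative values.

def bland_altman_percent(test, retest):
    """Return (mean %, 1.96*SD %) of relative differences between paired scans."""
    rel = [200.0 * (b - a) / (a + b) for a, b in zip(test, retest)]
    n = len(rel)
    mean = sum(rel) / n
    sd = (sum((d - mean) ** 2 for d in rel) / (n - 1)) ** 0.5
    return mean, 1.96 * sd

# Hypothetical metabolically active volumes (mL) from two repeated scans
scan1 = [12.0, 30.5, 8.2, 55.1, 21.0]
scan2 = [11.5, 31.0, 8.8, 54.0, 22.1]
bias, loa = bland_altman_percent(scan1, scan2)
```

A feature is called repeatable when its limits of agreement are narrow relative to those of the volume determination itself, which is the comparison criterion used in the abstract.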
Sembiring, N.; Ginting, E.; Darnello, T.
2017-12-01
In a company that produces refined sugar, the production floor has not reached the required availability of its critical machines because they frequently break down, resulting in sudden losses of production time and production opportunities. This problem can be addressed with reliability engineering methods, in which a statistical approach to historical failure data is used to identify the underlying distribution. The method provides the reliability, failure rate, and availability of a machine for a given maintenance interval schedule. Distribution testing of the time-between-failures (MTTF) data showed that the flexible hose component follows a lognormal distribution, while the teflon cone lifting component follows a Weibull distribution. Distribution testing of the mean time to repair (MTTR) data showed that the flexible hose component follows an exponential distribution, while the teflon cone lifting component follows a Weibull distribution. With a replacement schedule of every 720 hours, the flexible hose component achieved a reliability of 0.2451 and an availability of 0.9960, while with a replacement schedule of every 1944 hours, the critical teflon cone lifting component achieved a reliability of 0.4083 and an availability of 0.9927.
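Reliability-at-interval and steady-state availability figures of the kind reported above follow directly from the fitted distributions. A minimal sketch; the Weibull shape/scale and MTTF/MTTR values below are illustrative placeholders, not the study's fitted parameters:

```python
import math

# Hedged sketch: reliability at a replacement interval from a two-parameter
# Weibull time-to-failure model, and steady-state availability from MTTF/MTTR.
# All parameter values are invented placeholders.

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta) for a two-parameter Weibull TTF model."""
    return math.exp(-((t / eta) ** beta))

def availability(mttf, mttr):
    """Steady-state availability of an alternating failure/repair process."""
    return mttf / (mttf + mttr)

# Illustrative: replacement every 720 h, assumed beta = 1.2, eta = 2500 h
R = weibull_reliability(720.0, beta=1.2, eta=2500.0)
A = availability(mttf=2000.0, mttr=8.0)
```

The same pattern reproduces the study's structure: a low reliability at the replacement interval can coexist with high availability when repairs are fast relative to the time between failures.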
Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction
Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad
2018-03-01
In bearing procurement analysis, price and reliability must both be considered as decision criteria, since price determines the direct (acquisition) cost while the reliability of the bearing determines indirect costs such as maintenance. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost incurred, so the indirect cost of reliability must be considered in bearing procurement analysis. This paper describes a bearing evaluation method based on total cost of ownership analysis that treats price and maintenance cost as decision criteria. Furthermore, since failure data are scarce at the bearing evaluation phase, a reliability prediction method is used to predict bearing reliability from its dynamic load rating parameter. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, whereas for short-term planning the cheaper but less reliable bearing is preferable. This context dependence can give rise to conflict between stakeholders, so the planning horizon needs to be agreed by all stakeholders before making a procurement decision.
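The planning-horizon effect described above can be made concrete with a simple total-cost-of-ownership comparison: acquisition price plus expected maintenance cost over the horizon. All prices, failure rates, and the cost per failure below are invented for illustration; this is not the paper's model:

```python
# Hedged sketch: TCO = acquisition cost + expected failures * cost per failure,
# where expected failures come from a predicted (constant) failure rate.
# All numbers are invented illustrative values.

def total_cost_of_ownership(price, failure_rate, horizon_hours, cost_per_failure):
    """Expected total cost of owning a bearing over a planning horizon."""
    expected_failures = failure_rate * horizon_hours
    return price + expected_failures * cost_per_failure

# Bearing A: cheap but less reliable; Bearing B: expensive but more reliable
short, long = 2_000.0, 20_000.0   # planning horizons in operating hours
a_short = total_cost_of_ownership(50.0, 1e-4, short, 400.0)
b_short = total_cost_of_ownership(120.0, 2e-5, short, 400.0)
a_long = total_cost_of_ownership(50.0, 1e-4, long, 400.0)
b_long = total_cost_of_ownership(120.0, 2e-5, long, 400.0)
```

With these numbers the cheaper bearing wins over the short horizon while the more reliable one wins over the long horizon, reproducing the stakeholder conflict the abstract describes.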
Komal
2018-05-01
Power consumption is increasing day by day. To meet the requirement of failure-free power, the planning and implementation of an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide area measurement and control systems, and its reliable performance assures a failure-free power supply for any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for the controllability and observability of power systems utilizing available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique is proposed for this purpose. In GFLT, the components' uncertain failure and repair rates are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut based fuzzy arithmetic operations to compute several important reliability indices. Furthermore, ranking of the critical components of the system using the RAM-Index and a sensitivity analysis have also been performed. The developed technique may help to improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
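The alpha-cut arithmetic at the heart of such fuzzy lambda-tau approaches can be sketched for the simplest case: triangular fuzzy failure rates cut at a membership level alpha, propagated through a series-system formula by interval arithmetic. The rates below are invented; this is a generic illustration, not the paper's GFLT implementation:

```python
# Hedged sketch: alpha-cut interval arithmetic on triangular fuzzy failure
# rates for a series system (lambda_sys = sum of component rates).
# All rate values are invented illustrative numbers (per hour).

def alpha_cut_triangular(low, mode, high, alpha):
    """Interval [lo, hi] of a triangular fuzzy number at membership level alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def series_failure_rate(intervals):
    """Series system: interval sum of component failure-rate intervals."""
    lo = sum(i[0] for i in intervals)
    hi = sum(i[1] for i in intervals)
    return lo, hi

# Two components with +/-15% spread around their nominal rates
rates = [(0.85e-4, 1.0e-4, 1.15e-4), (1.7e-4, 2.0e-4, 2.3e-4)]
cut = [alpha_cut_triangular(*r, alpha=0.5) for r in rates]
lam_lo, lam_hi = series_failure_rate(cut)
```

Repeating this over a grid of alpha levels reconstructs the fuzzy membership function of the system failure rate; at alpha = 1 the interval collapses to the crisp nominal value.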
Network reliability analysis of complex systems using a non-simulation-based method
Kim, Youngsuk; Kang, Won-Hee
2013-01-01
Civil infrastructures such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often form highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to support proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
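The underlying problem, the probability that a source and a terminal node stay connected given independent edge reliabilities, can be sketched by brute-force pivotal factoring on edge states. This is a naive cousin of the RDA (which adds the decomposition and simplification steps that make it efficient) and is exponential in the edge count, so it only suits small illustrative graphs. The bridge network and edge reliabilities below are invented:

```python
# Hedged sketch: two-terminal network reliability by exhaustive conditioning
# on edge states (pivotal factoring without simplification). Illustrative only.

def connected(edge_list, s, t):
    """Depth-first connectivity check over the surviving edges."""
    adj = {}
    for u, v in edge_list:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def reliability(edges, s, t, working=()):
    """edges: list of ((u, v), p_work). Returns P(s is connected to t)."""
    if not edges:
        return 1.0 if connected(list(working), s, t) else 0.0
    (e, p), rest = edges[0], edges[1:]
    return (p * reliability(rest, s, t, working + (e,))    # pivot edge works
            + (1 - p) * reliability(rest, s, t, working))  # pivot edge fails

# Classic bridge network: s-a, s-b, a-b (the bridge), a-t, b-t, each p = 0.9
edges = [(("s", "a"), 0.9), (("s", "b"), 0.9), (("a", "b"), 0.9),
         (("a", "t"), 0.9), (("b", "t"), 0.9)]
R = reliability(edges, "s", "t")
```

For the bridge network the result matches the closed form 2p² + 2p³ − 5p⁴ + 2p⁵, a standard check for two-terminal reliability codes.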
Schaefer, H.
1987-01-01
GRS has been engaged in safety analyses of the German reprocessing plant for several years. The development and verification of appropriate reliability analysis methods, the generation of data, and the search for an adequate structural presentation of the results, to form a basis for recommendations on technical or administrative measures or for contributions to risk-oriented evaluations, have been or are in the process of being established. In contrast to NPP studies, the reliability assessment of the safety systems of a reprocessing plant deals with repairable and often relatively small systems that allow tolerable system downtimes. A sketch of the diverse cooling systems of a vessel containing a self-heating solution is given. An interruption of the cooling function of about one day might be tolerable before boiling is reached. This interval is suitable for transferring the solution to a spare vessel or for repairing the failed components, thus restoring the cooling function.
Diego MONTALTI et al
2012-01-01
Monomorphic birds cannot be sexed visually, so discriminant functions based on external morphological variation are frequently used. Our objective was to evaluate the reliability of sex classification functions created from structural measurements of Chilean flamingo Phoenicopterus chilensis museum skins for the gender assignment of live birds. Five measurements were used to develop four discriminant functions: culmen, bill height and width, tarsus length and middle toe claw. The fun...
Reliability analysis of land pipelines for hydrocarbons transportation in Mexico
Leon, D.; Cortes, C. [Inst. Mexicano del Petroleo (Mexico)
2004-07-01
The reliability of a land pipeline operated by PEMEX in Mexico was estimated under a range of failure modes. Reliability and safety were evaluated in terms of the pipeline's internal pressure, bending, fracture toughness and its tension failure mode conditions. Loading conditions were applied individually, while bending and tension loads were applied in a combined fashion. The mechanical properties of the steel were also considered along with the degradation effect of internal corrosion resulting from the type of product being transported. A set of internal pressures and mechanical properties were generated randomly using Monte Carlo simulation. Commercial software was used to obtain the pipeline response under each modeled condition. The response analysis was based on the nonlinear finite element method involving a calculation of maximum stresses and stress concentration factors under conditions of corrosion and no corrosion. The margin between maximum stresses due to internal pressure, tension and bending was evaluated along with the margin between stress concentration factor and fracture initiation toughness. The study showed that internal pressure, stress concentration and tension-bending were the critical failure modes. It was suggested that more research should be conducted to improve the modeling of the deteriorating effects of corrosion and to adjust the probability distribution for fracture toughness and the length/depth defect ratio. The consideration of welding geometries along with features of marine pipelines and pipeline products would help to generalize the study to facilitate the creation of codes for the construction, design, inspection and maintenance of pipelines in Mexico. 7 refs., 1 tab., 14 figs.
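The Monte Carlo step described above, sampling random pressures and material properties and counting limit-state exceedances, can be sketched with a drastically simplified limit state. The thin-walled hoop-stress model and all distribution parameters below are invented assumptions, not the study's finite-element model:

```python
import random

# Hedged sketch: Monte Carlo failure-probability estimate for a pipeline
# under internal pressure, using a simple hoop-stress limit state
# g = yield strength - hoop stress. Geometry and distributions are invented.

def failure_probability(n_samples, seed=42):
    rng = random.Random(seed)
    diameter, thickness = 0.60, 0.012            # m (assumed geometry)
    failures = 0
    for _ in range(n_samples):
        pressure = rng.gauss(12.0e6, 2.0e6)      # Pa, internal pressure
        yield_strength = rng.gauss(415e6, 25e6)  # Pa, assumed steel grade
        hoop_stress = pressure * diameter / (2 * thickness)
        if hoop_stress > yield_strength:         # limit state violated (g < 0)
            failures += 1
    return failures / n_samples

pf = failure_probability(20_000)
```

A real analysis would replace the hoop-stress formula with the nonlinear finite-element response and add corrosion-dependent wall-thickness degradation, but the sampling-and-counting structure is the same.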
Ceramics Analysis and Reliability Evaluation of Structures (CARES). Users and programmers manual
Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.
1990-01-01
This manual describes how to use the Ceramics Analysis and Reliability Evaluation of Structures (CARES) computer program. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. The program uses results from the MSC/NASTRAN or ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effect of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, ninety percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan ninety percent confidence band values are also provided. The probabilistic fast-fracture theories used in CARES, along with the input and output for CARES, are described. Example problems to demonstrate various features of the program are also included. This manual describes the MSC/NASTRAN version of the CARES program.
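The two-parameter Weibull strength model at the core of such codes reduces, for a uniform uniaxial stress state, to Pf = 1 − exp(−V·(σ/σ₀)^m). A minimal sketch; the Weibull modulus m and scale parameter σ₀ below are invented placeholders for a generic ceramic, not CARES defaults:

```python
import math

# Hedged sketch: two-parameter Weibull fast-fracture failure probability
# for a uniform uniaxial stress state, Pf = 1 - exp(-V * (sigma/sigma0)^m).
# m and sigma0 are invented placeholder values.

def weibull_failure_probability(stress, m, sigma0, volume=1.0):
    """Fast-fracture failure probability of a component of effective volume V."""
    return 1.0 - math.exp(-volume * (stress / sigma0) ** m)

# Illustrative: m = 10, sigma_0 = 500 MPa, unit effective volume
pf_low = weibull_failure_probability(300.0, m=10, sigma0=500.0)
pf_high = weibull_failure_probability(450.0, m=10, sigma0=500.0)
```

The steep rise of Pf with stress for large m is exactly why brittle ceramics are designed probabilistically rather than to a single deterministic strength value.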
The reliability of mercury analysis in environmental materials
Heinonen, J.; Suschny, O.
1973-01-01
Mercury occurs in nature in its native elemental as well as in different mineral forms. It has been mined for centuries and is used in many branches of industry, agriculture and medicine. Mercury is very toxic to man and reports of poisoning due to the presence of the element in fish and shellfish caught at Minamata and Niigata, Japan have led not only to local investigations but to multi-national research into the sources and the levels of mercury in the environment. The concentrations at which the element has to be determined in these studies are extremely small, usually of the order of a few parts in 10⁹ parts of environmental material. Few analytical techniques provide the required sensitivity for analysis at such low concentrations, and only two are normally used for mercury: neutron activation analysis and atomic absorption photometry. They are also the most convenient end points of various separation schemes for different organic mercury compounds. Mercury analysis at the ppb-level is beset with many problems: volatility of the metal and its compounds, impurity of reagents, interference by other elements and many other analytical difficulties may influence the results. To be able to draw valid conclusions from the analyses it is necessary to know the reliability attached to the values obtained. To assist laboratories in the evaluation of their analytical performance, the International Atomic Energy Agency through its own laboratory at Seibersdorf already organised in 1967 an intercomparison of mercury analysis in flour. Based on the results obtained at that time, a whole series of intercomparisons of mercury determinations in nine different environmental materials was undertaken in 1971. The materials investigated included corn and wheat flour, spray-dried animal blood serum, fish solubles, milk powder, saw dust, cellulose, lacquer paint and coloric material.
Multi-Unit Considerations for Human Reliability Analysis
St. Germain, S.; Boring, R.; Banaseanu, G.; Akl, Y.; Chatri, H.
2017-03-01
This paper uses the insights from the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) methodology to help identify human actions currently modeled in the single unit PSA that may need to be modified to account for additional challenges imposed by a multi-unit accident as well as identify possible new human actions that might be modeled to more accurately characterize multi-unit risk. In identifying these potential human action impacts, the use of the SPAR-H strategy to include both errors in diagnosis and errors in action is considered as well as identifying characteristics of a multi-unit accident scenario that may impact the selection of the performance shaping factors (PSFs) used in SPAR-H. The lessons learned from the Fukushima Daiichi reactor accident will be addressed to further help identify areas where improved modeling may be required. While these multi-unit impacts may require modifications to a Level 1 PSA model, it is expected to have much more importance for Level 2 modeling. There is little currently written specifically about multi-unit HRA issues. A review of related published research will be presented. While this paper cannot answer all issues related to multi-unit HRA, it will hopefully serve as a starting point to generate discussion and spark additional ideas towards the proper treatment of HRA in a multi-unit PSA.
Strange functions in real analysis
Kharazishvili, AB
2005-01-01
Weierstrass and Blancmange nowhere differentiable functions, Lebesgue integrable functions with everywhere divergent Fourier series, and various nonintegrable Lebesgue measurable functions. While dubbed strange or "pathological," these functions are ubiquitous throughout mathematics and play an important role in analysis, not only as counterexamples of seemingly true and natural statements, but also to stimulate and inspire the further development of real analysis.Strange Functions in Real Analysis explores a number of important examples and constructions of pathological functions. After introducing the basic concepts, the author begins with Cantor and Peano-type functions, then moves to functions whose constructions require essentially noneffective methods. These include functions without the Baire property, functions associated with a Hamel basis of the real line, and Sierpinski-Zygmund functions that are discontinuous on each subset of the real line having the cardinality continuum. Finally, he considers e...
78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard
2013-07-29
...; Order No. 782] Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy... Analysis (MOD) Reliability Standard MOD- 028-2, submitted to the Commission for approval by the North... Organization. The Commission finds that the proposed Reliability Standard represents an improvement over the...
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
Field Programmable Gate Arrays (FPGAs) integrated circuits (IC) are one of the key electronic components in today's sophisticated launch and space vehicle complex avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering costs (NRE) and short design cycle. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safetly, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)
2008-07-01
In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
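The Markov-chain construction that such tools automate can be sketched for the paradigm '2-out-of-3' case: with identical, independently repairable components, the chain over the number of failed components is birth-death, and detailed balance gives the steady-state availability in closed form. The failure and repair rates below are illustrative, not values from this work:

```python
from math import comb

# Hedged sketch: steady-state availability of a 2-out-of-3 system of
# identical, independently repairable components. State k = number of
# failed components; failures at (3-k)*lam, repairs at k*mu, so detailed
# balance gives p_k proportional to C(3, k) * (lam/mu)^k. Rates invented.

def two_out_of_three_availability(lam, mu):
    rho = lam / mu
    weights = [comb(3, k) * rho ** k for k in range(4)]
    total = sum(weights)
    p = [w / total for w in weights]
    # the system is up while at most one component has failed
    return p[0] + p[1]

A = two_out_of_three_availability(lam=1e-3, mu=1e-1)  # per-hour rates
```

Sensitivity of A to lam and mu can then be obtained by differentiating this expression, which is the kind of derivative the adjoint procedure computes efficiently for models far too large to write in closed form.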
Shirley, Rachel Elizabeth
Nuclear power plant (NPP) simulators are proliferating in academic research institutions and national laboratories in response to the availability of affordable, digital simulator platforms. Accompanying the new research facilities is a renewed interest in using data collected in NPP simulators for Human Reliability Analysis (HRA) research. An experiment conducted in The Ohio State University (OSU) NPP Simulator Facility develops data collection methods and analytical tools to improve use of simulator data in HRA. In the pilot experiment, student operators respond to design basis accidents in the OSU NPP Simulator Facility. Thirty-three undergraduate and graduate engineering students participated in the research. Following each accident scenario, student operators completed a survey about perceived simulator biases and watched a video of the scenario. During the video, they periodically recorded their perceived strength of significant Performance Shaping Factors (PSFs) such as Stress. This dissertation reviews three aspects of simulator-based research using the data collected in the OSU NPP Simulator Facility: First, a qualitative comparison of student operator performance to computer simulations of expected operator performance generated by the Information Decision Action Crew (IDAC) HRA method. Areas of comparison include procedure steps, timing of operator actions, and PSFs. Second, development of a quantitative model of the simulator bias introduced by the simulator environment. Two types of bias are defined: Environmental Bias and Motivational Bias. This research examines Motivational Bias--that is, the effect of the simulator environment on an operator's motivations, goals, and priorities. A bias causal map is introduced to model motivational bias interactions in the OSU experiment. Data collected in the OSU NPP Simulator Facility are analyzed using Structural Equation Modeling (SEM). Data include crew characteristics, operator surveys, and time to recognize
Reliability and risk analysis data base development: an historical perspective
Fragola, Joseph R.
1996-01-01
Collection of empirical data and data base development for use in the prediction of the probability of future events has a long history. Dating back at least to the 17th century, safe passage events and mortality events were collected and analyzed to uncover prospective underlying classes and associated class attributes. Tabulations of these developed classes and associated attributes formed the underwriting basis for the fledgling insurance industry. Much earlier, master masons and architects used design rules of thumb to capture the experience of the ages and thereby produce structures of incredible longevity and reliability (Antona, E., Fragola, J. and Galvagni, R. Risk based decision analysis in design. Fourth SRA Europe Conference Proceedings, Rome, Italy, 18-20 October 1993). These rules served so well in producing robust designs that it was not until almost the 19th century that the analysis (Charlton, T.M., A History Of Theory Of Structures In The 19th Century, Cambridge University Press, Cambridge, UK, 1982) of masonry voussoir arches, begun by Galileo some two centuries earlier (Galilei, G. Discorsi e dimostrazioni mathematiche intorno a due nuove science, (Discourses and mathematical demonstrations concerning two new sciences, Leiden, The Netherlands, 1638), was placed on a sound scientific basis. Still, with the introduction of new materials (such as wrought iron and steel) and the lack of theoretical knowledge and computational facilities, approximate methods of structural design abounded well into the second half of the 20th century. To this day structural designers account for material variations and gaps in theoretical knowledge by employing factors of safety (Benvenuto, E., An Introduction to the History of Structural Mechanics, Part II: Vaulted Structures and Elastic Systems, Springer-Verlag, NY, 1991) or codes of practice (ASME Boiler and Pressure Vessel Code, ASME, New York) originally developed in the 19th century (Antona, E., Fragola, J. and
Reliability analysis of the service water system of Angra 1 reactor
Tayt-Sohn, L.C.; Oliveira, L.F.S. de.
1984-01-01
A reliability analysis of the service water system is performed for use in evaluating the unreliability of the Component Cooling System (SRC) during large loss-of-coolant accidents in nuclear power plants. (E.G.) [pt
Reliability analysis of the service water system of Angra 1 reactor
Oliveira, L.F.S. de; Fleming, P.V.; Frutuoso e Melo, P.F.F.; Tayt-Sohn, L.C.
1983-01-01
A reliability analysis of the service water system is performed for use in evaluating the unreliability of the component cooling system (SRC) during large loss-of-coolant accidents in nuclear power plants. (E.G.) [pt
Human Reliability Analysis in Support of Risk Assessment for Positive Train Control
2003-06-01
This report describes an approach to evaluating the reliability of human actions that are modeled in a probabilistic risk assessment (PRA) of train control operations. This approach to human reliability analysis (HRA) has been applied in the case o...
Cepin, M.
2008-01-01
The human reliability analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features which may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with a focus on dependency, comparing the Institute Jozef Stefan human reliability analysis (IJS-HRA) and the standardized plant analysis risk human reliability analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by the development of more detailed guidelines for human reliability analysis, with many practical examples for all steps of the process of evaluating human performance.
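For readers unfamiliar with SPAR-H, the arithmetic behind its human error probabilities can be sketched as follows. This is a simplified illustration, not the full SPAR-H worksheet (which defines eight specific PSFs and rules for assigning their levels); the nominal HEP and the PSF multipliers used here are hypothetical.

```python
def spar_h_hep(nhep, psf_multipliers):
    """SPAR-H-style adjustment: the nominal HEP is scaled by the composite PSF
    multiplier; when three or more negative PSFs apply, the bounded adjustment
    factor HEP = NHEP*C / (NHEP*(C - 1) + 1) keeps the result below 1."""
    composite = 1.0
    negative = 0
    for m in psf_multipliers:
        composite *= m
        if m > 1:
            negative += 1
    if negative >= 3:
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(1.0, nhep * composite)

# Hypothetical diagnosis task, nominal HEP 1e-2, three negative PSF levels
# (e.g. extreme stress, poor ergonomics, high complexity):
hep = spar_h_hep(0.01, [10, 2, 5])
print(hep)
```

The paper's point about subjectivity shows up directly here: a different analyst choosing different PSF levels changes the multipliers, and hence the HEP, by orders of magnitude.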
Cepin, M.
2007-01-01
The Human Reliability Analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features which may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with a focus on dependency, comparing the Institute Jozef Stefan - Human Reliability Analysis (IJS-HRA) and the Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by the development of more detailed guidelines for human reliability analysis, with many practical examples for all steps of the process of evaluating human performance. (author)
Theoretical numerical analysis a functional analysis framework
Atkinson, Kendall
2005-01-01
This textbook prepares graduate students for research in numerical analysis/computational mathematics by giving them a mathematical framework embedded in functional analysis and focused on numerical analysis. This helps the student move rapidly into a research program. The text covers basic results of functional analysis, approximation theory, Fourier analysis and wavelets, iteration methods for nonlinear equations, finite difference methods, Sobolev spaces and weak formulations of boundary value problems, finite element methods, elliptic variational inequalities and their numerical solu
Operator reliability analysis during NPP small break LOCA
Zhang Jiong; Chen Shenglin
1990-01-01
To assess the human factors characteristics of an NPP main control room (MCR) design, the MCR operator reliability during a small break LOCA is analyzed, and some approaches for improving MCR operator reliability are proposed based on the analysis results.
Wind turbine reliability : a database and analysis approach.
Linsday, James (ARES Corporation); Briand, Daniel; Hill, Roger Ray; Stinebaugh, Jennifer A.; Benjamin, Allan S. (ARES Corporation)
2008-02-01
The US wind industry has experienced remarkable growth since the turn of the century. At the same time, the physical size and electrical generation capabilities of wind turbines have also grown remarkably. As the market continues to expand, and as wind generation continues to gain a significant share of the generation portfolio, the reliability of wind turbine technology becomes increasingly important. This report addresses how operations and maintenance costs are related to unreliability, that is, the failures experienced by systems and components. Reliability tools are demonstrated, the data needed to understand and catalog failure events are described, and practical wind turbine reliability models are illustrated, including preliminary results. This report also presents a continuing process for managing industry requirements, needs, and expectations related to Reliability, Availability, Maintainability, and Safety. A simply stated goal of this process is to better understand and improve the operable reliability of wind turbine installations.
Sözer, Hasan; Tekinerdogan, B.; Aksit, Mehmet; de Lemos, Rogerio; Gacek, Cristina
2007-01-01
Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.
Reliability analysis of water distribution systems under uncertainty
Kansal, M.L.; Kumar, Arun; Sharma, P.B.
1995-01-01
In most of the developing countries, Water Distribution Networks (WDN) are of intermittent type because of the shortage of safe drinking water. Failure of a pipeline(s) in such cases causes not only a fall in one or more nodal heads but also poor connectivity of the source with the various demand nodes of the system. Most previous works have used a two-step algorithm based on a pathset or cutset approach for connectivity analysis. The computations become more cumbersome when connectivity of all demand nodes taken together with that of supply is carried out. In the present paper, network connectivity based on the concept of the Appended Spanning Tree (AST) is suggested to compute global network connectivity, which is defined as the probability of the source node being connected with all the demand nodes simultaneously. The concept of the AST has distinct advantages as it attacks the problem directly rather than indirectly, as most studies so far have done. Since the water distribution system is a repairable one, a general expression for pipeline availability using the failure/repair rates is considered. Furthermore, the sensitivity of global reliability estimates to likely errors in the estimation of the failure/repair rates of the various pipelines is also studied.
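The pipeline availability expression mentioned above, for a repairable component with constant failure rate λ and repair rate μ, reduces in the steady state to A = μ/(λ + μ). A minimal sketch with hypothetical rates, including a finite-difference check of how an error in the failure-rate estimate propagates into the availability:

```python
def availability(failure_rate, repair_rate):
    """Steady-state availability of a repairable pipeline: A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

# Hypothetical rates (per year): one failure every two years, ~18-day mean repair.
lam, mu = 0.5, 20.0
a = availability(lam, mu)

# Finite-difference sensitivity: effect of a 10% overestimate of the failure rate.
da = availability(lam * 1.1, mu) - a
print(a, da)
```

The sign and magnitude of `da` is exactly the kind of sensitivity-to-rate-error question the paper studies for global network reliability.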
CRITICAL ANALYSIS OF THE RELIABILITY OF INTUITIVE MORAL DECISIONS
V. V. Nadurak
2017-06-01
Full Text Available Purpose of the research is a critical analysis of the reliability of intuitive moral decisions. Methodology. The work is based on the methodological attitude of empirical ethics, involving the use of findings from empirical research in ethical reflection and decision making. Originality. The main kinds of intuitive moral decisions are identified: 1) intuitively emotional decisions, i.e. decisions made under the influence of the emotions that accompany the process of moral decision making; 2) decisions made under the influence of morally risky psychological aptitudes, i.e. unconscious human tendencies that make us think in a certain way and make decisions that are unacceptable from the logical and ethical point of view; 3) intuitively normative decisions, i.e. decisions made under the influence of socially learned norms that cause the evaluative feeling «good-bad» without conscious reasoning. It was found that all of these kinds of intuitive moral decisions can lead to mistakes in moral life. Conclusions. Considering the fact that intuition systematically leads to erroneous moral decisions, intuitive reaction cannot be the only source for making such decisions. Conscious rational reasoning can compensate for the weaknesses of intuition. In this case, there is a need for a theoretical model that would structure the knowledge about the interactions between intuitive and rational factors in moral decision making and become the basis for suggestions that would help us make the right moral decision.
Reliability analysis of load-sharing systems with memory.
Wang, Dewei; Jiang, Chendi; Park, Chanseok
2018-02-22
The load-sharing model has been studied since the early 1940s to account for the stochastic dependence of components in a parallel system. It assumes that, as components fail one by one, the total workload applied to the system is shared by the remaining components and thus affects their performance. Such dependent systems have been studied in many engineering applications, including fiber composites, manufacturing, power plants, workload analysis of computing, and software and hardware reliability. Many statistical models have been proposed to analyze the impact of each redistribution of the workload, i.e., the changes in the hazard rate of each remaining component. However, they do not consider how long a surviving component has worked prior to the redistribution. We call such load-sharing models memoryless. To remedy this potential limitation, we propose a general framework for load-sharing models that accounts for the work history. Through simulation studies, we show that an inappropriate use of the memoryless assumption could lead to inaccurate inference on the impact of redistribution. Further, a real-data example of plasma display devices is analyzed to illustrate our methods.
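The memoryless case discussed above can be simulated directly. Below is a sketch for a two-component equal-load-sharing system with exponential lifetimes, where the survivor's hazard doubles after the first failure; the baseline rate is assumed for illustration, and the simulated mean time to failure can be checked against the closed form 1/λ.

```python
import random

random.seed(42)
lam = 1.0  # assumed per-unit-time failure rate of each component at half load

def system_lifetime():
    """Two-component equal load sharing, memoryless: both components run at
    rate lam; after the first failure the survivor carries the full load,
    so its hazard doubles to 2*lam. Exponential clocks make this Markovian."""
    t_first = random.expovariate(2 * lam)  # min of two exp(lam) lifetimes
    t_rest = random.expovariate(2 * lam)   # survivor at doubled hazard
    return t_first + t_rest

n = 200_000
mttf = sum(system_lifetime() for _ in range(n)) / n
print(mttf)  # closed form: 1/(2*lam) + 1/(2*lam) = 1/lam
```

A model with memory, as proposed in the paper, would instead let `t_rest` depend on how long the survivor had already been running, which the exponential draw above deliberately ignores.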
Yeh, Wei-Chang
Network reliability is an important index for the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in the ANN to evaluate SNRFs. According to the experimental results on the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach, with at least 16.6% improvement in the median absolute deviation at the cost of an extra 2 s on average for all experiments.
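The Monte Carlo estimation step that such an approach trains on can be illustrated with the classic five-edge bridge network, for which the two-terminal reliability has a known closed form. This sketch assumes independent edges with a common reliability p; it is not the authors' SqANN code.

```python
import random
from collections import deque

random.seed(1)

# Bridge network: source s=0, sink t=3; five edges, each up with probability p.
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
p = 0.9

def connected(up_edges, s=0, t=3):
    """BFS over the surviving edges: is t reachable from s?"""
    adj = {n: [] for n in range(4)}
    for a, b in up_edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {s}, deque([s])
    while queue:
        n = queue.popleft()
        if n == t:
            return True
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return False

n_trials = 100_000
hits = sum(connected([e for e in EDGES if random.random() < p])
           for _ in range(n_trials))
estimate = hits / n_trials
exact = 2 * p**5 - 5 * p**4 + 2 * p**3 + 2 * p**2  # closed form for the bridge
print(estimate, exact)
```

In the paper's setting, many such Monte Carlo estimates over a designed grid of edge reliabilities form the training targets for the network that approximates the symbolic reliability function.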
Procedure for conducting a human-reliability analysis for nuclear power plants. Final report
Bell, B.J.; Swain, A.D.
1983-05-01
This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented, along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study.
User's manual of SECOM2: a computer code for seismic system reliability analysis
Uchiyama, Tomoaki; Oikawa, Tetsukuni; Kondo, Masaaki; Tamura, Kazuo
2002-03-01
This report is the user's manual of the seismic system reliability analysis code SECOM2 (Seismic Core Melt Frequency Evaluation Code Ver. 2), developed at the Japan Atomic Energy Research Institute for systems reliability analysis, which is one of the tasks of seismic probabilistic safety assessment (PSA) of nuclear power plants (NPPs). The SECOM2 code has many functions, such as: calculation of component failure probabilities based on the response factor method; extraction of minimal cut sets (MCSs); calculation of conditional system failure probabilities for given seismic motion levels at the site of an NPP; calculation of accident sequence frequencies and the core damage frequency (CDF) using the seismic hazard curve; importance analysis using various indicators; uncertainty analysis; calculation of the CDF taking into account the effect of correlations of the responses and capacities of components; and efficient sensitivity analysis by changing parameters of the responses and capacities of components. These analyses require as inputs the fault tree (FT) representing the occurrence conditions of system failures and core damage, information about the responses and capacities of components, and the seismic hazard curve for the NPP site. This report presents the models and methods applied in the SECOM2 code and how to use those functions. (author)
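The response factor method itself is not reproduced here, but the conditional component failure probabilities it feeds are typically modeled with a lognormal fragility curve, P_f(a) = Φ(ln(a/A_m)/β), where A_m is the median capacity and β the composite logarithmic standard deviation. A minimal sketch with hypothetical parameters:

```python
import math

def fragility(a, a_m, beta):
    """Lognormal fragility: conditional failure probability at seismic motion
    level a, given median capacity a_m and composite log-std beta."""
    z = math.log(a / a_m) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical component: median capacity 0.8 g, beta 0.4.
print(fragility(0.8, 0.8, 0.4))  # at the median capacity the probability is 0.5
print(fragility(0.3, 0.8, 0.4))  # well below capacity: small failure probability
```

Convolving curves like this with the site hazard curve, over the MCSs of the fault tree, is conceptually what produces the seismic CDF the manual describes.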
Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.
2016-01-01
Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on the validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…
Effects of Differential Item Functioning on Examinees' Test Performance and Reliability of Test
Lee, Yi-Hsuan; Zhang, Jinming
2017-01-01
Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…
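The reliability definition used in simulations of this kind, the ratio of true-score variance to observed-score variance, can be reproduced in a few lines. This sketch assumes normally distributed true scores and errors with hypothetical standard deviations; with true-score variance 1 and error variance 0.25, the expected reliability is 1/1.25 = 0.8.

```python
import random

random.seed(7)

# Classical test theory: observed = true + error, and
# reliability = var(true) / var(observed).
n = 50_000
true_sd, error_sd = 1.0, 0.5  # assumed population parameters
true = [random.gauss(0, true_sd) for _ in range(n)]
obs = [t + random.gauss(0, error_sd) for t in true]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

reliability = variance(true) / variance(obs)
print(reliability)  # expected value: 1 / (1 + 0.25) = 0.8
```

In a DIF study, item responses for the focal group would be generated from shifted item parameters, inflating error variance (or biasing scores) and thereby depressing this ratio.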
Emotional-volitional components of operator reliability [sensorimotor function testing under stress]
Mileryan, Y. A.
1975-01-01
Sensorimotor function testing in a tracking task under stressful working conditions established a psychological characterization of the successful aviation pilot: motivation significantly increased the reliability and effectiveness of their work. Their activities were aimed at suppressing weariness and the feeling of fear caused by the stress factors; they showed patience, endurance, persistence, and a capacity for lengthy volitional efforts.
Reliability of the EK scale, a functional test for non-ambulatory persons with Duchenne dystrophy
Steffensen, Birgit F.; Hyde, Sylvia A.; Attermann, Jørn
2002-01-01
The EK (Egen Klassifikation) scale was developed to assess overall functional ability in the non-ambulatory stage of Duchenne muscular dystrophy (DMD). The purpose of this study was to examine the reliability of the EK scale. Six subjects with DMD, selected as representative of the entire range...
Reneman, MF; Brouwer, S; Meinema, A; Dijkstra, PU; Geertzen, JHB; Groothoff, JW
2004-01-01
Aim of this study was to investigate test-retest reliability of the Isernhagen Work System Functional Capacity Evaluation (IWS FCE) in healthy subjects. The IWS FCE consists of 28 tests that reflect work-related activities such as lifting, carrying, bending, etc. A convenience sample of 26 healthy
Hansen, Flemming Yssing; Carneiro, K.
1977-01-01
A simple numerical method, which unifies the calculation of structure factors from X-ray or neutron diffraction data with the calculation of reliable pair distribution functions, is described. The objective of the method is to eliminate systematic errors in the normalizations and corrections of t...
Cintas, Holly Lea; Parks, Rebecca; Don, Sarah; Gerber, Lynn
2011-01-01
Content validity and reliability of the Brief Assessment of Motor Function (BAMF) Upper Extremity Gross Motor Scale (UEGMS) were evaluated in this prospective, descriptive study. The UEGMS is one of five BAMF ordinal scales designed for quick documentation of gross, fine, and oral motor skill levels. Designed to be independent of age and…
Marques, M.; Bassi, C.; Bentivoglio, F.
2012-01-01
In support of a PSA (Probabilistic Safety Assessment) performed at the design level on the 2400 MWth Gas-cooled Fast Reactor, the functional reliability of the decay heat removal (DHR) system working in natural circulation has been estimated in two transient situations corresponding to an 'aggravated' Loss of Flow Accident (LOFA) and a Loss of Coolant Accident (LOCA). The reliability analysis was based on the RMPS methodology. The reliability and global sensitivity analyses use uncertainty propagation by Monte Carlo techniques. The DHR system consists of: 1) three dedicated DHR loops: the choice of 3 loops (3*100% redundancy) is made by assuming that one could be lost due to the accident initiating event (a break, for example) and that another must be supposed unavailable (single failure criterion); 2) a metallic guard containment enclosing the primary system (referred to as the close containment), not pressurized in normal operation, having a free volume such that the fast primary helium expansion gives an equilibrium pressure of 1.0 MPa in the first part of the transient (a few hours). Each dedicated DHR loop, designed to work in forced circulation with blowers or in natural circulation, is composed of: 1) a primary loop (cross-duct connected to the core vessel), with a driving height of 10 meters between the core and the DHX mid-plane; 2) a secondary circuit filled with pressurized water at 1.0 MPa (driving height of 5 meters for natural circulation DHR); 3) a ternary pool, initially at 50 °C, whose volume is determined to handle one day of heat extraction (after this time delay, additional measures are foreseen to fill up the pool). The results obtained on the reliability of the DHR system and on the most important input parameters are very different from one scenario to the other, showing the necessity for the PSA to perform a specific reliability analysis of the passive system for each considered scenario. The analysis shows that the DHR system working in natural circulation is
Spinal appearance questionnaire: factor analysis, scoring, reliability, and validity testing.
Carreon, Leah Y; Sanders, James O; Polly, David W; Sucato, Daniel J; Parent, Stefan; Roy-Beaudry, Marjolaine; Hopkins, Jeffrey; McClung, Anna; Bratcher, Kelly R; Diamond, Beverly E
2011-08-15
Cross-sectional. Although the SAQ has been administered to a large sample of patients with adolescent idiopathic scoliosis (AIS) treated surgically, its psychometric properties have not been fully evaluated. This study presents the factor analysis and scoring of the Spinal Appearance Questionnaire (SAQ) and evaluates its psychometric properties. The SAQ and the Scoliosis Research Society-22 (SRS-22) were administered to AIS patients who were being observed, braced, or scheduled for surgery. Standard demographic data and radiographic measures, including Lenke type and curve magnitude, were also collected. Of the 1802 patients, 83% were female, with a mean age of 14.8 years and a mean initial Cobb angle of 55.8° (range, 0°-123°). Of the 32 items of the SAQ, 15 loaded on two factors with consistent and significant correlations across all Lenke types: an Appearance factor (items 1-10) and an Expectations factor (items 12-15). Responses are summed, giving a range of 5 to 50 for the Appearance domain and 5 to 20 for the Expectations domain. Cronbach's α was 0.88 for both domains and the Total score, with a test-retest reliability of 0.81 for Appearance and 0.91 for Expectations. Correlations with major curve magnitude were higher for the SAQ Appearance and SAQ Total scores than for the SRS Appearance and SRS Total scores. The SAQ and SRS-22 scores were statistically significantly different in patients who were scheduled for surgery compared with those who were observed or braced. The SAQ is a valid measure of self-image in patients with AIS, with greater correlation to curve magnitude than the SRS Appearance and Total scores. It also discriminates patients who require surgery from those who do not.
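Cronbach's α, reported above for the SAQ domains, is computed from the item variances and the variance of the total score: α = k/(k−1) · (1 − Σ var(item_i)/var(total)). A self-contained sketch on toy data (hypothetical responses, not the SAQ dataset):

```python
def cronbach_alpha(item_scores):
    """item_scores: one list per item, each with one score per respondent."""
    k = len(item_scores)           # number of items
    n = len(item_scores[0])        # number of respondents

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(var(item) for item in item_scores)
    return k / (k - 1) * (1.0 - item_var_sum / var(totals))

# Toy data: three items, five respondents (hypothetical Likert-style scores).
items = [[3, 4, 5, 2, 4],
         [2, 4, 5, 1, 4],
         [3, 5, 4, 2, 5]]
print(cronbach_alpha(items))
```

Values near the 0.88 reported for the SAQ indicate that the items within a domain vary together rather than independently.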
An efficient phased mission reliability analysis for autonomous vehicles
Remenyte-Prescott, R., E-mail: R.Remenyte-Prescott@nottingham.ac.u [Nottingham Transportation Engineering Centre, Faculty of Engineering, University of Nottingham, Nottingham NG7 2RD (United Kingdom); Andrews, J.D. [Nottingham Transportation Engineering Centre, Faculty of Engineering, University of Nottingham, Nottingham NG7 2RD (United Kingdom); Chung, P.W.H. [Department of Computer Science, Loughborough University, Loughborough LE11 3TU (United Kingdom)
2010-03-15
Autonomous systems are becoming more commonly used, especially in hazardous situations. Such systems are expected to make their own decisions about future actions when some capabilities degrade due to failures of their subsystems. Such decisions are made without human input, therefore they need to be well-informed in a short time when the situation is analysed and future consequences of the failure are estimated. The future planning of the mission should take account of the likelihood of mission failure. The reliability analysis for autonomous systems can be performed using the methodologies developed for phased mission analysis, where the causes of failure for each phase in the mission can be expressed by fault trees. Unmanned autonomous vehicles (UAVs) are of a particular interest in the aeronautical industry, where it is a long term ambition to operate them routinely in civil airspace. Safety is the main requirement for the UAV operation and the calculation of failure probability of each phase and the overall mission is the topic of this paper. When components or subsystems fail or environmental conditions throughout the mission change, these changes can affect the future mission. The new proposed methodology takes into account the available diagnostics data and is used to predict future capabilities of the UAV in real time. Since this methodology is based on the efficient BDD method, the quickly provided advice can be used in making decisions. When failures occur appropriate actions are required in order to preserve safety of the autonomous vehicle. The overall decision making strategy for autonomous vehicles is explained in this paper. Some limitations of the methodology are discussed and further improvements are presented based on experimental results.
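The paper's BDD-based method is considerably more general, but the basic phased-mission bookkeeping can be sketched for the simplest case: independent exponential components, each of which must survive the cumulative time it is in use across the phases that require it. The component names, failure rates, and phase durations below are hypothetical.

```python
import math

# Hypothetical three-phase UAV-like mission (taxi, cruise, land).
# Failure rates are per hour; each phase lists the components it requires.
rates = {"nav": 1e-4, "comms": 5e-5, "engine": 2e-4}
phases = [
    (0.5, {"engine", "nav"}),
    (3.0, {"engine", "nav", "comms"}),
    (0.5, {"engine", "comms"}),
]

def mission_reliability():
    """Each component must survive its own total usage time; with independent
    exponential components, mission reliability is the product of survivals."""
    usage = {c: 0.0 for c in rates}
    for duration, needed in phases:
        for c in needed:
            usage[c] += duration
    return math.prod(math.exp(-rates[c] * usage[c]) for c in usage)

print(mission_reliability())
```

Real phased-mission fault trees capture redundancy and phase-dependent logic that this series model ignores, which is exactly why the BDD representation is needed for efficiency.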
Lars Donath
2017-10-01
Full Text Available Abstract Background Aging is accompanied by a decline of executive function. Aerobic exercise training induces moderate improvements of cognitive domains (i.e., attention, processing, executive function, memory) in seniors. The most conclusive data are obtained from studies with dementia or cognitive impairment. Confident detection of exercise training effects requires adequate between-day reliability and low day-to-day variability obtained from acute studies. These absolute and relative reliability measures have not yet been examined for a single aerobic training session in seniors. Methods Twenty-two healthy and physically active seniors (age: 69 ± 3 y, BMI: 24.8 ± 2.2, VO2peak: 32 ± 6 mL/kg bodyweight) were enrolled in this randomized controlled cross-over study. A repeated between-day comparison [i.e., day 1 (habituation) vs. day 2 & day 2 vs. day 3] of executive function testing (Eriksen-Flanker-Test, Stroop-Color-Test, Digit-Span, Five-Point-Test) before and after aerobic cycling exercise at 70% of the heart rate reserve [0.7 × (HRmax − HRrest)] was conducted. Reliability measures were calculated for pre, post and change scores. Results Large between-day differences between day 1 and 2 were found for reaction times (Flanker and Stroop Color testing) and completed figures (Five-Point test) at pre and post testing (0.002 < p < 0.05, 0.16 < ηp² < 0.38). These differences notably declined when comparing day 2 and 3. Absolute between-day variability (CoV) dropped from 10 to 5% when comparing day 2 vs. day 3 instead of day 1 vs. day 2. ICC ranges also increased from day 1 vs. day 2 (0.65 < ICC < 0.87) to day 2 vs. day 3 (0.40 < ICC < 0.93). Interestingly, reliability measures for pre-post change scores were low (0.02 < ICC < 0.71). These data did not improve when comparing day 2 with day 3. During inhibition tests, reaction times showed excellent reliability values compared to the poor to fair reliability of
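Test-retest ICCs like those reported above are commonly the two-way random-effects, absolute-agreement, single-measures form, ICC(2,1). A self-contained sketch of that computation on hypothetical reaction-time data (not the study's data); the exact ICC form used in any given study should be checked against its methods section.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    data: one list per subject, with one score per session (all equal length)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical reaction times (ms) for four subjects on day 2 and day 3.
scores = [[412, 405], [520, 514], [387, 398], [460, 455]]
print(icc_2_1(scores))
```

Stable between-subject differences with small day-to-day shifts, as in this toy matrix, are what push the ICC toward the high values seen for day 2 vs. day 3.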
Iskandar, Ismed; Gondokaryono, Yudi Satria
2016-01-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in one true parameter value relative to another changes the value of the standard deviation in the opposite direction. Given perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
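The competing-risks setting described above can be sketched with a simplified simulation. This is an assumption-laden illustration, not the paper's study: the paper uses Weibull models, while here the Weibull shapes are fixed at 1 (exponential causes) so that a Gamma prior is conjugate and the Bayes estimate has a closed form; all rates, prior parameters and cause names are invented.

```python
import random

random.seed(42)

# Two independent competing failure causes; the system fails at the
# minimum of the cause-specific failure times (illustrative rates).
true_rates = {"cause_A": 0.5, "cause_B": 0.2}

def draw_system_failure(rates):
    times = {c: random.expovariate(r) for c, r in rates.items()}
    cause = min(times, key=times.get)
    return times[cause], cause

n = 500
data = [draw_system_failure(true_rates) for _ in range(n)]
total_time = sum(t for t, _ in data)

estimates = {}
for cause in true_rates:
    failures = sum(1 for _, c in data if c == cause)
    mle = failures / total_time                  # classical estimate
    a0, b0 = 1.0, 1.0                            # Gamma(a0, b0) prior (assumed)
    bayes = (a0 + failures) / (b0 + total_time)  # posterior mean
    estimates[cause] = {"mle": mle, "bayes": bayes}
```

With a vague prior and a large sample the two estimators nearly coincide; the Bayesian advantage the abstract reports appears when the data are sparse and the prior is informative.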
Distribution-level electricity reliability: Temporal trends using statistical analysis
Eto, Joseph H.; LaCommare, Kristina H.; Larsen, Peter; Todd, Annika; Fisher, Emily
2012-01-01
This paper helps to address the lack of comprehensive, national-scale information on the reliability of the U.S. electric power system by assessing trends in U.S. electricity reliability based on the information reported by the electric utilities on power interruptions experienced by their customers. The research analyzes up to 10 years of electricity reliability information collected from 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. We find that reported annual average duration and annual average frequency of power interruptions have been increasing over time at a rate of approximately 2% annually. We find that, independent of this trend, installation or upgrade of an automated outage management system is correlated with an increase in the reported annual average duration of power interruptions. We also find that reliance on IEEE Standard 1366-2003 is correlated with higher reported reliability compared to reported reliability not using the IEEE standard. However, we caution that we cannot attribute reliance on the IEEE standard as having caused or led to higher reported reliability because we could not separate the effect of reliance on the IEEE standard from other utility-specific factors that may be correlated with reliance on the IEEE standard. - Highlights: ► We assess trends in electricity reliability based on the information reported by the electric utilities. ► We use rigorous statistical techniques to account for utility-specific differences. ► We find modest declines in reliability analyzing interruption duration and frequency experienced by utility customers. ► Installation or upgrade of an OMS is correlated with an increase in reported duration of power interruptions. ► We find reliance on IEEE Standard 1366 is correlated with higher reported reliability.
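The roughly 2%-per-year trend reported above corresponds to the slope of a log-linear regression of an interruption index against time. The sketch below is illustrative only: the data are synthetic, generated with a built-in 2% trend, and the index name (SAIDI-style minutes per customer-year) is an assumption, not the paper's data set.

```python
import math

# Synthetic annual interruption-duration index with a built-in 2% trend.
years = list(range(2000, 2010))
saidi = [120 * (1.02 ** (y - 2000)) for y in years]  # minutes/customer-year

# Ordinary least squares slope of log(duration) on year; exp(slope) - 1
# is the average annual growth rate.
n = len(years)
xbar = sum(years) / n
ybar = sum(math.log(s) for s in saidi) / n
slope = (sum((y - xbar) * (math.log(s) - ybar) for y, s in zip(years, saidi))
         / sum((y - xbar) ** 2 for y in years))
annual_growth = math.exp(slope) - 1.0
```

The paper's actual analysis additionally controls for utility-specific effects; this sketch shows only the trend-extraction step.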
Functional Analysis in Interdisciplinary Applications
Nursultanov, Erlan; Ruzhansky, Michael; Sadybekov, Makhmud
2017-01-01
This volume presents current research in functional analysis and its applications to a variety of problems in mathematics and mathematical physics. The book contains over forty carefully refereed contributions to the conference “Functional Analysis in Interdisciplinary Applications” (Astana, Kazakhstan, October 2017). Topics covered include the theory of functions and functional spaces; differential equations and boundary value problems; the relationship between differential equations, integral operators and spectral theory; and mathematical methods in physical sciences. Presenting a wide range of topics and results, this book will appeal to anyone working in the subject area, including researchers and students interested to learn more about different aspects and applications of functional analysis.
Analysis of complete logical structures in system reliability assessment
Amendola, A.; Clarotti, C.A.; Contini, S.; Spizzichino, F.
1980-01-01
The application field of the fault-tree techniques has been explored in order to assess whether the AND-OR structures covered all possible actual binary systems. This resulted in the identification of various situations requiring the complete AND-OR-NOT structures for their analysis. We do not use the term non-coherent for such cases, since the monotonicity or not of a structure function is not a characteristic of a system, but of the particular top event being examined. The report presents different examples of complete fault-trees, which can be examined according to different degrees of approximation. In fact, the exact analysis for the determination of the smallest irredundant bases is very time consuming and actually necessary only in some particular cases (multi-state systems, incidental situations). Therefore, together with the exact procedure, the report shows two different methods of logical analysis that permit the reduction of complete fault-trees to AND-OR structures. Moreover, it discusses the problems concerning the evaluation of the probability distribution of the time to first top event occurrence, once the hypothesis of structure function monotonicity is removed
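The distinction drawn above, that monotonicity is a property of a particular top event rather than of the system, can be checked by brute force on a small example. The AND-OR-NOT top event below is invented for illustration; it is not taken from the report.

```python
from itertools import product

def top_event(a, b, c):
    """Example AND-OR-NOT top event: occurs if (a AND b) OR (c AND NOT a),
    where 1 means the component has failed (illustrative logic)."""
    return (a and b) or (c and not a)

def is_monotone(phi, n):
    """True if flipping any component from working (0) to failed (1)
    can never make the top event stop occurring (coherent structure)."""
    for state in product([0, 1], repeat=n):
        for i in range(n):
            if state[i] == 0:
                worse = list(state)
                worse[i] = 1
                if phi(*state) and not phi(*worse):
                    return False
    return True

print(is_monotone(top_event, 3))   # the NOT gate makes it non-coherent
```

Removing the NOT branch, e.g. the top event (a AND b) OR c, restores monotonicity, which is why AND-OR trees can always be treated with coherent-structure methods.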
Gonnot, R.
1975-01-01
While it may be reasonable to assume that the reliability of a system - the design of which is perfectly known - can be evaluated, it seems less easy to be sure that overall reliability is correctly estimated in the case of multiple redundancies arranged in sequence. Framatome is trying to develop a method of evaluating overall reliability correctly for its installations. For example, the protection systems in its power stations considered as a whole are such that several scram signals may be relayed in sequence when an incident occurs. These signals all involve the same components for a given type of action, but the components themselves are in fact subject to different stresses and constraints, which tend to reduce their reliability. Whatever the sequence in which these signals are transmitted (in a fast-developing accident, for example), it is possible to evaluate the actual reliability of a given system (or component) for different constraints, as the latter are generally obtained via the transient codes. By applying the so-called ''equal probability'' hypothesis one can estimate a reliability acceleration function taking into account the constraints imposed. This function is linear for the principal failure probability distribution laws. By generalizing such a method one can: (1) Perform failure calculations for redundant systems (or components) in a more general way than is possible with event trees, since one of the main parameters is the constraint exercised on that system (or component); (2) Determine failure rates of components on the basis of accelerated tests (up to complete failure of the component) which are quicker than the normal long-term tests (statistical results of operation); (3) Evaluate the multiplication factor for the reliability of a system or component in the case of common mode failures. The author presents the mathematical tools required for such a method and describes their application in the cases mentioned above
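The idea of a stress-dependent acceleration of the failure law can be illustrated with an exponential model. This is a hedged sketch, not the Framatome method: the linear acceleration factor, the base rate and the stress scale are all assumptions chosen only to show how constraints reduce the reliability of the same component.

```python
import math

lam0 = 1e-4  # assumed base failure rate per hour at reference stress

def accel_factor(stress, k=0.5):
    """Assumed linear acceleration in stress (stress = 0 is the reference),
    echoing the linearity noted for the principal distribution laws."""
    return 1.0 + k * stress

def reliability(t, stress):
    """Exponential survival under the accelerated failure rate."""
    return math.exp(-lam0 * accel_factor(stress) * t)

r_ref = reliability(1000, 0.0)  # reference conditions
r_hi = reliability(1000, 4.0)   # acceleration factor 3 at the higher stress
```

The same component thus contributes different reliabilities in different scram sequences, depending on the constraint imposed on it by the transient.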
Modeling cognition dynamics and its application to human reliability analysis
Mosleh, A.; Smidts, C.; Shen, S.H.
1996-01-01
For the past two decades, a number of approaches have been proposed for the identification and estimation of the likelihood of human errors, particularly for use in the risk and reliability studies of nuclear power plants. Despite the widespread use of the most popular among these methods, their fundamental weaknesses are widely recognized, and the treatment of human reliability has been considered as one of the soft spots of risk studies of large technological systems. To alleviate the situation, new efforts have focused on the development of human reliability models based on a more fundamental understanding of operator response and its cognitive aspects
Basdekas, D.L.
1989-05-01
Generic Issue 115 addresses a concern related to the reliability of the Westinghouse reactor protection system for plants using the Westinghouse Solid State Protection System (SSPS). Several options for improving the reliability of the Westinghouse reactor trip function for these plants and their effect on core damage frequency (CDF) and overall risk were evaluated. This regulatory analysis includes a quantitative assessment of the costs and benefits associated with the various options for enhancing the reliability of the Westinghouse SSPS and provides insights for consideration and industry initiatives. No new regulatory requirements are proposed. 25 refs., 11 tabs
I. S. Shumilov
2017-01-01
Full Text Available The paper deals with design requirements for an aviation fuel system (AFS): AFS basic design requirements, reliability, and design precautions to avoid AFS failure. It compares the reliability and fail-safety of the AFS and the aircraft hydraulic system (AHS), considers promising alternative ways to raise the reliability of fuel systems, and elaborates recommendations to improve the reliability of pipeline system components and pipeline systems in general, based on the selection of design solutions. It is extremely advisable to design the AFS and AHS in accordance with Aviation Regulations АП25 and the Accident Prevention Guidelines of ICAO (International Civil Aviation Organization), which will reduce the risk of emergency situations and in some cases even avoid heavy disasters. AFS and AHS designs should be based on uniform principles to ensure the highest reliability and safety. Currently, however, this principle is not fully observed, and the AFS loses in reliability and fail-safety as compared with the AHS. For the failures examined (single failures and their combinations), the guidelines to ensure AFS efficiency should be the same as those adopted in Regulations АП25 for the AHS. This will significantly increase the reliability and fail-safety of fuel systems and of aircraft flights in general, despite a slight increase in AFS mass. The proposed improvements, through the use of component redundancy in the fuel system, will greatly raise the reliability of the fuel system of a passenger aircraft, which will then withstand up to 2 failures without serious consequences for the flight; its reliability and fail-safety design will be similar to those of the AHS, although the above improvement measures will lead to a slightly increased total mass of the fuel system. It is advisable to set a second pump on the engine in parallel with the first one. It will run in case the first one fails for some reason. The second pump, like the first pump, can be driven from the
Advanced reactor passive system reliability demonstration analysis for an external event
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin
2017-01-01
Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event
Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event
Matthew Bucknor
2017-03-01
Full Text Available Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.
Advanced reactor passive system reliability demonstration analysis for an external event
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin [Argonne National Laboratory, Argonne (United States)
2017-03-15
Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.
Operational present status and reliability analysis of the upgraded EAST cryogenic system
Zhou, Z. W.; Zhang, Q. Y.; Lu, X. F.; Hu, L. B.; Zhu, P.
2017-12-01
Since its first commissioning in 2005, the cryogenic system for EAST (Experimental Advanced Superconducting Tokamak) has been cooled down and warmed up for thirteen experimental campaigns. In order to improve refrigeration efficiency and reliability, the EAST cryogenic system was upgraded gradually from 2012 to 2015 with new helium screw compressors and new dynamic-gas-bearing helium turbine expanders with eddy current brakes, to remedy the originally poor mechanical and operational performance. The fully upgraded cryogenic system was put into operation in the eleventh cool-down experiment and has been operated for the latest several experimental campaigns. The upgraded system has successfully coped with the various normal operational modes during cool-down and 4.5 K steady-state operation under pulsed heat load from the tokamak, as well as abnormal fault modes including turbine protection stops. In this paper, the upgraded EAST cryogenic system, including its functional analysis and new cryogenic control networks, is presented in detail. Its present operational status in the latest cool-down experiments is also presented, and the system reliability is analyzed; the analysis shows high reliability and a low fault rate after the upgrade. Finally, some future work necessary to meet the higher reliability requirements of future uninterrupted long-term experimental operation is proposed.
Reliability analysis of reinforced concrete grids with nonlinear material behavior
Neves, Rodrigo A [EESC-USP, Av. Trabalhador Sao Carlense, 400, 13566-590 Sao Carlos (Brazil); Chateauneuf, Alaa [LaMI-UBP and IFMA, Campus de Clermont-Fd, Les Cezeaux, BP 265, 63175 Aubiere cedex (France)]. E-mail: alaa.chateauneuf@ifma.fr; Venturini, Wilson S [EESC-USP, Av. Trabalhador Sao Carlense, 400, 13566-590 Sao Carlos (Brazil)]. E-mail: venturin@sc.usp.br; Lemaire, Maurice [LaMI-UBP and IFMA, Campus de Clermont-Fd, Les Cezeaux, BP 265, 63175 Aubiere cedex (France)
2006-06-15
Reinforced concrete grids are usually used to support large floor slabs. These grids are characterized by a great number of critical cross-sections, where the overall failure is usually sudden. However, nonlinear behavior of concrete leads to the redistribution of internal forces, and accurate reliability assessment becomes mandatory. This paper presents a reliability study on reinforced concrete (RC) grids based on coupling Monte Carlo simulations with response surface techniques. This approach allows us to analyze real RC grids with a large number of failure components. The response surface is used to evaluate the structural safety by using first order reliability methods. The application to simple grids shows the interest of the proposed method and the role of moment redistribution in the reliability assessment.
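The Monte Carlo side of the coupling described above can be sketched on a single limit state. This is a toy illustration, not the paper's analysis: there is no response-surface surrogate, the resistance and load-effect distributions are assumed normal, and all parameter values are invented.

```python
import random

random.seed(1)

def g(r, s):
    """Limit-state function: failure when resistance < load effect."""
    return r - s

# Crude Monte Carlo estimate of the failure probability (illustrative
# normal models; units could be read as kN·m for a cross-section).
n = 100_000
fails = 0
for _ in range(n):
    R = random.gauss(30.0, 3.0)   # resistance (assumed distribution)
    S = random.gauss(20.0, 2.5)   # load effect (assumed distribution)
    if g(R, S) < 0:
        fails += 1
pf = fails / n
```

In the paper's approach a fitted response surface replaces the expensive nonlinear structural model inside this sampling loop, and first order reliability methods are applied to the surrogate.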
Analysis of travel time reliability on Indiana interstates.
2009-09-15
Travel-time reliability is a key performance measure in any transportation system. It is a : measure of quality of travel time experienced by transportation system users and reflects the efficiency : of the transportation system to serve citizens, bu...
Development of a Conservative Model Validation Approach for Reliable Analysis
2015-01-01
CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. [DRAFT] DETC2015-46982: Development of a Conservative Model Validation Approach for Reliable... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account
Cut set-based risk and reliability analysis for arbitrarily interconnected networks
Wyss, Gregory D.
2000-01-01
Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
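The link-failure basis of the search described above can be illustrated by brute-force enumeration on a small network. This sketch is not the patented algorithm: it simply tests every combination of link failures for loss of all-terminal connectivity and keeps the minimal sets; the 4-node bridge topology is invented.

```python
from itertools import combinations

nodes = {0, 1, 2, 3}
links = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]  # illustrative topology

def connected(active_links):
    """All-terminal connectivity check by depth-first search from node 0."""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for a, b in active_links:
            for v in ((b,) if u == a else (a,) if u == b else ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return seen == nodes

cut_sets = []
for k in range(1, len(links) + 1):     # smallest sets first
    for failed in combinations(links, k):
        remaining = [l for l in links if l not in failed]
        if not connected(remaining):
            fs = set(failed)
            # keep only minimal cut sets (no previously found cut inside)
            if not any(c <= fs for c in cut_sets):
                cut_sets.append(fs)
```

The efficient search in the abstract avoids exactly this combinatorial expansion; the brute-force version is useful only to verify results on small examples.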
Smith, Walter T., Jr.; Patterson, John M.
1984-01-01
Literature on analytical methods related to the functional groups of 17 chemical compounds is reviewed. These compounds include acids, acid azides, alcohols, aldehydes, ketones, amino acids, aromatic hydrocarbons, carbodiimides, carbohydrates, ethers, nitro compounds, nitrosamines, organometallic compounds, peroxides, phenols, silicon compounds,…
Automatic Functional Harmonic Analysis
de Haas, W.B.; Magalhães, J.P.; Wiering, F.; Veltkamp, R.C.
2013-01-01
Music scholars have been studying tonal harmony intensively for centuries, yielding numerous theories and models. Unfortunately, a large number of these theories are formulated in a rather informal fashion and lack mathematical precision. In this article we present HarmTrace, a functional model of
Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger
Chen Debiao; Yang Xinglin; Li Yuan; Li Jin
2012-01-01
A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)
Huibing Hao
2015-01-01
Full Text Available Light emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. For different functions (i.e., illumination and color), a lamp may have two or more performance characteristics. When the multiple performance characteristics are dependent, accurately analyzing the system reliability becomes a challenging problem. In this paper, we assume that the system has two performance characteristics, each governed by a random-effects Gamma process, where the random effects capture unit-to-unit differences. The dependency of the performance characteristics is described by a Frank copula function. Via the copula function, a reliability assessment model is proposed. Since the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data is given to demonstrate the usefulness and validity of the proposed model and method.
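The role of the Frank copula in joining the two performance characteristics can be shown in a few lines. This is a hedged sketch, not the paper's model: treating the copula of the two marginal reliabilities as the joint survival probability is the modelling assumption, and the dependence parameter and marginal values are invented.

```python
import math

def frank_copula(u, v, theta):
    """Frank copula C(u, v); theta > 0 gives positive dependence."""
    if theta == 0:
        return u * v  # independence limit
    num = (math.exp(-theta * u) - 1) * (math.exp(-theta * v) - 1)
    return -math.log(1 + num / (math.exp(-theta) - 1)) / theta

# Marginal reliabilities of the two characteristics at some fixed time
# (illustrative values, e.g. luminous flux and color shift).
r_lumen, r_color = 0.95, 0.90

r_indep = r_lumen * r_color                        # independence assumption
r_dep = frank_copula(r_lumen, r_color, theta=5.0)  # positive dependence
```

With positive dependence the joint reliability exceeds the independence product, which is exactly why ignoring the dependency between characteristics misstates system reliability.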
Chip-Hong Chang
2016-08-01
Full Text Available Static Random Access Memory (SRAM has recently been developed into a physical unclonable function (PUF for generating chip-unique signatures for hardware cryptography. The most compelling issue in designing a good SRAM-based PUF (SPUF is that while maximizing the mismatches between the transistors in the cross-coupled inverters improves the quality of the SPUF, this ironically also gives rise to increased memory read/write failures. For this reason, the memory cells of existing SPUFs cannot be reused as storage elements, which increases the overheads of cryptographic system where long signatures and high-density storage are both required. This paper presents a novel design methodology for dual-mode SRAM cell optimization. The design conflicts are resolved by using word-line voltage modulation, dynamic voltage scaling, negative bit-line and adaptive body bias techniques to compensate for reliability degradation due to transistor downsizing. The augmented circuit-level techniques expand the design space to achieve a good solution to fulfill several otherwise contradicting key design qualities for both modes of operation, as evinced by our statistical analysis and simulation results based on complementary metal–oxide–semiconductor (CMOS 45 nm bulk Predictive Technology Model.
IRRAS, Integrated Reliability and Risk Analysis System for PC
Russell, K.D.
1995-01-01
1 - Description of program or function: IRRAS4.16 is a program developed to perform the functions necessary to create and analyze a complete Probabilistic Risk Assessment (PRA). This program includes functions to allow the user to create event trees and fault trees, to define accident sequences and basic event failure data, to solve system and accident sequence fault trees, to quantify cut sets, and to perform uncertainty analysis on the results. Also included in this program are features to allow the analyst to generate reports and displays that can be used to document the results of an analysis. Since this software is a very detailed technical tool, the user of this program should be familiar with PRA concepts and the methods used to perform these analyses. 2 - Method of solution: IRRAS4.16 is written entirely in MODULA-2 and uses an integrated commercial graphics package to interactively construct and edit fault trees. The fault tree solving methods used are industry-recognized top-down algorithms. For quantification, the program uses standard methods to propagate the failure information through the generated cut sets. 3 - Restrictions on the complexity of the problem: Due to the complexity of, and the variety of ways, a fault tree can be defined, it is difficult to define limits on the complexity of the problem solved by this software. It is, however, capable of solving substantial fault trees due to its efficient methods. At this time, the software can efficiently solve problems as large as other software currently used on mainframe computers. Does not include source code
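The cut set quantification step mentioned above is conventionally done with the minimal cut set upper bound. The sketch below shows that standard formula; the basic events, probabilities and cut sets are illustrative and are not taken from IRRAS.

```python
# Minimal cut set upper bound: P(top) <= 1 - prod(1 - P(C_i)) over the
# minimal cut sets C_i, assuming independent basic events (illustrative).

basic_events = {"pump_a": 3e-3, "pump_b": 3e-3, "valve": 1e-4, "power": 5e-4}
min_cut_sets = [{"pump_a", "pump_b"}, {"valve"}, {"power"}]

def mincut_upper_bound(cut_sets, p):
    q = 1.0
    for cs in cut_sets:
        prob = 1.0
        for e in cs:
            prob *= p[e]
        q *= 1.0 - prob
    return 1.0 - q

top = mincut_upper_bound(min_cut_sets, basic_events)
```

For small basic-event probabilities this bound is very close to the exact top event probability, which is why it is the workhorse quantification in PRA tools.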
Reliability analysis of a sensitive and independent stabilometry parameter set.
Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M
2018-01-01
Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.
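The reliability statistics named above (ICC, SEM, MDC) can be computed for a test-retest design as follows. The formula used is the two-way random-effects single-measure ICC(2,1); the small data set is synthetic and illustrative, not taken from the study.

```python
import math

# rows = subjects, columns = day 1 and day 2 scores (synthetic data)
scores = [
    [52.1, 53.0], [47.8, 48.5], [60.3, 59.1],
    [55.0, 56.2], [49.9, 50.5], [58.4, 57.6],
]
n, k = len(scores), len(scores[0])
grand = sum(sum(r) for r in scores) / (n * k)
row_means = [sum(r) / k for r in scores]
col_means = [sum(r[j] for r in scores) / n for j in range(k)]

# Two-way ANOVA sums of squares.
ss_rows = k * sum((m - grand) ** 2 for m in row_means)
ss_cols = n * sum((m - grand) ** 2 for m in col_means)
ss_total = sum((x - grand) ** 2 for r in scores for x in r)
ss_err = ss_total - ss_rows - ss_cols

msr = ss_rows / (n - 1)
msc = ss_cols / (k - 1)
mse = ss_err / ((n - 1) * (k - 1))

icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)  # ICC(2,1)
sd = math.sqrt(ss_total / (n * k - 1))
sem = sd * math.sqrt(1 - icc)       # standard error of measurement
mdc = 1.96 * math.sqrt(2) * sem     # minimal detectable change (95%)
```

MDC built on SEM in this way is the smallest change exceeding measurement noise, which is how the study separates real CoP changes from day-to-day variability.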
Reliability analysis of a sensitive and independent stabilometry parameter set
Nagymáté, Gergely; Orlovits, Zsanett
2018-01-01
Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938
Analysis of NPP protection structure reliability under impact of a falling aircraft
Shul'man, G.S.
1996-01-01
A methodology for evaluating the reliability of NPP protection structures under the impact of a falling aircraft is considered. The methodology is based on a probabilistic analysis of all potential events. The problem is solved in three stages: determination of the loads on structural units, calculation of the local reliability of the protection structures under the assigned loads, and estimation of the overall structure reliability. The proposed methodology may be applied at the NPP design stage and in determining the reliability of existing structures
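The three-stage logic described above ultimately collapses into a total-probability combination over the postulated impact events. The sketch below shows only that final combination step; the scenario set, frequencies and conditional failure probabilities are invented for illustration.

```python
# Annual structure failure probability as a total-probability sum over
# impact scenarios: frequency of impact times conditional (local)
# failure probability given that impact.  All numbers are illustrative.

scenarios = {
    "light_aircraft": (1e-6, 0.01),  # (annual frequency, P(fail | impact))
    "airliner":       (1e-8, 0.50),
}
p_fail_annual = sum(f * q for f, q in scenarios.values())
```

In the full methodology each conditional probability is itself the product of the load determination and local reliability stages.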
Method of reliability allocation based on fault tree analysis and fuzzy math in nuclear power plants
Chen Zhaobing; Deng Jian; Cao Xuewu
2005-01-01
Reliability allocation is a difficult multi-objective optimization problem. It can be applied not only to determine the reliability characteristics of reactor systems, subsystems and main components but also to improve the design, operation and maintenance of nuclear plants. In this paper, fuzzy mathematics, a powerful tool for fuzzy optimization, and fault tree analysis, one of the most effective methods of reliability analysis, are applied to a reliability allocation model to address, respectively, the fuzzy character of some factors and the choice of subsystems. We thus develop a failure rate allocation model on the basis of fault tree analysis and fuzzy mathematics. Six important reliability constraint factors are chosen, according to practical need, for conducting the allocation. Selecting subsystems through top-level fault tree analysis avoids allocating reliability to equipment and components that do not require it. During the allocation process, some factors can be calculated or measured quantitatively, while others can only be assessed qualitatively by expert rating. We therefore adopt fuzzy decision-making and pairwise comparison to realize the reliability allocation with the help of fault tree analysis. Finally, the example of reliability allocation for an emergency diesel generator is used to illustrate the model and to show that it is simple and applicable. (authors)
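After the factor scoring, the final apportionment step can be sketched as a weighted split of a system failure-rate target across the selected subsystems; the subsystem names, weights, and target below are hypothetical stand-ins for the paper's expert-rated factors:

```python
def allocate_failure_rate(lambda_system, weights):
    """Apportion a system failure-rate target to subsystems in
    proportion to their (expert-scored) weight factors."""
    total = sum(weights.values())
    return {name: lambda_system * w / total for name, w in weights.items()}

# Hypothetical aggregate weights for diesel-generator subsystems
weights = {"starting": 4.0, "fuel_supply": 3.0, "cooling": 2.0, "control": 1.0}
alloc = allocate_failure_rate(1.0e-3, weights)  # target: 1e-3 per hour
```

A heavier weight (harder to improve, more complex, etc.) receives a larger share of the allowable failure rate; the fuzzy scoring in the paper determines those weights.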
Improvement of human reliability analysis method for PRA
Tanji, Junichi; Fujimoto, Haruo
2013-09-01
It is required to refine the human reliability analysis (HRA) method, for example by incorporating consideration of the operator's cognitive process into the evaluation of diagnosis and decision-making errors, as part of the development and improvement of methods used in probabilistic risk assessments (PRAs). JNES has developed an HRA method based on ATHENA which is suitable for handling the structured relationship among diagnosis errors, decision-making errors and the operator's cognitive process. This report summarizes outcomes obtained from the improvement of the HRA method, in which enhancements were made to evaluate how degraded plant conditions affect the operator's cognitive process and to evaluate human error probabilities (HEPs) corresponding to the contents of operator tasks. In addition, this report describes the results of case studies on representative accident sequences to investigate the applicability of the HRA method developed. HEPs of the same accident sequences were also estimated using the THERP method, the most widely used HRA method, and the results of the two methods were compared to identify their differences and the issues to be solved. The important conclusions are as follows: (1) Improvement of the HRA method using an operator cognitive action model. The factors to be considered in the evaluation of human errors were clarified, degraded plant safety conditions were incorporated into the HRA, and HEPs affected by the contents of operator tasks were investigated in order to improve the HRA method, which can integrate an operator cognitive action model into the ATHENA method. In addition, the detailed procedure of the improved method was delineated in the form of a flowchart. (2) Case studies and comparison with the results evaluated by the THERP method. Four operator actions modeled in the PRAs of representative BWR5 and 4-loop PWR plants were selected and evaluated as case studies. These cases were also evaluated using
Reliability And Maintenance Analysis Of CCTV Systems Used In Rail Transport
Siergiejczyk Mirosław
2015-11-01
Full Text Available CCTV systems are widely used across a plethora of industrial areas, including transport, where they support transport telematics systems and, among other functions, help ensure travel safety. This paper presents a reliability and maintenance analysis of CCTV. A relationship graph was built, and a Chapman–Kolmogorov system of equations was derived to describe it. Drawing on those equations, relationships were derived for calculating the probability of the system staying in the state of full ability (SPZ), the state of safety hazard (SZB1), and the state of safety unreliability (SB).
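For a three-state model of this kind, the stationary Chapman–Kolmogorov equations can be solved by balancing probability flows between states; the transition rates below are placeholders, not values from the paper:

```python
def steady_state(lam1, mu1, lam2, mu2):
    """Stationary solution of the Chapman-Kolmogorov equations for a
    3-state chain: S_PZ (full ability) -> S_ZB1 (safety hazard) -> S_B
    (safety unreliability), with repairs back to S_PZ at rates mu1, mu2."""
    p0 = 1.0                        # unnormalized probability of S_PZ
    p1 = lam1 * p0 / (mu1 + lam2)   # flow balance at S_ZB1
    p2 = lam2 * p1 / mu2            # flow balance at S_B
    total = p0 + p1 + p2
    return p0 / total, p1 / total, p2 / total

# Hypothetical rates (per hour): degradation lam1, lam2; repair mu1, mu2
p_pz, p_zb1, p_b = steady_state(lam1=1e-4, mu1=0.1, lam2=1e-3, mu2=0.05)
```

With degradation rates far below repair rates, almost all probability mass stays in the full-ability state, as one would expect of a maintained CCTV installation.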
Elementary functional analysis
Shilov, Georgi E
1996-01-01
Introductory text covers basic structures of mathematical analysis (linear spaces, metric spaces, normed linear spaces, etc.), differential equations, orthogonal expansions, Fourier transforms - including problems in the complex domain, especially involving the Laplace transform - and more. Each chapter includes a set of problems, with hints and answers. Bibliography. 1974 edition.
Application of Metric-based Software Reliability Analysis to Example Software
Kim, Man Cheol; Smidts, Carol
2008-07-01
The software reliability of the TELLERFAST ATM software is analyzed using two metric-based software reliability analysis methods: a state-transition-diagram-based method and a test-coverage-based method. The procedures for the software reliability analysis using the two methods and the analysis results are provided in this report. The two methods are found to be complementary, and further research on combining them, so that software reliability analysis benefits from this complementary effect, is recommended
Reliability analysis of pipelines under H2S environment as a part of ageing management
Santosh; Vinod, Gopika; Saraf, R.K.; Ghosh, A.K.; Kushwaha, H.S.
2006-01-01
An ageing management programme in a plant calls for estimating the remaining life of components. Reliability analysis methods using remaining-life estimation models have found wide application in providing directives for ageing management programmes. As a part of the ageing management programme of H2S-based heavy water plants, remaining-life estimation models are applied to the pipelines carrying H2S. Pipelines in an H2S environment are more susceptible to internal corrosion, which reduces the pipeline's load carrying capacity. The objective of this study is to obtain the remaining life of pipelines ageing due to internal corrosion. The ageing assessment of pipelines involves estimating the failure pressure of a pipeline and evaluating the failure surface equation. Several failure pressure models developed for assessing a pipeline's remaining strength under internal corrosion were studied for this purpose. From the study, it was found that the modified B31G failure pressure model is most suitable for modeling the pipeline failure pressure. Because of the non-linearity of the failure surface equation (limit state function) and the non-normal variables, the first order second moment method has been employed for carrying out the reliability analysis. The uncertainties of the random variables on which the limit state function depends are modeled using probability distributions. The failure probabilities of the pipelines have been evaluated for service lives of 15 and 25 years, the current life and the design life, respectively. In addition, a sensitivity analysis was carried out to identify the most sensitive parameters in the reliability estimation. The paper highlights the application of these methodologies in the context of pipeline remaining-life estimation with a suitable case study. (author)
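The first-order second-moment step can be illustrated for the simplest linear limit state g = R − S (failure pressure minus operating pressure) with independent normal variables; the pressure statistics below are hypothetical, not from the paper's case study:

```python
from statistics import NormalDist

def fosm_failure_probability(mu_r, sd_r, mu_s, sd_s):
    """First-order second-moment estimate for the linear limit state
    g = R - S with independent normal R (resistance) and S (load):
    reliability index beta and failure probability Phi(-beta)."""
    beta = (mu_r - mu_s) / ((sd_r ** 2 + sd_s ** 2) ** 0.5)
    return beta, NormalDist().cdf(-beta)

# Hypothetical: corroded failure pressure 12 +/- 1.5 MPa
# versus operating pressure 7 +/- 0.5 MPa
beta, pf = fosm_failure_probability(12.0, 1.5, 7.0, 0.5)
```

For the nonlinear modified-B31G limit state used in the paper, the same index is found iteratively at the design point rather than in closed form.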
Stress and Reliability Analysis of a Metal-Ceramic Dental Crown
Anusavice, Kenneth J; Sokolowski, Todd M.; Hojjatie, Barry; Nemeth, Noel N.
1996-01-01
Interaction of mechanical and thermal stresses with the flaws and microcracks within the ceramic region of metal-ceramic dental crowns can result in catastrophic or delayed failure of these restorations. The objective of this study was to determine the combined influence of induced functional stresses and pre-existing flaws and microcracks on the time-dependent probability of failure of a metal-ceramic molar crown. A three-dimensional finite element model of a porcelain-fused-to-metal (PFM) molar crown was developed using the ANSYS finite element program. The crown consisted of a body porcelain, opaque porcelain, and a metal substrate. The model had a 300 Newton load applied perpendicular to one cusp, a 300 Newton load applied at 30 degrees from the perpendicular load case and directed toward the center, and a 600 Newton vertical load. Ceramic specimens were subjected to a biaxial flexure test and the load to failure of each specimen was measured. The results of the finite element stress analysis and the flexure tests were incorporated into the NASA-developed CARES/LIFE program to determine the Weibull and fatigue parameters and the time-dependent fracture reliability of the PFM crown. CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program.
Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng
2016-01-01
The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions that are expensive to evaluate. This class of methods includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active-learning reliability method combining a Kriging model with Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelization. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that their accuracy has a negligible effect on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies, aimed at selecting multiple sample points at a time, are introduced and compared. The improvement is verified on several challenging examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of samples needed. • The two parallel strategies reduce the number of iterations needed. • The accuracy of the optimal value significantly affects the number of samples.
Konnik, Mikhail V.
2012-04-01
The wavefront coding paradigm can be used not only for compensation of aberrations and depth-of-field improvement but also for optical encryption. When a diffractive optical element (DOE) with a known point spread function (PSF) is placed in the optical path, an optical convolution of the image with that PSF occurs, and an optically encoded image is registered instead of the true image. Decoding of the registered image can be performed using standard digital deconvolution methods. In this class of optical-digital systems, the PSF of the DOE serves as the encryption key; the reliability and cryptographic resistance of such an encryption method therefore depend on the size and complexity of the PSF used for optical encoding. This paper gives a preliminary analysis of the reliability and possible vulnerabilities of such an encryption method. Experimental results of a brute-force attack on the optically encrypted images are presented, the reliability of optical coding based on the wavefront coding paradigm is estimated, and an analysis of possible vulnerabilities is provided.
Huang, Zhi-Hui; Tang, Ying-Chun; Dai, Kai
2016-05-01
Semiconductor materials and product qualification rates are directly related to manufacturing costs and the survival of the enterprise. A dynamic reliability growth analysis method is applied to study the reliability growth of a manufacturing execution system in order to improve product quality. Referring to the classical Duane model assumptions and the tracking growth forecast (TGP) programming model, a Weibull distribution model is established from the failure data. Combining the median rank and average rank methods, reliability growth curves are fitted by linear regression and least squares estimation with Weibull information fusion. This model overcomes a weakness of the Duane model, namely the low accuracy of its MTBF point estimates; analysis of the failure data shows that the method essentially matches the test and evaluation modeling process of a practical instance. The median rank is used in statistics to determine the distribution function of a random variable and is a good way to handle problems such as the limited sample sizes of complex systems. The method therefore has considerable engineering application value.
Reliability Analysis Multiple Redundancy Controller for Nuclear Safety Systems
Son, Gwangseop; Kim, Donghoon; Son, Choulwoong
2013-01-01
In this paper, the architecture of a multiple redundancy controller (MRC) for nuclear safety systems is described. The MRC is configured for multiple modular redundancy (MMR), composed of dual modular redundancy (DMR) and triple modular redundancy (TMR). Markov models of the MRC architecture were developed, and the reliability and Mean Time To Failure (MTTF) were analyzed using the models. From the reliability analyses of the MRC, it is found that the failure rate of each module should be less than 2 × 10⁻⁴/hour and that the average MTTF increase rate with respect to FCF increments, i.e. ΔMTTF/ΔFCF, is 4 months per 0.1
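For TMR with identical modules of constant failure rate λ, the 2-out-of-3 reliability is R_TMR(t) = 3R² − 2R³ with R = e^(−λt), and the MTTF is ∫R_TMR dt = 5/(6λ); a sketch using the abstract's 2 × 10⁻⁴/hour module failure-rate bound:

```python
import math

def r_tmr(lam, t):
    """Reliability of 2-out-of-3 triple modular redundancy with
    identical modules of constant failure rate lam (per hour)."""
    r = math.exp(-lam * t)
    return 3 * r**2 - 2 * r**3

lam = 2e-4                      # module failure-rate bound from the abstract
mttf_tmr = 5 / (6 * lam)        # closed form: integral of R_TMR over time
mttf_single = 1 / lam           # simplex (single-module) MTTF

# Numerical check of the closed form by trapezoidal integration
ts = [i * 10.0 for i in range(20001)]   # 0 .. 200,000 h, step 10 h
num = sum((r_tmr(lam, a) + r_tmr(lam, b)) / 2 * (b - a)
          for a, b in zip(ts, ts[1:]))
```

Note the classic TMR trade-off visible here: TMR has higher short-mission reliability than a single module but a lower MTTF (5/(6λ) ≈ 4167 h versus 1/λ = 5000 h at this rate).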
Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns
Vaniachine, A; The ATLAS collaboration; Karpenko, D
2013-01-01
During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, and Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing on the Grid, their benefits come at a cost and introduce delays that make performance prediction difficult. Reliability Engineering provides a framework for a fundamental understanding of Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability...
Yu, Z. P.; Yue, Z. F.; Liu, W.
2018-05-01
With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a certain number of experimental data and expert judgments, we divide reliability estimation under a distribution hypothesis into a cognition process and a reliability calculation. To illustrate this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the opening function of a cabin door affected by imprecise judgments on the distribution hypothesis.
Wilson, Stephen M; Eriksson, Dana K; Schneck, Sarah M; Lucanie, Jillian M
2018-01-01
This paper describes a quick aphasia battery (QAB) that aims to provide a reliable and multidimensional assessment of language function in about a quarter of an hour, bridging the gap between comprehensive batteries that are time-consuming to administer, and rapid screening instruments that provide limited detail regarding individual profiles of deficits. The QAB is made up of eight subtests, each comprising sets of items that probe different language domains, vary in difficulty, and are scored with a graded system to maximize the informativeness of each item. From the eight subtests, eight summary measures are derived, which constitute a multidimensional profile of language function, quantifying strengths and weaknesses across core language domains. The QAB was administered to 28 individuals with acute stroke and aphasia, 25 individuals with acute stroke but no aphasia, 16 individuals with chronic post-stroke aphasia, and 14 healthy controls. The patients with chronic post-stroke aphasia were tested 3 times each and scored independently by 2 raters to establish test-retest and inter-rater reliability. The Western Aphasia Battery (WAB) was also administered to these patients to assess concurrent validity. We found that all QAB summary measures were sensitive to aphasic deficits in the two groups with aphasia. All measures showed good or excellent test-retest reliability (overall summary measure: intraclass correlation coefficient (ICC) = 0.98), and excellent inter-rater reliability (overall summary measure: ICC = 0.99). Sensitivity and specificity for diagnosis of aphasia (relative to clinical impression) were 0.91 and 0.95 respectively. All QAB measures were highly correlated with corresponding WAB measures where available. Individual patients showed distinct profiles of spared and impaired function across different language domains. In sum, the QAB efficiently and reliably characterized individual profiles of language deficits.
Park, Joon-Ho; Schachar, Russell J
2011-01-01
Objective The purpose of the present study was to develop reliable and valid parent and teacher scales for the measurement of functional impairment in children and adolescents in order to assist the diagnosis of attention-deficit/hyperactivity disorder (ADHD). Methods Seventy-two children with ADHD fulfilling the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition criteria and forty-two normal controls were enrolled in this study. Parents and teachers of the subjects completed the parent and teacher forms of the preliminary items of the Child and Adolescent Functioning Impairment Scale (CAFIS) constructed by the authors. Based on the reliability and factor analyses, the final parent (CAFIS-parent form) and teacher versions (CAFIS-teacher form) were constructed. The scales were analyzed for reliability and validity. A relative operating characteristic curve was drawn to calculate the cutoff scores of these scales for children with ADHD. Results The CAFIS-parent and CAFIS-teacher forms consist of four and three factors, respectively. Internal consistency and test-retest correlation of the scales were satisfactory. The CAFIS and Children's Global Assessment Scale were significantly correlated. All subscale scores of the CAFIS in the ADHD group were significantly higher than those of the control group. The sensitivity and specificity of the subscales were mostly at an appropriate level. Conclusion The CAFIS is a brief layperson-administered scale to assess functional impairment of children and adolescents. It can be a useful tool for parents and teachers to objectively measure the functioning of children at home and in school. The scale was found to be reliable and valid, and it appears to be a valuable instrument in the Korean language. PMID:21852987
Long term reliability analysis of standby diesel generators
Winfield, D.J.
1988-01-01
The long-term reliability of 11 diesel generators of 125 to 250 kVA size has been analysed from 26 years of database information on individual diesels serving as standby power supplies for the Chalk River research reactor facilities. Failure-to-start-on-demand and failure-to-run data are presented, and failures by diesel subsystem and multiple failures are also analysed. A brief comparison is made with reliability studies of larger diesel generator units used for standby power service in nuclear power plants. (author)
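Failure-to-start-on-demand data of this kind are naturally summarized as a binomial point estimate with a score interval; the failure and demand counts below are invented for illustration:

```python
from statistics import NormalDist

def wilson_interval(failures, demands, conf=0.95):
    """Point estimate and Wilson score interval for a
    failure-on-demand probability from count data."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = failures / demands
    denom = 1 + z**2 / demands
    centre = (p + z**2 / (2 * demands)) / denom
    half = z * ((p * (1 - p) + z**2 / (4 * demands)) / demands) ** 0.5 / denom
    return p, max(0.0, centre - half), centre + half

# Hypothetical: 3 failures to start in 400 recorded demands
p_hat, lo, hi = wilson_interval(3, 400)
```

The Wilson interval behaves sensibly even at the small failure counts typical of standby-diesel records, where the naive normal interval can dip below zero.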
Technical information report: Plasma melter operation, reliability, and maintenance analysis
Hendrickson, D.W.
1995-01-01
This document provides a technical report on the operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A process description is provided that minimizes maintenance and downtime and includes material and energy balances, equipment sizes and arrangement, startup/operation/maintenance/shutdown cycle descriptions, and the basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided, including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided, including a list of failure modes, responses, and consequences
Embedded mechatronic systems 1 analysis of failures, predictive reliability
El Hami, Abdelkhalak
2015-01-01
In operation, embedded mechatronic systems are subjected to stresses from various causes: climatic (temperature, humidity), vibrational, electrical and electromagnetic. The failure mechanisms these stresses induce in components should be identified and modeled for better control. AUDACE is a collaborative project of the Mov'eo cluster that addresses issues specific to the reliability of embedded mechatronic systems. AUDACE means analyzing the causes of failure of the components of onboard mechatronic systems. The goal of the project is to optimize the design of mechatronic devices for reliability. The projec
Total Longitudinal Moment Calculation and Reliability Analysis of Yacht Structures
Zhi, Wenzheng; Lin, Shaofen
In order to check the reliability of a yacht built of FRP (Fiber Reinforced Plastic) materials, this paper analyzes the vertical forces on the yacht and the calculation method for the overall longitudinal bending moment. In particular, the paper focuses on the impact of speed on the still-water bending moment. Then, considering the mechanical properties of cap-type stiffeners in composite materials, the ultimate bearing capacity of the yacht is worked out, and finally the reliability of the yacht is calculated using response surface methodology. The results can be used in yacht design and yacht operation.
Reliability modeling and analysis of smart power systems
Karki, Rajesh; Verma, Ajit Kumar
2014-01-01
The volume presents research on understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems, in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes, creating new challenges for system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important driver in mitigating these problems and requires considerable research acti
Bruce Weaver
2014-09-01
Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
Noble, Stephanie; Spann, Marisa N; Tokoglu, Fuyuze; Shen, Xilin; Constable, R Todd; Scheinost, Dustin
2017-11-01
Best practices are currently being developed for the acquisition and processing of resting-state magnetic resonance imaging data used to estimate brain functional organization-or "functional connectivity." Standards have been proposed based on test-retest reliability, but open questions remain. These include how amount of data per subject influences whole-brain reliability, the influence of increasing runs versus sessions, the spatial distribution of reliability, the reliability of multivariate methods, and, crucially, how reliability maps onto prediction of behavior. We collected a dataset of 12 extensively sampled individuals (144 min data each across 2 identically configured scanners) to assess test-retest reliability of whole-brain connectivity within the generalizability theory framework. We used Human Connectome Project data to replicate these analyses and relate reliability to behavioral prediction. Overall, the historical 5-min scan produced poor reliability averaged across connections. Increasing the number of sessions was more beneficial than increasing runs. Reliability was lowest for subcortical connections and highest for within-network cortical connections. Multivariate reliability was greater than univariate. Finally, reliability could not be used to improve prediction; these findings are among the first to underscore this distinction for functional connectivity. A comprehensive understanding of test-retest reliability, including its limitations, supports the development of best practices in the field. © The Author 2017. Published by Oxford University Press.
Reliability and validity of videotaped functional performance tests in ACL-injured subjects
von Porat, Anette; Holmström, Eva; Roos, Ewa
2008-01-01
BACKGROUND AND PURPOSE: In clinical practice, visual observation is often used to determine functional impairment and to evaluate treatment following a knee injury. The aim of this study was to evaluate the reliability and validity of observational assessments of knee movement pattern quality......, crossover hop on one leg and one-leg hop. The videos were observed by four physiotherapists, and the knee movement pattern quality, a feature of the loading strategy of the lower extremity, was scored on an 11-point rating scale. To assess the criterion validity, the observational rating was correlated...... obtained between the observers' assessment and knee flexion angle, r = 0.37-0.61. The crossover hop test or one-leg hop test was ranked as the most useful test in 172 of 192 occasions (90%) when assessing knee function. CONCLUSION: The moderate to good inter-observer reliability and the moderate criterion...
A reliable method for the stability analysis of structures ...
The detection of structural configurations with a singular tangent stiffness matrix is essential because such configurations can be unstable. The secondary paths, especially in unstable buckling, can play the most important role in the loss of stability and collapse of the structure. A new method for reliable detection and accurate computation of ...
Fiber Access Networks: Reliability Analysis and Swedish Broadband Market
Wosinska, Lena; Chen, Jiajia; Larsen, Claus Popp
Fiber access network architectures such as active optical networks (AONs) and passive optical networks (PONs) have been developed to support the growing bandwidth demand. Whereas Swedish operators in particular prefer AONs, this may not be the case for operators in other countries; the choice depends on a combination of technical requirements, practical constraints, business models, and cost. Due to the increasing importance of reliable access to network services, connection availability is becoming one of the most crucial issues for access networks, and it should be reflected in the network owner's architecture decision. In many cases protection against failures is realized by adding backup resources. However, there is a trade-off between the cost of protection and the level of service reliability, since improving reliability performance by duplicating network resources (and capital expenditures, CAPEX) may be too expensive. In this paper we present the evolution of fiber access networks and compare reliability performance in relation to investment and management cost for some representative cases. We consider both standard and novel architectures for deployment in both sparsely and densely populated areas. While some recent works focused on PON protection schemes with reduced CAPEX, current and future effort should be put on minimizing the operational expenditures (OPEX) during the access network's lifetime.
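The protection trade-off discussed here follows from the steady-state unavailability U = MTTR/(MTBF + MTTR), which 1+1 duplication over independent paths roughly squares; the MTBF/MTTR figures below are assumptions for illustration, not values from the paper:

```python
def unavailability(mtbf_h, mttr_h):
    """Steady-state unavailability of a single connection,
    given mean time between failures and mean time to repair (hours)."""
    return mttr_h / (mtbf_h + mttr_h)

# Hypothetical feeder-fiber figures: one cut per 5 years, 12 h to repair
u_single = unavailability(5 * 8760, 12)
u_protected = u_single ** 2          # 1+1 protection, independent paths

avail_single = 1 - u_single
avail_protected = 1 - u_protected
```

The unprotected connection sits near "three nines" under these assumptions, while duplication pushes availability past "seven nines"; whether that gain justifies the extra CAPEX and OPEX is exactly the trade-off the paper examines.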
Windfarm Generation Assessment for Reliability Analysis of Power Systems
Negra, Nicola Barberis; Holmstrøm, Ole; Bak-Jensen, Birgitte
2007-01-01
Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays...
A comparative reliability analysis of ETCS train radio communications
Hermanns, H.; Becker, B.; Jansen, D.N.; Damm, W.; Usenko, Y.S.; Fränzle, M.; Olderog, E.-R.; Podelski, A.; Wilhelm, R.
StoCharts have been proposed as a UML statechart extension for performance and dependability evaluation, and were applied in the context of train radio reliability assessment to show the principal tractability of realistic cases with this approach. In this paper, we extend on this bare feasibility
Architecture-Based Reliability Analysis of Web Services
Rahmani, Cobra Mariam
2012-01-01
In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…
Erratum: Comparative Analysis of Some Reliability Characteristics of ...
... are analyzed using Kolmogorov's forward equation method. Comparisons are performed for specific values of system parameters. Finally, the configurations are ranked based on MTSF and availability (AV(∞)), and the results show that configuration 3 is optimal. Keywords: Reliability, Availability, Deterioration, Repair, Replacement.
Reliability analysis of common hazardous waste treatment processes
Waters, Robert D. [Vanderbilt Univ., Nashville, TN (United States)
1993-05-01
Five hazardous waste treatment processes are analyzed probabilistically using Monte Carlo simulation to elucidate the relationships between process safety factors and reliability levels. The treatment processes evaluated are packed tower aeration, reverse osmosis, activated sludge, upflow anaerobic sludge blanket, and activated carbon adsorption
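The Monte Carlo relationship between a process safety factor and its reliability can be sketched as below. The lognormal capacity/demand distributions, their spreads, and the sample size are illustrative assumptions, not the models used in the study:

```python
import math
import random

def process_reliability(safety_factor, n=100_000, seed=1):
    """Monte Carlo estimate of P(capacity >= load) for a hypothetical
    treatment process designed with the given safety factor over a
    nominal load of 1.0. Distributions are illustrative assumptions."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        # capacity: lognormal scatter around safety_factor * nominal load
        capacity = safety_factor * math.exp(rng.gauss(0.0, 0.2))
        load = math.exp(rng.gauss(0.0, 0.3))  # lognormal demand
        if capacity < load:
            failures += 1
    return 1.0 - failures / n

r1 = process_reliability(1.0)  # no margin: roughly a coin flip
r2 = process_reliability(2.0)  # doubling capacity raises reliability sharply
```

Sweeping `safety_factor` in this way traces out the safety-factor-versus-reliability curve that such an analysis produces for each process.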
Reliability of Computer Analysis of Electrocardiograms (ECG) of ...
Background: Computer programmes have been introduced to electrocardiography (ECG) with most physicians in Africa depending on computer interpretation of ECG. This study was undertaken to evaluate the reliability of computer interpretation of the 12-Lead ECG in the Black race. Methodology: Using the SCHILLER ...
Functional Generalized Structured Component Analysis.
Suk, Hye Won; Hwang, Heungsun
2016-12-01
An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared toward the analysis of multivariate data. Accordingly, it cannot deal with functional data, which often involve different measurement occasions across participants and a number of measurement occasions that exceeds the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that map infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, Functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of Functional GSCA is illustrated with gait data.
Reliability analysis of the epidural spinal cord compression scale.
Bilsky, Mark H; Laufer, Ilya; Fourney, Daryl R; Groff, Michael; Schmidt, Meic H; Varga, Peter Paul; Vrionis, Frank D; Yamada, Yoshiya; Gerszten, Peter C; Kuklo, Timothy R
2010-09-01
The evolution of imaging techniques, along with highly effective radiation options, has changed the way metastatic epidural tumors are treated. While high-grade epidural spinal cord compression (ESCC) frequently serves as an indication for surgical decompression, no consensus exists in the literature about the precise definition of this term. The advancement of treatment paradigms for patients with metastatic tumors of the spine requires a clear grading scheme of ESCC. The degree of ESCC often serves as a major determinant in the decision to operate or irradiate. The purpose of this study was to determine the reliability and validity of a 6-point, MR imaging-based grading system for ESCC. To determine the reliability of the grading scale, a survey was distributed to 7 spine surgeons who participate in the Spine Oncology Study Group. The MR images of 25 cervical or thoracic spinal tumors were distributed, consisting of 1 sagittal image and 3 axial images at the identical level, including T1-weighted, T2-weighted, and Gd-enhanced T1-weighted images. The survey was administered 3 times at 2-week intervals. The inter- and intrarater reliability was assessed. The inter- and intrarater reliability ranged from good to excellent when surgeons were asked to rate the degree of spinal cord compression using T2-weighted axial images. The T2-weighted images were superior indicators of ESCC compared with T1-weighted images with and without Gd. The ESCC scale provides a valid and reliable instrument that may be used to describe the degree of ESCC based on T2-weighted MR images. This scale accounts for recent advances in the treatment of spinal metastases and may be used to provide an ESCC classification scheme for multicenter clinical trial and outcome studies.
Brouillette, Robert M; Foil, Heather; Fontenot, Stephanie; Correro, Anthony; Allen, Ray; Martin, Corby K; Bruce-Keller, Annadora J; Keller, Jeffrey N
2013-01-01
While considerable knowledge has been gained through the use of established cognitive and motor assessment tools, there is considerable interest in and need for the development of a battery of reliable and validated assessment tools that provide real-time and remote analysis of cognitive and motor function in the elderly. Smartphones appear to be an obvious choice for the development of these "next-generation" assessment tools for geriatric research, although to date no studies have reported on the use of smartphone-based applications for the study of cognition in the elderly. The primary focus of the current study was to assess the feasibility, reliability, and validity of a smartphone-based application for the assessment of cognitive function in the elderly. A total of 57 non-demented elderly individuals were administered a newly developed smartphone application-based Color-Shape Test (CST) in order to determine its utility in measuring cognitive processing speed in the elderly. Validity of this novel cognitive task was assessed by correlating performance on the CST with scores on widely accepted assessments of cognitive function. Scores on the CST were significantly correlated with global cognition (Mini-Mental State Exam: r = 0.515), supporting the use of a smartphone-based application for the purpose of assessing cognitive function in the elderly. The importance of these findings for the establishment of smartphone-based assessment batteries of cognitive and motor function in the elderly is discussed.
Reliability and risk functions for structural components taking into account inspection results
Rackwitz, R.; Schall, G.
1989-01-01
The method of outcrossings has been shown to be efficient when calculating the failure probability of metallic structural components under ergodic Gaussian loading. Using Paris/Erdogan's crack growth law it is possible to develop a semi-analytical calculation model for both the reliability and the risk function. For numerical studies an approximate method of asymptotic nature is proposed. The same methodology also makes it possible to incorporate inspection observations. (orig.) [de]
Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models
Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.
1987-01-01
Human factors are very important for the reliability of a nuclear power plant. Human behavior has essentially a time-dependent nature. The details of thinking and decision making processes are important for detailed analysis of human reliability. They have, however, not been well considered by the conventional methods of human reliability analysis. The present paper describes the models for the time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account and two-operators models are also presented
Reliability Analysis Study of Digital Reactor Protection System in Nuclear Power Plant
Guo, Xiao Ming; Liu, Tao; Tong, Jie Juan; Zhao, Jun
2011-01-01
Digital I&C systems are generally believed to improve a plant's safety and reliability. Reliability analysis of digital I&C systems has become a research hotspot. The traditional fault tree method is one means to quantify digital I&C system reliability. A review of the digital protection system evaluation for the advanced nuclear power plant AP1000 clarifies both the fault tree application and the analysis process for digital system reliability. A typical digital protection system for an advanced reactor has been developed, and a reliability evaluation is necessary for design demonstration. The construction of this typical digital protection system is introduced in the paper, and the application of FMEA and fault tree analysis to its reliability evaluation is described. Reliability data and bypass logic modeling are two points given special attention in the paper. Because time-sequence and feedback factors are not obviously present in the reactor protection system, the dynamic features of the digital system are not discussed
Donath, Lars; Ludyga, Sebastian; Hammes, Daniel; Rossmeissl, Anja; Andergassen, Nadin; Zahner, Lukas; Faude, Oliver
2017-10-25
Aging is accompanied by a decline of executive function. Aerobic exercise training induces moderate improvements of cognitive domains (i.e., attention, processing, executive function, memory) in seniors. Most conclusive data are obtained from studies with dementia or cognitive impairment. Confident detection of exercise training effects requires adequate between-day reliability and low day-to-day variability obtained from acute studies. These absolute and relative reliability measures have not yet been examined for a single aerobic training session in seniors. Twenty-two healthy and physically active seniors (age: 69 ± 3 y, BMI: 24.8 ± 2.2, VO2peak: 32 ± 6 mL/kg/bodyweight) were enrolled in this randomized controlled cross-over study. A repeated between-day comparison [i.e., day 1 (habituation) vs. day 2 and day 2 vs. day 3] of executive function testing (Eriksen-Flanker-Test, Stroop-Color-Test, Digit-Span, Five-Point-Test) before and after aerobic cycling exercise at 70% of the heart rate reserve [0.7 × (HRmax - HRrest)] was conducted. Reliability measures were calculated for pre, post and change scores. Large between-day differences between day 1 and 2 were found for reaction times (Flanker and Stroop Color testing) and completed figures (Five-Point test) at pre and post testing; after a habituation session, acceptable between-day reliability in seniors can be presumed.
Mills, Tamara L; Holm, Margo B; Schmeler, Mark
2007-01-01
The purpose of this study was to establish the test-retest reliability and content validity of an outcomes tool designed to measure the effectiveness of seating-mobility interventions on the functional performance of individuals who use wheelchairs or scooters as their primary seating-mobility device. The instrument, Functioning Everyday With a Wheelchair (FEW), is a questionnaire designed to measure perceived user function related to wheelchair/scooter use. Using consumer-generated items, FEW Beta Version 1.0 was developed and test-retest reliability was established. Cross-validation of FEW Beta Version 1.0 was then carried out with five samples of seating-mobility users to establish content validity. Based on the content validity study, FEW Version 2.0 was developed and administered to seating-mobility consumers to examine its test-retest reliability. FEW Beta Version 1.0 yielded an intraclass correlation coefficient (ICC) Model (3,k) of .92, and the content validity results revealed that it captured 55% of seating-mobility goals reported by consumers across five samples. FEW Version 2.0 yielded ICC(3,k) = .86, and its content validity was confirmed. FEW Beta Version 1.0 and FEW Version 2.0 were highly stable in their measurement of participants' seating-mobility goals over a 1-week interval.
Aertssen, W F M; Steenbergen, B; Smits-Engelsman, B C M
2018-06-07
There is a lack of valid and reliable field-based tests for assessing functional strength in young children with mild intellectual disabilities (IDs). The aim of this study was to investigate the test-retest reliability and construct validity of the Functional Strength Measurement in children with ID (FSM-ID). Fifty-two children with mild ID (40 boys and 12 girls, mean age 8.48 years, SD = 1.48) were tested with the FSM. Test-retest reliability (n = 32) was examined by a two-way intraclass correlation coefficient for agreement (ICC 2.1A). Standard error of measurement and smallest detectable change were calculated. Construct validity was determined by calculating correlations between the FSM-ID and handheld dynamometry (HHD) (convergent validity), between the FSM-ID and the strength subtest of the Bruininks-Oseretsky Test of Motor Proficiency, Second Edition (BOT-2) (convergent validity), and between the FSM-ID and the balance subtest of the BOT-2 (discriminant validity). Test-retest reliability ICC ranged 0.89-0.98. Correlation between the items of the FSM-ID and HHD ranged 0.39-0.79 and between FSM-ID and BOT-2 (strength items) 0.41-0.80. Correlation between items of the FSM-ID and BOT-2 (balance items) ranged 0.41-0.70. The FSM-ID showed good test-retest reliability and good convergent validity with the HHD and BOT-2 subtest strength. The correlations assessing discriminant validity were higher than expected. Poor levels of postural control and core stability in children with mild IDs may be the underlying factor of those higher correlations. © 2018 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Hatamvand S
2016-07-01
Full Text Available Impairment of cervicocephalic and head joint position sense plays an important role in the recurrence and chronicity of cervicocephalic pain. Various tools have been suggested for evaluating cervicocephalic joint position sense. Although reconstruction of the cervical angle is a clinical criterion for measuring cervicocephalic proprioception, the reliability of this method has not been completely established. The purpose of this study was to evaluate the intra-rater reliability of cervical sensory motor function and the cervical reconstruction test in healthy subjects. Twenty-four healthy subjects (25.70 ± 6.08 y), recruited through simple non-probability sampling, participated in this single-group repeated-measures reliability study. Participants were asked to relocate the neck, as accurately as possible, after full active cervical flexion, extension and rotation to the left and right sides. Five trials were performed for each movement. A laser pointer was mounted on the patient's head. The distance between the zero spot and the joint position the patient had reconstructed was measured in centimeters. Intra-class correlation coefficients (ICCs) and Pearson's correlation coefficient test were used to determine the intra-rater reliability of the variables. The ICC values with 95% confidence intervals (CI) and the standard error of measurement (SEM) showed good to excellent agreement for a single investigator between measurement occasions. ICC values were obtained for flexion (ICC: 0.75, good), extension (ICC: 0.81, very good), right rotation (ICC: 0.64, good) and left rotation (ICC: 0.64, good). The cervicocephalic relocation test to neutral head position using a laser pointer is a reliable method to measure cervical sensory motor function. Therefore, it can be used for evaluating cervicocephalic proprioception in patients with cervicocephalic pain.
Modeling the bathtub shape hazard rate function in terms of reliability
Wang, K.S.; Hsu, F.S.; Liu, P.P.
2002-01-01
In this paper, a general form of bathtub shape hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures in a Poisson process, i.e. it is shown by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operation load decreases with time. It depends on the failure probability, 1-R. This representation denotes the memory characteristics of the second failure cause. Man-machine interference may lead to a positive effect on the failure rate due to learning and correction, or a negative one from the consequences of inappropriate human habits in system operation. It is suggested that this item is correlated to the reliability, R, as well as the failure probability. Adaptation concerns continuous adjustment between the mating subsystems. When a new system is set on duty, some hidden defects are exposed and eventually disappear. Therefore, the reliability decays with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); thus the overall failure behavior, governed by a number of parameters, is found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides for simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used in the validation of the proposed model. Satisfactory results are found from the comparisons
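One plausible additive form of h(R) consistent with the four mechanisms described in the abstract can be sketched as below. The functional shape of each term and all coefficient values are illustrative assumptions; the paper's exact parameterization is not reproduced here:

```python
def bathtub_hazard(R, c0=0.01, c1=0.05, c2=0.02, c3=0.03, m=4.0):
    """A hypothetical additive hazard rate h(R), one term per mechanism:
      c0             -- random failures (constant, Poisson)
      c1 * (1 - R)   -- cumulative damage, driven by failure probability 1-R
      c2 * R*(1 - R) -- man-machine interference, tied to both R and 1-R
      c3 * R**m      -- adaptation (early life), a power of reliability
    Coefficients c0..c3 and exponent m are illustrative, to be fitted to data.
    """
    return c0 + c1 * (1.0 - R) + c2 * R * (1.0 - R) + c3 * R ** m

h_new = bathtub_hazard(1.0)    # early life: only random + adaptation terms act
h_worn = bathtub_hazard(0.1)   # late life: the cumulative damage term dominates
```

Expressing the hazard as a function of R rather than of time is what lets a single additive expression reproduce the decreasing, constant, and increasing segments of the bathtub curve.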
Work-related measures of physical and behavioral health function: Test-retest reliability.
Marino, Molly Elizabeth; Meterko, Mark; Marfeo, Elizabeth E; McDonough, Christine M; Jette, Alan M; Ni, Pengsheng; Bogusz, Kara; Rasch, Elizabeth K; Brandt, Diane E; Chan, Leighton
2015-10-01
The Work Disability Functional Assessment Battery (WD-FAB), developed for potential use by the US Social Security Administration to assess work-related function, currently consists of five multi-item scales assessing physical function and four multi-item scales assessing behavioral health function; the WD-FAB scales are administered as Computerized Adaptive Tests (CATs). The goal of this study was to evaluate the test-retest reliability of the WD-FAB Physical Function and Behavioral Health CATs. We administered the WD-FAB scales twice, 7-10 days apart, to a sample of 376 working age adults and 316 adults with work-disability. Intraclass correlation coefficients were calculated to measure the consistency of the scores between the two administrations. Standard error of measurement (SEM) and minimal detectable change (MDC90) were also calculated to measure the scales' precision and sensitivity. For the Physical Function CAT scales, the ICCs ranged from 0.76 to 0.89 in the working age adult sample, and 0.77-0.86 in the sample of adults with work-disability. ICCs for the Behavioral Health CAT scales ranged from 0.66 to 0.70 in the working age adult sample, and 0.77-0.80 in the adults with work-disability. The SEM ranged from 3.25 to 4.55 for the Physical Function scales and 5.27-6.97 for the Behavioral Health function scales. For all scales in both samples, the MDC90 ranged from 7.58 to 16.27. Both the Physical Function and Behavioral Health CATs of the WD-FAB demonstrated good test-retest reliability in adults with work-disability and general adult samples, a critical requirement for assessing work-related functioning in disability applicants and in other contexts. Copyright © 2015 Elsevier Inc. All rights reserved.
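The SEM and MDC90 figures reported above follow from the ICC and the sample standard deviation by the standard formulas, which can be sketched as follows (the SD and ICC inputs below are illustrative, not values from the study):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc(sd, icc, z=1.645):
    """Minimal detectable change; z = 1.645 gives the 90% level (MDC90)."""
    return z * math.sqrt(2.0) * sem(sd, icc)

# Illustrative inputs: SD = 10 score points, ICC = 0.85
s = sem(10.0, 0.85)   # standard error of measurement
m = mdc(10.0, 0.85)   # smallest change exceeding measurement noise at 90%
```

The sqrt(2) factor appears because a change score is the difference of two measurements, each carrying one SEM of noise.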
Screening, sensitivity, and uncertainty for the CREAM method of Human Reliability Analysis
Bedford, Tim; Bayley, Clare; Revie, Matthew
2013-01-01
This paper reports a sensitivity analysis of the Cognitive Reliability and Error Analysis Method for Human Reliability Analysis. We consider three different aspects: the difference between the outputs of the Basic and Extended methods, on the same HRA scenario; the variability in outputs through the choices made for common performance conditions (CPCs); and the variability in outputs through the assignment of choices for cognitive function failures (CFFs). We discuss the problem of interpreting categories when applying the method, compare its quantitative structure to that of first generation methods, and also discuss how dependence is modelled within the approach. We show that the control mode intervals used in the Basic method are too narrow to be consistent with the Extended method. This motivates a new screening method that gives improved accuracy with respect to the Basic method, in the sense that it (on average) halves the uncertainty associated with the Basic method. We make some observations on the design of a screening method that are generally applicable in Risk Analysis. Finally, we propose a new method of combining CPC weights with nominal probabilities so that the calculated probabilities are always in range (i.e. between 0 and 1), while satisfying sensible properties that are consistent with the overall CREAM method
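The in-range requirement mentioned at the end is the key constraint: naively multiplying a nominal probability by CPC weights can push it above 1. One standard way to guarantee a result in (0, 1) is to apply the weight on the odds scale, sketched below; this is an illustrative scheme, not the specific combination rule proposed in the paper:

```python
def adjust_probability(p, weight):
    """Scale a nominal failure probability p by a multiplicative weight
    applied to the odds p/(1-p). Unlike direct multiplication, the result
    is guaranteed to stay strictly between 0 and 1 for any positive weight."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must be in (0, 1)")
    odds = weight * p / (1.0 - p)
    return odds / (1.0 + odds)

# Direct multiplication would give 0.4 * 10 = 4.0 -- not a probability.
adjusted = adjust_probability(0.4, 10.0)  # stays below 1
unchanged = adjust_probability(0.4, 1.0)  # weight 1 leaves p unchanged
```

A weight above 1 worsens the probability, a weight below 1 improves it, and composing weights is associative on the odds scale, which are the "sensible properties" such a scheme preserves.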
Development of RBDGG Solver and Its Application to System Reliability Analysis
Kim, Man Cheol
2010-01-01
For the purpose of making system reliability analysis easier and more intuitive, the RBDGG (Reliability Block Diagram with General Gates) methodology was introduced as an extension of the conventional reliability block diagram. The advantage of the RBDGG methodology is that the structure of a RBDGG model is very similar to the actual structure of the analyzed system, and therefore the modeling of a system for system reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that of the RGGG (Reliability Graph with General Gates) methodology, which is an extension of the conventional reliability graph. The newly proposed methodology is now implemented in a software tool, RBDGG Solver. RBDGG Solver was developed as a WIN32 console application. RBDGG Solver receives information on the failure modes and failure probabilities of each component in the system, along with the connection structure and connection logics among the components in the system. Based on the received information, RBDGG Solver automatically generates a system reliability analysis model for the system, and then provides the analysis results. In this paper, an application of RBDGG Solver to the reliability analysis of an example system and a verification of the calculation results are provided for the purpose of demonstrating how RBDGG Solver is used for system reliability analysis
Techniques for developing reliable and functional materials control and accounting software
Barlich, G.
1988-01-01
The media has increasingly focused on failures of computer systems resulting in financial, material, and other losses, and on systems failing to function as advertised. Unfortunately, such failures, with equally disturbing losses, are possible in computer systems providing materials control and accounting (MC&A) functions. Major improvements in the reliability and correctness of systems are possible with disciplined design and development techniques applied during software development. This paper describes some of the techniques used in the Safeguard Systems Group at Los Alamos National Laboratory for various MC&A systems
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
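A fractional moment is simply E[X^alpha] for a non-integer alpha. A toy numerical check of the concept, using plain quadrature instead of the paper's RQ-SPM scheme, can be sketched as follows (the exponential test density and the integration settings are illustrative assumptions):

```python
import math

def fractional_moment(alpha, pdf, upper=60.0, n=200_000):
    """E[X^alpha] for a nonnegative random variable, by simple rectangle-rule
    integration of x**alpha * pdf(x) over [0, upper]. Crude but adequate for
    smooth, light-tailed densities; a stand-in for the paper's RQ-SPM."""
    h = upper / n
    total = 0.0
    for i in range(1, n):
        x = i * h
        total += (x ** alpha) * pdf(x)
    return total * h

exp_pdf = lambda x: math.exp(-x)           # rate-1 exponential density
m_half = fractional_moment(0.5, exp_pdf)   # analytically Gamma(1 + 0.5)
```

For the rate-1 exponential, E[X^alpha] = Γ(1 + alpha), which gives a closed-form target to validate any fractional-moment estimator before feeding the moments into a maximum entropy fit.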
Reliability evaluation of nuclear power plants by fault tree analysis
Iwao, H.; Otsuka, T.; Fujita, I.
1993-01-01
As a work sponsored by the Ministry of International Trade and Industry, the Safety Information Research Center of NUPEC, using reliability data based on the operational experience of the domestic LWR plants, has implemented FTA for the standard PWRs and BWRs in Japan, with reactor scram due to system failures as the top event. Up to this point, we have obtained the FT chart and minimal cut sets for each type of system failure for qualitative evaluation, and we have estimated system unavailability, Fussell-Vesely importance and risk worth for components for quantitative evaluation. As the second stage of a series in our reliability evaluation work, another program was started to establish a support system. The aim of this system is to assist foreign and domestic plants in creating countermeasures when incidents occur, by providing them with the necessary information using the above analytical method and its results. (author)
Reliability Analysis of Timber Structures through NDT Data Upgrading
Sousa, Hélder; Sørensen, John Dalsgaard; Kirkegaard, Poul Henning
The first part of this document presents, in chapter 2, a description of timber characteristics and commonly used NDT and MDT for timber elements. Stochastic models for timber properties and damage accumulation models are also referred to. According to timber's properties, a framework is proposed for a safety reassessment procedure. For that purpose a theoretical background for structural reliability assessment, including probabilistic concepts for structural systems and stochastic models, is given in chapter 3. System models, both series and parallel systems, are presented, as well as methods for robustness, which are dealt with in chapter 5. The second part of this document begins in chapter 6, where a practical application of the premise definitions and methodologies is given through the implementation of upgraded models with NDT and MDT data. Structural life-cycle is, therefore, assessed and reliability...
Reliability analysis of numerical simulation in near field behavior
Kobayashi, Akira; Yamamoto, Kiyohito; Chijimatsu, Masakazu; Fujita, Tomoo
2008-01-01
The effects of uncertainties in the boundary conditions, the elastic modulus and Poisson's ratio on the mechanical behavior in the near field of a high-level radioactive waste repository were examined. The method used to examine the error propagation was the first order second moment method. The reliability of the maximum principal stress and maximum shear stress at the crown of the tunnel and the minimum principal stress at the spring line was examined for one million years. For the elastic model, the reliability of the maximum shear stress gradually decreased while that of the maximum principal stress increased. That of the minimum principal stress was relatively low for one million years. This tendency was similar to that from the damage model. (author)
Reliability Analysis of Public Survey in Satisfaction with Nuclear Safety
Park, Moon Soo; Moon, Joo Hyun; Kang, Chang Sun [Seoul National Univ., Seoul (Korea, Republic of)
2005-07-01
Korea Institute of Nuclear Safety (KINS) carried out a questionnaire survey on the public's understanding of nuclear safety and regulation in order to gauge public acceptance of nuclear energy. The survey was planned to help analyze public opinion on nuclear energy and provide basic data for advertising strategy and policy development. In this study, based on the results of the survey, the reliability of the survey was evaluated for each nuclear site.
Reliability analysis of Airbus A-330 computer flight management system
Fajmut, Metod
2010-01-01
This diploma thesis deals with the digitized, computerized »Fly-by-wire« flight control system and the security aspects of the computer system of an Airbus A330 aircraft. As in space and military aircraft structures, in commercial airplanes much of the financial contribution is devoted to reliability. Conventional aircraft control systems have had to rely, and some still do, on mechanical and hydraulic connections between the pilot's controls and the control surfaces. But newer a...
Inter-rater reliability of three standardized functional tests in patients with low back pain
Tidstrand, Johan; Horneij, Eva
2009-01-01
Background Of all patients with low back pain, 85% are diagnosed as "non-specific lumbar pain". Lumbar instability has been described as one specific diagnosis, which several authors have characterized by delayed muscular responses, impaired postural control and impaired muscular coordination among these patients. This has mostly been measured and evaluated in a laboratory setting. There are few standardized and evaluated functional tests examining functional muscular coordination that are also applicable in the non-laboratory setting. In ordinary clinical work, tests of functional muscular coordination should be easy to apply. The aim of the present study was therefore to standardize and examine the inter-rater reliability of three functional tests of muscular functional coordination of the lumbar spine in patients with low back pain. Methods Nineteen consecutive individuals, ten men and nine women, were included. (Mean age 42 years, SD ± 12 yrs). Two independent examiners assessed three tests: "single limb stance", "sitting on a Bobath ball with one leg lifted" and "unilateral pelvic lift" on the same occasion. The standardization procedure took altered positions of the spine or pelvis and compensatory movements of the free extremities into account. The inter-rater reliability was analyzed by Cohen's kappa coefficient (κ) and by percentage agreement. Results The inter-rater reliability for the right and the left leg respectively was: for the single limb stance very good (κ: 0.88–1.0), for sitting on a Bobath ball good (κ: 0.79) and very good (κ: 0.88) and for the unilateral pelvic lift: good (κ: 0.61) and moderate (κ: 0.47). Conclusion The present study showed good to very good inter-rater reliability for two standardized tests, that is, the single-limb stance and sitting on a Bobath-ball with one leg lifted. Inter-rater reliability for the unilateral pelvic lift test was moderate to good. Validation of the tests in their ability to evaluate lumbar
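The Cohen's kappa statistic used above corrects raw percentage agreement for agreement expected by chance: κ = (po - pe) / (1 - pe). A minimal sketch, with hypothetical pass/fail ratings rather than data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (po - pe) / (1 - pe), where po is the observed agreement and pe the
    agreement expected by chance from each rater's marginal frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1.0 - pe)

# Hypothetical ratings of 10 patients by two examiners (not study data)
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
b = ["pass", "pass", "fail", "pass", "pass", "pass", "pass", "fail", "pass", "fail"]
kappa = cohens_kappa(a, b)  # 80% raw agreement shrinks once chance is removed
```

This chance correction is why an 80% raw agreement can correspond to only a "moderate" kappa, as in the unilateral pelvic lift result above.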
Launch and Assembly Reliability Analysis for Human Space Exploration Missions
Cates, Grant; Gelito, Justin; Stromgren, Chel; Cirillo, William; Goodliff, Kandyce
2012-01-01
NASA's future human space exploration strategy includes single and multi-launch missions to various destinations including cis-lunar space, near Earth objects such as asteroids, and ultimately Mars. Each campaign is being defined by Design Reference Missions (DRMs). Many of these missions are complex, requiring multiple launches and assembly of vehicles in orbit. Certain missions also have constrained departure windows to the destination. These factors raise concerns regarding the reliability of launching and assembling all required elements in time to support planned departure. This paper describes an integrated methodology for analyzing launch and assembly reliability in any single DRM or set of DRMs, starting with flight hardware manufacturing and ending with final departure to the destination. A discrete event simulation is built for each DRM that includes the pertinent risk factors, including but not limited to: manufacturing completion; ground transportation; ground processing; launch countdown; ascent; rendezvous and docking; assembly; and orbital operations leading up to trans-destination-injection. Each reliability factor can be selectively activated or deactivated so that the most critical risk factors can be identified. This enables NASA to prioritize mitigation actions so as to improve mission success.
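The discrete event simulation described above can be sketched as a simple Monte Carlo model in which each sequential phase may fail outright or slip the schedule, and the mission is lost if the timeline exceeds the departure window. The phase names, probabilities, and slip ranges below are illustrative placeholders, not NASA figures:

```python
import random

# Illustrative phases: (name, success probability, nominal days, max slip days).
PHASES = [
    ("manufacturing",        0.99,  30, 10),
    ("ground processing",    0.98,  20,  8),
    ("launch countdown",     0.97,   1,  5),
    ("ascent",               0.995,  1,  0),
    ("rendezvous & docking", 0.99,   2,  3),
]

def simulate_mission(window_days, rng):
    """One trial: every phase must succeed and the timeline must fit
    within the departure window."""
    elapsed = 0.0
    for _name, p_success, nominal, max_slip in PHASES:
        if rng.random() > p_success:
            return False                       # phase failure loses the mission
        elapsed += nominal + rng.uniform(0, max_slip)
    return elapsed <= window_days              # a late departure also counts as loss

def launch_assembly_reliability(window_days, trials=20_000, seed=1):
    rng = random.Random(seed)
    wins = sum(simulate_mission(window_days, rng) for _ in range(trials))
    return wins / trials
```

Deactivating a risk factor, as the paper describes, corresponds here to setting a phase's failure probability and slip to zero and rerunning the simulation to see how much the estimated reliability recovers.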
Transform analysis of generalized functions
Misra, O P
1986-01-01
Transform Analysis of Generalized Functions concentrates on finite parts of integrals, generalized functions and distributions. It gives a unified treatment of the distributional setting with transform analysis, i.e. Fourier, Laplace, Stieltjes, Mellin, Hankel and Bessel series. Included are accounts of applications of the theory of integral transforms in a distributional setting to the solution of problems arising in mathematical physics. Information on distributional solutions of differential, partial differential equations and integral equations is conveniently collected here. The volume will
Reliability modeling and analysis for a novel design of modular converter system of wind turbines
Zhang, Cai Wen; Zhang, Tieling; Chen, Nan; Jin, Tongdan
2013-01-01
Converters play a vital role in wind turbines. The concept of modularity is gaining in popularity in converter design for modern wind turbines in order to achieve high reliability as well as cost-effectiveness. In this study, we are concerned with a novel topology of modular converter invented by Hjort (Modular converter system with interchangeable converter modules. World Intellectual Property Organization, Pub. No. WO29027520 A2; 5 March 2009). In this architecture, the converter comprises a number of identical and interchangeable basic modules. Each module can operate in either AC/DC or DC/AC mode, depending on whether it functions on the generator or the grid side. Moreover, each module can be reconfigured from one side to the other, depending on the system’s operational requirements. It is a prime example of fully modular design. This paper aims to model and analyze the reliability of such a modular converter. A Markov modeling approach is applied to the system reliability analysis. In particular, six feasible converter system models based on Hjort’s architecture are investigated. Through numerical analyses and comparison, we provide insights and guidance for converter designers in their decision-making.
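The Markov approach can be illustrated on a generic k-out-of-n arrangement of identical modules; the values of n, k, and the module failure rate below are illustrative, not Hjort's design values. This sketch integrates a pure-death chain (no repair) with Euler steps and checks it against the binomial closed form that holds for independent, identical modules:

```python
import math

def converter_reliability(n, k, lam, t, steps=50_000):
    """Reliability at time t of a k-out-of-n modular converter,
    via a Markov (pure-death) chain over the number of working modules.
    lam = failure rate per module; no repair is modeled."""
    p = [0.0] * (n + 1)
    p[n] = 1.0                           # start with all n modules working
    dt = t / steps
    for _ in range(steps):
        q = p[:]
        for i in range(1, n + 1):
            flow = i * lam * p[i] * dt   # rate of losing one of i working modules
            q[i] -= flow
            q[i - 1] += flow
        p = q
    return sum(p[k:])                    # system up while >= k modules work

def closed_form(n, k, lam, t):
    """Binomial check: each module independently survives with prob e^(-lam*t)."""
    s = math.exp(-lam * t)
    return sum(math.comb(n, j) * s**j * (1 - s)**(n - j) for j in range(k, n + 1))
```

The numerical chain matters once the model adds repair transitions or mode reconfiguration, as in the paper's six system variants, where no simple closed form is available.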
Prins Martin H
2009-03-01
Full Text Available Abstract Background Disease severity and functional impairment in patients with intermittent claudication is usually quantified by the measurement of pain-free walking distance (intermittent claudication distance, ICD) and maximal walking distance (absolute claudication distance, ACD). However, the distance at which a patient would prefer to stop because of claudication pain seems a definition that corresponds more closely with the actual daily life walking distance. We conducted a study in which the distance a patient prefers to stop was defined as the functional claudication distance (FCD), and estimated the reliability and validity of this measurement. Methods In this clinical validity study we included patients with intermittent claudication, following a supervised exercise therapy program. The first study part consisted of two standardised treadmill tests. During each test ICD, FCD and ACD were determined. Primary endpoint was the reliability as represented by the calculated intra-class correlation coefficients. In the second study part patients performed a standardised treadmill test and filled out the Rand-36 questionnaire. Spearman's rho was calculated to assess validity. Results The intra-class correlation coefficients of ICD, FCD and ACD were 0.940, 0.959, and 0.975 respectively. FCD correlated significantly with five out of nine domains, namely physical function (rho = 0.571), physical role (rho = 0.532), vitality (rho = 0.416), pain (rho = 0.416) and health change (rho = 0.414). Conclusion FCD is a reliable and valid measurement for determining functional capacity in trained patients with intermittent claudication. Furthermore, it seems that FCD better reflects the actual functional impairment. In future studies, FCD could be used alongside ICD and ACD.
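Spearman's rho, used above to assess validity, is the Pearson correlation computed on the ranks of the data rather than the raw values, which makes it robust to the skewed walking-distance measurements. A minimal sketch with tie handling (the inputs would be per-patient FCD and Rand-36 domain scores, which are not reproduced here):

```python
def _ranks(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over the run of tied values
        avg = (i + j) / 2 + 1
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any strictly increasing relationship between the two variables yields rho = 1 regardless of its shape, which is why rank correlation suits validity comparisons against ordinal questionnaire domains.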