WorldWideScience

Sample records for model providing reliable

  1. Bring Your Own Device - Providing Reliable Model of Data Access

    Directory of Open Access Journals (Sweden)

    Stąpór Paweł

    2016-10-01

    Full Text Available The article presents Bring Your Own Device (BYOD) as a network model that provides the user with reliable access to network resources. BYOD is a dynamically developing model that can be applied in many areas. A research network was set up to carry out tests in which the Work Folders service was used as the BYOD service. This service allows the user to synchronize files between the device and the server. Access to the network is provided over a wireless link using the 802.11n standard. The results obtained are presented and analyzed in this article.

  2. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

    Highlights: • The operation of Energy Service Providers (ESPs) in electricity markets is modeled. • Demand response is used as a cost-effective solution for the energy service provider. • Market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of the robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries, so their various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs), demonstrating how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since the market price affects DR and vice versa, a new two-step sequential framework is proposed, in which the unit commitment (UC) problem is solved to forecast the expected locational marginal prices (LMPs), and the DR program is then applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of power purchased from the market and from distributed generation (DG) units, the incentive cost paid to the customers, and the compensation cost of power interruptions. To obtain the compensation cost, the reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to account for the unexpected behavior of the other market participants, the LMPs are modeled as uncertain parameters using the robust optimization technique, which is more practical than the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.

  3. Will British weather provide reliable electricity?

    International Nuclear Information System (INIS)

    Oswald, James; Raine, Mike; Ashraf-Ball, Hezlin

    2008-01-01

    There has been much academic debate on the ability of wind to provide a reliable electricity supply. The model presented here calculates the hourly power delivery of 25 GW of wind turbines distributed across Britain's grid, and assesses power delivery volatility and the implications for individual generators on the system. Met Office hourly wind speed data are used to determine power output and are calibrated using Ofgem's published wind output records. There are two main results. First, the model suggests that power swings of 70% within 12 h are to be expected in winter, and will require individual generators to go on or off line frequently, thereby reducing the utilisation and reliability of large centralised plants. These reductions will lead to increases in the cost of electricity and reductions in potential carbon savings. Secondly, it is shown that electricity demand in Britain can reach its annual peak with a simultaneous demise of wind power in Britain and neighbouring countries to very low levels. This significantly undermines the case for connecting the UK transmission grid to neighbouring grids. Recommendations are made for improving 'cost of wind' calculations. The authors are grateful for the sponsorship provided by The Renewable Energy Foundation

  4. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model building. This is a classic example of an interdisciplinary problem, since the model-building requirements demand understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is used within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends on the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  5. A stochastic simulation model for reliable PV system sizing providing for solar radiation fluctuations

    International Nuclear Information System (INIS)

    Kaplani, E.; Kaplanis, S.

    2012-01-01

    Highlights: ► Solar radiation data for European cities follow the extreme value or Weibull distribution. ► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis. ► Simulation of PV generator-loads-battery storage system performance for all months. ► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities. ► Peak power and battery capacity reduced by more than 30% for operation at a 95% success rate. -- Abstract: The large fluctuations observed in daily solar radiation profiles strongly affect the reliability of PV system sizing. Increasing the reliability of the PV system requires a higher installed peak power (P_m) and a larger battery storage capacity (C_L). This leads to increased costs and makes PV technology less competitive. This research paper presents a new stochastic simulation model for stand-alone PV systems, developed to determine the minimum installed P_m and C_L for the PV system to be energy independent. The stochastic simulation model makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles, and the state of charge of the battery system for the month the sizing is applied, as well as the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum P_m and C_L depending on the requirements of the application, i.e. operation with critical or non-critical loads. The model makes use of NASA’s Surface meteorology and Solar Energy database for the years 1990–2004 for various European cities with different climates. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to
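
    The energy-balance logic described above can be illustrated with a toy Monte Carlo sketch. This is not the authors' model: the function name, the Weibull parameters, and all loads and capacities below are invented for illustration. Daily solar yield is drawn from a Weibull distribution (echoing the paper's finding on radiation statistics), the battery state of charge is updated day by day, and the fraction of days the load is fully served is counted.

```python
import random

def simulate_pv(p_m, c_l, days=365, load=5.0, seed=1):
    """Toy stochastic energy-balance simulation for a stand-alone PV system.

    p_m  : installed peak power (kW), scales the daily energy yield
    c_l  : battery capacity (kWh)
    load : daily load (kWh); all figures are illustrative, not from the paper
    Returns the fraction of days on which the load was fully served.
    """
    rng = random.Random(seed)
    soc = c_l                     # start with a full battery
    served = 0
    for _ in range(days):
        # daily solar yield (kWh per kWp) drawn from a Weibull distribution
        yield_per_kwp = rng.weibullvariate(4.0, 2.0)
        soc = min(c_l, soc + p_m * yield_per_kwp)
        if soc >= load:
            soc -= load
            served += 1
        else:
            soc = 0.0             # loss-of-load day: battery drained
    return served / days

# A larger installed P_m and C_L should raise the success rate
low = simulate_pv(p_m=1.0, c_l=6.0)
high = simulate_pv(p_m=2.0, c_l=12.0)
```

Sweeping `p_m` and `c_l` until the success rate reaches a target (e.g. 95%) mimics the sizing search the paper describes, at toy scale.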

  6. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. MERF was later approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The MERF/EARF reliability model considers the software failure process to be a non-homogeneous Poisson process (NHPP) and the repair (correction) process a multinomial distribution. The model assumes that the two processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties, and its application to software reliability. Applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis
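
    The MERF formulas themselves are not reproduced in this record, but the NHPP view of the failure process can be sketched with the standard Goel-Okumoto exponential mean-value function; the parameters a and b below are illustrative assumptions, not values from the paper.

```python
import math

def mean_failures(t, a=100.0, b=0.05):
    """Expected cumulative failures m(t) for a Goel-Okumoto NHPP:
    m(t) = a * (1 - exp(-b t)); a, b are illustrative parameters."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a=100.0, b=0.05):
    """Probability of no failure in (t, t+x] for an NHPP:
    R(x | t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(mean_failures(t + x, a, b) - mean_failures(t, a, b)))

# Reliability over a fixed horizon improves as test-and-correction time grows
r_early = reliability(x=10, t=0)
r_late = reliability(x=10, t=100)
```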

  7. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to the context, which many contemporary human reliability analysis (HRA) methods do not sufficiently take into account. This article argues that probabilistic and psychological approaches to human reliability should be integrated. This is achieved, first, by adopting methods that adequately reflect the essential features of the process control activity and, secondly, by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated in the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints on the activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis

  8. Plant and control system reliability and risk model

    International Nuclear Information System (INIS)

    Niemelae, I.M.

    1986-01-01

    A new reliability modelling technique for control systems and plants is demonstrated. It is based on modified Boolean algebra and has been automated in an efficient computer code called RELVEC. The code is useful for getting an overall view of the reliability parameters or for an in-depth reliability analysis, which is essential in risk analysis, where the model must be capable of answering specific questions such as: 'What is the probability that this temperature limiter produces a false alarm?' or 'What is the probability that the air pressure in this subsystem drops below its lower limit?'. (orig./DG)

  9. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  11. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the validity of a reliability computation model using the concept of Bayesian hypothesis testing, by comparing the model prediction and the experimental observation, when only one computational model is available to evaluate system behavior. Time-independent and time-dependent problems are investigated, considering both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. When statistical uncertainty exists in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified by treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides decision-makers with a rational criterion for the acceptance or rejection of the computational model
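
    As a hedged illustration of the Bayes-factor idea (a textbook setup, not the paper's formulation), one can compare a model-predicted failure probability against a vague alternative for binomial failure data; a Bayes factor above 1 then supports accepting the model.

```python
import math

def bayes_factor(k, n, p_model):
    """Bayes factor B = P(data | H0) / P(data | H1) for k failures in n trials.

    H0: the failure probability equals the model prediction p_model.
    H1: the failure probability is uniform on (0, 1), whose binomial
        marginal likelihood integrates to 1 / (n + 1).
    Illustrative setup; the paper's own likelihoods are more elaborate.
    """
    likelihood_h0 = math.comb(n, k) * p_model**k * (1 - p_model) ** (n - k)
    likelihood_h1 = 1.0 / (n + 1)
    return likelihood_h0 / likelihood_h1

# Data consistent with the model prediction yields a large Bayes factor...
b_good = bayes_factor(k=2, n=100, p_model=0.02)
# ...while data far from the prediction yields a small one.
b_bad = bayes_factor(k=20, n=100, p_model=0.02)
```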

  12. Transparent reliability model for fault-tolerant safety systems

    International Nuclear Information System (INIS)

    Bodsberg, Lars; Hokstad, Per

    1997-01-01

    A reliability model is presented which may serve as a tool for identifying cost-effective configurations and operating philosophies of computer-based process safety systems. The main merit of the model is the explicit relationship in the mathematical formulas between the failure cause and the means used to improve system reliability, such as self-test, redundancy, preventive maintenance and corrective maintenance. A component failure taxonomy has been developed which allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. Furthermore, the taxonomy distinguishes between failures due to excessive environmental stresses and failures initiated by humans during engineering and operation. Attention has been given to developing a transparent model that provides predictions in good agreement with observed system performance, and that is applicable for non-experts in the field of reliability

  13. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  14. Modeling of system reliability Petri nets with aging tokens

    International Nuclear Information System (INIS)

    Volovoi, V.

    2004-01-01

    The paper addresses the dynamic modeling of degrading and repairable complex systems. Emphasis is placed on the convenience of modeling for the end user, with special attention being paid to the modeling part of a problem, which is considered to be decoupled from the choice of solution algorithms. Depending on the nature of the problem, these solution algorithms can include discrete event simulation or numerical solution of the differential equations that govern underlying stochastic processes. Such modularity allows a focus on the needs of system reliability modeling and tailoring of the modeling formalism accordingly. To this end, several salient features are chosen from the multitude of existing extensions of Petri nets, and a new concept of aging tokens (tokens with memory) is introduced. The resulting framework provides for flexible and transparent graphical modeling with excellent representational power that is particularly suited for system reliability modeling with non-exponentially distributed firing times. The new framework is compared with existing Petri-net approaches and other system reliability modeling techniques such as reliability block diagrams and fault trees. The relative differences are emphasized and illustrated with several examples, including modeling of load sharing, imperfect repair of pooled items, multiphase missions, and damage-tolerant maintenance. Finally, a simple implementation of the framework using discrete event simulation is described
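
    A minimal sketch of why token memory matters (a stand-in for the aging-token idea, not the paper's formalism): with a Weibull wear-out hazard, a repair that preserves the token's accumulated age (minimal repair) sees more failures over a horizon than one that resets it to as-good-as-new. The shape, scale, and horizon values are illustrative assumptions.

```python
import math
import random

def failures_before(horizon, aging, shape=2.5, scale=100.0, seed=3):
    """Count failures of one repairable component up to `horizon`.

    aging=True : the token keeps its age across repairs (minimal repair),
                 so the increasing Weibull hazard keeps accumulating.
    aging=False: every repair resets the token (as-good-as-new renewal).
    """
    rng = random.Random(seed)
    H = lambda t: (t / scale) ** shape            # cumulative Weibull hazard
    Hinv = lambda h: scale * h ** (1.0 / shape)
    age, clock, count = 0.0, 0.0, 0
    while True:
        u = rng.random()
        next_age = Hinv(H(age) - math.log(u))     # conditional Weibull draw
        clock += next_age - age
        if clock > horizon:
            return count
        count += 1
        age = next_age if aging else 0.0

# With an increasing hazard, preserved age (minimal repair) means more failures
n_aging = failures_before(2000.0, aging=True)
n_fresh = failures_before(2000.0, aging=False)
```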

  15. PROVIDING RELIABILITY OF HUMAN RESOURCES IN PRODUCTION MANAGEMENT PROCESS

    Directory of Open Access Journals (Sweden)

    Anna MAZUR

    2014-07-01

    Full Text Available People are the most valuable asset of an organization, and a company's results depend mostly on them. The human factor can also be a weak link in the company and a cause of high risk for many of its processes. The reliability of the human factor in the manufacturing process depends on many factors. The authors include aspects of human error, safety culture, knowledge, communication skills, teamwork and the role of leadership in the developed model of human resource reliability in production process management. Based on a case study and the results of research and observation, the authors present the risk areas identified in a specific manufacturing process and the results of an evaluation of the reliability of human resources in that process.

  16. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, and stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  17. Maintenance overtime policies in reliability theory models with random working cycles

    CERN Document Server

    Nakagawa, Toshio

    2015-01-01

    This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...
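
    The replacement-overtime rule (replace at the first completion of a working cycle after the planned time T) can be sketched with a small Monte Carlo. The exponential cycle lengths and all numbers below are assumptions for illustration, not taken from the book; with memoryless cycles, the mean replacement time is simply T plus one mean cycle length.

```python
import random

def overtime_replacement_time(T, mean_cycle=10.0, n_sim=50_000, seed=2):
    """Mean replacement time under a replacement-overtime policy:
    replacement occurs at the first completion of a working cycle after
    the planned time T (cycle lengths are exponential; illustrative only).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        t = 0.0
        while t < T:                      # accumulate whole working cycles
            t += rng.expovariate(1.0 / mean_cycle)
        total += t                        # first cycle completion past T
    return total / n_sim

# For exponential cycles the expected overshoot past T is one mean cycle,
# so the mean replacement time should be close to T + mean_cycle = 110
t_mean = overtime_replacement_time(T=100.0)
```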

  18. Multi-state reliability for coolant pump based on dependent competitive failure model

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Zhao Xinwen; Chen Ling

    2013-01-01

    By taking into account the effect of degradation due to internal vibration and external shocks, and based on the service environment and degradation mechanism of the nuclear power plant coolant pump, a multi-state reliability model of the coolant pump was proposed for a system that involves a competitive failure process between shocks and degradation. Using this model, the degradation state probabilities and system reliability were obtained for the degraded coolant pump under internal vibration and external shocks. This provides an effective method for the reliability analysis of coolant pumps in nuclear power plants based on the operating environment. The results can provide a decision-making basis for design changes and maintenance optimization. (authors)
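
    A hedged sketch of a competing degradation-plus-shock process of the kind described (all parameters invented, not the coolant-pump data): gradual wear accumulates continuously while Poisson-arriving shocks add abrupt damage, and reliability at time t is the probability that total damage stays below a failure threshold.

```python
import random

def reliability_at(t, n_sim=20_000, threshold=100.0, seed=11):
    """Monte Carlo estimate of R(t) for a competing-failure process:
    continuous degradation (random wear rate) plus random shocks that
    each add an abrupt damage increment.  Illustrative parameters only.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_sim):
        wear = rng.gauss(1.0, 0.2) * t           # gradual vibration wear
        # external shocks: Poisson arrivals with rate 0.05
        n_shocks = 0
        acc = rng.expovariate(0.05)
        while acc < t:
            n_shocks += 1
            acc += rng.expovariate(0.05)
        shock_damage = sum(rng.expovariate(0.2) for _ in range(n_shocks))
        if wear + shock_damage < threshold:
            survived += 1
    return survived / n_sim

# Reliability decreases as the mission time grows
r50 = reliability_at(50.0)
r90 = reliability_at(90.0)
```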

  19. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

    Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize risk-based preventive maintenance of passive structures such as pipes and supports. In particular, the reliability performance of components needs to be determined; this is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, involving two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  20. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter on which to establish a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with a decreasing residual life ratio T, and the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
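
    The classical stress-strength interference calculation underlying such a model can be sketched for the Gaussian case; in the paper the Gaussian MMM parameter K_vs plays the role of the stress variable, while every number below is an illustrative assumption.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference for independent Gaussians:
    R = P(strength > stress) = Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2)).
    """
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return normal_cdf(z)

# Reliability falls as the (K_vs-like) stress distribution drifts
# toward the strength distribution with accumulating damage
r_fresh = interference_reliability(500.0, 40.0, 300.0, 30.0)
r_worn = interference_reliability(500.0, 40.0, 450.0, 30.0)
```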

  1. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

    The objective of this project is to develop a method to predict the potential reliability of software to be used in a digital instrumentation and control system. The reliability prediction is to make use of existing measures of software reliability such as those described in IEEE Std 982 and 982.2. This prediction must be of sufficient accuracy to provide a value for uncertainty that could be used in a nuclear power plant probabilistic risk assessment (PRA). For the purposes of the project, reliability was defined to be the probability that the digital system will successfully perform its intended safety function (for the distribution of conditions under which it is expected to respond) upon demand, with no unintended functions that might affect system safety. The ultimate objective is to use the identified measures to develop a method for predicting the potential quantitative reliability of a digital system. The reliability prediction models proposed in this report are conceptual in nature. That is, possible prediction techniques are proposed and trial models are built, but in order to become a useful tool for predicting reliability, the models must be tested, modified according to the results, and validated. Using the methods outlined by this project, models could be constructed to develop reliability estimates for elements of software systems. This would require careful review and refinement of the models, development of model parameters from actual experience data or expert elicitation, and careful validation. By combining these reliability estimates (generated from the validated models for the constituent parts) in structural software models, the reliability of the software system could then be predicted. Modeling digital system reliability will also require that methods be developed for combining reliability estimates for hardware and software. System structural models must also be developed in order to predict system reliability based upon the reliability

  2. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problems of a variety of uncertainty variables that coexist in the engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  3. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
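
    The paper's point that exponential (Markov) holding times can mislead is easy to illustrate: even with identical mean up-times, an exponential and a Weibull holding-time distribution give different probabilities of surviving a fixed window, which is exactly why a semi-Markov model is needed. The parameters below are illustrative, not from the IBM 3081 data.

```python
import math
import random

def p_survive_window(window, sampler, n_sim=100_000, seed=5):
    """Monte Carlo P(holding time in the 'up' state exceeds `window`)."""
    rng = random.Random(seed)
    return sum(sampler(rng) > window for _ in range(n_sim)) / n_sim

mean_up = 100.0
# Exponential holding time (Markov assumption) vs Weibull with the same mean
exp_sampler = lambda rng: rng.expovariate(1.0 / mean_up)
weib_scale = mean_up / math.gamma(1.5)   # chosen so the Weibull mean is 100
weib_sampler = lambda rng: rng.weibullvariate(weib_scale, 2.0)

p_markov = p_survive_window(mean_up, exp_sampler)
p_semi = p_survive_window(mean_up, weib_sampler)
```

Although both distributions share the same mean, the survival probabilities over the window differ, so reliability figures computed under the Markov assumption would be biased.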

  4. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming to resolve the problems of the variety of uncertainty variables that coexist in engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent solving method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using the modified limit-step length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. Finally, a numerical example demonstrates that the hybrid reliability index is applicable to the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.
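
    The article's modified limit-step length algorithm is a variant of the classic HL-RF iteration for the first-order reliability index. A plain HL-RF sketch (not the authors' modified version) on a simple linear limit state in standard normal space looks like this; the limit-state function and starting point are illustrative.

```python
import math

def hlrf(g, grad, u0=(0.0, 0.0), tol=1e-8, max_iter=100):
    """Basic HL-RF iteration for the first-order reliability index beta:
    u_{k+1} = [(grad . u_k - g(u_k)) / |grad|^2] * grad
    in standard normal space.  The article's modified step-length variant
    adds a line search to guarantee convergence; this sketch omits it.
    """
    u = list(u0)
    for _ in range(max_iter):
        gv = g(u)
        gr = grad(u)
        norm2 = sum(c * c for c in gr)
        coef = (sum(c * x for c, x in zip(gr, u)) - gv) / norm2
        u_new = [coef * c for c in gr]
        if math.dist(u, u_new) < tol:
            u = u_new
            break
        u = u_new
    return math.sqrt(sum(x * x for x in u))   # beta = distance to origin

# Linear limit state g(u) = 6 - u1 - 2*u2: exact beta = 6 / sqrt(1 + 4)
g = lambda u: 6.0 - u[0] - 2.0 * u[1]
grad = lambda u: [-1.0, -2.0]
beta = hlrf(g, grad)
```

For a linear limit state the iteration converges in one step to the exact index; nonlinear limit states are where the article's convergence-ensuring modification matters.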

  5. Modeling high-Power Accelerators Reliability-SNS LINAC (SNS-ORNL); MAX LINAC (MYRRHA)

    International Nuclear Information System (INIS)

    Pitigoi, A. E.; Fernandez Ramos, P.

    2013-01-01

    Improving reliability has recently become a very important objective in the field of particle accelerators. The particle accelerators in operation are constantly undergoing modifications, and improvements are implemented using new technologies, more reliable components, or redundant schemes (to obtain greater reliability, robustness, power, etc.). A reliability model of the SNS (Spallation Neutron Source) LINAC has been developed within the MAX project, and an analysis of the accelerator systems' reliability has been performed using the Risk Spectrum reliability analysis software. The analysis results have been evaluated by comparison with the SNS operational data. Results and conclusions are presented in this paper, oriented towards identifying design weaknesses and providing recommendations for improving the reliability of the MYRRHA linear accelerator. The SNS reliability model developed for the MAX preliminary design phase indicates possible avenues for further investigation that could be needed to improve the reliability of high-power accelerators, in view of the future reliability targets of ADS accelerators.

  6. Reliability modeling and analysis of smart power systems

    CERN Document Server

    Karki, Rajesh; Verma, Ajit Kumar

    2014-01-01

    The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important drive in mitigating these problems and requires considerable research acti

  7. System Statement of Tasks of Calculating and Providing the Reliability of Heating Cogeneration Plants in Power Systems

    Science.gov (United States)

    Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.

    2018-01-01

    A set of mathematical models for calculating the reliability indexes of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a necessary condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the reliability indexes (RI) of industrial heat supply is based on the concept of a technological margin of safety of technological processes. The definition of rationed RI values for the heat supply of communal consumers is based on the air temperature level inside the heated premises. The complex allows solving a number of practical tasks of providing reliable heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat-and-power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks of optimizing the schemes and parameters of combined heat and power plants and systems, as well as determining the efficiency of various redundancy methods to ensure the specified reliability of power supply.

  8. Models on reliability of non-destructive testing

    International Nuclear Information System (INIS)

    Simola, K.; Pulkkinen, U.

    1998-01-01

    The reliability of ultrasonic inspections has been studied in, e.g., the international PISC (Programme for the Inspection of Steel Components) exercises. These exercises have produced a large amount of information on the effect of various factors on the reliability of inspections. The information obtained from reliability experiments is used to model the dependency of flaw detection probability on various factors and to evaluate the performance of inspection equipment, including sizing accuracy. This information is utilised most effectively when mathematical models are applied. Here, some statistical models for the reliability of non-destructive tests are introduced. In order to demonstrate the use of inspection reliability models, they have been applied to the inspection results for intergranular stress corrosion cracking (IGSCC) type flaws in the PISC III exercise (PISC 1995). The models are applied both to the flaw detection frequency data of all inspection teams and to the flaw sizing data of one participating team. (author)
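
    A common statistical form for the flaw detection probability discussed above is a log-odds probability-of-detection (POD) curve. The sketch below is illustrative only: the `pod` helper and all parameter values are assumptions for demonstration, not quantities fitted to PISC data.

```python
import math

def pod(a, mu, sigma):
    """Log-odds (logistic) probability of detection for a flaw of size a (mm).
    exp(mu) is the flaw size detected 50% of the time; sigma sets steepness."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

# Illustrative parameters (not fitted to PISC data): 50% POD at 2 mm depth.
mu, sigma = math.log(2.0), 0.4
pod_small = pod(0.5, mu, sigma)   # shallow flaw: rarely detected
pod_large = pod(10.0, mu, sigma)  # deep flaw: almost always detected
```

Such a curve can be fitted to team-level hit/miss data by maximum likelihood, which is how detection frequency data of the kind collected in PISC are typically condensed into a POD model.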

  9. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20 – 25 years of wind turbines useful life, Operation & Maintenance costs are typically estimated to be a quarter...... for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...... to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and result in cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when...

  10. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…
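
    For a single-factor scale, the kind of composite reliability coefficient such latent variable methods estimate can be sketched directly from standardized factor loadings. The loadings below are invented for illustration and are not taken from the article.

```python
# Single-factor measurement model: item_i = loading_i * factor + error_i.
loadings = [0.8, 0.7, 0.6, 0.75]          # illustrative standardized loadings
errors = [1.0 - l**2 for l in loadings]   # unique variances (standardized items)

num = sum(loadings) ** 2
omega = num / (num + sum(errors))         # composite (omega) reliability
```

The latent variable approach adds to this point estimate an interval estimate and extensions to hierarchical (higher-order) structures, which is the article's contribution.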

  11. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  12. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need to not only capture the complex dynamic behavior of the system components, but must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems, and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution for modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis
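
    A minimal sketch of the discrete-time idea: time is divided into intervals, each component node's state is the interval in which it fails (or "survives the horizon"), and a gate node combines component states. The discretization scheme, the constant per-step hazard, and the probabilities below are illustrative assumptions, not the authors' formalism.

```python
from itertools import product

# Time is split into n intervals; a component node's state k means "fails in
# interval k", with k = n meaning it survives the whole horizon.
n = 4

def interval_probs(p_step):
    """Per-interval failure probabilities for a constant per-step hazard p_step."""
    probs, survive = [], 1.0
    for _ in range(n):
        probs.append(survive * p_step)
        survive *= 1.0 - p_step
    probs.append(survive)  # state n: survives the horizon
    return probs

pa, pb = interval_probs(0.1), interval_probs(0.2)

# AND gate (parallel redundancy): the system fails at the LATER of the two
# component failure times, so its state is max(ka, kb).
sys_state = [0.0] * (n + 1)
for ka, kb in product(range(n + 1), repeat=2):
    sys_state[max(ka, kb)] += pa[ka] * pb[kb]

p_system_fails = sum(sys_state[:n])  # fails somewhere within the horizon
```

In a real BN tool the max/min logic would be encoded as conditional probability tables, and inference would exploit the network structure instead of full enumeration.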

  13. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdown at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
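
    The two ingredients above (a flow-dependent breakdown probability and a hazard model for duration) can be sketched as follows. The logistic form, the Weibull duration, and every parameter value are stand-ins chosen for illustration; the paper's calibrated functional forms may differ.

```python
import math
import random

random.seed(42)

def breakdown_prob(flow, capacity=2000.0, steepness=0.005):
    """Illustrative logistic model: P(breakdown) rises with flow (veh/h/lane)."""
    return 1.0 / (1.0 + math.exp(-steepness * (flow - capacity)))

def breakdown_duration():
    """Weibull duration in minutes; shape > 1 gives an increasing hazard rate."""
    shape, scale = 1.5, 20.0
    return scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)

# Expected extra delay per period at a given flow level, by simulation.
delays = [breakdown_duration() if random.random() < breakdown_prob(2100.0) else 0.0
          for _ in range(10000)]
mean_delay = sum(delays) / len(delays)
```

Repeating the simulation across flow levels yields a travel time distribution rather than a point estimate, which is the basis for the reliability measures the paper targets.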

  14. Business Cases for Microgrids: Modeling Interactions of Technology Choice, Reliability, Cost, and Benefit

    Science.gov (United States)

    Hanna, Ryan

    Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids--which are insular and autonomous power networks embedded within the bulk grid--stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among many others. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as two primary drivers of market growth--that of providing energy services (similar to an electric utility) as well as reliability service to customers within. Prototypical first adopters are modeled--using an existing model to analyze energy services and a new model that couples that analysis with one of reliability--to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California. Results indicate that there are, at present, co-benefits for emissions reductions when customers

  15. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS LINAC (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using the Risk Spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with the SNS operational data. This paper presents the main results and conclusions, focusing on identifying design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS LINAC parts/systems are: 1) the SCL (superconducting linac) and the front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; 2) the RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Sufficient diagnostics have to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function.
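
    The payoff of the redundancy recommendation can be sketched with steady-state availability arithmetic. The MTBF/MTTR figures below are invented for illustration and are not SNS or MYRRHA data.

```python
# Illustrative steady-state availability: a single repairable unit (e.g. one
# RF station) vs. a 1-out-of-2 hot-standby pair with independent repair.
mtbf_h = 1000.0  # mean time between failures (hours) -- assumed
mttr_h = 8.0     # mean time to repair (hours) -- assumed

a_single = mtbf_h / (mtbf_h + mttr_h)
u_single = 1.0 - a_single

a_redundant = 1.0 - u_single ** 2  # system down only when both units are down

hours_per_year = 8760.0
downtime_single = u_single * hours_per_year
downtime_redundant = (1.0 - a_redundant) * hours_per_year
```

With these numbers, roughly 70 hours of annual downtime for a single unit shrinks to well under an hour for the redundant pair, which is why fail-over redundancy dominates the recommendations for ADS-class reliability targets.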

  16. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model suffers an even worse combinatorial explosion of node states than the calculation on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition of the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our construction of the OBDD_Multicast avoids the problem of invalid expansion, reducing the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of WSN reliability analysis and has a lower running time than Xing's OBDD-based (ordered binary decision diagram) algorithm.
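
    To make the multicast reliability problem concrete: the source must reach every target simultaneously. The brute-force sketch below enumerates all link states, which is exactly the exponential blow-up that OBDD-based methods avoid, but it serves as a correctness oracle on tiny networks. The topology and link probabilities are invented.

```python
from itertools import product

# Toy WSN: source s must reach BOTH targets (multicast reliability).
edges = {("s", "a"): 0.9, ("s", "b"): 0.9, ("a", "t1"): 0.8,
         ("b", "t2"): 0.8, ("a", "b"): 0.7}
source, targets = "s", {"t1", "t2"}

def reachable(up_edges):
    """Nodes reachable from the source over the operational (up) links."""
    seen, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for x, y in up_edges:
            v = y if x == u else x if y == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

edge_list = list(edges)
rel = 0.0
for states in product([0, 1], repeat=len(edge_list)):  # all 2^|E| link states
    p, up = 1.0, []
    for e, s in zip(edge_list, states):
        p *= edges[e] if s else 1.0 - edges[e]
        if s:
            up.append(e)
    if targets <= reachable(up):  # multicast success: every target reached
        rel += p
```

An OBDD-based algorithm would instead build a shared decision diagram over the link variables, so common subnetworks are evaluated once rather than 2^|E| times.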

  17. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    Science.gov (United States)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages to other group members in a distributed environment, using an underlying IP Multicast medium, even in the case of reformations. A distributed application can use the various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  18. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

    This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS (reactor protection system) with consideration of undetected software faults. Software reliability analysis of safety-critical software has been challenging: despite a huge effort spent on developing a large number of software reliability models, no consensus has yet been reached on an appropriate modeling methodology. However, it is realized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of the software would provide better ground for reliability estimation of safety-critical software. Digitalization of the reactor protection systems of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world because digital I and C systems have many better technical features, such as easier configurability and maintainability, than analog I and C systems. Digital I and C systems are also drift-free, and the incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs are established and practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, digitalization of the I and C systems in NPPs introduces new challenges and uncertainty into the reliability analysis methods for digital systems/components because software failure mechanisms are still unclear.

  19. Reliability models for Space Station power system

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kim, Y.; Wagner, H.

    1987-01-01

    This paper presents a methodology for the reliability evaluation of Space Station power system. The two options considered are the photovoltaic system and the solar dynamic system. Reliability models for both of these options are described along with the methodology for calculating the reliability indices.

  20. Reliable software systems via chains of object models with provably correct behavior

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This work addresses specification and design of reliable safety-critical systems, such as nuclear reactor control systems. Reliability concerns are addressed in complementary fashion by different fields: reliability engineers build software reliability models, safety engineers focus on prevention of potential harmful effects of systems on the environment, and software/hardware correctness engineers focus on production of reliable systems on the basis of mathematical proofs. The authors think that correctness may be a crucial guiding issue in the development of reliable safety-critical systems. However, purely formal approaches are not adequate for the task, because they neglect the connection with the informal customer requirements. The authors alleviate this as follows. First, on the basis of the requirements, they build a model of the system's interactions with the environment, where the system is viewed as a black box. They will provide foundations for automated tools which will (a) demonstrate to the customer that all of the scenarios of system behavior are present in the model, (b) uncover scenarios not present in the requirements, and (c) uncover inconsistent scenarios. The developers work with the customer until the black-box model no longer possesses scenarios of kinds (b) and (c). Second, the authors build a chain of several increasingly detailed models, where the first model is the black-box model and the last model serves to automatically generate proved executable code. The behavior of each model is proved to conform to the behavior of the previous one. Each model is built as a cluster of interactive concurrent objects, which allows both top-down and bottom-up development

  1. Reliability Model of Power Transformer with ONAN Cooling

    OpenAIRE

    M. Sefidgaran; M. Mirzaie; A. Ebrahimzadeh

    2010-01-01

    Reliability of a power system is considerably influenced by its equipment. Power transformers are among the most critical and expensive equipment of a power system, and their proper function is vital for the substations and utilities. Therefore, the reliability model of a power transformer is very important in the risk assessment of engineering systems. This model shows the characteristics and functions of a transformer in the power system. In this paper the reliability model...

  2. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling, to the data collection and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements which captures both functional and nonfunctional requirements and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information such as results of developers' testing, historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.

  3. Reliability of multi-model and structurally different single-model ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Yokohata, Tokuta [National Institute for Environmental Studies, Center for Global Environmental Research, Tsukuba, Ibaraki (Japan); Annan, James D.; Hargreaves, Julia C. [Japan Agency for Marine-Earth Science and Technology, Research Institute for Global Change, Yokohama, Kanagawa (Japan); Collins, Matthew [University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter (United Kingdom); Jackson, Charles S.; Tobis, Michael [The University of Texas at Austin, Institute of Geophysics, 10100 Burnet Rd., ROC-196, Mail Code R2200, Austin, TX (United States); Webb, Mark J. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-08-15

    The performance of several state-of-the-art climate model ensembles, including two multi-model ensembles (MMEs) and four structurally different (perturbed parameter) single model ensembles (SMEs), are investigated for the first time using the rank histogram approach. In this method, the reliability of a model ensemble is evaluated from the point of view of whether the observations can be regarded as being sampled from the ensemble. Our analysis reveals that, in the MMEs, the climate variables we investigated are broadly reliable on the global scale, with a tendency towards overdispersion. On the other hand, in the SMEs, the reliability differs depending on the ensemble and variable field considered. In general, the mean state and historical trend of surface air temperature, and the mean state of precipitation, are reliable in the SMEs. However, variables such as sea level pressure or top-of-atmosphere clear-sky shortwave radiation do not cover a sufficiently wide range in some SMEs. It is not possible to assess whether this is a fundamental feature of SMEs generated with a particular model, or a consequence of the algorithm used to select and perturb the values of the parameters. As under-dispersion is a potentially more serious issue when using ensembles to make projections, we recommend the application of rank histograms to assess reliability when designing and running perturbed physics SMEs. (orig.)
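
    The rank histogram diagnostic used above is simple to compute: for each observation, count how many ensemble members fall below it. The synthetic data below is an assumption illustrating the "reliable" case, not climate model output.

```python
import random

random.seed(1)

def rank_histogram(observations, ensembles):
    """Count each observation's rank among its ensemble members. A flat
    histogram suggests a reliable ensemble; U-shaped indicates underdispersion
    and dome-shaped indicates overdispersion."""
    counts = [0] * (len(ensembles[0]) + 1)
    for obs, members in zip(observations, ensembles):
        counts[sum(1 for m in members if m < obs)] += 1
    return counts

# Synthetic "reliable" case: observations drawn from the members' distribution.
n_members, n_cases = 9, 5000
ens = [[random.gauss(0.0, 1.0) for _ in range(n_members)] for _ in range(n_cases)]
obs = [random.gauss(0.0, 1.0) for _ in range(n_cases)]
hist = rank_histogram(obs, ens)
```

With 9 members there are 10 rank bins, each expected to hold about a tenth of the cases when the ensemble is reliable; systematic deviations from flatness are the signature of the dispersion problems discussed in the abstract.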

  4. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," Transportation Research Record: Journal of the Transportation Research Board, n 2188, pp. 46-54. 2. Park S.,...

  5. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    A new technique estimates the cost of improvement in reliability for a complex system. The model format/approach depends on the use of subsystem cost-estimating relationships (CERs) in devising a cost-effective policy. The proposed methodology should have application in a broad range of engineering management decisions.

  6. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, as well as treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by means of several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance for reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
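
    The contrast between the traditional independent-series model and a load-strength interference model with dependent failures can be sketched numerically. The normal load/strength parameters and tooth count below are invented for illustration; the paper's formulation is more detailed.

```python
import math
import random

random.seed(7)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def engagement_reliability(mu_s, sd_s, mu_l, sd_l):
    """P(strength > load) for independent normal strength and load."""
    return phi((mu_s - mu_l) / math.sqrt(sd_s**2 + sd_l**2))

r_tooth = engagement_reliability(600.0, 40.0, 450.0, 30.0)  # MPa, illustrative

# Traditional series model: n teeth treated as independent components.
n_teeth = 20
r_indep = r_tooth ** n_teeth

# Shared-load model: one load realization stresses every tooth, so tooth
# failures are statistically dependent; condition on the load and average.
trials = 100000
acc = 0.0
for _ in range(trials):
    load = random.gauss(450.0, 30.0)
    acc += phi((600.0 - load) / 40.0) ** n_teeth
r_dependent = acc / trials
```

Conditioning on the common load gives a higher system reliability than the naive independent product, which is the qualitative point behind the paper's critique of the traditional series model.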

  7. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

    A wireless mobile ad hoc network (MANET) is a collection of solely independent nodes (that can move randomly around the area of deployment) making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. A MANET is modelled using geometric random graphs rather than random graphs because link existence in a MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability, the reliability of these networks also depends on the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, in this paper, a propagation-based link reliability model, rather than a binary model, with nodes following a known failure distribution, is proposed to evaluate the network reliability (2TRm, ATRm and AoTRm) of a MANET through Monte Carlo simulation. The method is illustrated with an application and some imperative results are also presented
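
    The difference between a binary link model and a propagation-based one can be sketched with a log-distance path-loss model plus log-normal shadowing. All parameter values (reference distance, path-loss exponent, shadowing spread, fade margin) are illustrative assumptions, not the paper's calibration.

```python
import math

def link_prob(d, d_ref=50.0, ple=3.0, shadow_sd_db=6.0, margin_db=15.0):
    """P(link up) at distance d metres: log-distance path loss plus log-normal
    shadowing, so success probability decays smoothly with distance instead of
    being a 0/1 function of transmission range."""
    loss_db = 10.0 * ple * math.log10(d / d_ref)  # mean excess loss vs. d_ref
    z = (margin_db - loss_db) / shadow_sd_db      # remaining fade margin in std devs
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_near = link_prob(30.0)
d_half = 50.0 * 10 ** (15.0 / 30.0)  # distance where mean loss equals the margin
p_mid = link_prob(d_half)
p_far = link_prob(400.0)
```

Feeding these per-link probabilities (instead of 0/1 link states) into a Monte Carlo connectivity simulation yields the propagation-based network reliability estimates the paper proposes.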

  8. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit for NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  9. Reliability Measure Model for Assistive Care Loop Framework Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Venki Balasubramanian

    2010-01-01

    Full Text Available Body area wireless sensor networks (BAWSNs) are time-critical systems that rely on the collective data of a group of sensor nodes. Reliable data received at the sink is based on the collective data provided by all the source sensor nodes, not on individual data. Unlike in conventional reliability, retransmission is inapplicable in a BAWSN, as it would only lead to elapsed data arrivals that are not acceptable for time-critical applications. Time-driven applications require high data reliability to maintain detection and responses. Hence, the transmission reliability for a BAWSN should be based on the critical time. In this paper, we develop a theoretical model to measure a BAWSN's transmission reliability based on the critical time. The proposed model is evaluated through simulation and then compared with experimental results obtained in our existing Active Care Loop Framework (ACLF). We further show the effect of the sink buffer on transmission reliability after a detailed study of various other co-existing parameters.

  10. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c...

  11. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

    Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: Materials/Device Degradation; Degradation Kinetics; Time-To-Failure Modeling; Statistical Tools; Failure-Rate Modeling; Accelerated Testing; Ramp-To-Failure Testing; Important Failure Mechanisms for Integrated Circuits; Important Failure Mechanisms for Mechanical Components; Conversion of Dynamic Stresses into Static Equivalents; Small Design Changes Producing Major Reliability Improvements; Screening Methods; Heat Generation and Dissipation; Sampling Plans and Confidence Intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...
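
    A staple of the accelerated-testing material listed above is the Arrhenius acceleration factor for thermally activated failure mechanisms. The activation energy and temperatures below are illustrative values, not figures from the book.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures (C):
    AF = exp[(Ea / k) * (1/T_use - 1/T_stress)] with temperatures in kelvin."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_B_EV) * (1.0 / t_use - 1.0 / t_stress))

# Illustrative: Ea = 0.7 eV, 55 C use conditions vs. 125 C stress testing.
af = acceleration_factor(0.7, 55.0, 125.0)
equivalent_use_hours = 1000.0 * af  # what 1000 h of stress testing "buys"
```

An acceleration factor of this magnitude is what lets a few weeks of elevated-temperature testing stand in for years of field life in time-to-failure modeling.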

  12. Stochastic models in reliability and maintenance

    CERN Document Server

    2002-01-01

    Our daily lives are sustained by high-technology systems; computer systems are typical examples. We enjoy modern life by using many such systems. More importantly, we have to maintain these systems without failure, yet we cannot predict when they will fail or how to repair them without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze future behavior quantitatively. Reliability and maintainability technologies are of great interest and importance for the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability of systems by using stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintainability." This book consists of 12 chapters on the theme above from different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...

  13. A possibilistic uncertainty model in classical reliability theory

    International Nuclear Information System (INIS)

    De Cooman, G.; Capelle, B.

    1994-01-01

    The authors argue that a possibilistic uncertainty model can be used to represent linguistic uncertainty about the states of a system and of its components. Furthermore, the basic properties of the application of this model to classical reliability theory are studied. The notion of the possibilistic reliability of a system or a component is defined. Based on the concept of a binary structure function, the important notion of a possibilistic function is introduced. It allows one to calculate the possibilistic reliability of a system in terms of the possibilistic reliabilities of its components.

  14. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of failure times in a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are examined.

  15. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restrictive assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In previous models, there is a restrictive assumption that all components of a subsystem must be homogeneous. • The presented model allows the subsystems' components to be non-homogeneous where required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
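The series-parallel reliability arithmetic underlying such models can be sketched in a few lines. The sketch below is illustrative only: the component catalogue, costs, and budget are hypothetical, and a small exhaustive search stands in for the paper's genetic algorithm.

```python
from itertools import combinations_with_replacement

def subsystem_reliability(components):
    # Parallel redundancy: the subsystem fails only if every component fails.
    unrel = 1.0
    for r, _cost in components:
        unrel *= 1.0 - r
    return 1.0 - unrel

def system_reliability(subsystems):
    # Series structure: the system works only if every subsystem works.
    rel = 1.0
    for comps in subsystems:
        rel *= subsystem_reliability(comps)
    return rel

# Hypothetical catalogue: (reliability, cost) options per subsystem.
CATALOGUE = [
    [(0.90, 3.0), (0.95, 5.0)],  # subsystem 1 component types
    [(0.85, 2.0), (0.92, 4.0)],  # subsystem 2 component types
]

def best_design(budget, max_redundancy=3):
    """Exhaustive search over mixed (non-homogeneous) redundancy allocations."""
    options = []
    for choices in CATALOGUE:
        opts = []
        for k in range(1, max_redundancy + 1):
            # Multisets allow mixing different component types in one subsystem.
            opts.extend(combinations_with_replacement(choices, k))
        options.append(opts)
    best = None
    for s1 in options[0]:
        for s2 in options[1]:
            cost = sum(c for _, c in s1) + sum(c for _, c in s2)
            if cost > budget:
                continue
            rel = system_reliability([s1, s2])
            if best is None or rel > best[0]:
                best = (rel, cost, (s1, s2))
    return best

rel, cost, design = best_design(budget=12.0)
```

Because the enumeration ranges over multisets of component types, the search naturally covers designs that mix cheap and expensive components within one subsystem, which is exactly the freedom the homogeneity assumption forbids.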

  16. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    International Nuclear Information System (INIS)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-01-01

    The aim of this paper is to present the reliability analysis and prediction of mixed-mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure, so as to increase design life and eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle rotating bending and torsional stresses. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criterion for the bending state would be higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was made to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model is able to generate data closely matching field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed-mode loading.
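As a rough illustration of the two-state idea, the following sketch computes the long-run fraction of load cycles spent in the bending-dominated and torsion-dominated damage states. The transition probabilities are hypothetical, not the paper's fitted values; bending is weighted as the more likely (more severe) state.

```python
# Two-state Markov chain: state 0 = bending-dominated damage,
# state 1 = torsion-dominated damage (hypothetical probabilities).
P = [[0.7, 0.3],
     [0.4, 0.6]]

def step(dist, P):
    # One transition: new_j = sum_i dist_i * P[i][j]
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

def stationary(P, iters=200):
    # Power iteration converges to the stationary distribution
    # for an irreducible, aperiodic chain like this one.
    dist = [1.0, 0.0]
    for _ in range(iters):
        dist = step(dist, P)
    return dist

pi = stationary(P)  # pi[0] ≈ 0.571: most cycles sit in the bending state
```

For this P the stationary distribution solves 0.3·π0 = 0.4·π1, giving π0 = 4/7, consistent with bending being the dominant failure mode.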

  17. Reliability Modeling of Electromechanical System with Meta-Action Chain Methodology

    Directory of Open Access Journals (Sweden)

    Genbao Zhang

    2018-01-01

    Full Text Available To establish a more flexible and accurate reliability model, reliability modeling and a solving algorithm based on the meta-action chain concept are used in this work. Instead of estimating the reliability of the whole system only in the standard operating mode, it adopts the structure chain and the operating action chain for system reliability modeling. The failure information and structure information for each component are integrated into the model to overcome the fixed assumptions of traditional modeling. In industrial applications, there may be different operating modes for a multicomponent system. The meta-action chain methodology can estimate system reliability under different operating modes by modeling the components with a variety of failure sensitivities. The approach has been validated by computing several electromechanical system cases. The results indicate that the process improves system reliability estimation and is an effective tool for solving the reliability estimation problem for systems under various operating modes.

  18. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L.

    2005-07-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operating experience, and -) accelerated aging tests. In order to introduce the time-aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example), calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating-experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it was demonstrated that a combination of operating-experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continued operation of instrumentation and control systems.

  19. Practical applications of age-dependent reliability models and analysis of operational data

    International Nuclear Information System (INIS)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L.

    2005-01-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operating experience, and -) accelerated aging tests. In order to introduce the time-aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example), calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating-experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it was demonstrated that a combination of operating-experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continued operation of instrumentation and control systems.

  20. Inter-Rater Reliability of Provider Interpretations of Irritable Bowel Syndrome Food and Symptom Journals.

    Science.gov (United States)

    Zia, Jasmine; Chung, Chia-Fang; Xu, Kaiyuan; Dong, Yi; Schenk, Jeanette M; Cain, Kevin; Munson, Sean; Heitkemper, Margaret M

    2017-11-04

    There are currently no standardized methods for identifying trigger food(s) from irritable bowel syndrome (IBS) food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers' interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-di-monosaccharides and polyols, high-calorie, gluten, caffeine, high-fiber) were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff's α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07). Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s) (range 3-7) to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers.
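Krippendorff's α can be computed directly from its definition, α = 1 − D_o/D_e (observed over expected disagreement). The sketch below is a minimal nominal-metric implementation on made-up ratings; the study's actual likelihood ratings were ordinal, which would call for a different difference function.

```python
from collections import Counter

def krippendorff_alpha_nominal(ratings):
    """ratings: one list of ratings per unit (None = missing value).
    Nominal-metric Krippendorff's alpha = 1 - D_o / D_e."""
    units = [[v for v in unit if v is not None] for unit in ratings]
    units = [u for u in units if len(u) >= 2]   # keep pairable units only
    n = sum(len(u) for u in units)              # total pairable values
    # Observed disagreement: disagreeing ordered pairs within each unit,
    # each unit weighted by 1/(m_u - 1).
    d_o = sum(
        sum(1 for a in u for b in u if a != b) / (len(u) - 1) for u in units
    ) / n
    # Expected disagreement from the pooled value frequencies.
    freq = Counter(v for u in units for v in u)
    d_e = sum(freq[c] * freq[k] for c in freq for k in freq if c != k)
    d_e /= n * (n - 1)
    return 1.0 - d_o / d_e

# Two hypothetical raters scoring four journals; they disagree on one unit.
alpha = krippendorff_alpha_nominal([[1, 1], [1, 1], [1, 2], [2, 2]])  # ≈ 0.53
```

Perfect agreement yields α = 1, chance-level agreement α ≈ 0, which makes the study's average α = 0.07 read as essentially chance agreement among providers.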

  1. Structural reliability analysis under evidence theory using the active learning kriging model

    Science.gov (United States)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.

  2. Reliability and Maintainability model (RAM) user and maintenance manual. Part 2

    Science.gov (United States)

    Ebeling, Charles E.

    1995-01-01

    This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.

  3. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations; this requirement can be fulfilled by providing intermediate storage facilities and reserve capacities. In this report, a model based on the theory of Markov processes is described which allows the computation of reliability characteristics of waste management facilities containing intermediate storage. The application of the model is demonstrated by an example. (orig.) [de
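The decoupling effect of intermediate storage can be illustrated with a small Monte Carlo sketch. This is not the report's analytical Markov model; the stage availabilities and buffer capacity below are hypothetical.

```python
import random

def simulate_line(a1, a2, buffer_cap, steps, seed=1):
    """Monte Carlo throughput of a two-stage line with an intermediate store.
    a1, a2: per-step availability of the upstream/downstream stage."""
    rng = random.Random(seed)
    buf, produced = 0, 0
    for _ in range(steps):
        up1 = rng.random() < a1
        up2 = rng.random() < a2
        if buffer_cap == 0:
            # No store: output requires both stages up simultaneously.
            if up1 and up2:
                produced += 1
        else:
            if up2 and buf > 0:           # downstream draws from the store
                buf -= 1
                produced += 1
            if up1 and buf < buffer_cap:  # upstream refills the store
                buf += 1
    return produced / steps

# Without storage the stages are tightly coupled (throughput ~ a1*a2);
# a modest store lifts throughput toward min(a1, a2).
coupled = simulate_line(0.9, 0.9, 0, 50_000)
buffered = simulate_line(0.9, 0.9, 5, 50_000)
```

With a1 = a2 = 0.9, the coupled line delivers roughly 0.81 of full throughput, while even a five-slot store recovers most of the shortfall, which is the qualitative point behind providing intermediate storage and reserve capacity.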

  4. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

    The purpose of this paper is to present the innovative reliability modeling of the Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by the PETROBRAS Gas and Power Department and Det Norske Veritas, with the objective of evaluating security of supply of the 2010 gas network design conceived to connect the Brazilian Northeast and Southeast regions. To provide best-in-class analysis, state-of-the-art software was used to quantify the availability and efficiency of the overall network and its individual components.

  5. Stochastic reliability and maintenance modeling essays in honor of Professor Shunji Osaki on his 70th birthday

    CERN Document Server

    Nakagawa, Toshio

    2013-01-01

    In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of, and ongoing research in, stochastic reliability and maintenance modeling. Covering associated application areas such as dependable computing, performance evaluation, software engineering, and communication engineering, distinguished researchers review and build on the contributions made over the last four decades by Professor Shunji Osaki. Fundamental yet significant research results are presented and discussed clearly alongside new ideas and topics on stochastic reliability and maintenance modeling to inspire future research. Across 15 chapters readers gain the knowledge and understanding to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and workers, managers and engineers engaged in computer, maintenance and management wo...

  6. Inter-Rater Reliability of Provider Interpretations of Irritable Bowel Syndrome Food and Symptom Journals

    Directory of Open Access Journals (Sweden)

    Jasmine Zia

    2017-11-01

    Full Text Available There are currently no standardized methods for identifying trigger food(s) from irritable bowel syndrome (IBS) food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers’ interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-di-monosaccharides and polyols, high-calorie, gluten, caffeine, high-fiber) were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff’s α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07). Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s) (range 3–7) to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers.

  7. Evaluating the reliability of predictions made using environmental transfer models

    International Nuclear Information System (INIS)

    1989-01-01

    The development and application of mathematical models for predicting the consequences of releases of radionuclides into the environment, from normal operations in the nuclear fuel cycle and in hypothetical accident conditions, have increased dramatically in the last two decades. This Safety Practice publication has been prepared to provide guidance on the available methods for evaluating the reliability of environmental transfer model predictions. It provides a practical introduction to the subject, and particular emphasis has been given to worked examples in the text. It is intended to supplement existing IAEA publications on environmental assessment methodology. 60 refs, 17 figs, 12 tabs

  8. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  9. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  10. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate for the EBS is not expected to decrease with time. In this paper, we use an IFR distribution to develop a reliability model for the EBS
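In the Weibull model referenced here, the shape parameter β controls the failure-rate behavior: β > 1 gives an increasing failure rate (IFR), β < 1 a decreasing one (DFR). A minimal sketch with illustrative parameter values:

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta) for a Weibull(shape=beta, scale=eta) lifetime."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """h(t) = (beta/eta) * (t/eta)^(beta-1): increasing in t iff beta > 1."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# IFR case (beta = 2): the hazard grows with time, matching the argument
# that an engineered barrier system's failure rate does not decrease.
early = weibull_hazard(1.0, 2.0, 10.0)
late = weibull_hazard(5.0, 2.0, 10.0)  # later time, higher hazard
```

Setting β = 1 recovers the exponential distribution with constant hazard 1/η, the boundary case between the IFR and DFR regimes.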

  11. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate for the EBS is not expected to decrease with time. In this paper, an IFR distribution is used to develop a reliability model for the EBS

  12. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
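Once minimal cut sets and basic-event probabilities are in hand, the top-event (system failure) probability follows by inclusion-exclusion over the cut-set events. The fault tree and probabilities below are invented for illustration; this sketch does not involve Uppaal or UppaalSMC.

```python
from itertools import combinations

def top_event_probability(cut_sets, q):
    """Exact P(top event) from minimal cut sets by inclusion-exclusion,
    assuming independent basic events with failure probabilities q[e]."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        sign = -1.0 if r % 2 == 0 else 1.0
        for combo in combinations(cut_sets, r):
            # Intersection of cut-set events: all basic events in the
            # union of the chosen cut sets must fail.
            events = set().union(*combo)
            prob = 1.0
            for e in events:
                prob *= q[e]
            total += sign * prob
    return total

# Hypothetical fault tree: two minimal cut sets over three basic events.
q = {"A": 0.01, "B": 0.02, "C": 0.05}
unrel = top_event_probability([{"A", "B"}, {"C"}], q)
# P(A&B) + P(C) - P(A&B&C) = 0.0002 + 0.05 - 0.00001
```

For large trees the exponential number of inclusion-exclusion terms is usually avoided with the rare-event approximation (sum of cut-set probabilities), which the exact value here closely matches.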

  13. Using the Weibull distribution reliability, modeling and inference

    CERN Document Server

    McCool, John I

    2012-01-01

    Understand and utilize the latest developments in Weibull inferential methods While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution

  14. Model-based human reliability analysis: prospects and requirements

    International Nuclear Information System (INIS)

    Mosleh, A.; Chang, Y.H.

    2004-01-01

    Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for, and a basis for developing requirements for, the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; owing to its complexity and input requirements, it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and computer implementation.

  15. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

    Full Text Available Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability and to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum supply plan, using an economic criterion together with a model for evaluating the probability of failure-free operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated on this basis. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business-process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability was presented.
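A toy version of the planning problem — minimum-cost supplier selection subject to a failure-free-probability target — can be solved by enumeration when the network is small. The catalogue of suppliers, costs, and delivery probabilities below is hypothetical; the paper reduces the realistic version of this problem to linear programming.

```python
from itertools import product

# Hypothetical options per supply channel: (cost, prob. of failure-free delivery).
CHANNELS = [
    [(10.0, 0.95), (14.0, 0.99)],  # channel 1 suppliers
    [(8.0, 0.90), (12.0, 0.97)],   # channel 2 suppliers
    [(6.0, 0.92), (9.0, 0.98)],    # channel 3 suppliers
]

def cheapest_plan(min_reliability):
    """Pick one supplier per channel: minimize cost subject to the chain's
    failure-free probability (product over channels) meeting the target."""
    best = None
    for plan in product(*CHANNELS):
        cost = sum(c for c, _ in plan)
        rel = 1.0
        for _, p in plan:
            rel *= p  # series chain: every channel must deliver
        if rel >= min_reliability and (best is None or cost < best[0]):
            best = (cost, rel, plan)
    return best

cost, rel, plan = cheapest_plan(0.90)
```

The multiplicative reliability constraint becomes linear after taking logarithms, which is one way such a selection problem maps onto the linear-programming formulation the abstract mentions.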

  16. Role of frameworks, models, data, and judgment in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hannaman, G W

    1986-05-01

    Many advancements in the methods for treating human interactions in PRA studies have occurred in the last decade. These advancements appear to increase the capability of PRAs to extend beyond just the assessment of the human's importance to safety. However, variations in the application of these advanced models, data, and judgments in recent PRAs make quantitative comparisons among studies extremely difficult. This uncertainty in the analysis diminishes the usefulness of the PRA study for upgrading procedures, enhancing training, simulator design, technical specification guidance, and for aiding the design of the man-machine interface. Hence, there is a need for a framework to guide analysts in incorporating human interactions into PRA systems analyses so that future users of a PRA study will have a clear understanding of the approaches, models, data, and assumptions employed in the initial study. This paper describes the role of the systematic human action reliability procedure (SHARP) in providing a road map through the complex terrain of human reliability, one that promises to improve the reproducibility of such analyses in the areas of selecting models, data, representations, and assumptions. Also described is the role that a human cognitive reliability model can have in collecting data from simulators and helping analysts assign human reliability parameters in a PRA study. Use of these systematic approaches to perform or upgrade existing PRAs promises to make PRA studies more useful as risk management tools.

  17. The application of cognitive models to the evaluation and prediction of human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.; Reason, J.T.

    1986-01-01

    The first section of the paper provides a brief overview of a number of important principles relevant to human reliability modeling that have emerged from cognitive models, and presents a synthesis of these approaches in the form of a Generic Error Modeling System (GEMS). The next section illustrates the application of GEMS to some well known nuclear power plant (NPP) incidents in which human error was a major contributor. The way in which design recommendations can emerge from analyses of this type is illustrated. The third section describes the use of cognitive models in the classification of human errors for prediction and data collection purposes. The final section addresses the predictive modeling of human error as part of human reliability assessment in Probabilistic Risk Assessment

  18. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  19. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  20. Reliability Modeling of Double Beam Bridge Crane

    Science.gov (United States)

    Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li

    2018-05-01

    This paper briefly describes the structure of the double beam bridge crane and defines its basic parameters. According to the structure and system division of the double beam bridge crane, the reliability architecture of the double beam bridge crane system is proposed, and the reliability mathematical model is constructed.

  1. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    Science.gov (United States)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the identified accident sequences, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied to estimate the human error probability (HEP) of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. Response surface and direct Monte Carlo simulation with Latin hypercube sampling were applied to estimate the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was then estimated from these two competing quantities. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
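
    The reliability physics model described above, in which a human error occurs when the operators' performance time exceeds the available phenomenological time, can be sketched with a small Monte Carlo estimate. The lognormal distributions and their parameters below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical lognormal models (minutes): time available before core damage
# (phenomenological time) vs. time the operators need (performance time).
t_phen = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n)
t_perf = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=n)

# A human error occurs when the performance time exceeds the
# phenomenological time; the HEP is the probability of that event.
hep = float(np.mean(t_perf > t_phen))
```

    With these illustrative parameters the two distributions overlap in their tails, so the HEP is the probability mass of that overlap rather than a point estimate from a single scenario.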

  2. Space Vehicle Reliability Modeling in DIORAMA

    Energy Technology Data Exchange (ETDEWEB)

    Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-12

    When modeling system performance of space-based detection systems it is important to consider spacecraft reliability. As space vehicles age, their components become prone to failure for a variety of reasons, such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust their fuel supplies. Typically, failure is divided into two categories: engineering mistakes and technology surprise. This document reports on a method of simulating space vehicle reliability in the DIORAMA framework.

  3. Standardized Patients Provide a Reliable Assessment of Athletic Training Students' Clinical Skills

    Science.gov (United States)

    Armstrong, Kirk J.; Jarriel, Amanda J.

    2016-01-01

    Context: Providing students with reliable, objective feedback regarding their clinical performance is of great value for ongoing clinical skill assessment. Since a standardized patient (SP) is trained to consistently portray the case, students can be assessed and receive immediate feedback within the same clinical encounter; however, no research, to our…

  4. Stochastic modeling for reliability shocks, burn-in and heterogeneous populations

    CERN Document Server

    Finkelstein, Maxim

    2013-01-01

    Focusing on shocks modeling, burn-in and heterogeneous populations, Stochastic Modeling for Reliability naturally combines these three topics in the unified stochastic framework and presents numerous practical examples that illustrate recent theoretical findings of the authors.  The populations of manufactured items in industry are usually heterogeneous. However, the conventional reliability analysis is performed under the implicit assumption of homogeneity, which can result in distortion of the corresponding reliability indices and various misconceptions. Stochastic Modeling for Reliability fills this gap and presents the basics and further developments of reliability theory for heterogeneous populations. Specifically, the authors consider burn-in as a method of elimination of ‘weak’ items from heterogeneous populations. The real life objects are operating in a changing environment. One of the ways to model an impact of this environment is via the external shocks occurring in accordance with some stocha...

  5. On New Cautious Structural Reliability Models in the Framework of imprecise Probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev V.; Kozine, Igor

    2010-01-01

    Uncertainty of parameters in engineering design has been modeled in different frameworks such as interval analysis, fuzzy set and possibility theories, random set theory and imprecise probability theory. The authors of this paper have for many years been developing new imprecise reliability models and generalizing conventional ones to imprecise probabilities. The theoretical setup employed for this purpose is imprecise statistical reasoning (Walley 1991), whose general framework is provided by upper and lower previsions (expectations). The appeal of this theory is its ability to capture both aleatory (stochastic) and epistemic uncertainty and the flexibility with which information can be represented. The previous research of the authors related to generalizing structural reliability models to imprecise statistical measures is summarized in Utkin & Kozine (2002) and Utkin (2004...

  6. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  7. Learning reliable manipulation strategies without initial physical models

    Science.gov (United States)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  8. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees was put forward, providing a way to establish a fault tree rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to a fault tree is correct. The MFM is easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)

  9. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

    Full Text Available Reliability analysis for industrial maintenance is increasingly demanded by industry worldwide. Indeed, modern manufacturing facilities are equipped with data acquisition and monitoring systems, and these systems generate a large volume of data. These data can be used to inform future decisions affecting the state of the exploited equipment. However, in most practical cases the data used in reliability modeling are incomplete or unreliable. In this context, to analyze the reliability of an oil pump, this work proposes to examine and treat the incomplete, incorrect or aberrant data used in the reliability modeling of the pump. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.

  10. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for analysis of failure intensity. We also study a beta-binomial model for analysis of failure probability. The parameters of these three models are estimated by the method of matching moments, and in the case of the gamma-Poisson and beta-binomial models also by the maximum likelihood method. Many of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method.
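
    As a sketch of the maximum likelihood estimation with censored data mentioned above, the following fits a two-parameter Weibull model to right-censored lifetimes using the standard profile-likelihood equations, solved by bisection. The data are synthetic and the censoring time is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
shape_true, scale_true = 1.5, 1000.0
t = scale_true * rng.weibull(shape_true, 500)   # true lifetimes (hours)
c = 1200.0                                      # type-I censoring time
obs = np.minimum(t, c)
failed = t <= c                                 # False => right-censored
r = int(failed.sum())
ln_obs = np.log(obs)

def score(beta):
    # Derivative of the profile log-likelihood w.r.t. the shape parameter:
    # sum(t^b ln t)/sum(t^b) - mean(ln t over failures) - 1/b = 0 at the MLE.
    tb = obs ** beta
    return tb @ ln_obs / tb.sum() - ln_obs[failed].mean() - 1.0 / beta

lo, hi = 0.1, 10.0
for _ in range(80):               # bisection: score is increasing in beta
    mid = 0.5 * (lo + hi)
    if score(mid) < 0:
        lo = mid
    else:
        hi = mid
beta_hat = 0.5 * (lo + hi)
# Closed-form scale estimate given the shape estimate.
eta_hat = (np.sum(obs ** beta_hat) / r) ** (1.0 / beta_hat)
```

    The shape equation comes from setting the partial derivatives of the censored Weibull log-likelihood to zero; the censored observations contribute only through their survival terms.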

  11. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. When the size of a software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults detected and removed through each debugging becomes small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of the stochastic differential equation, performs comparatively better than the existing NHPP-based models.
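
    The continuous-state fault detection process described above can be illustrated by simulating a simple Itô-type SDE of the same flavor with the Euler-Maruyama scheme. The drift and diffusion forms and all parameter values here are illustrative assumptions, not the paper's generalized Erlang model with logistic detection function.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, sigma = 100.0, 0.1, 0.05   # fault content, detection rate, noise (assumed)
dt, T = 0.01, 50.0
steps = int(T / dt)

# Euler-Maruyama simulation of dN = b(a - N)dt + sigma(a - N)dW:
# the cumulative number of detected faults N(t) rises toward the initial
# fault content a, with noise that shrinks as faults are removed.
n = np.zeros(steps + 1)
for k in range(steps):
    remaining = a - n[k]
    n[k + 1] = (n[k] + b * remaining * dt
                + sigma * remaining * np.sqrt(dt) * rng.standard_normal())
```

    Because both the drift and the diffusion are proportional to the remaining fault content, the simulated path flattens out near a, mimicking reliability growth late in testing.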

  12. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
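
    The fault-tree approach mentioned above, deriving system reliability from component reliability, can be sketched with two basic gates under an independence assumption; the device structure and component failure probabilities are hypothetical.

```python
# Failure probability of an OR gate: the event occurs if any input occurs
# (independent inputs).
def q_or(*qs):
    p = 1.0
    for q in qs:
        p *= 1.0 - q
    return 1.0 - p

# Failure probability of an AND gate: all inputs must occur.
def q_and(*qs):
    p = 1.0
    for q in qs:
        p *= q
    return p

# Hypothetical device: fails if both cooling fans fail, or the switch fails.
q_top = q_or(q_and(0.05, 0.05), 0.001)
```

    With component-level failure data in hand, such a tree lets the baseline model attribute top-event probability to each cause, which is the starting point for the optimization the report describes.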

  13. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step in establishing correspondences between local features, owing to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model to extract a reduced set of reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. This paper then proposes a reliable RANSAC framework using the preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.
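
    A minimal sketch of the idea, reducing the matching set before running RANSAC, is given below for a 2D line-fitting stand-in. The preprocessing rule used here (keeping points with below-median residual to a coarse least-squares fit) is an illustrative stand-in for the paper's preprocessing model, not its actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic correspondences: 70 inliers on y = 2x + 1 plus 30 gross outliers.
x = rng.uniform(0.0, 10.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, 100)
y[:30] += rng.uniform(5.0, 20.0, 30)

# Preprocessing stand-in: keep the half of the points with below-median
# residual to a coarse least-squares fit, giving a reduced set with a
# higher inlier ratio to sample from.
A = np.column_stack([x, np.ones_like(x)])
coarse, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = np.abs(A @ coarse - y)
reduced = np.flatnonzero(residual < np.median(residual))

# RANSAC: hypothesize from the reduced set, verify against all points.
best_inliers, best = 0, (0.0, 0.0)
for _ in range(50):
    i, j = rng.choice(reduced, size=2, replace=False)
    slope = (y[j] - y[i]) / (x[j] - x[i])
    intercept = y[i] - slope * x[i]
    inliers = int(np.sum(np.abs(slope * x + intercept - y) < 0.2))
    if inliers > best_inliers:
        best_inliers, best = inliers, (slope, intercept)
```

    Sampling hypotheses only from the reduced set raises the chance of an all-inlier minimal sample per iteration, which is the source of the speedup the paper reports.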

  14. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    Science.gov (United States)

    Michaelides, Stylianos

    Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face

  15. Interrater reliability of Violence Risk Appraisal Guide scores provided in Canadian criminal proceedings.

    Science.gov (United States)

    Edens, John F; Penson, Brittany N; Ruchensky, Jared R; Cox, Jennifer; Smith, Shannon Toney

    2016-12-01

    Published research suggests that most violence risk assessment tools have relatively high levels of interrater reliability, but recent evidence of inconsistent scores among forensic examiners in adversarial settings raises concerns about the "field reliability" of such measures. This study specifically examined the reliability of Violence Risk Appraisal Guide (VRAG) scores in Canadian criminal cases identified in the legal database, LexisNexis. Over 250 reported cases were located that made mention of the VRAG, with 42 of these cases containing 2 or more scores that could be submitted to interrater reliability analyses. Overall, scores were skewed toward higher risk categories. The intraclass correlation (ICC[A,1]) was .66, with pairs of forensic examiners placing defendants into the same VRAG risk "bin" in 68% of the cases. For categorical risk statements (i.e., low, moderate, high), examiners provided converging assessment results in most instances (86%). In terms of potential predictors of rater disagreement, there was no evidence for adversarial allegiance in our sample. Rater disagreement in the scoring of 1 VRAG item (Psychopathy Checklist-Revised; Hare, 2003), however, strongly predicted rater disagreement in the scoring of the VRAG (r = .58). (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Modeling the bathtub shape hazard rate function in terms of reliability

    International Nuclear Information System (INIS)

    Wang, K.S.; Hsu, F.S.; Liu, P.P.

    2002-01-01

    In this paper, a general form of bathtub-shape hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures as a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operating load decreases with time; it depends on the failure probability, 1-R. This representation denotes the memory characteristics of the second failure cause. Man-machine interference may lead to a positive effect on the failure rate due to learning and correction, or a negative one resulting from inappropriate human habits in system operation, etc. It is suggested that this item is correlated with the reliability, R, as well as the failure probability. Adaptation concerns continuous adjustment between mating subsystems. When a new system is set on duty, some hidden defects are exposed and eventually disappear; therefore the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); thus the overall failure behavior, governed by a number of parameters, is found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides for simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used in the validation of the proposed model. Satisfactory results are found from the comparisons
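
    A hedged sketch of such an additive hazard h(R) is given below. The four terms mirror the four mechanisms described above, but the particular functional forms and coefficients are illustrative assumptions, not the paper's fitted model.

```python
# Additive hazard rate in terms of reliability R in (0, 1]:
#   c0           - random failures (constant Poisson term)
#   c1*(1 - R)   - cumulative damage (grows as failure probability grows)
#   c2*R*(1 - R) - man-machine interference (correlated with both R and 1-R)
#   c3*R**k      - adaptation (large when the system is new, R near 1)
def hazard(R, c0=0.02, c1=0.5, c2=0.1, c3=0.3, k=8):
    return c0 + c1 * (1.0 - R) + c2 * R * (1.0 - R) + c3 * R ** k
```

    As R decays from 1 over the system lifetime, this h(R) is high for a new system (burn-in), drops to a minimum in mid-life, and rises again as R falls, reproducing the bathtub shape.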

  17. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions, using a structured continuous-time Markov chain (CTMC). The versatility of phase-type distributions enhances the flexibility and practicality of the models. By virtue of these benefits, studies in reliability engineering can be more advanced than previous ones. This study attempts to solve a redundancy allocation problem (RAP) using these new models. The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximation error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distribution is used for components. • Reliability model for nonrepairable system is developed using Markov chain. • System is composed of heterogeneous components. • Model provides the exact value of standby system reliability, not an approximation. • Redundancy allocation problem is used to show usefulness of this model.
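
    For a phase-type time-to-failure distribution with initial phase vector α and sub-generator T over the transient states, the reliability function is R(t) = α exp(Tt) 1. The following sketch evaluates this for an Erlang-2 example, a simple special case of phase type; the rate value is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

# Erlang-2 as a phase-type distribution: two sequential exponential
# phases with rate lam; alpha starts the chain in the first phase.
lam = 0.5
alpha = np.array([1.0, 0.0])
T = np.array([[-lam,  lam],
              [ 0.0, -lam]])

def reliability(t):
    # R(t) = alpha @ exp(T t) @ 1: the probability of still being in a
    # transient (working) phase at time t.
    return float(alpha @ expm(T * t) @ np.ones(2))
```

    For Erlang-2 this matches the closed form e^(-λt)(1 + λt); for richer phase structures the same matrix-exponential expression still applies, which is what makes the CTMC formulation flexible.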

  18. Experiment research on cognition reliability model of nuclear power plant

    International Nuclear Information System (INIS)

    Zhao Bingquan; Fang Xiang

    1999-01-01

    The objective of this paper is to improve operators' reliability in real nuclear power plants through simulation research on the cognitive reliability of nuclear power plant operators. The research method is to use a nuclear power plant simulator as the research platform and, taking the current international model of human cognitive reliability based on the three-parameter Weibull distribution as a reference, to develop a model for Chinese nuclear power plant operators based on the two-parameter Weibull distribution. Using this two-parameter Weibull model, experiments on the cognitive reliability of nuclear power plant operators have been carried out. Comparison with results from other countries, such as the USA and Hungary, shows agreement, which benefits the safe operation of nuclear power plants
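
    A two-parameter Weibull model of crew non-response of the kind referenced above can be sketched as follows; the normalization, characteristic time, and shape value are hypothetical illustrations, not the paper's fitted Chinese-operator parameters.

```python
import math

# Two-parameter Weibull model of crew non-response (HCR-style normalization):
# t_ratio is the available time divided by the median crew response time;
# eta (characteristic time ratio) and beta (shape) are assumed values.
def p_nonresponse(t_ratio, eta=1.0, beta=1.2):
    return math.exp(-((t_ratio / eta) ** beta))
```

    The non-response probability falls as more time becomes available relative to the median crew response time, which is the basic behavior such cognition reliability curves capture.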

  19. Wireless Channel Modeling Perspectives for Ultra-Reliable Communications

    DEFF Research Database (Denmark)

    Eggers, Patrick Claus F.; Popovski, Petar

    2018-01-01

    Ultra-Reliable Communication (URC) is one of the distinctive features of the upcoming 5G wireless communication. The level of reliability, going down to packet error rates (PER) of $10^{-9}$, should be sufficiently convincing in order to remove cables in an industrial setting or provide remote co...

  20. Reliability in the Rasch Model

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára, K.

    2007-01-01

    Roč. 43, č. 3 (2007), s. 315-326 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : Cronbach's alpha * Rasch model * reliability Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.552, year: 2007 http://dml.cz/handle/10338.dmlcz/135776

  1. A reliability-risk modelling of nuclear rad-waste facilities

    International Nuclear Information System (INIS)

    Lehmann, P.H.; El-Bassioni, A.A.

    1975-01-01

    Rad-waste disposal systems of nuclear power sites are designed and operated to collect, delay, contain, and concentrate radioactive wastes from reactor plant processes such that on-site and off-site exposures to radiation are well below permissible limits. To assist the designer in achieving minimum release/exposure goals, a computerized reliability-risk model has been developed to simulate the rad-waste system. The objectives of the model are to furnish a practical tool for quantifying the effects of changes in system configuration, operation, and equipment, and for the identification of weak segments in the system design. Primarily, the model comprises a marriage of system analysis, reliability analysis, and release-risk assessment. Provisions have been included in the model to permit the optimization of the system design subject to constraints on cost and rad-releases. The system analysis phase involves the preparation of a physical and functional description of the rad-waste facility accompanied by the formation of a system tree diagram. The reliability analysis phase embodies the formulation of appropriate reliability models and the collection of model parameters. Release-risk assessment constitutes the analytical basis whereupon further system and reliability analyses may be warranted. Release-risk represents the potential for release of radioactivity and is defined as the product of an element's unreliability at time, t, and the radioactivity available for release in time interval, Δt. A computer code (RARISK) has been written to simulate the tree diagram of the rad-waste system. Reliability and release-risk results have been generated for cases which examined the process flow paths of typical rad-waste systems, the effects of repair and standby, the variations of equipment failure and repair rates, and changes in system configurations. 
The essential feature of this model is that a complex system like the rad-waste facility can be easily decomposed into its
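
    The release-risk definition quoted above (the product of an element's unreliability at time t and the radioactivity available for release in the interval) can be sketched numerically. The constant-failure-rate model and all numbers below are hypothetical.

```python
import math

# Unreliability of a rad-waste segment under a constant failure rate lam.
def unreliability(lam, t):
    return 1.0 - math.exp(-lam * t)

# Release-risk: unreliability at time t times the activity (Ci) available
# for release in the interval.
def release_risk(lam, t, activity_ci):
    return unreliability(lam, t) * activity_ci

risk = release_risk(1e-4, 1000.0, 500.0)  # hypothetical segment
```

    Ranking segments by this quantity is what lets the model flag weak points in the system design where either reliability improvements or inventory reductions pay off most.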

  2. An auto-focusing heuristic model to increase the reliability of a scientific mission

    International Nuclear Information System (INIS)

    Gualdesi, Lavinio

    2006-01-01

    Researchers invest a lot of time and effort on the design and development of components used in a scientific mission. To capitalize on this investment and on the operational experience of the researchers, it is useful to adopt a quantitative data base to monitor the history and usage of the components. This work describes a model to monitor the reliability level of components. The model is very flexible and allows users to compose systems using the same components in different configurations as required by each mission. This tool provides availability and reliability figures for the configuration requested, derived from historical data of the components' previous performance. The system is based on preliminary checklists to establish standard operating procedures (SOP) for all components life phases. When an infringement to the SOP occurs, a quantitative ranking is provided in order to quantify the risk associated with this deviation. The final agreement between field data and expected performance of the component makes the model converge onto a heuristic monitoring system. The model automatically focuses on points of failure at the detailed component element level, calculates risks, provides alerts when a demonstrated risk to safety is encountered, and advises when there is a mismatch between component performance and mission requirements. This model also helps the mission to focus resources on critical tasks where they are most needed

  3. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for coseismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
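The CV-based reliability criterion described above can be sketched as follows. This is a minimal illustration in which the ensemble of ΔCFS values per map cell (one value per candidate source model) is assumed to be available as plain lists:

```python
import math

def reliability_mask(dcfs_samples_per_cell):
    """For each map cell, compute the mean, standard deviation, and the
    reliability flag over the ensemble of ΔCFS values from the candidate
    source models. A cell is deemed reliable where |mean| >= 2 * std,
    i.e. CV = std / |mean| <= 0.5, as in the abstract."""
    out = []
    for samples in dcfs_samples_per_cell:
        n = len(samples)
        mean = sum(samples) / n
        var = sum((x - mean) ** 2 for x in samples) / n
        std = math.sqrt(var)
        reliable = std <= 0.5 * abs(mean)
        out.append((mean, std, reliable))
    return out
```

A cell whose ensemble straddles zero (mean near 0) is automatically flagged unreliable, which matches the observation that sign-flipping regions between lobes fail the criterion.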

  4. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. [DRAFT] DETC2015-46982, "Development of a Conservative Model Validation Approach for Reliable ..." ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account ...

  5. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

Data is the crux of developing quantitative risk and reliability models; without the data there is no quantification. The means to find and identify reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing a system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have available data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only place checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.

  6. Reliability Analysis of Sealing Structure of Electromechanical System Based on Kriging Model

    Science.gov (United States)

    Zhang, F.; Wang, Y. M.; Chen, R. W.; Deng, W. W.; Gao, Y.

    2018-05-01

The sealing performance of an aircraft electromechanical system has a great influence on flight safety, and the reliability of its typical seal structures has been analyzed by researchers. In this paper, we take a reciprocating seal structure as the research object to study structural reliability. Based on finite element numerical simulation, the contact stress between the rubber sealing ring and the cylinder wall is calculated, the relationship between the contact stress and the pressure of the hydraulic medium is built, and the friction forces under different working conditions are compared. Through co-simulation, an adaptive Kriging model obtained by the EFF learning mechanism is used to describe the failure probability of the seal ring, so as to evaluate the reliability of the sealing structure. This article proposes a new idea of numerical evaluation for the reliability analysis of sealing structures, and also provides a theoretical basis for their optimal design.
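The EFF (Expected Feasibility Function) learning criterion used in adaptive Kriging can be sketched as below. This assumes the commonly used formulation of Bichon et al. with the limit state at z = 0 and ε = 2σ; the paper may use a different variant, so treat this as a generic sketch:

```python
import math

def _phi(u):
    """Standard normal probability density."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def _Phi(u):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def eff(mu, sigma, z=0.0):
    """Expected Feasibility Function for a Kriging prediction with mean mu
    and standard deviation sigma near the limit state z: large values flag
    points worth adding to the training set."""
    eps = 2.0 * sigma
    zl, zu = z - eps, z + eps
    t, tl, tu = (z - mu) / sigma, (zl - mu) / sigma, (zu - mu) / sigma
    return ((mu - z) * (2.0 * _Phi(t) - _Phi(tl) - _Phi(tu))
            - sigma * (2.0 * _phi(t) - _phi(tl) - _phi(tu))
            + eps * (_Phi(tu) - _Phi(tl)))
```

The learning loop evaluates EFF over candidate points and adds the maximizer to the Kriging training set, refining the surrogate near the failure boundary before estimating the failure probability.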

  7. Reliability modeling of Clinch River breeder reactor electrical shutdown systems

    International Nuclear Information System (INIS)

    Schatz, R.A.; Duetsch, K.L.

    1974-01-01

The initial simulation of the probabilistic properties of the Clinch River Breeder Reactor Plant (CRBRP) electrical shutdown systems is described. A model of the reliability (and availability) of the systems is presented utilizing Success State and continuous-time, discrete-state Markov modeling techniques as significant elements of an overall reliability assessment process capable of demonstrating the achievement of program goals. This model is examined for its sensitivity to safe/unsafe failure rates, subsystem redundant configurations, test and repair intervals, monitoring by reactor operators, and the control exercised over system reliability by design modifications and the selection of system operating characteristics. (U.S.)
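A continuous-time Markov treatment of a repairable, redundant shutdown channel can be sketched as follows. This assumes constant failure and repair rates and independent channels, a simplification of the Markov models described in the abstract:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Two-state Markov component (up/down with constant rates):
    A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def redundant_availability(failure_rate, repair_rate, n=2):
    """1-out-of-n redundant channels, assumed independent: the system is
    unavailable only if all n channels are down at once."""
    u = 1.0 - steady_state_availability(failure_rate, repair_rate)
    return 1.0 - u ** n
```

Shorter test and repair intervals raise the effective repair rate, which is how such a model exposes the sensitivity to test/repair policy noted in the abstract.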

  8. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    OpenAIRE

    Chassin, David P.; Posse, Christian

    2004-01-01

    The reliability of electric transmission systems is examined using a scale-free model of network structure and failure propagation. The topologies of the North American eastern and western electric networks are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using s...
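A Barabasi-Albert topology of the kind used in the study can be generated by preferential attachment. A minimal sketch (not the authors' code), using a degree-weighted node list so that high-degree nodes attract new links:

```python
import random

def barabasi_albert(n, m, seed=42):
    """Grow a Barabasi-Albert scale-free graph: each new node attaches to
    m existing nodes chosen with probability proportional to degree.
    Returns the edge list."""
    rng = random.Random(seed)
    targets = list(range(m))   # the first new node connects to the m seed nodes
    repeated = []              # node list weighted by current degree
    edges = []
    for new in range(m, n):
        for t in set(targets):             # avoid duplicate edges
            edges.append((new, t))
        repeated.extend(targets)           # attachment endpoints gain degree
        repeated.extend([new] * m)         # the new node gains degree m
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges
```

The resulting degree distribution is heavy-tailed: a few hub buses carry many links, which is what makes failure propagation in such models sensitive to hub loss.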

  9. Possibilities and limitations of applying software reliability growth models to safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2007-01-01

It is generally known that software reliability growth models such as the Jelinski-Moranda model and the Goel-Okumoto Non-Homogeneous Poisson Process (NHPP) model cannot be applied to safety-critical software due to a lack of software failure data. In this paper, by applying two of the most widely known software reliability growth models to sample software failure data, we demonstrate the possibility of using software reliability growth models to prove the high reliability of safety-critical software. The high sensitivity of a piece of software's reliability to software failure data, as well as a lack of sufficient software failure data, is also identified as a possible limitation when applying software reliability growth models to safety-critical software.
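As an illustration of the Goel-Okumoto NHPP model mentioned above, its mean value function m(t) = a(1 − e^(−bt)) can be fitted to cumulative failure counts. The crude grid search below is only a sketch; a real analysis would use maximum likelihood:

```python
import math

def goel_okumoto(a, b, t):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t)):
    the expected cumulative number of failures detected by time t."""
    return a * (1.0 - math.exp(-b * t))

def fit_goel_okumoto(times, cum_failures):
    """Least-squares grid search over (a, b); returns (a, b, sse).
    'a' is searched upward from the observed failure count, since the
    eventual fault content cannot be below what was already seen."""
    n_max = max(cum_failures)
    best = (n_max, 0.001, float("inf"))
    for a in [n_max * (1.0 + 0.05 * k) for k in range(41)]:
        for b in [0.001 * 2 ** k for k in range(12)]:
            sse = sum((goel_okumoto(a, b, t) - y) ** 2
                      for t, y in zip(times, cum_failures))
            if sse < best[2]:
                best = (a, b, sse)
    return best
```

The fitted parameter a estimates the total fault content, so a − m(t) gives the expected number of remaining faults, the quantity of interest for safety-critical software.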

  10. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  11. Models for reliability and management of NDT data

    International Nuclear Information System (INIS)

    Simola, K.

    1997-01-01

In this paper the reliability of NDT measurements was approached from three directions. We have modelled the flaw sizing performance and the probability of flaw detection, and developed models to update the knowledge of true flaw size based on sequential measurement results and the flaw sizing reliability model. In the models discussed, the measured flaw characteristics (depth, length) are assumed to be simple functions of the true characteristics plus random noise corresponding to measurement errors, and the models are based on logarithmic transforms. Models for Bayesian updating of the flaw size distributions were developed. Using these models, it is possible to take into account prior information on the flaw size and combine it with the measured results. A Bayesian approach could contribute, e.g., to the definition of an appropriate combination of practical assessments and technical justifications in NDT system qualifications, as expressed by the European regulatory bodies.
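The Bayesian flaw-size update described above can be sketched on the log scale, where the multiplicative measurement-error model becomes a conjugate normal update. Parameter names and the single shared measurement error are illustrative assumptions, not the paper's exact model:

```python
import math

def update_flaw_size(prior_mu, prior_sigma, meas_sigma, measurements):
    """Bayesian update of a lognormal flaw-depth distribution.
    On the log scale the model ln(d_meas) = ln(d_true) + error,
    error ~ N(0, meas_sigma^2), makes the normal prior conjugate.
    Returns the posterior (mu, sigma) of ln(d_true)."""
    log_meas = [math.log(m) for m in measurements]
    precision = 1.0 / prior_sigma ** 2 + len(log_meas) / meas_sigma ** 2
    post_var = 1.0 / precision
    post_mu = post_var * (prior_mu / prior_sigma ** 2
                          + sum(log_meas) / meas_sigma ** 2)
    return post_mu, math.sqrt(post_var)
```

The posterior median flaw depth is exp(post_mu); sequential inspections simply feed the posterior back in as the next prior.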

  12. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from … be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was, on the other hand, lighter than the single-step method.

  13. A Framework to Improve Communication and Reliability Between Cloud Consumer and Provider in the Cloud

    OpenAIRE

    Vivek Sridhar

    2014-01-01

Cloud services consumers demand reliable methods for choosing an appropriate cloud service provider for their requirements. The number of cloud consumers is increasing day by day, and so is the number of cloud providers; hence the requirement for a common platform for interaction between cloud providers and cloud consumers is also on the rise. This paper introduces the Cloud Providers Market Platform Dashboard. This will act not only as a means of cloud provider discoverability but also provide timely reports to consumers on cloud ser...

  14. Do downscaled general circulation models reliably simulate historical climatic conditions?

    Science.gov (United States)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
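The two-sample Kolmogorov–Smirnov comparison used above can be sketched as follows (the statistic only; the significance test against the 0.05 level is omitted):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)
```

A value near 0 means the downscaled and observed distributions agree; a value near 1 means they barely overlap, which is how GCMs would be screened per location and variable.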

  15. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

Full Text Available A wind turbine generator output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, to get optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
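The cubic mean cube root of wind speed and an idealized turbine power curve, as used in such site-matching studies, can be sketched as follows. The piecewise-cubic power curve is a common simplification, not necessarily the paper's exact formulation:

```python
def cubic_mean_cube_root(speeds):
    """Equivalent wind speed: the cube root of the mean cubed speed,
    appropriate because power scales with v^3."""
    return (sum(v ** 3 for v in speeds) / len(speeds)) ** (1.0 / 3.0)

def turbine_power(v, v_cut_in, v_rated, v_cut_out, p_rated):
    """Idealised power curve: zero outside [cut-in, cut-out), cubic ramp
    between cut-in and rated speed, flat at rated power above that."""
    if v < v_cut_in or v >= v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v ** 3 - v_cut_in ** 3) / (v_rated ** 3 - v_cut_in ** 3)
```

Because different turbines have different cut-in, rated and cut-out speeds, feeding the same site's equivalent wind speed through each candidate's power curve is the basis of the site-matching comparison.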

  16. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Laurids Boring

    2010-11-01

This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  17. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    International Nuclear Information System (INIS)

    Boring, Ronald Laurids

    2010-01-01

This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  18. Modular reliability modeling of the TJNAF personnel safety system

    International Nuclear Information System (INIS)

    Cinnamon, J.; Mahoney, K.

    1997-01-01

A reliability model for the Thomas Jefferson National Accelerator Facility (formerly CEBAF) personnel safety system has been developed. The model, which was implemented using an Excel spreadsheet, allows simulation of all or parts of the system. Modularity of the model's implementation allows rapid "what if" case studies to simulate changes in safety system parameters such as redundancy, diversity, and failure rates. Particular emphasis is given to the prediction of failure modes which would result in the failure of both of the redundant safety interlock systems. In addition to the calculation of the predicted reliability of the safety system, the model also calculates the availability of the same system. Such calculations allow the user to make tradeoff studies between reliability and availability, and to target resources to improving those parts of the system which would most benefit from redesign or upgrade. The model includes calculated data, manufacturer's data, and Jefferson Lab field data. This paper describes the model, the methods used, and a comparison of calculated to actual data for the Jefferson Lab personnel safety system. Examples are given to illustrate the model's utility and ease of use.

  19. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of the failure probability density...

  20. Modeling Parameters of Reliability of Technological Processes of Hydrocarbon Pipeline Transportation

    Directory of Open Access Journals (Sweden)

    Shalay Viktor

    2016-01-01

Full Text Available On the basis of system analysis methods and parametric reliability theory, mathematical modeling of the operation of oil and gas equipment was conducted for reliability monitoring according to dispatching data. To check the goodness of fit of empirical distributions, an algorithm and mathematical methods of analysis were worked out for on-line use under changing operating conditions. An analysis of the physical cause-and-effect mechanism between the key factors and the changing parameters of the technical systems of oil and gas facilities is made, and the basic types of parameter distributions are defined. Evaluation of the adequacy of the assumed distribution type is provided by using the Kolmogorov criterion, as the most universal, accurate, and adequate test for the distributions of continuous processes in complex multi-element technical systems. Calculation methods are provided for supervision by independent bodies for risk assessment and facility safety.

  1. Towards a reliable animal model of migraine

    DEFF Research Database (Denmark)

    Olesen, Jes; Jansen-Olesen, Inger

    2012-01-01

The pharmaceutical industry shows a decreasing interest in the development of drugs for migraine. One of the reasons for this could be the lack of reliable animal models for studying the effect of acute and prophylactic migraine drugs. The infusion of glyceryl trinitrate (GTN) is the best validated and most studied human migraine model. Several attempts have been made to transfer this model to animals. The different variants of this model are discussed, as well as other recent models.

  2. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended to reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between … was applied for the estimation of the system failure functions. It is desired to compare the results with the true system failure function, which it is possible to estimate using simulation techniques. Theoretical model development should be pursued in further research. One direction might be modeling the system based on Sequential Order Statistics, by considering the failure of the minimum (weakest component) at each loading level. The proposed idea of representing the system by independent components could also be used for modeling reliability by Sequential Order Statistics.

  3. An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs

    Science.gov (United States)

    Basalamah, Anas; Sato, Takuro

For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but cannot assure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a multicast MAC protocol that enhances WLAN reliability by using adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
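The benefit of FEC for multicast reliability can be illustrated with a generic (n, k) erasure code, where any k of n coded packets suffice to recover a block; this is a textbook model, not the paper's specific adaptive scheme:

```python
from math import comb

def post_fec_loss(n, k, p):
    """Probability that a block of k data packets cannot be recovered when
    sent as n coded packets (any k of n suffice) over a channel with
    independent per-packet loss probability p: the block fails if fewer
    than k of the n packets arrive."""
    q = 1.0 - p
    return sum(comb(n, i) * q ** i * p ** (n - i) for i in range(k))
```

An adaptive scheme would raise n − k (the redundancy) as the observed PER grows, trading airtime for the residual loss this function quantifies.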

  4. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

As the use of software increases at nuclear power plants (NPPs), the necessity of including software reliability and/or safety in the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure for software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined whether a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA.

  5. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in the case of complex and heterogeneous logistics service structures. So this paper intends to explore ways of improving the cost calculation regimes of logistics service providers and show how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested using estimated input data. Based on the theoretical findings and the experiences of the pilot project, it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.
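The full cost allocation idea can be sketched as follows: each cost center's total is spread over the services in proportion to performance-driver units. The data layout and names are illustrative, not the paper's scheme:

```python
def allocate(cost_centers, drivers):
    """Full cost allocation.
    cost_centers: {center: total_cost}
    drivers:      {center: {service: driver_units consumed}}
    Each center's cost is spread over services in proportion to the
    driver units each service consumed; returns {service: cost}."""
    service_cost = {}
    for center, cost in cost_centers.items():
        units = drivers[center]
        total_units = sum(units.values())
        for service, u in units.items():
            service_cost[service] = (service_cost.get(service, 0.0)
                                     + cost * u / total_units)
    return service_cost
```

Because every unit of cost is assigned to some service, the allocation is "full": the service costs sum exactly to the cost-center totals, which keeps the costing transparent.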

  6. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

An aero-engine is a complex mechanical-electronic system; in the reliability analysis of such systems, the Weibull distribution model has an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Due to the diversity of engine failure modes, a single Weibull distribution model yields large errors. By contrast, a variety of engine failure modes can be taken into account with a mixed Weibull distribution model, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimation more accurate, thus greatly improving the precision of the mixed-distribution reliability model. All of this helps to popularize the Weibull distribution model in engineering applications.
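The mixed Weibull reliability function underlying such a model is a weighted sum of Weibull survival functions, one per failure mode. A minimal sketch; the weights, shapes and scales below are illustrative:

```python
import math

def mixed_weibull_reliability(t, components):
    """R(t) = sum_i w_i * exp(-(t / eta_i)^beta_i).
    components: list of (weight w, shape beta, scale eta) tuples,
    with the weights summing to 1; each tuple is one failure mode."""
    return sum(w * math.exp(-(t / eta) ** beta)
               for w, beta, eta in components)
```

With a single component of weight 1 this reduces to the ordinary Weibull model; adding components with different shapes lets early-life and wear-out failure modes coexist in one reliability curve.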

  7. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

Reliability techniques have been developed to meet the needs of the various engineering disciplines, although many would argue that a great deal of work on reliability was done before the word was used in its current sense. The military, space and nuclear industries were the first to become involved in this topic; however, this small great revolution in favor of higher product reliability has not remained confined to those environments, but has spread to industry as a whole. Massive production, characteristic of modern industry, led four decades ago to a fall in the reliability of its products: on the one hand because of mass production itself and, on the other, because of industrial techniques that were newly introduced and not yet mature. Industry had to change in response to these two new requirements, creating products of medium complexity while assuring a level of reliability appropriate to production costs and controls. Reliability became an integral part of the manufactured product. Following this philosophy, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, offering a unifying scientific basis for the entire subject. It consists of eight chapters plus numerous statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and has a potentially diverse audience, serving as a textbook for courses from academia to industry. (author)

  8. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.

  9. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

In this paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach for parameter estimation of component reliability models at NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior, which can be approximated with an appropriate probability distribution (in this paper, the lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)

  10. On modeling human reliability in space flights - Redundancy and recovery operations

    Science.gov (United States)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.
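The standby-redundancy point can be illustrated with textbook formulas: a cold-standby pair with perfect changeover beats an active pair, but an unreliable changeover (analogous to an imperfect human takeover) can erase the benefit. The switch_success parameter is an illustrative simplification, not the paper's psychological model:

```python
import math

def standby_pair(lam, t, switch_success=1.0):
    """Cold-standby pair with constant failure rate lam and an imperfect
    changeover: R(t) = exp(-lam t) * (1 + switch_success * lam * t).
    With switch_success = 1 this is the classic standby-redundancy result."""
    return math.exp(-lam * t) * (1.0 + switch_success * lam * t)

def active_pair(lam, t):
    """Two active redundant units: R(t) = 1 - (1 - exp(-lam t))^2."""
    return 1.0 - (1.0 - math.exp(-lam * t)) ** 2

def single_unit(lam, t):
    """One unit, no redundancy: R(t) = exp(-lam t)."""
    return math.exp(-lam * t)
```

With switch_success = 0 the standby pair degenerates to a single unit, which is worse than active redundancy: a naive "back-up operator" assumption can thus give non-conservative system-reliability estimates, as the abstract argues.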

  11. Nonspecialist Raters Can Provide Reliable Assessments of Procedural Skills

    DEFF Research Database (Denmark)

    Mahmood, Oria; Dagnæs, Julia; Bube, Sarah

    2018-01-01

... was significant (p ...); Pearson's correlation was 0.77 for the nonspecialists and 0.75 for the specialists. The test-retest reliability showed the biggest difference between the 2 groups, 0.59 and 0.38 for the nonspecialist raters and the specialist raters, respectively (p ...). ... was chosen as it is a simple procedural skill that is crucial to master in a resident urology program. RESULTS: The internal consistency of assessments was high, Cronbach's α = 0.93 and 0.95 for nonspecialist and specialist raters, respectively (p ...). The interrater reliability ...

  12. Possibilities and Limitations of Applying Software Reliability Growth Models to Safety- Critical Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2006-01-01

As digital systems are gradually introduced to nuclear power plants (NPPs), the need to quantitatively analyze the reliability of digital systems is also increasing. Kang and Sung identified (1) software reliability, (2) common-cause failures (CCFs), and (3) fault coverage as the three most critical factors in the reliability analysis of digital systems. For the reliability estimation of safety-critical software (the software that is used in safety-critical digital systems), Bayesian Belief Networks (BBNs) seem to be most widely used. The use of BBNs in reliability estimation of safety-critical software is basically a process of indirectly assigning a reliability based on various observed information and experts' opinions. When software testing results or software failure histories are available, we can use a process of directly estimating the reliability of the software using various software reliability growth models such as the Jelinski-Moranda model and Goel-Okumoto's nonhomogeneous Poisson process (NHPP) model. Even though it is generally known that software reliability growth models cannot be applied to safety-critical software due to the small number of expected failure data from the testing of safety-critical software, we try to find possibilities and corresponding limitations of applying software reliability growth models to safety-critical software.

  13. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    Science.gov (United States)

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the…

  14. An Assessment of the VHTR Safety Distance Using the Reliability Physics Model

    International Nuclear Information System (INIS)

    Lee, Joeun; Kim, Jintae; Jae, Moosung

    2015-01-01

    In Korea, the production of hydrogen using high-temperature heat from nuclear power is being planned. Producing hydrogen from nuclear plants requires a supply temperature above 800 °C, so the Very High Temperature Reactor (VHTR), which can provide about 950 °C, is suitable. Under conditions of high temperature and corrosion, where hydrogen might easily be released, a hydrogen production facility coupled to a VHTR carries a danger of explosion. Moreover, an explosion would harm not only the facility itself but also the VHTR, resulting in unsafe situations that cause serious damage. From a thermal-hydraulic point of view, however, a long separation distance lowers efficiency. Thus, in this study, a methodology for assessing the safety distance between the hydrogen production facilities and the VHTR is developed with a reliability physics model. Based on the standard safety criterion of 1 x 10^-6, the safety distance between the hydrogen production facilities and the VHTR is calculated with the reliability physics model to be 60 m to 100 m. In the future, a detailed assessment of the characteristics of the VHTR, its capacity to resist the pressure of an external hydrogen explosion, and the overpressure for a large detonation volume is expected to identify a more precise safety distance using this reliability physics model

  15. Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.

    Science.gov (United States)

    Sabour, Siamak; Dastjerdi, Elahe Vahid

    2012-08-20

    Abstract We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors aimed to assess the reliability of soft tissue model based implant surgical guides and reported that accuracy was evaluated using software.1 We found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, likelihood ratio positive (true positive/false negative) and likelihood ratio negative (false positive/true negative), as well as the odds ratio (true results/false results, preferably more than 50), are among the tests used to evaluate the validity (accuracy) of a single test compared to a gold standard.2-4 It is not clear to which of the above-mentioned estimates for validity analysis the reported twenty-two accurate sites (46.81%) relate. Reliability (repeatability or reproducibility) is assessed with different statistical tests; the Pearson r, least squares, and the paired t-test are all among the common mistakes in reliability analysis.5 Briefly, for quantitative variables the Intra-Class Correlation Coefficient (ICC) should be used, and for qualitative variables weighted kappa, with caution, because kappa has its own limitations too. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimate of agreement (weighted kappa).2-4 As a take-home message, for reliability and validity analysis, appropriate tests should be…
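    The letter's central point is that weighted kappa credits partial agreement and penalizes discordant cells that raw percent agreement ignores. A hedged, self-contained sketch of Cohen's kappa with linear weights (a generic textbook formulation, not code from the letter):

```python
def weighted_kappa(ratings_a, ratings_b, categories):
    """Cohen's weighted kappa (linear weights) for two raters on an
    ordinal scale; discordant cells contribute in proportion to distance."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)
    # observed joint proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    # marginal proportions for each rater
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # linear agreement weights: 1 on the diagonal, falling with distance
    w = [[1.0 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    po = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return (po - pe) / (1.0 - pe)
```

    Perfect agreement yields kappa = 1; off-diagonal (discordant) mass pulls the estimate down in proportion to how far apart the two ratings sit on the scale, which is exactly the correction the letter argues for.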

  16. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

    A probabilistic methodology for the evaluation of safety system technical specifications was developed. The method for Surveillance Test Interval (S.T.I.) evaluation is basically an optimization of the S.T.I. of the system's most important periodically tested components. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (the computer code FRANTIC III). A new approximation to compute system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.), and the extra computing cost is negligible. A.C.I. was added to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-out-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in evaluating the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author) [es

  17. A Survey of Software Reliability Modeling and Estimation

    Science.gov (United States)

    1983-09-01

    considered include: the Jelinski-Moranda model, the Geometric model, and Musa's model. A Monte Carlo study of the behavior of the least squares… ceedings Number 261, 1979, pp. 34-1, 34-11. Sukert, Alan and Goel, Amrit, "A Guidebook for Software Reliability Assessment," 1980

  18. A multi-state reliability evaluation model for P2P networks

    International Nuclear Information System (INIS)

    Fan Hehong; Sun Xiaohan

    2010-01-01

    The appearance of new service types and the convergence tendency of communication networks have endowed networks with more and more P2P (peer-to-peer) properties. These networks can be more robust and tolerant of a series of non-perfect operational states due to the non-deterministic server-client distributions. Thus a reliability model taking into account the multi-state and non-deterministic server-client distribution properties is needed for appropriate evaluation of such networks. In this paper, two new performance measures are defined to quantify the overall and local states of the networks. A new time-evolving state-transition Monte Carlo (TEST-MC) simulation model is presented for the reliability analysis of P2P networks in multiple states. The results show that the model is not only valid for estimating traditional binary-state network reliability parameters, but is also adequate for acquiring the parameters in a series of non-perfect operational states, with good efficiency, especially for highly reliable networks. Furthermore, the model is versatile for reliability and maintainability analyses in that both the links and the nodes can be failure-prone with arbitrary life distributions, and various maintainability schemes can be applied.
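    The TEST-MC model itself is not reproduced in this record; as a hedged sketch of the underlying idea of estimating network reliability by Monte Carlo sampling of component states, the toy below estimates two-terminal (s-t) connectivity for binary edges with a common working probability — an assumption for illustration only; the paper's multi-state version draws from several operational levels instead.

```python
import random

def mc_network_reliability(nodes, edges, s, t, p_edge, trials=20000, seed=1):
    """Crude Monte Carlo estimate of P(s and t connected) when each edge
    independently works with probability p_edge."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # sample which edges survive in this realization
        up = [e for e in edges if rng.random() < p_edge]
        adj = {v: [] for v in nodes}
        for u, v in up:
            adj[u].append(v)
            adj[v].append(u)
        # depth-first search over the surviving subgraph
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += t in seen
    return hits / trials
```

    For a two-edge series path the estimate converges to p_edge squared; as the abstract notes, plain sampling like this becomes inefficient for highly reliable networks, which is one motivation for more refined state-transition schemes.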

  19. Reliability-cost models for the power switching devices of wind power converters

    DEFF Research Database (Denmark)

    Ma, Ke; Blaabjerg, Frede

    2012-01-01

    In order to satisfy the growing reliability requirements for wind power converters with a more cost-effective solution, the target of this paper is to establish a new reliability-cost model which relates reliability performance to the corresponding semiconductor cost… temperature mean value Tm and fluctuation amplitude ΔTj of power devices, are presented. With the proposed reliability-cost model, it is possible to enable future reliability-oriented design of the power switching devices for wind power converters, and also an evaluation benchmark for different wind power… for power switching devices. First, the conduction loss, switching loss, and thermal impedance models of the power switching devices (IGBT modules) are each related to the semiconductor chip number information. Afterwards, simplified analytical solutions, which can directly extract the junction…

  20. Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models

    International Nuclear Information System (INIS)

    Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.

    1987-01-01

    Human factors are very important for the reliability of a nuclear power plant. Human behavior has essentially a time-dependent nature. The details of thinking and decision making processes are important for detailed analysis of human reliability. They have, however, not been well considered by the conventional methods of human reliability analysis. The present paper describes the models for the time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account and two-operators models are also presented

  1. Suncor maintenance and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Little, S. [Suncor Energy, Calgary, AB (Canada)

    2006-07-01

    Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.

  2. 78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard

    Science.gov (United States)

    2013-07-29

    ...; Order No. 782] Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy... Analysis (MOD) Reliability Standard MOD- 028-2, submitted to the Commission for approval by the North... Organization. The Commission finds that the proposed Reliability Standard represents an improvement over the...

  3. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

    This article outlines the need for a reliability database to implement model-based descriptions of component failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD in the system design process are highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to drastically ease the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability databases such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is more and more used in the industry. ► This results in a need for a reliability database able to deal with model-based descriptions of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability databases such as FIDES.

  4. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions for a wide range of fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

  5. Providing reliable energy in a time of constraints : a North American concern

    International Nuclear Information System (INIS)

    Egan, T.; Turk, E.

    2008-04-01

    The reliability of the North American electricity grid was discussed. Government initiatives designed to control carbon dioxide (CO2) and other emissions in some regions of Canada may lead to electricity supply constraints in other regions. A lack of investment in transmission infrastructure has resulted in constraints within the North American transmission grid, and the growth of smaller projects is now raising concerns about transmission capacity. Labour supply shortages in the electricity industry are also creating concerns about the long-term security of the electricity market. Measures to address constraints must be considered in the current context of the North American electricity system. The extensive transmission interconnects and integration between the United States and Canada will provide a framework for greater trade and market opportunities between the 2 countries. Coordinated actions and increased integration will enable Canada and the United States to increase the reliability of electricity supply. However, both countries must work cooperatively to increase generation supply using both mature and emerging technologies. The cross-border transmission grid must be enhanced by increasing transmission capacity as well as by implementing new reliability rules, building new infrastructure, and ensuring infrastructure protection. Barriers to cross-border electricity trade must be identified and avoided. Demand-side and energy efficiency measures must also be implemented. It was concluded that both countries must focus on developing strategies for addressing the environmental concerns related to electricity production. 6 figs

  6. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  7. Fuse Modeling for Reliability Study of Power Electronic Circuits

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad; Iannuzzo, Francesco; Blaabjerg, Frede

    2017-01-01

    This paper describes a comprehensive modeling approach on reliability of fuses used in power electronic circuits. When fuses are subjected to current pulses, cyclic temperature stress is introduced to the fuse element and will wear out the component. Furthermore, the fuse may be used in a large......, and rated voltage/current are opposed to shift in time to effect early breaking during the normal operation of the circuit. Therefore, in such cases, a reliable protection required for the other circuit components will not be achieved. The thermo-mechanical models, fatigue analysis and thermo...

  8. Modeling Message Queueing Services with Reliability Guarantee in Cloud Computing Environment Using Colored Petri Nets

    Directory of Open Access Journals (Sweden)

    Jing Li

    2015-01-01

    Full Text Available Motivated by the need for loosely coupled and asynchronous dissemination of information, message queues are widely used in large-scale application areas. With the advent of virtualization technology, cloud-based message queueing services (CMQSs with distributed computing and storage are widely adopted to improve availability, scalability, and reliability; however, a critical issue is its performance and the quality of service (QoS. While numerous approaches evaluating system performance are available, there is no modeling approach for estimating and analyzing the performance of CMQSs. In this paper, we employ both the analytical and simulation modeling to address the performance of CMQSs with reliability guarantee. We present a visibility-based modeling approach (VMA for simulation model using colored Petri nets (CPN. Our model incorporates the important features of message queueing services in the cloud such as replication, message consistency, resource virtualization, and especially the mechanism named visibility timeout which is adopted in the services to guarantee system reliability. Finally, we evaluate our model through different experiments under varied scenarios to obtain important performance metrics such as total message delivery time, waiting number, and components utilization. Our results reveal considerable insights into resource scheduling and system configuration for service providers to estimate and gain performance optimization.

  9. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Directory of Open Access Journals (Sweden)

    Jin Zhu

    2012-01-01

    Full Text Available This paper investigates reliability analysis of wireless sensor networks whose topology is switching among possible connections which are governed by a Markovian chain. We give the quantized relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying Lyapunov method, sufficient conditions of network reliability are proposed for such topology switching networks with constant or varying data acquisition rate. With the conditions satisfied, the quantity of data transported over wireless network node will not exceed node capacity such that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, which may find its application in the fields of network design and topology control.
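    A hedged toy version of this record's setting (the paper's Lyapunov-based sufficient conditions are not reproduced here): a Markov chain switches the network among possible topologies, and a long-run reliability figure can be weighted by the chain's stationary distribution. The transition matrix and per-topology delivery reliabilities below are invented for illustration.

```python
def stationary(P, iters=200):
    """Stationary distribution of an ergodic Markov chain by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical topology chain: state 0 = full connectivity, state 1 = degraded
P = [[0.9, 0.1],
     [0.5, 0.5]]
r_state = [0.99, 0.90]  # assumed data-delivery reliability in each topology
pi = stationary(P)
long_run_reliability = sum(p * r for p, r in zip(pi, r_state))
```

    For this invented chain pi is (5/6, 1/6), giving a long-run reliability of 0.975; the paper's contribution is the harder problem of guaranteeing that node buffers never overflow while such switching occurs.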

  10. RTE - 2013 Reliability Report

    International Nuclear Information System (INIS)

    Denis, Anne-Marie

    2014-01-01

    RTE publishes a yearly reliability report based on a standard model to facilitate comparisons and highlight long-term trends. The 2013 report not only states the facts of the Significant System Events (ESS) but also underlines the main elements affecting the reliability of the electrical power system. It highlights the various elements which contribute to present and future reliability and provides an overview of the interaction between the various stakeholders of the Electrical Power System on the scale of the European Interconnected Network. (author)

  11. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.
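    PV-RPM is a Sandia tool whose internals are not shown in this record; as a hedged sketch of its core idea — running many probabilistic realizations of failure and repair rather than a single point estimate — the toy below assumes exponentially distributed times to failure and repair, which is an illustration choice, not PV-RPM's actual distributions.

```python
import random

def simulate_up_fraction(mtbf, mttr, horizon, trials=300, seed=7):
    """Monte Carlo realizations of alternating up (mean mtbf) and down
    (mean mttr) periods; returns the mean fraction of time the system is up."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t, up_time = 0.0, 0.0
        while t < horizon:
            up = rng.expovariate(1.0 / mtbf)        # time to next failure
            up_time += min(up, horizon - t)          # clip at the horizon
            t += up + rng.expovariate(1.0 / mttr)    # add repair downtime
        total += up_time / horizon
    return total / trials
```

    With an assumed MTBF of 100 days and MTTR of 10 days, the long-run up fraction approaches MTBF/(MTBF+MTTR), about 0.909; the spread across realizations is exactly the information a deterministic performance model would hide.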

  12. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  13. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their results can be subjective under a particular set of circumstances, and it is therefore not easy to quantify reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted using the statistical analysis method, which is the easiest to apply. The failure rate models that were applied are MIL-HDBK-217F Notice 2, PRISM, and Telcordia (Bellcore), and these were compared with the general purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components

  14. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de
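    The abstract describes continuous-time Markov chains with a discrete state space. A hedged generic sketch (not the authors' code) solves pi·Q = 0 with sum(pi) = 1 for a small generator matrix Q; the two-state plant below, with an assumed failure rate of 0.01/h and repair rate of 0.5/h, is a hypothetical example.

```python
def ctmc_stationary(Q):
    """Stationary distribution of a CTMC: solve pi*Q = 0 with sum(pi) = 1
    by Gaussian elimination (adequate for small state spaces)."""
    n = len(Q)
    A = [[Q[i][j] for i in range(n)] for j in range(n)]  # transpose of Q
    A[-1] = [1.0] * n                                    # normalization row
    b = [0.0] * (n - 1) + [1.0]
    for c in range(n):                                   # forward elimination
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                       # back substitution
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Hypothetical two-state plant: state 0 = operating, state 1 = forced outage
Q = [[-0.01, 0.01],
     [0.50, -0.50]]
availability = ctmc_stationary(Q)[0]  # long-run fraction of time operating
```

    For a two-state chain this reproduces the closed form mu/(lambda + mu) = 0.5/0.51, roughly 0.980; the same solver handles multi-state models (e.g. adding a derated-output state) by enlarging Q.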

  15. On the reliability of spacecraft swarms

    NARCIS (Netherlands)

    Engelen, S.; Gill, E.K.A.; Verhoeven, C.J.M.

    2012-01-01

    Satellite swarms, consisting of a large number of identical, miniaturized and simple satellites, are claimed to provide an implementation for specific space missions which require high reliability. However, a consistent model of how reliability and availability on mission level is linked to cost-

  16. On reliability and maintenance modelling of ageing equipment in electric power systems

    International Nuclear Information System (INIS)

    Lindquist, Tommie

    2008-04-01

    Maintenance optimisation is essential to achieve cost-efficiency, availability and reliability of supply in electric power systems. The process of maintenance optimisation requires information about the costs of preventive and corrective maintenance, as well as the costs of failures borne by both electricity suppliers and customers. To calculate expected costs, information is needed about equipment reliability characteristics and the way in which maintenance affects equipment reliability. The aim of this Ph.D. work has been to develop equipment reliability models taking the effect of maintenance into account. The research has focussed on the interrelated areas of condition estimation, reliability modelling and maintenance modelling, which have been investigated in a number of case studies. In the area of condition estimation, two methods to quantitatively estimate the condition of disconnector contacts have been developed, which utilise results from infrared thermography inspections and contact resistance measurements. The accuracy of these methods was investigated in two case studies. Reliability models have been developed and implemented for SF6 circuit-breakers, disconnector contacts and XLPE cables in three separate case studies. These models were formulated using both empirical and physical modelling approaches. To improve confidence in such models, a Bayesian statistical method incorporating information from the equipment design process was also developed. This method was illustrated in a case study of SF6 circuit-breaker operating rods. Methods for quantifying the effect of maintenance on equipment condition and reliability have been investigated in case studies on disconnector contacts and SF6 circuit-breakers. The inputs required by these methods are condition measurements and historical failure and maintenance data, respectively. This research has demonstrated that the effect of maintenance on power system equipment may be quantified using available data

  17. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g. under the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policy, shock models, spares, group maintenance and periodic inspection), analysis of common-cause failures, and analysis models of repair effect.
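    One of the book's listed topics, reliability estimation under the exponential distribution, reduces to a one-line maximum-likelihood estimate; a hedged sketch with made-up failure times (not an example from the book):

```python
import math

def fit_exponential(failure_times):
    """MLE for complete exponential lifetime data:
    hazard rate lam = n / total observed time, MTTF = 1 / lam."""
    lam = len(failure_times) / sum(failure_times)
    return lam, 1.0 / lam

def reliability(t, lam):
    """Exponential survivor function R(t) = exp(-lam * t)."""
    return math.exp(-lam * t)

# hypothetical observed lifetimes, in hours
lam, mttf = fit_exponential([50.0, 150.0, 100.0, 100.0])
```

    These four lifetimes give lam = 0.01 per hour and MTTF = 100 hours, so R(100) = e^-1, about 0.368 — the well-known fact that an exponential unit has only a 37% chance of surviving to its mean life.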

  18. Model reliability and software quality assurance in simulation of nuclear fuel waste management systems

    International Nuclear Information System (INIS)

    Oeren, T.I.; Elzas, M.S.; Sheng, G.; Wageningen Agricultural Univ., Netherlands; McMaster Univ., Hamilton, Ontario)

    1985-01-01

    As is the case with all scientific simulation studies, computerized simulation of nuclear fuel waste management systems can introduce and hide various types of errors. Frameworks to clarify issues of model reliability and software quality assurance are offered. Potential problems with reference to the main areas of concern for reliability and quality are discussed; e.g., experimental issues, decomposition, scope, fidelity, verification, requirements, testing, correctness, robustness are treated with reference to the experience gained in the past. A list comprising over 80 most common computerization errors is provided. Software tools and techniques used to detect and to correct computerization errors are discussed

  19. The reliability of the Adelaide in-shoe foot model.

    Science.gov (United States)

    Bishop, Chris; Hillier, Susan; Thewlis, Dominic

    2017-07-01

    Understanding the biomechanics of the foot is essential for many areas of research and clinical practice such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, what largely remains unknown is how the foot moves inside the shoe. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions, separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions are the peak medial ground reaction force, the minimum abduction angle, and the peak abduction/adduction external hindfoot joint moments, which showed less than acceptable repeatability. Based on our results, the Adelaide in-shoe foot model can be used with confidence for 24 commonly measured biomechanical variables during shod walking. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    Science.gov (United States)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism (ngdc.noaa.gov/geomag/) group develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API…

  1. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have been greatly improved in recent years and have demonstrated their ability to deal with uncertainties at the design stage or to optimize the operation and maintenance of industrial installations. They are based on a mechanical model of the structural behavior according to the failure modes considered, and on a probabilistic representation of the input parameters of this model. In practice, only limited statistical information is available to build the probabilistic representation, and different levels of sophistication of the mechanical model may be introduced. Thus, besides the physical randomness, other uncertainties enter such analyses. The aim of this work is threefold: first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data, in order to take them into account in reliability analyses; the resulting reliability index measures the confidence in the structure given the statistical information available. Second, to present a methodology that yields reliability results associated with a particular mechanical model while using a less sophisticated one; the objective is then to reduce the computational effort required by the reference model. Third, to propose partial safety factors that evolve as a function of the number of statistical data available and of the sophistication level of the mechanical model used. The concepts are illustrated for a welded pipe and for a natural draught cooling tower. The results show the interest of these methodologies in an industrial context. [fr

  2. Wind Farm Reliability Modelling Using Bayesian Networks and Semi-Markov Processes

    Directory of Open Access Journals (Sweden)

    Robert Adam Sobolewski

    2015-09-01

    Full Text Available Technical reliability plays an important role among the factors affecting the power output of a wind farm. The reliability is determined by the internal collection grid topology and the reliability of its electrical components, e.g. generators, transformers, cables, switch breakers, protective relays, and busbars. A quantitative measure of wind farm reliability can be the probability distribution of combinations of operating and failed states of the farm's wind turbines. The operating state of a wind turbine is its ability to generate power and to transfer it to an external power grid, which requires the availability of the wind turbine and of the other equipment necessary for power transfer to the external grid. This measure can be used for quantitative analysis of the impact of various wind farm topologies and of the reliability of individual farm components on farm reliability, and for determining the expected farm output power with consideration of the reliability. This knowledge may be useful in the analysis of power generation reliability in power systems. The paper presents probabilistic models that quantify wind farm reliability taking into account the above-mentioned technical factors. Bayesian networks and semi-Markov processes were used to formulate the reliability models. Bayesian networks were used to map the wind farm structural reliability, as well as the quantitative characteristics describing equipment reliability; semi-Markov processes were used to determine those characteristics. The paper presents an example calculation of: (i) the probability distribution of combinations of operating and failed states of four wind turbines included in the wind farm, and (ii) the expected wind farm output power with consideration of its reliability.
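
The probability distribution over turbine state combinations described above can be sketched by simple enumeration, assuming independent turbines; the per-turbine availabilities and rated power below are hypothetical illustrations, not the paper's Bayesian-network/semi-Markov results:

```python
from itertools import product

# Hypothetical per-turbine availabilities (each the product of the
# availabilities of the equipment on that turbine's path to the grid).
avail = [0.95, 0.95, 0.93, 0.97]
rated_power = 2.0  # MW per turbine (assumed)

# Probability of every combination of operating (1) / failed (0) turbines,
# assuming independent turbines.
dist = {}
for state in product((0, 1), repeat=len(avail)):
    p = 1.0
    for s, a in zip(state, avail):
        p *= a if s else 1.0 - a
    dist[state] = p

# Expected farm output with consideration of reliability.
expected_power = sum(p * sum(state) * rated_power for state, p in dist.items())
print(round(expected_power, 2))  # 2.0 * (0.95 + 0.95 + 0.93 + 0.97) = 7.6
```

Under independence the expected output reduces to rated power times the sum of availabilities; the full distribution, however, also supports queries such as "probability that at least three turbines are up".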

  3. Development of Markov model of emergency diesel generator for dynamic reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Young Ho; Choi, Sun Yeong; Yang, Joon Eon [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-02-01

    The EDG (Emergency Diesel Generator) of a nuclear power plant is one of the most important pieces of equipment in mitigating accidents. The FT (Fault Tree) method is widely used to assess the reliability of safety systems like an EDG in a nuclear power plant. This method, however, has limitations in modeling the dynamic features of safety systems exactly. We have therefore developed a Markov model to represent the stochastic process of dynamic systems whose states change over time. The Markov model enables us to develop a dynamic reliability model of the EDG. This model can represent all possible states of the EDG, in contrast to the FRANTIC code developed by the U.S. NRC for the reliability analysis of standby systems. To assess the regulation policy for test intervals, we performed two simulations based on the generic data and the plant-specific data of YGN 3, respectively, using the developed model. We also estimate the effects of various repair rates and of the fraction of starting failures caused by demand shock on the reliability of the EDG. Finally, the aging effect is analyzed. (author). 23 refs., 19 figs., 9 tabs.
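
The dynamic behaviour this abstract describes can be illustrated with a minimal two-state Markov model of an EDG (up/down, constant failure and repair rates); the rates below are hypothetical, not the YGN 3 data:

```python
import math

# Hypothetical constant rates (per hour): lam = failure, mu = repair.
lam, mu = 1.0e-3, 5.0e-2

def availability(t):
    """P(EDG up at time t) for the two-state Markov model, with P_up(0) = 1.
    Solves dP_up/dt = -lam * P_up + mu * (1 - P_up)."""
    a_inf = mu / (lam + mu)  # steady-state availability
    return a_inf + (1.0 - a_inf) * math.exp(-(lam + mu) * t)

print(round(availability(24.0), 4))  # availability after one day on standby
print(round(mu / (lam + mu), 4))     # long-run limit: 0.9804
```

A full dynamic model would add states for unrevealed failures, testing and maintenance, but the transient-plus-steady-state structure is the same.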

  4. Reliability assessment using degradation models: bayesian and classical approaches

    Directory of Open Access Journals (Sweden)

    Marta Afonso Freitas

    2010-04-01

    Full Text Available Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work, with characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed by using "classical" and Bayesian approaches. Four methods of data analysis based on classical inference are presented. Next we show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set regarding train wheel degradation.
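
The degradation idea can be sketched as follows: if each unit degrades roughly linearly and fails when its degradation path crosses a threshold, pseudo failure times follow from the unit-to-unit spread in degradation rates. All numbers below are hypothetical, not the train-wheel data:

```python
import random

random.seed(7)

# Hypothetical linear degradation: y(t) = rate * t, failure when y >= d_f.
d_f, t_mission = 10.0, 500.0

# Unit-to-unit spread in degradation rates (lognormal, assumed).
n = 20_000
rates = [random.lognormvariate(-4.0, 0.5) for _ in range(n)]

# Pseudo failure time of each unit, and reliability at the mission time.
fail_times = [d_f / r for r in rates]
reliability = sum(1 for t in fail_times if t > t_mission) / n
print(round(reliability, 2))
```

Both the classical and Bayesian analyses in the article work with such degradation paths rather than with observed failures, which is why they extract information even when no unit has actually failed.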

  5. A model for assessing human cognitive reliability in PRA studies

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Spurgin, A.J.; Lukic, Y.

    1985-01-01

    This paper summarizes the status of a research project sponsored by EPRI as part of the Probabilistic Risk Assessment (PRA) technology improvement program and conducted by NUS Corporation to develop a model of Human Cognitive Reliability (HCR). The model was synthesized from features identified in a review of existing models. The model development was based on the hypothesis that the key factors affecting crew response times are separable. The inputs to the model consist of key parameters whose values can be determined by PRA analysts for each accident situation being assessed. The output is a set of curves representing the probability of control room crew non-response as a function of time under different conditions affecting crew performance. The non-response probability is then a contributor to the overall non-success of operating crews in achieving a functional objective identified in the PRA study. Because the data were sparse, simulator data and some small-scale tests were used to illustrate the calibration of interim HCR model coefficients for different types of cognitive processing. The model can potentially help PRA analysts make human reliability assessments more explicit. It incorporates concepts from psychological models of human cognitive behavior, information from current collections of human reliability data sources, and crew response time data from simulator training exercises.
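
The non-response curves the abstract describes can be sketched with a simple Weibull-type expression in normalized time; the shape coefficient below is hypothetical, not an EPRI-calibrated HCR value:

```python
import math

# Weibull-type non-response curve in normalized time x = t / t_half,
# scaled so that exactly half of the crews have responded at t = t_half.
# beta (shape) is a hypothetical value, not a calibrated HCR coefficient.
def non_response(t, t_half, beta=1.2):
    """Probability that the crew has NOT responded by time t."""
    if t <= 0.0:
        return 1.0
    x = t / t_half
    return math.exp(-math.log(2.0) * x ** beta)

print(round(non_response(60.0, 60.0), 2))   # 0.5 at the median response time
print(round(non_response(180.0, 60.0), 3))  # tail probability at 3x the median
```

In a PRA this tail probability, evaluated at the time available before the functional objective is lost, feeds the crew non-success contribution.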

  6. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
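
The surrogate idea can be illustrated in miniature: replace an "expensive" limit-state function with a cheap fit built from a few evaluations, then run Monte Carlo on the surrogate alone. For brevity this sketch uses an exact quadratic fit instead of the paper's locally accurate Gaussian process models:

```python
import random

random.seed(1)

# "Expensive" limit state: the system fails when g(x) < 0.
def g(x):
    return 3.0 - x ** 2

# Cheap surrogate: exact quadratic through three expensive evaluations
# (a stand-in for the Gaussian process surrogates used in the paper).
g0, gp, gm = g(0.0), g(1.0), g(-1.0)
a, b, c = g0, (gp - gm) / 2.0, (gp + gm) / 2.0 - g0
def surrogate(x):
    return a + b * x + c * x * x

# Monte Carlo failure probability using the surrogate only; x ~ N(0, 1).
n = 200_000
fails = sum(1 for _ in range(n) if surrogate(random.gauss(0.0, 1.0)) < 0.0)
p_f = fails / n
print(round(p_f, 3))  # analytic: P(|x| > sqrt(3)) = 2 * (1 - Phi(sqrt(3)))
```

The point of the paper's method is that the surrogate only needs to be accurate near the limit state g = 0, so the expensive model is sampled far less often than in direct Monte Carlo.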

  7. Overcoming some limitations of imprecise reliability models

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2011-01-01

    The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time ...

  8. Supersonic shear imaging provides a reliable measurement of resting muscle shear elastic modulus

    International Nuclear Information System (INIS)

    Lacourpaille, Lilian; Hug, François; Bouillard, Killian; Nordez, Antoine; Hogrel, Jean-Yves

    2012-01-01

    The aim of the present study was to assess the reliability of shear elastic modulus measurements performed using supersonic shear imaging (SSI) in nine resting muscles (i.e. gastrocnemius medialis, tibialis anterior, vastus lateralis, rectus femoris, triceps brachii, biceps brachii, brachioradialis, adductor pollicis obliquus and abductor digiti minimi) of different architectures and typologies. Thirty healthy subjects were randomly assigned to the intra-session reliability (n = 20), inter-day reliability (n = 21) and inter-observer reliability (n = 16) experiments. Muscle shear elastic modulus ranged from 2.99 kPa (gastrocnemius medialis) to 4.50 kPa (adductor digiti minimi and tibialis anterior). On the whole, very good reliability was observed, with a coefficient of variation (CV) ranging from 4.6% to 8%, except for the inter-observer reliability of adductor pollicis obliquus (CV = 11.5%). The intraclass correlation coefficients were good (0.871 ± 0.045 for the intra-session reliability, 0.815 ± 0.065 for the inter-day reliability and 0.709 ± 0.141 for the inter-observer reliability). Both the reliability and the ease of use of SSI make it a potentially interesting technique that would be of benefit to fundamental, applied and clinical research projects that need an accurate assessment of muscle mechanical properties. (note)

  9. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

    A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating the reliability of nuclear power plant systems. Reliability information is extracted from model parameters, which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data, classified as either time-type or demand-type failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) distribution of sparse failure-rate component data; 2) failure-rate inference and reliability prediction from time-type failure data; 3) analyses of demand-type failure data; 4) a common mode failure model applicable to time-type failure data; 5) estimation of common mode failures from 'near-miss' demand-type failure data

  10. Modeling reliability of power systems substations by using stochastic automata networks

    International Nuclear Information System (INIS)

    Šnipas, Mindaugas; Radziukynas, Virginijus; Valakevičius, Eimutis

    2017-01-01

    In this paper, the stochastic automata network (SAN) formalism is applied to model the reliability of power system substations. The proposed strategy reduces the size of the state space of the Markov chain model and simplifies system specification. Two case studies of standard substation configurations are considered in detail. SAN models with different assumptions were created. The SAN approach is compared with an exact reliability calculation using a minimal path set method. Modeling results showed that total independence of automata can be assumed for relatively small power system substations with reliable equipment. In this case, the implementation of the Markov chain model using the SAN method is a relatively easy task. - Highlights: • We present a methodology to apply the stochastic automata network formalism to create Markov chain models of power systems. • The stochastic automata network approach is combined with minimal path sets and structural functions. • Two models of substation configurations with different model assumptions are presented to illustrate the proposed methodology. • Modeling results of systems with independent automata and functional transition rates are similar. • The conditions under which total independence of automata can be assumed are addressed.
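
The minimal path set calculation used as the exact benchmark in the abstract can be sketched by enumeration, assuming independent components; the topology and component reliabilities below are hypothetical:

```python
from itertools import product

# Hypothetical two-feeder substation: the load is supplied if every
# component of at least one minimal path set is operable.
paths = [{"T1", "B1"}, {"T2", "B2"}]  # transformer + breaker per feeder
rel = {"T1": 0.98, "B1": 0.99, "T2": 0.98, "B2": 0.99}

comps = sorted(rel)
r_sys = 0.0
for states in product((0, 1), repeat=len(comps)):
    up = {c for c, s in zip(comps, states) if s}
    p = 1.0
    for c, s in zip(comps, states):
        p *= rel[c] if s else 1.0 - rel[c]
    if any(path <= up for path in paths):  # structure function = 1
        r_sys += p

print(round(r_sys, 6))  # equals 1 - (1 - 0.98 * 0.99)**2 for this topology
```

Enumeration grows as 2^n, which is exactly the state-space explosion the SAN formalism is meant to tame for larger substations.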

  11. Theory model and experiment research about the cognition reliability of nuclear power plant operators

    International Nuclear Information System (INIS)

    Fang Xiang; Zhao Bingquan

    2000-01-01

    In order to improve the reliability of NPP operation, simulation research on the reliability of nuclear power plant operators is needed. Using a nuclear power plant simulator as the research platform, and taking the internationally used human cognition reliability model as a reference, part of the model was modified according to the actual status of Chinese nuclear power plant operators, and a research model for Chinese nuclear power plant operators was obtained based on the two-parameter Weibull distribution. Experiments on the reliability of nuclear power plant operators were carried out using this two-parameter Weibull distribution research model, and the results agree with those reported internationally. The research would be beneficial to the operational safety of nuclear power plants.

  12. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.

  13. Reliability modeling and analysis for a novel design of modular converter system of wind turbines

    International Nuclear Information System (INIS)

    Zhang, Cai Wen; Zhang, Tieling; Chen, Nan; Jin, Tongdan

    2013-01-01

    Converters play a vital role in wind turbines. The concept of modularity is gaining in popularity in converter design for modern wind turbines in order to achieve high reliability as well as cost-effectiveness. In this study, we are concerned with a novel modular converter topology invented by Hjort (Modular converter system with interchangeable converter modules, World Intellectual Property Organization, Pub. No. WO29027520 A2; 5 March 2009). In this architecture, the converter comprises a number of identical and interchangeable basic modules. Each module can operate in either AC/DC or DC/AC mode, depending on whether it functions on the generator or the grid side. Moreover, each module can be reconfigured from one side to the other, depending on the system’s operational requirements. This is a prime example of fully modular design. This paper aims to model and analyze the reliability of such a modular converter. A Markov modeling approach is applied to the system reliability analysis. In particular, six feasible converter system models based on Hjort’s architecture are investigated. Through numerical analyses and comparison, we provide insights and guidance for converter designers in their decision-making.

  14. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described simply as functioning or failed. In many real situations, failures may have many causes, depending on the age and environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected from censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in one true parameter value relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the Bayesian estimation methods are better than those of maximum likelihood. The sensitivity analyses show some sensitivity to shifts in the prior locations. They also show the robustness of the Bayesian analysis within the range
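
A competing-risks setup with independent Weibull causes can be simulated in a few lines: each cause draws a latent failure time and the earliest one wins. The cause names and parameters below are hypothetical:

```python
import random

random.seed(42)

# Two independent Weibull causes of failure: name -> (shape k, scale lam).
causes = {"wear": (2.0, 1000.0), "shock": (1.0, 3000.0)}

# Each cause draws a latent failure time; the earliest cause wins.
n = 50_000
wins = {c: 0 for c in causes}
for _ in range(n):
    draws = {c: random.weibullvariate(lam, k) for c, (k, lam) in causes.items()}
    wins[min(draws, key=draws.get)] += 1

for cause, count in wins.items():
    print(cause, round(count / n, 3))
```

Such simulated data, with the observed failure time and winning cause per unit, is exactly the input a Bayesian or maximum-likelihood competing-risks analysis would then fit.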

  15. Accounting for Model Uncertainties Using Reliability Methods - Application to Carbon Dioxide Geologic Sequestration System. Final Report

    International Nuclear Information System (INIS)

    Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn

    2010-01-01

    A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and the University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkeley for geotechnical applications, and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and transport applications. The integration of the two codes provides a new approach to dealing with uncertainties in flow and transport modeling of the subsurface, such as those associated with hydrogeologic parameters, boundary conditions, and initial conditions, using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and of the components that make up the system, instead of calculating the complete probability distributions of model predictions at all locations at all times. The new CALRELTOUGH code has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase II was not awarded.

  16. Piping reliability model development, validation and its applications to light water reactor piping

    International Nuclear Information System (INIS)

    Woo, H.H.

    1983-01-01

    A brief description is provided of a three-year effort undertaken by the Lawrence Livermore National Laboratory for the piping reliability project. The ultimate goal of this project is to provide guidance for nuclear piping design so that high-reliability piping systems can be built. Based on the results studied so far, it is concluded that the reliability approach can undoubtedly help in understanding not only how to assess and improve the safety of the piping systems but also how to design more reliable piping systems

  17. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In the first model the reliabilities of the subsystems are considered as the different objectives. In the second model the cost and the time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective nonlinear programming problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership functions of each objective function, transform them into equivalent linear membership functions by first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  18. A Reliability-Oriented Design Method for Power Electronic Converters

    DEFF Research Database (Denmark)

    Wang, Huai; Zhou, Dao; Blaabjerg, Frede

    2013-01-01

    Reliability is a crucial performance indicator of power electronic systems in terms of availability, mission accomplishment and life cycle cost. A paradigm shift in the research on reliability of power electronics is going on from simple handbook based calculations (e.g. models in MIL-HDBK-217F h...... and reliability prediction models are provided. A case study on a 2.3 MW wind power converter is discussed with emphasis on the reliability critical component IGBT modules....

  19. Modelling Reliability of Supply and Infrastructural Dependency in Energy Distribution Systems

    OpenAIRE

    Helseth, Arild

    2008-01-01

    This thesis presents methods and models for assessing reliability of supply and infrastructural dependency in energy distribution systems with multiple energy carriers. The three energy carriers of electric power, natural gas and district heating are considered. Models and methods for assessing reliability of supply in electric power systems are well documented, frequently applied in the industry and continuously being subject to research and improvement. On the contrary, there are compar...

  20. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  1. Fault recovery in the reliable multicast protocol

    Science.gov (United States)

    Callahan, John R.; Montgomery, Todd L.; Whetten, Brian

    1995-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages, using an underlying IP Multicast (12, 5) medium, to other group members in a distributed environment, even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  2. Modeling the reliability and maintenance costs of wind turbines using Weibull analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vachon, W.A. [W.A. Vachon & Associates, Inc., Manchester, MA (United States)

    1996-12-31

    A general description is provided of the basic mathematics and use of Weibull statistical models for modeling component failures and maintenance costs as a function of time. The applicability of the model to wind turbine components and subsystems is discussed with illustrative examples of typical component reliabilities drawn from actual field experiences. Example results indicate the dominant role of key subsystems based on a combination of their failure frequency and repair/replacement costs. The value of the model is discussed as a means of defining (1) maintenance practices, (2) areas in which to focus product improvements, (3) spare parts inventory, and (4) long-term trends in maintenance costs as an important element in project cash flow projections used by developers, investors, and lenders. 6 refs., 8 figs., 3 tabs.

  3. A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques

    Science.gov (United States)

    Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.

    2017-05-01

    With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects has been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components is discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed, and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is illustrated in this paper.

  4. Centralized Bayesian reliability modelling with sensor networks

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 19, č. 5 (2013), s. 471-482 ISSN 1387-3954 R&D Projects: GA MŠk 7D12004 Grant - others:GA MŠk(CZ) SVV-265315 Keywords : Bayesian modelling * Sensor network * Reliability Subject RIV: BD - Theory of Information Impact factor: 0.984, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0392551.pdf

  5. Test Reliability at the Individual Level

    Science.gov (United States)

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107
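
The parallel tests approach mentioned above can be sketched for a single person: two parallel forms share the person's day-to-day true score and differ only by independent error, so their correlation across days estimates that person's reliability. The variances below are hypothetical, not PANAS values:

```python
import random

random.seed(3)

# One person's daily true affect, plus independent error on two parallel forms.
days = 2000
true_var, err_sd = 1.0, 0.5
true_scores = [random.gauss(0.0, true_var ** 0.5) for _ in range(days)]
form_a = [t + random.gauss(0.0, err_sd) for t in true_scores]
form_b = [t + random.gauss(0.0, err_sd) for t in true_scores]

# Pearson correlation of the two forms across days ~ this person's reliability.
ma, mb = sum(form_a) / days, sum(form_b) / days
cov = sum((x - ma) * (y - mb) for x, y in zip(form_a, form_b))
va = sum((x - ma) ** 2 for x in form_a)
vb = sum((y - mb) ** 2 for y in form_b)
r = cov / (va * vb) ** 0.5
print(round(r, 2))  # expected near var_true / (var_true + var_err) = 0.8
```

Repeating this per person is what lets reliability vary across individuals: someone with larger error variance relative to true-score variance gets a lower coefficient.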

  6. On new cautious structural reliability models in the framework of imprecise probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev; Kozine, Igor

    2010-01-01

    New imprecise structural reliability models are described in this paper. They are developed based on imprecise Bayesian inference and are the imprecise Dirichlet, imprecise negative binomial, gamma-exponential and normal models. The models are applied to computing cautious structural reliability measures when the number of events of interest or observations is very small. The main feature of the models is that prior ignorance is not modelled by a fixed single prior distribution, but by a class of priors defined by upper and lower probabilities that can converge as statistical data accumulate.

  7. Models of Information Security Highly Reliable Computing Systems

    Directory of Open Access Journals (Sweden)

    Vsevolod Ozirisovich Chukanov

    2016-03-01

    Full Text Available Methods of combined redundancy are considered. Reliability models that account for the restoration and preventive-maintenance parameters of system blocks are described. Expressions are given for the average number of preventive-maintenance actions and for the availability coefficient of system blocks.

  8. Probabilistic risk assessment course documentation. Volume 3. System reliability and analysis techniques, Session A - reliability

    International Nuclear Information System (INIS)

    Lofgren, E.V.

    1985-08-01

    This course in System Reliability and Analysis Techniques focuses on the quantitative estimation of reliability at the systems level. Various methods are reviewed, but the structure provided by the fault tree method is used as the basis for system reliability estimates. The principles of fault tree analysis are briefly reviewed. Contributors to system unreliability and unavailability are reviewed, models are given for quantitative evaluation, and the requirements for both generic and plant-specific data are discussed. Also covered are issues of quantifying component faults that relate to the systems context in which the components are embedded. All reliability terms are carefully defined. 44 figs., 22 tabs
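
    The quantification step the course builds up to can be sketched in a few lines: represent the top event by its minimal cut sets and compare the rare-event approximation (sum of cut-set probabilities) with the exact combination. The basic-event probabilities below are invented for illustration.

```python
from functools import reduce

# Basic-event failure probabilities (hypothetical values).
p = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4}

# Minimal cut sets of the top event: {valve} OR {pump_a AND pump_b}.
cut_sets = [{"valve"}, {"pump_a", "pump_b"}]

def cut_set_prob(cs):
    """Probability of a cut set: product of its independent basic events."""
    return reduce(lambda acc, e: acc * p[e], cs, 1.0)

# Rare-event approximation: sum of cut-set probabilities.
p_top_rare = sum(cut_set_prob(cs) for cs in cut_sets)

# Exact value (these two cut sets share no components): 1 - prod(1 - P(cs)).
p_top_exact = 1.0 - (1 - cut_set_prob(cut_sets[0])) * (1 - cut_set_prob(cut_sets[1]))

print(p_top_rare, p_top_exact)
```

    For small probabilities the two agree closely, which is why the rare-event approximation is the standard shortcut in system-level quantification.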

  9. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data.

  10. Introduction to quality and reliability engineering

    CERN Document Server

    Jiang, Renyan

    2015-01-01

    This book presents the state-of-the-art in quality and reliability engineering from a product life cycle standpoint. Topics in reliability include reliability models, life data analysis and modeling, design for reliability and accelerated life testing, while topics in quality include design for quality, acceptance sampling and supplier selection, statistical process control, production tests such as screening and burn-in, warranty and maintenance. The book provides comprehensive insights into two closely related subjects, and includes a wealth of examples and problems to enhance reader comprehension and link theory and practice. All numerical examples can be easily solved using Microsoft Excel. The book is intended for senior undergraduate and post-graduate students in related engineering and management programs such as mechanical engineering, manufacturing engineering, industrial engineering and engineering management programs, as well as for researchers and engineers in the quality and reliability fields.

  11. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the efforts performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  12. Dynamic reliability of digital-based transmitters

    Energy Technology Data Exchange (ETDEWEB)

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France) and Universite de Technologie de Troyes - UTT, Institut Charles Delaunay - ICD and UMR CNRS 6279 STMR, 12 rue Marie Curie, BP 2060, 10010 Troyes Cedex (France); Smidts, Carol [Ohio State University (OSU), Nuclear Engineering Program, Department of Mechanical Engineering, Scott Laboratory, 201 W 19th Ave, Columbus OH 43210 (United States); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and UMR CNRS 6279 STMR, 12 rue Marie Curie, BP 2060, 10010 Troyes Cedex (France)

    2011-07-15

    Dynamic reliability explicitly handles the interactions between the stochastic behaviour of system components and the deterministic behaviour of process variables. While dynamic reliability provides a more efficient and realistic way to perform probabilistic risk assessment than 'static' approaches, its industrial-level applications are still limited. Factors contributing to this situation are the inherent complexity of the theory and the lack of a generic platform. More recently, the increased use of digital-based systems has also introduced additional modelling challenges related to specific interactions between system components. Typical examples are the 'intelligent transmitters', which are able to exchange information and to perform internal data processing and advanced functionalities. To make a contribution to solving these challenges, the mathematical framework of dynamic reliability is extended to handle the data and information which are processed and exchanged between system components. Stochastic deviations that may affect system properties are also introduced to enhance the modelling of failures. A formalized Petri net approach is then presented to perform the corresponding reliability analyses using numerical methods. Following this formalism, a versatile model for the dynamic reliability modelling of digital-based transmitters is proposed. Finally, the framework's flexibility and effectiveness are demonstrated on a substantial case study involving a simplified model of a nuclear fast reactor.
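
    The core coupling in dynamic reliability, a stochastic failure process driven by a deterministic process variable, can be illustrated with a component whose hazard rate grows as a temperature ramp progresses; survival to time T then follows from integrating the hazard. The rate parameters below are illustrative only, not taken from the paper.

```python
import math

def hazard(t, lam0=1e-4, k=2e-6):
    """Failure rate that increases with a deterministic linear temperature ramp."""
    return lam0 + k * t

def survival_numeric(T, steps=20000):
    """P(no failure by T) = exp(-integral of hazard), trapezoidal rule."""
    dt = T / steps
    integral = 0.0
    for i in range(steps):
        integral += 0.5 * (hazard(i * dt) + hazard((i + 1) * dt)) * dt
    return math.exp(-integral)

def survival_closed_form(T, lam0=1e-4, k=2e-6):
    """Closed form for the linear hazard: integral = lam0*T + k*T^2/2."""
    return math.exp(-(lam0 * T + 0.5 * k * T * T))

T = 1000.0
print(survival_numeric(T), survival_closed_form(T))
```

    In a full dynamic-reliability model the process variable would itself be computed by the plant dynamics, and the numeric integration is what remains when no closed form exists.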

  13. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  14. Polarimetric SAR interferometry-based decomposition modelling for reliable scattering retrieval

    Science.gov (United States)

    Agrawal, Neeraj; Kumar, Shashi; Tolpekin, Valentyn

    2016-05-01

    Fully Polarimetric SAR (PolSAR) data is used for scattering information retrieval from single SAR resolution cell. Single SAR resolution cell may contain contribution from more than one scattering objects. Hence, single or dual polarized data does not provide all the possible scattering information. So, to overcome this problem fully Polarimetric data is used. It was observed in previous study that fully Polarimetric data of different dates provide different scattering values for same object and coefficient of determination obtained from linear regression between volume scattering and aboveground biomass (AGB) shows different values for the SAR dataset of different dates. Scattering values are important input elements for modelling of forest aboveground biomass. In this research work an approach is proposed to get reliable scattering from interferometric pair of fully Polarimetric RADARSAT-2 data. The field survey for data collection was carried out for Barkot forest during November 10th to December 5th, 2014. Stratified random sampling was used to collect field data for circumference at breast height (CBH) and tree height measurement. Field-measured AGB was compared with the volume scattering elements obtained from decomposition modelling of individual PolSAR images and PolInSAR coherency matrix. Yamaguchi 4-component decomposition was implemented to retrieve scattering elements from SAR data. PolInSAR based decomposition was the great challenge in this work and it was implemented with certain assumptions to create Hermitian coherency matrix with co-registered polarimetric interferometric pair of SAR data. Regression analysis between field-measured AGB and volume scattering element obtained from PolInSAR data showed highest (0.589) coefficient of determination. The same regression with volume scattering elements of individual SAR images showed 0.49 and 0.50 coefficients of determination for master and slave images respectively. This study recommends use of
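
    The coefficient of determination the authors compare across datasets is simply R-squared from an ordinary least-squares regression of AGB on volume scattering. A minimal sketch, with fabricated volume-scattering vs biomass pairs:

```python
def r_squared(x, y):
    """R^2 of the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Fabricated volume-scattering values vs above-ground biomass (t/ha).
scatter = [0.10, 0.18, 0.22, 0.30, 0.41]
agb_t_ha = [55.0, 90.0, 100.0, 150.0, 190.0]
print(r_squared(scatter, agb_t_ha))
```

    In the study this statistic is 0.589 for the PolInSAR-derived volume scattering and about 0.49 to 0.50 for the individual SAR images.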

  15. Intra-observer reliability and agreement of manual and digital orthodontic model analysis.

    Science.gov (United States)

    Koretsi, Vasiliki; Tingelhoff, Linda; Proff, Peter; Kirschneck, Christian

    2018-01-23

    Digital orthodontic model analysis is gaining acceptance in orthodontics, but its reliability is dependent on the digitalisation hardware and software used. We thus investigated intra-observer reliability and agreement / conformity of a particular digital model analysis work-flow in relation to traditional manual plaster model analysis. Forty-eight plaster casts of the upper/lower dentition were collected. Virtual models were obtained with orthoX®scan (Dentaurum) and analysed with ivoris®analyze3D (Computer konkret). Manual model analyses were done with a dial caliper (0.1 mm). Common parameters were measured on each plaster cast and its virtual counterpart five times each by an experienced observer. We assessed intra-observer reliability within method (ICC), agreement/conformity between methods (Bland-Altman analyses and Lin's concordance correlation), and changing bias (regression analyses). Intra-observer reliability was substantial within each method (ICC ≥ 0.7), except for five manual outcomes (12.8 per cent). Bias between methods was statistically significant, but less than 0.5 mm for 87.2 per cent of the outcomes. In general, larger tooth sizes were measured digitally. Total difference maxilla and mandible had wide limits of agreement (-3.25/6.15 and -2.31/4.57 mm), but bias between methods was mostly smaller than intra-observer variation within each method with substantial conformity of manual and digital measurements in general. No changing bias was detected. Although both work-flows were reliable, the investigated digital work-flow proved to be more reliable and yielded on average larger tooth sizes. Averaged differences between methods were within 0.5 mm for directly measured outcomes but wide ranges are expected for some computed space parameters due to cumulative error. © The Author 2017. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. 

  16. Age-dependent reliability model considering effects of maintenance and working conditions

    International Nuclear Information System (INIS)

    Martorell, Sebastian; Sanchez, Ana; Serradell, Vicente

    1999-01-01

    Nowadays, there is some doubt about building new nuclear power plants (NPPs). Instead, there is a growing interest in analyzing the possibility to extend current NPP operation, where life management programs play an important role. The evolution of the NPP safety depends on the evolution of the reliability of its safety components, which, in turn, is a function of their age along the NPP operational life. In this paper, a new age-dependent reliability model is presented, which includes parameters related to surveillance and maintenance effectiveness and working conditions of the equipment, both environmental and operational. This model may be used to support NPP life management and life extension programs, by improving or optimizing surveillance and maintenance tasks using risk and cost models based on such an age-dependent reliability model. The results of the sensitivity study in the example application show that the selection of the most appropriate maintenance strategy would directly depend on the previous parameters. Then, very important differences are expected to appear under certain circumstances, particularly, in comparison with other models that do not consider maintenance effectiveness and working conditions simultaneously
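
    One common way to encode maintenance effectiveness (a hypothetical illustration, not necessarily the paper's exact parameterisation) is an effective-age model: each maintenance action of effectiveness eps rewinds part of the age accumulated since the previous action, and a Weibull hazard is evaluated at the effective age.

```python
import math

def effective_age(t, maint_times, eps):
    """Effective age under imperfect maintenance (proportional age reduction).

    Each maintenance at time m rewinds the effective age by a fraction eps of
    the age accumulated so far: eps=1 is good-as-new, eps=0 has no effect.
    """
    w, prev = 0.0, 0.0
    for m in sorted(maint_times):
        if m > t:
            break
        w = (1 - eps) * (w + (m - prev))
        prev = m
    return w + (t - prev)

def weibull_reliability(w, beta=2.0, eta=10.0):
    """Survival probability for a Weibull hazard evaluated at effective age w."""
    return math.exp(-((w / eta) ** beta))

maint = [5.0, 10.0, 15.0]
r_effective = weibull_reliability(effective_age(18.0, maint, eps=0.8))
r_no_maint = weibull_reliability(effective_age(18.0, maint, eps=0.0))
print(r_effective, r_no_maint)
```

    Sweeping eps, the test interval, and the Weibull parameters reproduces the kind of sensitivity study the paper uses to compare maintenance strategies.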

  17. Software reliability growth model for safety systems of nuclear reactor

    International Nuclear Information System (INIS)

    Thirugnana Murthy, D.; Murali, N.; Sridevi, T.; Satya Murty, S.A.V.; Velusamy, K.

    2014-01-01

    The demand for complex software systems has increased more rapidly than the ability to design, implement, test, and maintain them, and the reliability of software systems has become a major concern for modern society. Software failures have impaired several high-visibility programs in the space, telecommunications, defense and health industries; besides the costs involved, they set back the projects. This paper discusses the need for systematic approaches to measuring and assuring software reliability, which consumes a major share of project development resources, and the ways of quantifying reliability and using it for improvement and control of the software development and maintenance process. It covers reliability models with a focus on 'reliability growth', and includes data collection on reliability, statistical estimation and prediction, metrics and attributes of product architecture, design, software development, and the operational environment. Besides its use for operational decisions like deployment, software reliability assessment can guide software architecture, development, testing, and verification and validation. (author)
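
    A standard reliability-growth form (used here purely as an illustration; the paper surveys growth models in general) is the Goel-Okumoto NHPP model, where the expected number of failures found by test time t is m(t) = a(1 - e^(-bt)), with a the total fault content and b the per-fault detection rate.

```python
import math

def expected_failures(t, a=100.0, b=0.05):
    """Goel-Okumoto mean value function m(t) = a*(1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def residual_faults(t, a=100.0, b=0.05):
    """Expected faults still latent in the software at test time t."""
    return a - expected_failures(t, a, b)

# Hypothetical testing campaign: a=100 total faults, b=0.05 per week.
for week in (0, 10, 40, 80):
    print(week, expected_failures(week), residual_faults(week))
```

    Fitting a and b to observed failure times is what turns the curve into a release-readiness criterion for a safety system.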

  18. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    Science.gov (United States)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  19. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…

  20. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs), in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT's reliability. However, fault detection coverage alone is insufficient to reflect the effects of the various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability
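
    The role fault coverage plays in unavailability can be sketched with a simple two-term textbook-style formula (an illustration of the principle, not the paper's integrated-coverage model): faults revealed by the FTT (fraction c) are repaired within a short MTTR, while unrevealed faults persist on average for half the periodic test interval.

```python
def unavailability(lam, coverage, mttr_h, test_interval_h):
    """Mean fractional dead time of a repairable, periodically tested component.

    lam              failure rate (per hour)
    coverage         fraction of faults revealed by the fault-tolerant technique
    mttr_h           repair time for detected (annunciated) faults
    test_interval_h  periodic test interval catching undetected faults
    """
    detected = coverage * lam * mttr_h
    undetected = (1.0 - coverage) * lam * test_interval_h / 2.0
    return detected + undetected

lam = 1e-5  # failures per hour (hypothetical)
for c in (0.0, 0.9, 0.99):
    print(c, unavailability(lam, c, mttr_h=8.0, test_interval_h=2190.0))
```

    The steep drop in unavailability as coverage rises is why the detailed treatment of the detection-to-fail-safe process matters for risk analysis.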

  1. Methodology for allocating reliability and risk

    International Nuclear Information System (INIS)

    Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.

    1986-05-01

    This report describes a methodology for reliability and risk allocation in nuclear power plants. The work investigates the technical feasibility of allocating reliability and risk, which are expressed in a set of global safety criteria and which may not necessarily be rigid, to various reactor systems, subsystems, components, operations, and structures in a consistent manner. The report also provides general discussions on the problem of reliability and risk allocation. The problem is formulated as a multiattribute decision analysis paradigm. The work mainly addresses the first two steps of a typical decision analysis, i.e., (1) identifying alternatives, and (2) generating information on outcomes of the alternatives, by performing a multiobjective optimization on a PRA model and reliability cost functions. The multiobjective optimization serves as the guiding principle to reliability and risk allocation. The concept of ''noninferiority'' is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The final step of decision analysis, i.e., assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided, and several outstanding issues such as generic allocation, preference assessment, and uncertainty are discussed. 29 refs., 44 figs., 39 tabs
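
    The noninferior-set idea can be made concrete: among allocation alternatives scored on (risk, cost), keep exactly those not dominated by any other alternative on both objectives. The alternatives below are invented for illustration.

```python
def noninferior(alternatives):
    """Return the alternatives not dominated in both (risk, cost); lower is better."""
    def dominated(x):
        return any(
            y != x and y[0] <= x[0] and y[1] <= x[1] and (y[0] < x[0] or y[1] < x[1])
            for y in alternatives
        )
    return [x for x in alternatives if not dominated(x)]

# Hypothetical (core damage frequency, upgrade cost in M$) per allocation.
alts = [(1e-4, 2.0), (5e-5, 3.5), (5e-5, 5.0), (2e-5, 9.0), (1.5e-4, 1.9)]
print(noninferior(alts))
```

    The decision maker's preference assessment then operates only on this reduced set, which is the final step of the decision analysis the report describes.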

  2. Inter-arch digital model vs. manual cast measurements: Accuracy and reliability.

    Science.gov (United States)

    Kiviahde, Heikki; Bukovac, Lea; Jussila, Päivi; Pesonen, Paula; Sipilä, Kirsi; Raustia, Aune; Pirttiniemi, Pertti

    2017-06-28

    The purpose of this study was to evaluate the accuracy and reliability of inter-arch measurements using digital dental models and conventional dental casts. Thirty sets of dental casts with permanent dentition were examined. Manual measurements were done with a digital caliper directly on the dental casts, and digital measurements were made on 3D models by two independent examiners. Intra-class correlation coefficients (ICC), a paired sample t-test or Wilcoxon signed-rank test, and Bland-Altman plots were used to evaluate intra- and inter-examiner error and to determine the accuracy and reliability of the measurements. The ICC values were generally good for manual and excellent for digital measurements. The Bland-Altman plots of all the measurements showed good agreement between the manual and digital methods and excellent inter-examiner agreement using the digital method. Inter-arch occlusal measurements on digital models are accurate and reliable and are superior to manual measurements.
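
    The agreement statistics used in the study can be reproduced in a few lines: the Bland-Altman bias is the mean of the paired differences and the limits of agreement are bias plus or minus 1.96 standard deviations. The measurements below are fabricated for illustration.

```python
import math

def bland_altman(method_a, method_b):
    """Return (bias, lower limit of agreement, upper limit of agreement)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Fabricated tooth-width measurements (mm): digital model vs manual caliper.
manual = [8.1, 7.4, 10.2, 6.9, 9.0]
digital = [8.3, 7.4, 10.5, 7.1, 9.2]
bias, lo, hi = bland_altman(digital, manual)
print(f"bias = {bias:.2f} mm, LoA = [{lo:.2f}, {hi:.2f}] mm")
```

    A positive bias with limits of agreement straddling zero matches the study's finding that digital measurements run slightly larger than manual ones.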

  3. Fast Monte Carlo reliability evaluation using support vector machine

    International Nuclear Information System (INIS)

    Rocco, Claudio M.; Moreno, Jose Ali

    2002-01-01

    This paper deals with the feasibility of using support vector machine (SVM) to build empirical models for use in reliability evaluation. The approach takes advantage of the speed of SVM in the numerous model calculations typically required to perform a Monte Carlo reliability evaluation. The main idea is to develop an estimation algorithm, by training a model on a restricted data set, and replace system performance evaluation by a simpler calculation, which provides reasonably accurate model outputs. The proposed approach is illustrated by several examples. Excellent system reliability results are obtained by training a SVM with a small amount of information
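
    The idea, train a cheap classifier on a restricted set of exact evaluations and let it stand in for the full performance model inside the Monte Carlo loop, can be sketched as follows. A 1-nearest-neighbour classifier is used here as a simple stand-in for the paper's SVM, and the five-component bridge network is a standard textbook system.

```python
import itertools
import random

EDGES = 5  # bridge network: s-a, s-b, a-b, a-t, b-t

def system_works(state):
    """Exact structure function of the bridge network (1 = edge up)."""
    e1, e2, e3, e4, e5 = state
    return (e1 and e4) or (e2 and e5) or (e1 and e3 and e5) or (e2 and e3 and e4)

# "Expensive" exact evaluations on a restricted training set of states.
random.seed(0)
training = random.sample(list(itertools.product((0, 1), repeat=EDGES)), 20)
labels = {s: system_works(s) for s in training}

def surrogate(state):
    """1-NN on Hamming distance, standing in for the trained SVM."""
    nearest = min(labels, key=lambda s: sum(a != b for a, b in zip(s, state)))
    return labels[nearest]

# Monte Carlo reliability estimate using only the surrogate.
p = 0.9  # common component reliability
n = 5000
hits = sum(
    surrogate(tuple(int(random.random() < p) for _ in range(EDGES)))
    for _ in range(n)
)
print("estimated reliability:", hits / n)
```

    The payoff appears when the exact evaluation is a costly flow or connectivity computation: the surrogate is consulted millions of times while the exact model is run only on the training set.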

  4. Modeling cognition dynamics and its application to human reliability analysis

    International Nuclear Information System (INIS)

    Mosleh, A.; Smidts, C.; Shen, S.H.

    1996-01-01

    For the past two decades, a number of approaches have been proposed for the identification and estimation of the likelihood of human errors, particularly for use in the risk and reliability studies of nuclear power plants. Despite the wide-spread use of the most popular among these methods, their fundamental weaknesses are widely recognized, and the treatment of human reliability has been considered as one of the soft spots of risk studies of large technological systems. To alleviate the situation, new efforts have focused on the development of human reliability models based on a more fundamental understanding of operator response and its cognitive aspects

  5. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)
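
    The effect described, common-mode failures dominating a redundant pair, is commonly captured with a beta-factor parameterisation (used here as an illustration; the paper's own formulation may differ): a fraction beta of each train's failure probability is common to both trains.

```python
def redundant_pair_failure(q, beta):
    """Failure probability of a 1-out-of-2 redundant pair, beta-factor model.

    q     total failure probability of one train
    beta  fraction of failures that are common cause
    """
    q_indep = (1.0 - beta) * q  # independent part, must fail in both trains
    q_ccf = beta * q            # common-cause part, fails both trains at once
    return q_indep ** 2 + q_ccf

q = 1e-2    # per-demand failure probability of one diesel generator (illustrative)
beta = 0.1  # common-cause fraction (illustrative)

naive = q ** 2  # prediction if all failures were independent
with_ccf = redundant_pair_failure(q, beta)
print(naive, with_ccf, with_ccf / naive)
```

    Even a modest beta raises the pair's failure probability by an order of magnitude over the independent-failure prediction, which is the paper's central observation.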

  6. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where...... the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena...... identification. Application of the proposed method can be found in many real world systems....

  7. Reliability and Maintainability Model (RAM): User and Maintenance Manual. Part 2; Improved Supportability Analysis

    Science.gov (United States)

    Ebeling, Charles E.

    1996-01-01

    This report documents the procedures for utilizing and maintaining the Reliability & Maintainability Model (RAM) developed by the University of Dayton for the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the grant is to provide support to NASA in establishing operational and support parameters and costs of proposed space systems. As part of this research objective, the model described here was developed. This Manual updates and supersedes the 1995 RAM User and Maintenance Manual. Changes and enhancements from the 1995 version of the model are primarily a result of the addition of more recent aircraft and shuttle R&M data.

  8. Cluster-based upper body marker models for three-dimensional kinematic analysis: Comparison with an anatomical model and reliability analysis.

    Science.gov (United States)

    Boser, Quinn A; Valevicius, Aïda M; Lavoie, Ewen B; Chapman, Craig S; Pilarski, Patrick M; Hebert, Jacqueline S; Vette, Albert H

    2018-04-27

    Quantifying angular joint kinematics of the upper body is a useful method for assessing upper limb function. Joint angles are commonly obtained via motion capture, tracking markers placed on anatomical landmarks. This method is associated with limitations including administrative burden, soft tissue artifacts, and intra- and inter-tester variability. An alternative method involves the tracking of rigid marker clusters affixed to body segments, calibrated relative to anatomical landmarks or known joint angles. The accuracy and reliability of applying this cluster method to the upper body has, however, not been comprehensively explored. Our objective was to compare three different upper body cluster models with an anatomical model, with respect to joint angles and reliability. Non-disabled participants performed two standardized functional upper limb tasks with anatomical and cluster markers applied concurrently. Joint angle curves obtained via the marker clusters with three different calibration methods were compared to those from an anatomical model, and between-session reliability was assessed for all models. The cluster models produced joint angle curves which were comparable to and highly correlated with those from the anatomical model, but exhibited notable offsets and differences in sensitivity for some degrees of freedom. Between-session reliability was comparable between all models, and good for most degrees of freedom. Overall, the cluster models produced reliable joint angles that, however, cannot be used interchangeably with anatomical model outputs to calculate kinematic metrics. Cluster models appear to be an adequate, and possibly advantageous alternative to anatomical models when the objective is to assess trends in movement behavior. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of 10 Gbps high speed networks and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.

  10. The explicit treatment of model uncertainties in the presence of aleatory and epistemic parameter uncertainties in risk and reliability analysis

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Yang, Joon Eon

    2003-01-01

    In the risk and reliability analysis of complex technological systems, the primary concern of formal uncertainty analysis is to understand why uncertainties arise, and to evaluate how they impact the results of the analysis. In recent times, many of the uncertainty analyses have focused on parameters of the risk and reliability analysis models, whose values are uncertain in an aleatory or an epistemic way. As the field of parametric uncertainty analysis matures, however, more attention is being paid to the explicit treatment of uncertainties that are addressed in the predictive model itself as well as the accuracy of the predictive model. The essential steps for evaluating impacts of these model uncertainties in the presence of parameter uncertainties are to determine rigorously various sources of uncertainties to be addressed in an underlying model itself and in turn model parameters, based on our state-of-knowledge and relevant evidence. Answering clearly the question of how to characterize and treat explicitly the forgoing different sources of uncertainty is particularly important for practical aspects such as risk and reliability optimization of systems as well as more transparent risk information and decision-making under various uncertainties. The main purpose of this paper is to provide practical guidance for quantitatively treating various model uncertainties that would often be encountered in the risk and reliability modeling process of complex technological systems

  11. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology, whereas this has yet to be fully achieved for large-scale structures. Structural loading variations over the lifetime of the plant are considered more difficult to analyse than those of systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions that enter this problem are considered. The rare-event situation is briefly mentioned, together with aspects of proof testing and normal and upset loading conditions. (orig.)

  12. OSS reliability measurement and assessment

    CERN Document Server

    Yamada, Shigeru

    2016-01-01

    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
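One of the component-oriented techniques the book introduces, the analytic hierarchy process (AHP), derives priority weights for OSS components from a pairwise-comparison matrix. The sketch below shows the standard principal-eigenvector calculation via power iteration; the comparison values are hypothetical, not taken from the book.

```python
# Illustrative AHP priority weights for three OSS components, computed
# as the principal eigenvector of a pairwise-comparison matrix by power
# iteration. The judgement values are hypothetical.

def ahp_weights(matrix, iters=100):
    """Principal-eigenvector priority weights of a positive pairwise matrix."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]  # renormalise so the weights sum to 1
    return w

# Pairwise judgements on Saaty's 1-9 scale: component A moderately more
# critical than B, strongly more critical than C.
comparisons = [
    [1.0,       3.0,       5.0],
    [1.0 / 3.0, 1.0,       3.0],
    [1.0 / 5.0, 1.0 / 3.0, 1.0],
]
weights = ahp_weights(comparisons)
```

The resulting weights can then feed a component-weighted reliability assessment of the overall OSS system.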

  13. Improvement of the reliability graph with general gates to analyze the reliability of dynamic systems that have various operation modes

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Seung Ki [Div. of Research Reactor System Design, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); No, Young Gyu; Seong, Poong Hyun [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2016-04-15

    The safety of nuclear power plants is analyzed by a probabilistic risk assessment, and the fault tree analysis is the most widely used method for a risk assessment with the event tree analysis. One of the well-known disadvantages of the fault tree is that drawing a fault tree for a complex system is a very cumbersome task. Thus, several graphical modeling methods have been proposed for the convenient and intuitive modeling of complex systems. In this paper, the reliability graph with general gates (RGGG) method, one of the intuitive graphical modeling methods based on Bayesian networks, is improved for the reliability analyses of dynamic systems that have various operation modes with time. A reliability matrix is proposed and it is explained how to utilize the reliability matrix in the RGGG for various cases of operation mode changes. The proposed RGGG with a reliability matrix provides a convenient and intuitive modeling of various operation modes of complex systems, and can also be utilized with dynamic nodes that analyze the failure sequences of subcomponents. The combinatorial use of a reliability matrix with dynamic nodes is illustrated through an application to a shutdown cooling system in a nuclear power plant.

  14. Improvement of the reliability graph with general gates to analyze the reliability of dynamic systems that have various operation modes

    International Nuclear Information System (INIS)

    Shin, Seung Ki; No, Young Gyu; Seong, Poong Hyun

    2016-01-01

    The safety of nuclear power plants is analyzed by a probabilistic risk assessment, and the fault tree analysis is the most widely used method for a risk assessment with the event tree analysis. One of the well-known disadvantages of the fault tree is that drawing a fault tree for a complex system is a very cumbersome task. Thus, several graphical modeling methods have been proposed for the convenient and intuitive modeling of complex systems. In this paper, the reliability graph with general gates (RGGG) method, one of the intuitive graphical modeling methods based on Bayesian networks, is improved for the reliability analyses of dynamic systems that have various operation modes with time. A reliability matrix is proposed and it is explained how to utilize the reliability matrix in the RGGG for various cases of operation mode changes. The proposed RGGG with a reliability matrix provides a convenient and intuitive modeling of various operation modes of complex systems, and can also be utilized with dynamic nodes that analyze the failure sequences of subcomponents. The combinatorial use of a reliability matrix with dynamic nodes is illustrated through an application to a shutdown cooling system in a nuclear power plant

  15. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    Science.gov (United States)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software, and a software and hardware system depends in turn on the rules, i.e. the procedures, that govern its use. If a procedure or program can be characterized reliably using the concepts of graphs, logic, and probability, then the strength of the governing rules can be measured accordingly. This paper therefore initiates an enumeration model for measuring the reliability of interfaces, based on the case of information systems whose rules of use are set by the relevant agencies. The enumeration model is derived from software reliability calculation.

  16. Scaled CMOS Technology Reliability Users Guide

    Science.gov (United States)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology for accomplishing this, and techniques for deriving the expected product-level reliability of commercial memory products, are provided. Competing-mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess performance degradation and product reliability, and acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied, and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta) = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated, and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is presented.
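The multiple-failure-mechanism idea named above can be sketched as follows: each mechanism's failure rate observed under temperature stress is derated to use conditions with an Arrhenius acceleration factor, and the competing mechanisms' rates are then summed. The activation energies and FIT values below are hypothetical, not the paper's measured parameters.

```python
import math

# Sketch of a multiple-failure-mechanism derating: mechanism-specific
# failure rates measured at a stress temperature are divided by an
# Arrhenius acceleration factor and summed (competing mechanisms).
# Activation energies and FIT values are hypothetical.

K_BOLTZMANN = 8.617e-5  # eV/K

def arrhenius_af(ea_ev, t_stress_k, t_use_k):
    """Acceleration factor from stress temperature down to use temperature."""
    return math.exp(ea_ev / K_BOLTZMANN * (1.0 / t_use_k - 1.0 / t_stress_k))

# (activation energy in eV, failure rate in FIT observed at 398 K stress)
mechanisms = [(0.6, 200.0), (1.0, 150.0)]
T_STRESS, T_USE = 398.0, 328.0

fit_use = sum(fit_stress / arrhenius_af(ea, T_STRESS, T_USE)
              for ea, fit_stress in mechanisms)
```

Note how the high-activation-energy mechanism, dominant under stress, contributes almost nothing at use temperature, which is the practical point of separating the mechanisms.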

  17. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    Energy Technology Data Exchange (ETDEWEB)

    Pegg, E.C., E-mail: elise.pegg@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Mellon, S.J., E-mail: stephen.mellon@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Salmon, G. [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Alvand, A., E-mail: abtin.alvand@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Pandit, H., E-mail: hemant.pandit@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Murray, D.W., E-mail: david.murray@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Gill, H.S., E-mail: richie.gill@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom)

    2012-10-15

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers and one observer took four sets of measurements to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average of 27% and 11.2% increase for intra- and inter-reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model compared with manual measurements, which was significant. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.

  18. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    International Nuclear Information System (INIS)

    Pegg, E.C.; Mellon, S.J.; Salmon, G.; Alvand, A.; Pandit, H.; Murray, D.W.; Gill, H.S.

    2012-01-01

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers and one observer took four sets of measurements to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average of 27% and 11.2% increase for intra- and inter-reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model compared with manual measurements, which was significant. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements

  19. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples based on the proposed reliability model for Ni-BaTiO3 MLCCs are also discussed.
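A widely used acceleration function for BaTiO3-based MLCCs is the Prokopowicz-Vaskas voltage-and-temperature relation, which can serve as a sketch of the model's second part; the exponent, activation energy, and test conditions below are hypothetical, and the paper's exact empirical structure term is not reproduced here.

```python
import math

# Prokopowicz-Vaskas-type acceleration factor for MLCC life between an
# accelerated test condition and a use condition. The voltage exponent n
# and activation energy Ea are hypothetical illustration values.

K = 8.617e-5  # Boltzmann constant, eV/K

def pv_acceleration(v_test, v_use, t_test_k, t_use_k, n=3.0, ea=1.2):
    """AF = (V_test/V_use)^n * exp(Ea/k * (1/T_use - 1/T_test))."""
    return (v_test / v_use) ** n * math.exp(
        ea / K * (1.0 / t_use_k - 1.0 / t_test_k))

af = pv_acceleration(v_test=100.0, v_use=50.0, t_test_k=413.0, t_use_k=358.0)
# Projected use-condition life from a 1000 h accelerated life test.
life_use_hours = 1000.0 * af
```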

  20. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    Science.gov (United States)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article offers a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainty. The facilities located by this approach concurrently satisfy both traditional objective functions and reliability considerations in CLSC network design. To address this problem, a novel mathematical model is developed that integrates the network design decisions across both the forward and reverse supply chain networks and utilizes an effective reliability approach to find a robust network design. To make the results more realistic, a CLSC case study in the iron and steel industry is explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier; furthermore, multiple facilities exist in the reverse logistics network, leading to high complexity. Since the collection centres play an important role in this network, the reliability of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature: a bi-objective interval fuzzy possibilistic chance-constrained mixed integer linear program (BOIFPCCMILP). Finally, computational experiments demonstrate the applicability and suitability of the proposed model in a supply chain environment and help decision makers facilitate their analyses.
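The chance-constrained ingredient has a simple classical analogue worth sketching: a constraint P(demand <= capacity) >= alpha with normally distributed demand reduces to the deterministic equivalent capacity >= mu + z_alpha * sigma. The numbers are hypothetical, and the paper itself works with interval fuzzy possibilistic data rather than this purely probabilistic form.

```python
import math

# Deterministic equivalent of a probabilistic chance constraint
# P(demand <= capacity) >= alpha for Normal(mu, sigma) demand:
# capacity >= mu + z_alpha * sigma. All numbers are hypothetical.

def inv_phi(p, lo=-10.0, hi=10.0):
    """Standard normal quantile by bisection (stdlib only)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def min_reliable_capacity(mu, sigma, alpha):
    """Smallest capacity that meets the chance constraint at level alpha."""
    return mu + inv_phi(alpha) * sigma

cap_90 = min_reliable_capacity(mu=1000.0, sigma=120.0, alpha=0.90)
cap_99 = min_reliable_capacity(mu=1000.0, sigma=120.0, alpha=0.99)
```

Raising the reliability level alpha buys robustness at the cost of extra capacity, which is exactly the trade-off the bi-objective model balances.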

  1. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. Here, the major application of human reliability assessment has been to identify the human errors that have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed, and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment; the nature of human error; classification of errors in man-machine systems; practical aspects; human reliability modelling in complex situations; quantification and examination of human reliability; judgement-based approaches; holistic techniques; and decision-analytic approaches. (UK)

  2. The cognitive environment simulation as a tool for modeling human performance and reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Pople, H. Jr.; Roth, E.M.

    1990-01-01

    The US Nuclear Regulatory Commission is sponsoring a research program to develop improved methods to model the cognitive behavior of nuclear power plant (NPP) personnel. Under this program, a tool for simulating how people form intentions to act in NPP emergency situations was developed using artificial intelligence (AI) techniques. This tool is called the Cognitive Environment Simulation (CES). The Cognitive Reliability Assessment Technique (or CREATE) was also developed to specify how CES can be used to enhance the measurement of the human contribution to risk in probabilistic risk assessment (PRA) studies. The next step in the research program was to evaluate the modeling tool and the method for using the tool for Human Reliability Analysis (HRA) in PRAs. Three evaluation activities were conducted. First, a panel of highly distinguished experts in cognitive modeling, AI, PRA and HRA provided a technical review of the simulation development work. Second, based on panel recommendations, CES was exercised on a family of steam generator tube rupture incidents where empirical data on operator performance already existed. Third, a workshop with HRA practitioners was held to analyze a worked example of the CREATE method to evaluate the role of CES/CREATE in HRA. The results of all three evaluations indicate that CES/CREATE represents a promising approach to modeling operator intention formation during emergency operations.

  3. Damage Model for Reliability Assessment of Solder Joints in Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    environmental factors. Reliability assessment for such products is conventionally performed with classical reliability techniques based on test data. Such conventional reliability approaches are usually time- and resource-consuming activities, so in this paper we choose a physics-of-failure approach to define...... damage model by Miner's rule. Our attention is focused on crack propagation in solder joints of electrical components due to temperature loadings. Based on the proposed method, it is described how to find the damage level for a given temperature loading profile. The proposed method is discussed...
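Miner's linear damage rule named above can be sketched in a few lines: each temperature-cycling load level contributes damage equal to the applied cycles divided by the cycles-to-failure at that level, and failure is predicted when the sum reaches one. The cycle counts and fatigue lives below are hypothetical.

```python
# Minimal sketch of Miner's linear damage rule for a temperature-cycling
# profile on a solder joint. Cycle counts and fatigue lives are
# hypothetical illustration values.

def miners_damage(profile):
    """profile: iterable of (applied_cycles, cycles_to_failure) pairs."""
    return sum(n / float(big_n) for n, big_n in profile)

# Three temperature ranges with different solder-joint fatigue lives.
profile = [(10_000, 1_000_000), (5_000, 200_000), (500, 20_000)]
damage = miners_damage(profile)
failed = damage >= 1.0  # Miner's criterion: failure when damage reaches 1
```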

  4. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. It is usually estimated with reliability models such as nonhomogeneous Poisson process (NHPP) models. A software system improves during the testing phase, while it normally does not change during the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up, or even misused, in some of the existing literature. Using different reliability concepts leads to different reliability values, and in turn to different reliability-based decisions. The difference between the estimated reliabilities is studied and the effect on the optimal release time is investigated

  5. Tracking reliability for space cabin-borne equipment in development by Crow model.

    Science.gov (United States)

    Chen, J D; Jiao, S J; Sun, H L

    2001-12-01

    Objective. To study and track the reliability growth of manned-spaceflight cabin-borne equipment during its development. Method. A new technique of reliability growth estimation and prediction, composed of the Crow model and a test data conversion (TDC) method, was used. Result. The estimated and predicted reliability growth values conformed to expectations. Conclusion. The method can dynamically estimate and predict the reliability of the equipment by making full use of the various test information generated during development. It offers not only a possibility of tracking equipment reliability growth, but also a reference for quality control in the design and development of manned-spaceflight cabin-borne equipment.
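The Crow (AMSAA) model used above has closed-form maximum-likelihood estimators for a time-truncated test, which can be sketched directly from cumulative failure times; the failure times below are hypothetical, not from the cabin-borne equipment programme.

```python
import math

# Crow (AMSAA) reliability-growth estimation for a time-truncated test:
# MLEs of the growth parameter beta and scale lambda from cumulative
# failure times, plus the instantaneous MTBF at test end. Times are
# hypothetical.

def crow_amsaa(failure_times, total_time):
    n = len(failure_times)
    beta = n / sum(math.log(total_time / t) for t in failure_times)
    lam = n / total_time ** beta
    # Instantaneous failure intensity and MTBF at the end of the test.
    intensity = lam * beta * total_time ** (beta - 1.0)
    return beta, lam, 1.0 / intensity

times = [10.0, 50.0, 120.0, 300.0, 600.0]   # cumulative test hours
beta, lam, mtbf = crow_amsaa(times, total_time=1000.0)
growth = beta < 1.0   # beta < 1 indicates reliability growth
```

Tracking beta and the instantaneous MTBF after each test period is what makes the estimate "dynamic" in the sense described above.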

  6. Foundations for a time reliability correlation system to quantify human reliability

    International Nuclear Information System (INIS)

    Dougherty, E.M. Jr.; Fragola, J.R.

    1988-01-01

    Time reliability correlations (TRCs) have been used in human reliability analysis (HRA), in conjunction with probabilistic risk assessment (PRA), to quantify post-initiator human failure events. The first TRCs were judgmental, but data recently taken from simulators have provided evidence for the development of a system of TRCs. This system has the equational form t = τ_R × τ_U, where the first factor, τ_R, is the lognormally distributed random variable of successful response time derived from the simulator data, and the second factor, τ_U, is a unitary lognormal random variable that accounts for uncertainty in the model. The first random variable is further factored into a median response time, a factor accounting for the dominant type of behavior assumed to be involved in the response, and a second factor accounting for other influences on the reliability of the response
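Because the product of two lognormal variables is lognormal, the ln-medians add and the ln-variances add, so the non-response probability at time t follows from a single normal CDF. The sketch below uses that property; the median response time and error factors are hypothetical, not values from the simulator data.

```python
import math

# Non-response probability under the TRC form t = tau_R * tau_U, where
# tau_R is the lognormal response time and tau_U a unit-median lognormal
# uncertainty factor. Medians and error factors are hypothetical.

def lognormal_sigma(error_factor):
    """sigma of ln(T) from a 95th/50th percentile error factor."""
    return math.log(error_factor) / 1.645

def nonresponse_prob(t, median_r, ef_r, ef_u):
    """P(crew has not responded by time t)."""
    mu = math.log(median_r)            # ln-median of the product (tau_U median = 1)
    sigma = math.hypot(lognormal_sigma(ef_r), lognormal_sigma(ef_u))
    z = (math.log(t) - mu) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

p_at_median = nonresponse_prob(5.0, median_r=5.0, ef_r=3.0, ef_u=2.0)
p_at_30min = nonresponse_prob(30.0, median_r=5.0, ef_r=3.0, ef_u=2.0)
```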

  7. Modeling of humidity-related reliability in enclosures with electronics

    DEFF Research Database (Denmark)

    Hygum, Morten Arnfeldt; Popok, Vladimir

    2015-01-01

    Reliability of electronics that operate outdoor is strongly affected by environmental factors such as temperature and humidity. Fluctuations of these parameters can lead to water condensation inside enclosures. Therefore, modelling of humidity distribution in a container with air and freely exposed...

  8. Reliability measures in managing GI bleeding.

    Science.gov (United States)

    Sonnenberg, Amnon

    2012-06-01

    Multiple procedures and devices are used in a complex interplay to diagnose and treat GI bleeding. To model how a large variety of diagnostic and therapeutic components interact in the successful management of GI bleeding. The analysis uses the concept of reliability block diagrams from probability theory to model management outcome. Separate components of the management process are arranged in a serial or parallel fashion. If the outcome depends on the function of each component individually, such components are modeled to be arranged in series. If components complement each other and can mutually compensate for each of their failures, such components are arranged in a parallel fashion. General endoscopy practice. Patients with GI bleeding of unknown etiology. All available endoscopic and radiographic means to diagnose and treat GI bleeding. Process reliability in achieving hemostasis. Serial arrangements tend to reduce process reliability, whereas parallel arrangements increase it. Whenever possible, serial components should be bridged and complemented by additional alternative (parallel) routes of operation. Parallel components with low individual reliability can still contribute to overall process reliability as long as they function independently of other pre-existing alternatives. Probability of success associated with individual components is partly unknown. Modeling management of GI bleeding by a reliability block diagram provides a useful tool in assessing the impact of individual endoscopic techniques and administrative structures on the overall outcome. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
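The reliability-block-diagram arithmetic applied above reduces to two rules: serial components must all succeed, while parallel components back each other up. A minimal sketch, with hypothetical success probabilities for the diagnostic and therapeutic steps:

```python
# Series/parallel reliability-block-diagram arithmetic. The success
# probabilities assigned to the steps are hypothetical.

from functools import reduce

def series(*probs):
    """All components must succeed."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def parallel(*probs):
    """At least one component must succeed."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# A diagnostic step in series with two complementary hemostasis options
# (e.g. endoscopic therapy backed up by an alternative route) in parallel.
p_hemostasis = series(0.95, parallel(0.80, 0.70))
```

The numbers illustrate the paper's point: the parallel pair (0.94) is more reliable than either option alone, while the serial diagnostic step caps the overall result.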

  9. Reliability Analysis of a Steel Frame

    Directory of Open Access Journals (Sweden)

    M. Sýkora

    2002-01-01

    Full Text Available A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee for Structural Safety (JCSS) are used for basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-profile IPE 330 designed according to Eurocodes seems to be adequate. It appears that the time invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time variant analysis.

  10. Multivariate performance reliability prediction in real-time

    International Nuclear Information System (INIS)

    Lu, S.; Lu, H.; Kolarik, W.J.

    2001-01-01

    This paper presents a technique for predicting system performance reliability in real-time considering multiple failure modes. The technique includes on-line multivariate monitoring and forecasting of selected performance measures and conditional performance reliability estimates. The performance measures across time are treated as a multivariate time series. A state-space approach is used to model the multivariate time series. Recursive forecasting is performed by adopting Kalman filtering. The predicted mean vectors and covariance matrix of performance measures are used for the assessment of system survival/reliability with respect to the conditional performance reliability. The technique and modeling protocol discussed in this paper provide a means to forecast and evaluate the performance of an individual system in a dynamic environment in real-time. The paper also presents an example to demonstrate the technique
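A one-dimensional sketch of this approach: a scalar Kalman filter tracks a degradation-related performance measure, and the one-step-ahead predictive distribution gives the conditional performance reliability as the probability that the measure stays below a failure threshold. All numbers are hypothetical, and the paper's model is multivariate rather than scalar.

```python
import math

# Scalar state-space model x_k = a x_{k-1} + w, z_k = x_k + v, with a
# Kalman filter for recursive forecasting and a normal predictive
# distribution for conditional reliability. Parameters are hypothetical.

A_COEF, Q_NOISE, R_NOISE = 1.02, 0.01, 0.04

def kalman_update(x, p, z):
    """One predict/update cycle: returns updated mean and variance."""
    x_pred, p_pred = A_COEF * x, A_COEF ** 2 * p + Q_NOISE   # predict
    gain = p_pred / (p_pred + R_NOISE)                       # Kalman gain
    return x_pred + gain * (z - x_pred), (1.0 - gain) * p_pred

def survival_prob(mean, var, threshold):
    """P(performance measure < threshold) for a normal prediction."""
    z = (threshold - mean) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

x, p = 1.0, 1.0                            # prior estimate and variance
for z_meas in [1.05, 1.10, 1.13, 1.21]:    # drifting on-line measurements
    x, p = kalman_update(x, p, z_meas)

# Forecast one step ahead and assess the conditional reliability.
x_next, p_next = A_COEF * x, A_COEF ** 2 * p + Q_NOISE
reliability = survival_prob(x_next, p_next, threshold=1.5)
```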

  11. Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment

    Science.gov (United States)

    Davis, M. R.; Kamins, M.; Mooz, W. E.

    1978-01-01

    A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general avionics equipment operating in the 1980's. Practical problems of predicting these factors are examined. The usefulness and short comings of different approaches for modeling coast and reliability estimates are discussed together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing cost reliability CRM implications in the absence of reliable generalized predictive models.

  12. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost during operation. For this reason, and based on the assumption that vibration directly relates to operation reliability and to the degree of wear, operation reliability can be expressed as a normalisation of the vibration level. The characteristic of vibration with respect to the operating point was studied, and it can be concluded that an idealized flow-versus-vibration plot has a distinct bathtub shape: there is a narrow sweet spot (80 to 100 percent of BEP) in which low vibration levels are obtained, and vibration also scales approximately with the square of the rotation speed in the absence of resonance phenomena. Operation reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form a new one. Compared with the traditional method, the results show that the new model corrects the schedules produced by the traditional one and makes the pump operate at low vibration, so that operation reliability increases and maintenance cost decreases.
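The bathtub-shaped vibration behaviour described above can be sketched as a simple function of flow fraction and speed, with operation reliability taken as a normalised, inverted vibration level; the curve coefficients are hypothetical illustration values, not the paper's fitted model.

```python
# Illustrative bathtub-shaped vibration curve versus fraction of
# best-efficiency-point (BEP) flow, scaled with the square of relative
# rotation speed, and the resulting normalised operation reliability.
# All coefficients are hypothetical.

def vibration(q_frac_bep, speed_frac, base=1.0, k=6.0):
    """Relative vibration level; lowest near 80-100% of BEP flow."""
    sweet_spot = 0.9                      # centre of the low-vibration zone
    bathtub = base + k * (q_frac_bep - sweet_spot) ** 2
    return bathtub * speed_frac ** 2      # vibration ~ speed squared

def operation_reliability(q_frac_bep, speed_frac, v_max=4.0):
    """Operation reliability as a normalised (inverted) vibration level."""
    v = min(vibration(q_frac_bep, speed_frac), v_max)
    return 1.0 - v / v_max

r_sweet = operation_reliability(0.9, 1.0)   # inside the sweet spot
r_off = operation_reliability(0.5, 1.0)     # well away from BEP
```

A scheduler can then trade pumping cost against this reliability term, which is the extension the paper adds to the traditional model.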

  13. Telecommunications system reliability engineering theory and practice

    CERN Document Server

    Ayers, Mark L

    2012-01-01

    "Increasing system complexity require new, more sophisticated tools for system modeling and metric calculation. Bringing the field up to date, this book provides telecommunications engineers with practical tools for analyzing, calculating, and reporting availability, reliability, and maintainability metrics. It gives the background in system reliability theory and covers in-depth applications in fiber optic networks, microwave networks, satellite networks, power systems, and facilities management. Computer programming tools for simulating the approaches presented, using the Matlab software suite, are also provided"

  14. Reliability-based condition assessment of steel containment and liners

    International Nuclear Information System (INIS)

    Ellingwood, B.; Bhattacharya, B.; Zheng, R.

    1996-11-01

    Steel containments and liners in nuclear power plants may be exposed to aggressive environments that may cause their strength and stiffness to decrease during the plant service life. Among the factors recognized as having the potential to cause structural deterioration are uniform, pitting or crevice corrosion; fatigue, including crack initiation and propagation to fracture; elevated temperature; and irradiation. The evaluation of steel containments and liners for continued service must provide assurance that they are able to withstand future extreme loads during the service period with a level of reliability that is sufficient for public safety. Rational methodologies to provide such assurances can be developed using modern structural reliability analysis principles that take uncertainties in loading, strength, and degradation resulting from environmental factors into account. The research described in this report is in support of the Steel Containments and Liners Program being conducted for the US Nuclear Regulatory Commission by the Oak Ridge National Laboratory. The research demonstrates the feasibility of using reliability analysis as a tool for performing condition assessments and service life predictions of steel containments and liners. Mathematical models that describe time-dependent changes in steel due to aggressive environmental factors are identified, and statistical data supporting the use of these models in time-dependent reliability analysis are summarized. The analysis of steel containment fragility is described, and simple illustrations of the impact on reliability of structural degradation are provided. The role of nondestructive evaluation in time-dependent reliability analysis, both in terms of defect detection and sizing, is examined. A Markov model provides a tool for accounting for time-dependent changes in damage condition of a structural component or system. 151 refs
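The Markov model mentioned in closing can be sketched directly: the component occupies one of a few damage-condition states, and a transition matrix propagates the state probabilities across inspection intervals. The states and transition probabilities below are hypothetical.

```python
# Markov damage-state model: probabilities over condition states are
# propagated through a per-inspection-interval transition matrix.
# States and probabilities are hypothetical.

def step(dist, matrix):
    """One transition: new_dist[j] = sum_i dist[i] * matrix[i][j]."""
    n = len(dist)
    return [sum(dist[i] * matrix[i][j] for i in range(n)) for j in range(n)]

# Rows: from-state; columns: to-state. Degradation is irreversible here.
P = [
    [0.95, 0.05, 0.00],   # intact -> intact / corroded
    [0.00, 0.90, 0.10],   # corroded -> corroded / severely degraded
    [0.00, 0.00, 1.00],   # severely degraded is absorbing
]

dist = [1.0, 0.0, 0.0]    # start fully intact
for _ in range(10):       # ten inspection intervals
    dist = step(dist, P)

p_severe_10 = dist[2]     # probability of severe degradation by interval 10
```

Conditioning this evolution on inspection outcomes is what ties the Markov model to the nondestructive-evaluation results discussed above.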

  15. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

    The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to separately estimate the reliability of new products in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits that separately include a “reliability improvement factor” and an “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristic of the BMUA that information generated throughout the life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown.
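The flavour of Bayesian updating across life cycle stages can be sketched with a conjugate beta-binomial model. Here the stage transition is represented only as a simple forgetting-style multiplier that discounts earlier-stage evidence before carrying it forward; this stands in for, but is not, the paper's "reliability improvement factor" formulation, and all test data are hypothetical.

```python
# Beta-binomial sketch of Bayesian updating across life cycle stages:
# the posterior from one stage becomes the (discounted) prior of the
# next. The discount factor and test counts are hypothetical.

def update(alpha, beta, successes, failures):
    """Conjugate posterior after binomial test evidence."""
    return alpha + successes, beta + failures

def carry_forward(alpha, beta, discount=0.8):
    """Discount prior-stage evidence when moving to the next stage."""
    return alpha * discount, beta * discount

a, b = 1.0, 1.0                                 # vague prior at design stage
a, b = update(a, b, successes=18, failures=2)   # prototype test results
a, b = carry_forward(a, b)                      # transition to production
a, b = update(a, b, successes=47, failures=1)   # production test results

posterior_mean = a / (a + b)                    # reliability estimate
```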

  16. Reliability evaluation of microgrid considering incentive-based demand response

    Science.gov (United States)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, an IBDR dispatch model considering the customers’ comprehensive assessment and a customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of a microgrid.
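    A toy Monte Carlo sketch of how IBDR load curtailment can improve an adequacy index such as the loss-of-load probability. All capacities, loads, and outage rates below are illustrative assumptions, not the paper's RBTS Bus6 data:

```python
import random

random.seed(1)

# Assumed microgrid: 3 generators of 40 kW each, peak load 100 kW,
# IBDR able to curtail up to 20 kW when capacity falls short.
FOR = 0.05           # assumed forced outage rate per generator
N = 100_000          # Monte Carlo trials

def lolp(dr_kw):
    """Estimate loss-of-load probability with dr_kw of callable curtailment."""
    short = 0
    for _ in range(N):
        cap = sum(40 for _ in range(3) if random.random() > FOR)
        if cap + dr_kw < 100:   # IBDR curtails load before shedding occurs
            short += 1
    return short / N

base_lolp = lolp(0.0)
dr_lolp = lolp(20.0)
print(base_lolp, dr_lolp)
```

    With 20 kW of curtailment, a single-unit outage no longer causes load loss, so the estimated LOLP drops by more than an order of magnitude.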

  17. Durability reliability analysis for corroding concrete structures under uncertainty

    Science.gov (United States)

    Zhang, Hao

    2018-02-01

    This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
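    The purely probabilistic treatment of epistemic uncertainty described above can be sketched with a double-loop Monte Carlo, in which a data-limited parameter is itself sampled in an outer loop and aleatory scatter is propagated in an inner loop. All distributions and numbers below are illustrative assumptions:

```python
import random
import statistics

random.seed(0)

def failure_prob(c_surface, n=2000):
    """Inner loop: propagate aleatory scatter for one epistemic realisation."""
    fails = 0
    for _ in range(n):
        resistance = random.gauss(1.0, 0.15)          # aleatory capacity
        demand = c_surface * random.gauss(0.8, 0.10)  # chloride-driven demand
        if resistance < demand:
            fails += 1
    return fails / n

# Outer loop: the surface chloride level is epistemically uncertain,
# modeled here as a random variable for lack of supporting data.
pf_samples = [failure_prob(random.gauss(1.0, 0.2)) for _ in range(200)]
mean_pf = statistics.mean(pf_samples)
print(mean_pf)
```

    The spread of `pf_samples`, not just its mean, is the point: it shows how much of the computed durability reliability is governed by the epistemic variable.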

  18. STARS software tool for analysis of reliability and safety

    International Nuclear Information System (INIS)

    Poucet, A.; Guagnini, E.

    1989-01-01

    This paper reports on the STARS (Software Tool for the Analysis of Reliability and Safety) project, which aims at developing an integrated set of computer-aided reliability analysis tools for the various tasks involved in systems safety and reliability analysis, including hazard identification, qualitative analysis, and logic model construction and evaluation. Expert system technology offers the most promising perspective for developing a computer-aided reliability analysis tool. Combined with graphics and analysis capabilities, it can provide a natural, engineering-oriented environment for computer-assisted reliability and safety modelling and analysis. For hazard identification and fault tree construction, a frame/rule-based expert system is used, in which the deductive (goal-driven) reasoning and the heuristics applied during manual fault tree construction are modelled. Expert systems can explain their reasoning, so that the analyst becomes aware of why and how the results are obtained. Hence, the learning aspect involved in manual reliability and safety analysis can be maintained and improved

  19. Modeling and simulation of a controlled steam generator in the context of dynamic reliability using a Stochastic Hybrid Automaton

    International Nuclear Information System (INIS)

    Babykina, Génia; Brînzei, Nicolae; Aubry, Jean-François; Deleuze, Gilles

    2016-01-01

    The paper proposes a modeling framework to support Monte Carlo simulations of the behavior of a complex industrial system. The aim is to analyze the system's dependability in the presence of random events, described by any type of probability distribution. Continuous dynamic evolutions of physical parameters are taken into account by a system of differential equations. Dynamic reliability is chosen as the theoretical framework. Based on finite state automata theory, the formal model is built by parallel composition of elementary sub-models using a bottom-up approach. Considerations of a stochastic nature lead to a model called the Stochastic Hybrid Automaton. The Scilab/Scicos open source environment is used for implementation. The case study is carried out on an example of a steam generator of a nuclear power plant. The behavior of the system is studied by exploring its trajectories. Possible system trajectories are analyzed both empirically, using the results of Monte Carlo simulations, and analytically, using the formal system model. The obtained results are shown to be relevant. The Stochastic Hybrid Automaton appears to be a suitable tool to address the dynamic reliability problem and to model real systems of high complexity; the bottom-up design provides precision and coherency of the system model. - Highlights: • A part of a nuclear power plant is modeled in the context of dynamic reliability. • The Stochastic Hybrid Automaton is used as an input model for Monte Carlo simulations. • The model is formally built using a bottom-up approach. • The behavior of the system is analyzed empirically and analytically. • A formally built SHA is shown to be a suitable tool to approach dynamic reliability.

  20. Proceedings of the SRESA national conference on reliability and safety engineering

    International Nuclear Information System (INIS)

    Varde, P.V.; Vaishnavi, P.; Sujatha, S.; Valarmathi, A.

    2014-01-01

    The objective of this conference was to provide a forum for technical discussion of recent developments in risk-based approaches and prognostic health management of critical systems for decision making. Reliability and safety engineering methods are concerned with the ways in which a product fails and the effects of those failures; understanding how a product works is what makes it possible to assure acceptable levels of safety. Reliability engineering addresses all the anticipated, and possibly unanticipated, causes of failure to ensure that the occurrence of failure is prevented or minimized. The topics discussed at the conference were: Reliability in Engineering Design, Safety Assessment and Management, Reliability Analysis and Assessment, Stochastic Petri Nets for Reliability Modeling, Dynamic Reliability, Reliability Prediction, Hardware Reliability, Software Reliability in Safety Critical Issues, Probabilistic Safety Assessment, Risk Informed Approach, Dynamic Models for Reliability Analysis, Reliability Based Design and Analysis, Prognostics and Health Management, Remaining Useful Life (RUL), Human Reliability Modeling, Risk Based Applications, Hazard and Operability Study (HAZOP), Reliability in Network Security, and Quality Assurance and Management. The papers relevant to INIS are indexed separately

  1. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

    Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  2. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safetly, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  3. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Balan, I.; Ionescu-Bujor, M.

    2008-01-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
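    The core of such a Markov-chain reliability analysis is the solution of the state-probability differential equations. A minimal sketch for the paradigm '2-out-of-3' system of identical components follows; the failure and repair rates are illustrative, not IFMIF values:

```python
import numpy as np

# States = number of failed components; the system is up in states 0 and 1.
lam, mu = 1e-3, 1e-1            # assumed failure and repair rates [1/h]
Q = np.array([                  # Markov generator matrix, dp/dt = p @ Q
    [-3 * lam,  3 * lam,        0.0],
    [mu,       -(mu + 2 * lam), 2 * lam],
    [0.0,       mu,            -mu],
])

p = np.array([1.0, 0.0, 0.0])   # all components initially working
dt = 0.1                        # time step [h]
for _ in range(10_000):         # integrate to 1000 h with explicit Euler
    p = p + dt * (p @ Q)

availability = p[0] + p[1]      # probability that at least 2 of 3 work
print(round(availability, 5))
```

    By 1000 h the probabilities have settled to their steady state, and the availability agrees with the analytical balance equations of the birth-death chain.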

  4. PCA as a practical indicator of OPLS-DA model reliability.

    Science.gov (United States)

    Worley, Bradley; Powers, Robert

    Principal Component Analysis (PCA) and Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) are powerful statistical modeling tools that provide insights into separations between experimental groups based on high-dimensional spectral measurements from NMR, MS or other analytical instrumentation. However, when used without validation, these tools may lead investigators to statistically unreliable conclusions. This danger is especially real for Partial Least Squares (PLS) and OPLS, which aggressively force separations between experimental groups. As a result, OPLS-DA is often used as an alternative method when PCA fails to expose group separation, but this practice is highly dangerous. Without rigorous validation, OPLS-DA can easily yield statistically unreliable group separation. A Monte Carlo analysis of PCA group separations and OPLS-DA cross-validation metrics was performed on NMR datasets with statistically significant separations in scores-space. A linearly increasing amount of Gaussian noise was added to each data matrix followed by the construction and validation of PCA and OPLS-DA models. With increasing added noise, the PCA scores-space distance between groups rapidly decreased and the OPLS-DA cross-validation statistics simultaneously deteriorated. A decrease in correlation between the estimated loadings (added noise) and the true (original) loadings was also observed. While the validity of the OPLS-DA model diminished with increasing added noise, the group separation in scores-space remained basically unaffected. Supported by the results of Monte Carlo analyses of PCA group separations and OPLS-DA cross-validation metrics, we provide practical guidelines and cross-validatory recommendations for reliable inference from PCA and OPLS-DA models.
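    The noise experiment can be reproduced in miniature: two synthetic groups separated along a latent direction, increasing Gaussian noise, and the PCA scores-space separation measured on the first principal component. The dimensions, offsets, and noise levels are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def group_separation(noise_sd):
    """Centroid distance over score spread on PC1 for two synthetic groups."""
    a = rng.normal(0.0, 1.0, (20, 50)) + 3.0   # group A, offset in every feature
    b = rng.normal(0.0, 1.0, (20, 50))         # group B
    X = np.vstack([a, b]) + rng.normal(0.0, noise_sd, (40, 50))
    X = X - X.mean(axis=0)                     # mean-center before PCA
    v1 = np.linalg.svd(X, full_matrices=False)[2][0]  # first PC loading
    scores = X @ v1
    return abs(scores[:20].mean() - scores[20:].mean()) / scores.std()

sep_low = group_separation(0.5)
sep_high = group_separation(20.0)
print(sep_low, sep_high)
```

    As in the paper's Monte Carlo analysis, the scores-space separation collapses as added noise grows, which is the warning sign that an OPLS-DA fit on the same data would need rigorous cross-validation.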

  5. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference for the reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using this prior. The paper also proposes an MCMC method for Bayesian inference of SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  6. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce the expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the Arithmetic Reduction of Age and Arithmetic Reduction of Intensity classes of models are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed, considering models with different memories. The parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding its preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
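    An Arithmetic Reduction of Age model of order 1 (ARA1) with a Power Law Process baseline can be simulated by inverting the conditional cumulative intensity between repairs. The parameters below are illustrative, not the truck-fleet estimates; rho = 0 recovers minimal repair and rho = 1 perfect repair:

```python
import random

random.seed(42)

def simulate(beta, eta, rho, horizon):
    """Failure times under ARA1 imperfect repair with PLP baseline (beta, eta)."""
    t, failures = 0.0, []
    while True:
        e = random.expovariate(1.0)
        age = (1.0 - rho) * t                 # virtual age just after repair
        # invert Lambda0(u) = (u/eta)**beta for the next failure time
        t = rho * t + eta * ((age / eta) ** beta + e) ** (1.0 / beta)
        if t > horizon:
            return failures
        failures.append(t)

# average failure count over 500 histories for the two extreme repair efficiencies
n_minimal = sum(len(simulate(2.0, 100.0, 0.0, 500.0)) for _ in range(500)) / 500
n_perfect = sum(len(simulate(2.0, 100.0, 1.0, 500.0)) for _ in range(500)) / 500
print(n_minimal, n_perfect)
```

    With an increasing baseline (beta > 1), minimal repair accumulates far more failures over the horizon than perfect repair, which is exactly what the repair-efficiency parameter lets the fitted model interpolate between.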

  7. Reliability model for helicopter main gearbox lubrication system using influence diagrams

    International Nuclear Information System (INIS)

    Rashid, H.S.J.; Place, C.S.; Mba, D.; Keong, R.L.C.; Healey, A.; Kleine-Beek, W.; Romano, M.

    2015-01-01

    The loss of oil from a helicopter main gearbox (MGB) leads to increased friction between components, a rise in component surface temperatures, and subsequent mechanical failure of gearbox components. A number of significant helicopter accidents have been caused due to such loss of lubrication. This paper presents a model to assess the reliability of helicopter MGB lubricating systems. Safety risk modeling was conducted for MGB oil system related accidents in order to analyse key failure mechanisms and the contributory factors. Thus, the dominant failure modes for lubrication systems and key contributing components were identified. The Influence Diagram (ID) approach was then employed to investigate reliability issues of the MGB lubrication systems at the level of primary causal factors, thus systematically investigating a complex context of events, conditions, and influences that are direct triggers of the helicopter MGB lubrication system failures. The interrelationships between MGB lubrication system failure types were thus identified, and the influence of each of these factors on the overall MGB lubrication system reliability was assessed. This paper highlights parts of the HELMGOP project, sponsored by the European Aviation Safety Agency to improve helicopter main gearbox reliability. - Highlights: • We investigated methods to optimize helicopter MGB oil system run-dry capability. • Used Influence Diagram to assess design and maintenance factors of MGB oil system. • Factors influencing overall MGB lubrication system reliability were identified. • This globally influences current and future helicopter MGB designs

  8. Development of RBDGG Solver and Its Application to System Reliability Analysis

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2010-01-01

    For the purpose of making system reliability analysis easier and more intuitive, the RBDGG (Reliability Block Diagram with General Gates) methodology was introduced as an extension of the conventional reliability block diagram. The advantage of the RBDGG methodology is that the structure of an RBDGG model is very similar to the actual structure of the analyzed system, and therefore modeling a system for reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that behind the RGGG (Reliability Graph with General Gates) methodology, which is an extension of the conventional reliability graph. The newly proposed methodology is now implemented in a software tool, RBDGG Solver, developed as a WIN32 console application. RBDGG Solver receives information on the failure modes and failure probabilities of each component in the system, along with the connection structure and connection logics among the components. Based on the received information, RBDGG Solver automatically generates a system reliability analysis model and then provides the analysis results. In this paper, the application of RBDGG Solver to the reliability analysis of an example system and the verification of the calculation results are provided, for the purpose of demonstrating how RBDGG Solver is used for system reliability analysis

  9. A general graphical user interface for automatic reliability modeling

    Science.gov (United States)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  10. Reliability and continuous regeneration model

    Directory of Open Access Journals (Sweden)

    Anna Pavlisková

    2006-06-01

    Full Text Available The failure-free functioning of an object is very important in service, which motivates interest in determining the object's reliability and failure intensity. The reliability of an element is defined by the theory of probability. The element durability T is a continuous random variable with probability density f. The failure intensity λ(t) is a very important reliability characteristic of the element. Often it is an increasing function, which corresponds to the ageing of the element. We had at our disposal data on belt conveyor failures recorded over a period of 90 months. The given data set follows the normal distribution. Using mathematical analysis and mathematical statistics, we found the failure intensity function λ(t). The function λ(t) increases almost linearly.
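    The failure intensity in question is the hazard function λ(t) = f(t)/(1 − F(t)); for a normally distributed lifetime it is indeed increasing, and nearly linear in the right tail. A sketch with an assumed mean and standard deviation (in months, not the conveyor's fitted values):

```python
from math import erf, exp, pi, sqrt

MU, SIGMA = 45.0, 15.0   # assumed normal lifetime parameters [months]

def pdf(t):
    """Normal probability density f(t)."""
    return exp(-((t - MU) / SIGMA) ** 2 / 2) / (SIGMA * sqrt(2 * pi))

def cdf(t):
    """Normal cumulative distribution F(t) via the error function."""
    return 0.5 * (1 + erf((t - MU) / (SIGMA * sqrt(2))))

def hazard(t):
    """Failure intensity lambda(t) = f(t) / (1 - F(t))."""
    return pdf(t) / (1.0 - cdf(t))

# the hazard of a normal lifetime increases with age
print(hazard(30.0), hazard(45.0), hazard(60.0))
```

    Evaluating the hazard at a few ages confirms the near-linear growth reported in the abstract.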

  11. Modelling and estimating degradation processes with application in structural reliability

    International Nuclear Information System (INIS)

    Chiquet, J.

    2007-06-01

    The characteristic level of degradation of a given structure is modeled through a stochastic process called the degradation process. The random evolution of the degradation process is governed by a differential system with a Markovian environment. We set up the associated reliability framework by considering the failure of the structure once the degradation process reaches a critical threshold. A closed-form solution of the reliability function is obtained thanks to Markov renewal theory. We then build an estimation methodology for the parameters of the stochastic processes involved. The estimation methods and the theoretical results, as well as the associated numerical algorithms, are validated on simulated data sets. Our method is applied to the modelling of a real degradation mechanism, known as crack growth, for which an experimental data set is considered. (authors)

  12. Stochastic process corrosion growth models for pipeline reliability

    International Nuclear Information System (INIS)

    Bazán, Felipe Alexander Vargas; Beck, André Teófilo

    2013-01-01

    Highlights: •Novel non-linear stochastic process corrosion growth model is proposed. •Corrosion rate modeled as random Poisson pulses. •Time to corrosion initiation and inherent time-variability properly represented. •Continuous corrosion growth histories obtained. •Model is shown to precisely fit actual corrosion data at two time points. -- Abstract: Linear random variable corrosion models are extensively employed in reliability analysis of pipelines. However, linear models grossly neglect well-known characteristics of the corrosion process. Herein, a non-linear model is proposed, where corrosion rate is represented as a Poisson square wave process. The resulting model represents inherent time-variability of corrosion growth, produces continuous growth and leads to mean growth at less-than-one power of time. Different corrosion models are adjusted to the same set of actual corrosion data for two inspections. The proposed non-linear random process corrosion growth model leads to the best fit to the data, while better representing problem physics
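    The proposed Poisson square wave rate process can be simulated directly: the corrosion rate holds a random level for an exponentially distributed duration, then jumps to a new level, producing continuous piecewise-linear growth histories. All rates below are illustrative assumptions, not the fitted pipeline values:

```python
import random

random.seed(7)

def corrosion_depth(t_end, jump_rate=0.5, mean_rate=0.1):
    """Depth after t_end years under a Poisson square-wave corrosion rate."""
    t, depth = 0.0, 0.0
    rate = random.expovariate(1.0 / mean_rate)   # current rate [mm/yr], assumed exponential
    while True:
        dt = random.expovariate(jump_rate)       # holding time until the next pulse
        if t + dt >= t_end:
            return depth + rate * (t_end - t)    # finish the last segment
        depth += rate * dt
        t += dt
        rate = random.expovariate(1.0 / mean_rate)

depths = [corrosion_depth(30.0) for _ in range(2000)]
mean_depth = sum(depths) / len(depths)
print(mean_depth)
```

    Unlike a linear random-variable model, each history here is continuous and has its own inherent time-variability, while the ensemble mean still grows at the average corrosion rate.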

  13. Testing the reliability of ice-cream cone model

    Science.gov (United States)

    Pan, Zonghao; Shen, Chenglong; Wang, Chuanbing; Liu, Kai; Xue, Xianghui; Wang, Yuming; Wang, Shui

    2015-04-01

    The properties of coronal mass ejections (CMEs) are important not only for the physics itself but also for space-weather prediction. Several models (such as the cone model, the GCS model, and so on) have been proposed to remove the projection effects from the properties observed by spacecraft. From SOHO/LASCO observations, we obtain the 'real' 3D parameters of all the FFHCMEs (front-side full halo coronal mass ejections) within the 24th solar cycle up to July 2012, using the ice-cream cone model. Since the method of obtaining 3D parameters from multi-satellite, multi-angle CME observations has higher accuracy, we use the GCS model to obtain the real propagation parameters of these CMEs in 3D space and compare the results with those given by the ice-cream cone model. We then discuss the reliability of the ice-cream cone model.

  14. A review of the progress with statistical models of passive component reliability

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, Bengt O. Y. [Sigma-Phase Inc., Vail (United States)

    2017-03-15

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  15. A Review of the Progress with Statistical Models of Passive Component Reliability

    Directory of Open Access Journals (Sweden)

    Bengt O.Y. Lydell

    2017-03-01

    Full Text Available During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  16. A review of the progress with statistical models of passive component reliability

    International Nuclear Information System (INIS)

    Lydell, Bengt O. Y.

    2017-01-01

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models

  17. Imperfect Preventive Maintenance Model Study Based On Reliability Limitation

    Directory of Open Access Journals (Sweden)

    Zhou Qian

    2016-01-01

    Full Text Available Effective maintenance is crucial for equipment performance in industry, and imperfect maintenance conforms to the actual failure process. Taking the dynamic preventive maintenance cost into account, a preventive maintenance model was constructed using an age reduction factor. The model takes the minimization of the repair cost rate as its final target and uses the smallest allowed reliability as the replacement condition. Equipment life was assumed to follow a two-parameter Weibull distribution, since it is one of the most commonly adopted distributions for fitting cumulative failure problems. Finally, an example verifies the rationality and benefits of the model.
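    The replacement-condition logic can be sketched with a two-parameter Weibull life and an age reduction factor. The shape, scale, reliability threshold, and reduction factor below are assumptions for illustration, not the paper's values:

```python
from math import exp

BETA, ETA = 2.5, 1000.0   # assumed Weibull shape and scale [h]
R_MIN = 0.90              # smallest allowed reliability (replacement condition)
B = 0.6                   # assumed age reduction factor of imperfect PM

def reliability(age):
    """Two-parameter Weibull survival function."""
    return exp(-((age / ETA) ** BETA))

age, intervals = 0.0, []
for _ in range(5):
    t = 0.0
    # next PM is due when conditional reliability since the last PM hits R_MIN
    while reliability(age + t) / reliability(age) > R_MIN:
        t += 1.0
    intervals.append(t)
    age = (1.0 - B) * (age + t)   # imperfect PM only partially rejuvenates the unit

print(intervals)
```

    Because each PM removes only part of the accumulated age, the admissible PM intervals shrink over successive cycles, which is what drives the cost-rate trade-off between further PM and replacement.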

  18. Design for Reliability of Power Electronic Systems

    DEFF Research Database (Denmark)

    Wang, Huai; Ma, Ke; Blaabjerg, Frede

    2012-01-01

    Advances in power electronics enable efficient and flexible processing of electric power in the application of renewable energy sources, electric vehicles, adjustable-speed drives, etc. More and more efforts are devoted to better power electronic systems in terms of reliability to ensure high ... A collection of methodologies based on the Physics-of-Failure (PoF) approach and mission profile analysis is presented in this paper to perform reliability-oriented design of power electronic systems. The corresponding design procedures and reliability prediction models are provided. Further on, a case study on a 2.3 MW wind power converter is discussed with emphasis on the reliability-critical IGBT components. Different aspects of improving the reliability of the power converter are mapped. Finally, the challenges and opportunities to achieve more reliable power electronic systems are addressed.

  19. Reliability Models Applied to a System of Power Converters in Particle Accelerators

    OpenAIRE

    Siemaszko, D; Speiser, M; Pittet, S

    2012-01-01

    Several reliability models are studied when applied to a power system containing a large number of power converters. A methodology is proposed and illustrated in the case study of a novel linear particle accelerator designed for reaching high energies. The proposed methods result in the prediction of both reliability and availability of the considered system for optimisation purposes.

  20. Nuclear power plant reliability database management

    International Nuclear Information System (INIS)

    Meslin, Th.; Aufort, P.

    1996-04-01

    In the framework of the development of an on-site probabilistic safety project (the notion of a living PSA), the Saint Laurent des Eaux NPP implements a specific EDF reliability database. The main goals of this project at Saint Laurent des Eaux are: to expand risk analysis and to constitute an effective local basis for thinking about operating safety by requiring the participation of all departments of the power plant (analysis of all potential operating transients, unavailability consequences, etc.), which means going further than a simple culture of applying operating rules; to involve nuclear power plant operators in experience feedback and its analysis, especially by following up the behaviour of components and of safety functions; and to allow plant safety managers to substantiate their decisions to the safety authorities regarding waivers, the preventive maintenance programme and operating incident evaluation. Achieving these goals requires feedback data, tools, techniques and the development of skills. The first step is to obtain specific reliability data on the site. Raw data come from the plant maintenance management system, which processes all maintenance activities and keeps records of all component failures and maintenance activities. Plant-specific reliability data are estimated with a Bayesian model which combines these validated raw data with corporate generic data. This approach makes it possible to provide reliability data for the main components modelled in the PSA, to check the consistency of the maintenance programme (RCM), and to verify hypotheses made at the design stage about component reliability. A number of studies related to component reliability, as well as to the decision-making process for specific incident risk evaluation, have been carried out. This paper also provides an overview of the process management set up on site, from the raw database to the specific reliability database, in compliance with established corporate objectives. (authors). 4 figs
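    The Bayesian combination of validated plant-specific raw data with corporate generic data can be illustrated with a conjugate gamma-Poisson model, a standard choice for failure rates in PSA practice; the numbers below are invented, and the actual EDF model may differ.

```python
# Generic corporate data encoded as a gamma prior on the failure rate.
prior_mean = 1e-5          # assumed generic failure rate (/h)
prior_var = (5e-6) ** 2    # assumed uncertainty on the generic estimate
beta0 = prior_mean / prior_var   # gamma rate parameter
alpha0 = prior_mean * beta0      # gamma shape parameter

# Plant-specific raw data from the maintenance management system
# (illustrative counts, not real records).
failures, hours = 3, 2.0e5

# Conjugate update: the posterior is again a gamma distribution.
alpha_post = alpha0 + failures
beta_post = beta0 + hours
posterior_mean = alpha_post / beta_post
print(f"posterior failure rate: {posterior_mean:.3e} /h")
```

    The posterior mean lies between the generic rate and the observed plant-specific rate, weighted by the strength of the prior, which is exactly the pooling behaviour the abstract describes.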

  1. Nuclear power plant reliability database management

    Energy Technology Data Exchange (ETDEWEB)

    Meslin, Th [Electricite de France (EDF), 41 - Saint-Laurent-des-Eaux (France); Aufort, P

    1996-04-01

    In the framework of the development of an on-site probabilistic safety project (the notion of a living PSA), the Saint Laurent des Eaux NPP implements a specific EDF reliability database. The main goals of this project at Saint Laurent des Eaux are: to expand risk analysis and to constitute an effective local basis for thinking about operating safety by requiring the participation of all departments of the power plant (analysis of all potential operating transients, unavailability consequences, etc.), which means going further than a simple culture of applying operating rules; to involve nuclear power plant operators in experience feedback and its analysis, especially by following up the behaviour of components and of safety functions; and to allow plant safety managers to substantiate their decisions to the safety authorities regarding waivers, the preventive maintenance programme and operating incident evaluation. Achieving these goals requires feedback data, tools, techniques and the development of skills. The first step is to obtain specific reliability data on the site. Raw data come from the plant maintenance management system, which processes all maintenance activities and keeps records of all component failures and maintenance activities. Plant-specific reliability data are estimated with a Bayesian model which combines these validated raw data with corporate generic data. This approach makes it possible to provide reliability data for the main components modelled in the PSA, to check the consistency of the maintenance programme (RCM), and to verify hypotheses made at the design stage about component reliability. A number of studies related to component reliability, as well as to the decision-making process for specific incident risk evaluation, have been carried out. This paper also provides an overview of the process management set up on site, from the raw database to the specific reliability database, in compliance with established corporate objectives. (authors). 4 figs.

  2. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  3. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples
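    The kind of BN evaluation the method automates can be sketched by enumeration on a toy network; this illustrates BN-based reliability estimation only, not the K2 construction itself, and all probabilities are assumed.

```python
from itertools import product

# Toy DAG: a shared environment node E influences two components
# C1 and C2; the system works when both components work (series logic).
p_e = 0.1                           # assumed P(harsh environment)
p_fail = {False: 0.01, True: 0.05}  # assumed P(component fails | E)

def system_reliability():
    # Exact inference by enumerating all joint states of the network.
    total = 0.0
    for e, c1_ok, c2_ok in product([False, True], repeat=3):
        pe = p_e if e else 1 - p_e
        p1 = (1 - p_fail[e]) if c1_ok else p_fail[e]
        p2 = (1 - p_fail[e]) if c2_ok else p_fail[e]
        if c1_ok and c2_ok:         # series system: both must work
            total += pe * p1 * p2
    return total

print(round(system_reliability(), 6))
```

    The shared parent node makes the component failures dependent, which is precisely what a BN captures and an independence-based series formula would miss.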

  4. A generic method for estimating system reliability using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Doguc, Ozge [Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Ramirez-Marquez, Jose Emmanuel [Stevens Institute of Technology, Hoboken, NJ 07030 (United States)], E-mail: jmarquez@stevens.edu

    2009-02-15

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.

  5. Human reliability analysis of performing tasks in plants based on fuzzy integral

    International Nuclear Information System (INIS)

    Washio, Takashi; Kitamura, Yutaka; Takahashi, Hideaki

    1991-01-01

    Effective improvement of human working conditions in nuclear power plants could enhance operational safety. Human reliability analysis (HRA) gives a methodological basis for such improvement, based on the evaluation of human reliability under various working conditions. This study investigates some difficulties of human reliability analysis using conventional linear models and recent fuzzy integral models, and provides solutions to these difficulties. The following practical features of the provided methods are confirmed in comparison with the conventional methods: (1) applicability to various types of tasks; (2) capability of evaluating complicated dependencies among working-condition factors; (3) a priori human reliability evaluation based on a systematic task analysis of human action processes; (4) a scheme for converting indices representing human reliability into probability. (author)

  6. Modeling and reliability analysis of three phase z-source AC-AC converter

    Directory of Open Access Journals (Sweden)

    Prasad Hanuman

    2017-12-01

    Full Text Available This paper presents small-signal modeling, using the state-space averaging technique, and reliability analysis of a three-phase z-source ac-ac converter. By controlling the shoot-through duty ratio, the converter can operate in buck-boost mode and maintain the desired output voltage during voltage sag and surge conditions. It has a faster dynamic response and higher efficiency than the traditional voltage regulator. Small-signal analysis derives the different control transfer functions, which leads to the design of a suitable controller for the closed-loop system during supply voltage variation. The closed-loop system of the converter with a PID controller eliminates the transients in the output voltage and provides a steady-state regulated output. The proposed model was designed in RT-LAB and executed on a field-programmable gate array (FPGA)-based real-time digital simulator at a fixed time step of 10 μs and a constant switching frequency of 10 kHz. The simulator was developed using the very-high-speed integrated circuit hardware description language (VHDL), making it versatile and portable. Hardware-in-the-loop (HIL) simulation results are presented to corroborate the MATLAB simulation results during supply voltage variation of the three-phase z-source ac-ac converter. Reliability analysis has been applied to the converter to find the failure rates of its different components.

  7. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available Fuzziness and randomness are integrated by using digital characteristics such as expected value, entropy and hyper-entropy. A cloud model adapted to reliability evaluation is put forward for the surface-to-air missile weapon. The cloud scale for the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are mapped to each other. The practical calculation results show that analyzing the reliability of the surface-to-air missile weapon in this way is more effective, and that the model expressed by cloud theory is more consistent with the human style of thinking under uncertainty.
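    The digital characteristics named above (expected value Ex, entropy En, hyper-entropy He) drive the standard forward normal cloud generator, which can be sketched as follows; the numeric values are illustrative, not the paper's.

```python
import math
import random

random.seed(1)
# Assumed digital characteristics for a qualitative grade such as
# "reliable" on a normalised evaluation scale.
Ex, En, He = 0.8, 0.05, 0.005

def cloud_drop():
    # Second-order randomness: the entropy itself is perturbed by He.
    en_prime = random.gauss(En, He)
    # The drop and its certainty degree (membership) mu.
    x = random.gauss(Ex, abs(en_prime))
    mu = math.exp(-(x - Ex) ** 2 / (2 * en_prime ** 2))
    return x, mu

drops = [cloud_drop() for _ in range(1000)]
mean_x = sum(x for x, _ in drops) / len(drops)
print(round(mean_x, 3))
```

    Hyper-entropy He controls how much the drops' dispersion itself varies, which is how the model blends fuzziness with randomness in a single generator.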

  8. MAPPS (Maintenance Personnel Performance Simulation): a computer simulation model for human reliability analysis

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.

    1985-01-01

    A computer model has been developed, sensitivity-tested, and evaluated that is capable of generating reliable estimates of human performance measures in the nuclear power plant (NPP) maintenance context. The model, entitled MAPPS (Maintenance Personnel Performance Simulation), is of the simulation type and is task-oriented. It addresses a number of person-machine, person-environment, and person-person variables and is capable of providing the user with a rich spectrum of important performance measures, including the mean time for successful task performance by a maintenance team and the team's probability of task success. These two measures are particularly important as input to probabilistic risk assessment (PRA) studies, which were the primary impetus for the development of MAPPS. The simulation nature of the model, along with its extensive input parameters and output variables, allows its usefulness to extend beyond input to PRA

  9. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The proposed method is shown to be more efficient because it requires only a small number of sample points, as the comparison results confirm.
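    The PSO ingredient can be sketched on a toy one-dimensional objective standing in for the Kriging-parameter error measure; the objective function and all PSO settings below are assumed, not the paper's.

```python
import random

random.seed(2)

def objective(theta):
    # Toy stand-in for the surrogate-model error as a function of a
    # single Kriging correlation parameter theta.
    return (theta - 1.3) ** 2 + 0.5

# Standard PSO with inertia and cognitive/social pulls (assumed settings).
n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = [random.uniform(0.0, 5.0) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                       # each particle's best position
gbest = min(pos, key=objective)      # swarm's best position
for _ in range(iters):
    for i in range(n):
        vel[i] = (w * vel[i]
                  + c1 * random.random() * (pbest[i] - pos[i])
                  + c2 * random.random() * (gbest - pos[i]))
        pos[i] += vel[i]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=objective)
print(round(gbest, 3))
```

    On a smooth objective the swarm contracts quickly onto the minimizer; in the hybrid algorithm this search would be wrapped around the Kriging likelihood instead of a toy quadratic.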

  10. Why We Need Reliable, Valid, and Appropriate Learning Disability Assessments: The Perspective of a Postsecondary Disability Service Provider

    Science.gov (United States)

    Wolforth, Joan

    2012-01-01

    This paper discusses issues regarding the validity and reliability of psychoeducational assessments provided to Disability Services Offices at Canadian Universities. Several vignettes illustrate some current issues and the potential consequences when university students are given less than thorough disability evaluations and ascribed diagnoses.…

  11. An analytical framework for reliability growth of one-shot systems

    International Nuclear Information System (INIS)

    Hall, J. Brian; Mosleh, Ali

    2008-01-01

    In this paper, we introduce a new reliability growth methodology for one-shot systems that is applicable to the case where all corrective actions are implemented at the end of the current test phase. The methodology consists of four model equations for assessing: expected reliability, the expected number of failure modes observed in testing, the expected probability of discovering new failure modes, and the expected portion of system unreliability associated with repeat failure modes. These model equations provide an analytical framework for which reliability practitioners can estimate reliability improvement, address goodness-of-fit concerns, quantify programmatic risk, and assess reliability maturity of one-shot systems. A numerical example is given to illustrate the value and utility of the presented approach. This methodology is useful to program managers and reliability practitioners interested in applying the techniques above in their reliability growth program

  12. Research on Connection and Function Reliability of the Oil&Gas Pipeline System

    Directory of Open Access Journals (Sweden)

    Xu Bo

    2017-01-01

    Full Text Available Pipeline transportation is the optimal way to deliver energy in terms of safety, efficiency and environmental protection. Because of the complexity of the pipeline's external environment, including geological hazards and social and cultural influences, operating a pipeline safely and reliably is a great challenge; therefore, pipeline reliability becomes an important issue. Based on classical reliability theory, an analysis of the pipeline system is carried out, the reliability model of the pipeline system is built, and the calculation is addressed thereafter. The connection and function reliability model is then applied to a practical active pipeline system; using the proposed methodology, the connection reliability and function reliability are obtained. This paper is the first to consider connection and function reliability separately, making a significant contribution to establishing a mathematical reliability model of the pipeline system and providing fundamental groundwork for future pipeline reliability research.
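    The separation of connection reliability (an unbroken series of line segments) from function reliability (delivering the required flow) can be sketched with textbook series/parallel logic; the component reliabilities below are assumed, and the paper's function-reliability model is richer than this stand-in.

```python
# Connection reliability: every pipeline segment in the series must hold.
segments = [0.999, 0.998, 0.997, 0.999]   # assumed segment reliabilities
r_connection = 1.0
for r in segments:
    r_connection *= r

# Function reliability: the intact line must also deliver flow, here
# requiring at least one of two parallel pump stations to work.
pumps = [0.95, 0.95]                      # assumed pump reliabilities
r_pumps = 1.0 - (1 - pumps[0]) * (1 - pumps[1])
r_function = r_connection * r_pumps

print(round(r_connection, 6), round(r_function, 6))
```

    Treating the two measures separately shows why a structurally intact pipeline can still fail functionally: the function reliability is always bounded above by the connection reliability.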

  13. Automatic creation of Markov models for reliability assessment of safety instrumented systems

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2008-01-01

    After the release of new international functional safety standards like IEC 61508, people care more about the safety and availability of safety instrumented systems. Markov analysis is a powerful and flexible technique for assessing the reliability measures of safety instrumented systems, but creating Markov models manually is error-prone and time-consuming. This paper presents a new technique to automatically create Markov models for reliability assessment of safety instrumented systems. Many safety-related factors, such as failure modes, self-diagnostics, restorations, common cause and voting, are included in the Markov models. A framework is generated first based on voting, failure modes and self-diagnostics. Then, repairs and common-cause failures are incorporated into the framework to build a complete Markov model. Eventual simplification of Markov models can be done by state merging. Examples given in this paper show how explosively the size of a Markov model increases as the system becomes even slightly more complicated, as well as the advantage of automatic creation of Markov models
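    The simplest instance of such a model, a two-state working/failed chain for a single channel, already shows the availability calculation that the larger automatically generated models perform; the rates below are assumed for illustration.

```python
lam = 1e-4   # assumed failure rate (/h)
mu = 1e-1    # assumed repair rate (/h)

# Steady state from the balance equation lam * P_up = mu * P_down,
# with P_up + P_down = 1.
availability = mu / (lam + mu)

# Transient check: explicit Euler integration of dP_up/dt converges to
# the same steady-state value.
p_up, dt = 1.0, 0.1
for _ in range(int(500 / dt)):
    p_up += dt * (mu * (1 - p_up) - lam * p_up)

print(round(availability, 6), round(p_up, 6))
```

    Adding voting, diagnostic coverage and common-cause states multiplies the state space rapidly, which is exactly the growth the paper's examples illustrate and why automated model construction pays off.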

  14. Integrating software reliability concepts into risk and reliability modeling of digital instrumentation and control systems used in nuclear power plants

    International Nuclear Information System (INIS)

    Arndt, S. A.

    2006-01-01

    As software-based digital systems are becoming more and more common in all aspects of industrial process control, including the nuclear power industry, it is vital that the current state of the art in quality, reliability, and safety analysis be advanced to support the quantitative review of these systems. Several research groups throughout the world are working on the development and assessment of software-based digital system reliability methods and their applications in the nuclear power, aerospace, transportation, and defense industries. However, these groups are hampered by the fact that software experts and probabilistic safety assessment experts view reliability engineering very differently. This paper discusses the characteristics of a common vocabulary and modeling framework. (authors)

  15. Modeling human reliability analysis using MIDAS

    International Nuclear Information System (INIS)

    Boring, R. L.

    2006-01-01

    This paper documents current efforts to infuse human reliability analysis (HRA) into human performance simulation. The Idaho National Laboratory is teamed with NASA Ames Research Center to bridge the SPAR-H HRA method with NASA's Man-machine Integration Design and Analysis System (MIDAS) for use in simulating and modeling the human contribution to risk in nuclear power plant control room operations. It is anticipated that the union of MIDAS and SPAR-H will pave the path for cost-effective, timely, and valid simulated control room operators for studying current and next generation control room configurations. This paper highlights considerations for creating the dynamic HRA framework necessary for simulation, including event dependency and granularity. This paper also highlights how the SPAR-H performance shaping factors can be modeled in MIDAS across static, dynamic, and initiator conditions common to control room scenarios. This paper concludes with a discussion of the relationship of the workload factors currently in MIDAS and the performance shaping factors in SPAR-H. (authors)

  16. Developing safety performance functions incorporating reliability-based risk measures.

    Science.gov (United States)

    Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek

    2011-11-01

    Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
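    The reliability-based risk measure P(nc) for the sight-distance limit state g = supply − demand can be approximated by crude Monte Carlo as a simple stand-in for FORM; the distributions below are assumed, not taken from the paper's database.

```python
import random

random.seed(0)
N = 200_000
failures = 0
for _ in range(N):
    # Assumed normal models for the two sight distances (metres).
    supply = random.gauss(160.0, 15.0)   # available sight distance
    demand = random.gauss(130.0, 12.0)   # stopping sight distance
    if supply - demand < 0:              # limit state violated
        failures += 1
p_nc = failures / N
print(f"P(nc) = {p_nc:.4f}")
```

    For this linear limit state with normal inputs, FORM gives the exact answer from the reliability index; the Monte Carlo estimate should agree with it closely, and P(nc) can then enter a safety performance function as a covariate as the paper proposes.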

  17. Quantification of Wave Model Uncertainties Used for Probabilistic Reliability Assessments of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2015-01-01

    Wave models used for site assessments are subjected to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Four different wave models are considered, and validation...... data are collected from published scientific research. The bias and the root-mean-square error, as well as the scatter index, are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example, this paper presents how the quantified...... uncertainties can be implemented in probabilistic reliability assessments....
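    The three validation statistics named above can be computed directly; the observed/modelled pairs of significant wave heights below are made up for illustration.

```python
import math

# Assumed validation pairs: observed vs. modelled significant wave
# height (m) at one site.
obs = [1.2, 2.0, 0.8, 1.5, 2.4]
model = [1.1, 2.3, 0.9, 1.4, 2.6]

n = len(obs)
bias = sum(m - o for m, o in zip(model, obs)) / n
rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
scatter_index = rmse / (sum(obs) / n)   # RMSE normalised by observed mean

print(round(bias, 3), round(rmse, 3), round(scatter_index, 3))
```

    The bias captures a systematic model offset that can be corrected, while the scatter index quantifies the residual spread that must be carried as model uncertainty into the probabilistic reliability assessment.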

  18. Determination of Wave Model Uncertainties used for Probabilistic Reliability Assessments of Wave Energy Devices

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2014-01-01

    Wave models used for site assessments are subject to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Considered are four different wave models and validation...... data is collected from published scientific research. The bias, the root-mean-square error as well as the scatter index are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example it is shown how the estimated uncertainties can...... be implemented in probabilistic reliability assessments....

  19. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2011-01-01

    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment where humans are present. The reliability of localization is highly dependent on the developer's experience, because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant practical issues. In this paper, we provide useful solutions to the following questions, which are frequently faced in practical applications: (1) How should an observation likelihood model be designed? (2) How can localization failure be detected? (3) How can the system recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented with a focus on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme for identifying the localizer's status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of the experiments and analysis clearly demonstrate the usefulness of the proposed solutions.

  20. Observation Likelihood Model Design and Failure Recovery Scheme Toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2010-12-01

    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment where humans are present. The reliability of localization is highly dependent on the developer's experience, because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant practical issues. In this paper, we provide useful solutions to the following questions, which are frequently faced in practical applications: (1) How should an observation likelihood model be designed? (2) How can localization failure be detected? (3) How can the system recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented with a focus on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme for identifying the localizer's status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of the experiments and analysis clearly demonstrate the usefulness of the proposed solutions.

  1. Reliability assessment for metallized film pulse capacitors with accelerated degradation test

    International Nuclear Information System (INIS)

    Zhao Jianyin; Liu Fang; Xi Wenjun; He Shaobo; Wei Xiaofeng

    2011-01-01

    The high-energy-density self-healing metallized film pulse capacitor has been applied in all kinds of laser facilities for their power conditioning systems, whose reliability is directly affected by the reliability level of the capacitors. Reliability analysis of highly reliable devices, such as metallized film capacitors, is a challenge due to cost and time restrictions. Accelerated degradation testing provides a way to predict life cost- and time-effectively. A model and analyses for accelerated degradation data of metallized film capacitors are described, as is a method for estimating the distribution of failure time. The estimated values of the unknown parameters in this model are 9.0669 × 10⁻⁸ and 0.0221. Both the failure probability density function (PDF) and the cumulative distribution function (CDF) can be obtained from this degradation failure model. Based on these estimates and the PDF/CDF, the reliability model of the metallized film capacitors is obtained. According to the reliability model, the probability of the capacitors surviving to 20 000 shots is 0.9724. (authors)

  2. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    EI-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. The present paper deals with the analysis of some statistical distributions used in reliability, in order to reach the best-fitting distribution. The calculations rely on circuit quantity parameters obtained using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC), whereas the Exponential distribution is found to be the best fit for modeling the failure rate
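    A two-parameter Weibull fit of the kind reported can be sketched with median-rank regression; Relex presumably uses its own estimation procedure, and the failure times below are invented, not the TAC data.

```python
import math

# Assumed failure times (hours), sorted in ascending order.
times = sorted([105.0, 220.0, 340.0, 480.0, 690.0, 960.0])
n = len(times)

# Median-rank plotting positions F_i = (i - 0.3) / (n + 0.4), then
# linearise the Weibull CDF: ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta).
xs = [math.log(t) for t in times]
ys = [math.log(-math.log(1 - (i + 1 - 0.3) / (n + 0.4))) for i in range(n)]

# Ordinary least squares for y = beta*x + c, with c = -beta*ln(eta).
mx, my = sum(xs) / n, sum(ys) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs))
eta = math.exp(mx - my / beta)
print(f"shape={beta:.3f} scale={eta:.1f}")
```

    A shape parameter near 1 would indicate a roughly constant hazard (the exponential special case), which is consistent with the abstract's finding that the exponential distribution best models the failure rate.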

  3. An overview of erosion corrosion models and reliability assessment for corrosion defects in piping system

    International Nuclear Information System (INIS)

    Srividya, A.; Suresh, H.N.; Verma, A.K.; Gopika, V.; Santosh

    2006-01-01

    Piping systems are part of the passive structural elements in power plants. The analysis of piping systems and their quantification in terms of failure probability is of utmost importance. Piping systems may fail due to various degradation mechanisms, such as thermal fatigue, erosion-corrosion, stress corrosion cracking and vibration fatigue. On examination of previous results, erosion-corrosion was found to be the most prevalent, and the resulting wall thinning is a time-dependent phenomenon. The paper is intended to consolidate the work done by various investigators on erosion-corrosion, both in estimating the erosion-corrosion rate and in reliability prediction. A comparison of various erosion-corrosion models is made. Reliability prediction based on the remaining strength of pipelines corroded by wall thinning is also attempted. Variables in the limit state functions are modelled using normal distributions, and reliability assessment is carried out using some of the existing failure pressure models. A steady-state corrosion rate is assumed to estimate the corrosion defect, and the First Order Reliability Method (FORM) is used to find the probability of failure associated with corrosion defects over time, using the software for Component Reliability evaluation (COMREL). (author)

  4. The transparency, reliability and utility of tropical rainforest land-use and land-cover change models.

    Science.gov (United States)

    Rosa, Isabel M D; Ahmed, Sadia E; Ewers, Robert M

    2014-06-01

    Land-use and land-cover (LULC) change is one of the largest drivers of biodiversity loss and carbon emissions globally. We use the tropical rainforests of the Amazon, the Congo basin and South-East Asia as a case study to investigate spatial predictive models of LULC change. Current predictions differ in their modelling approaches, are highly variable and often poorly validated. We carried out a quantitative review of 48 modelling methodologies, considering model spatio-temporal scales, inputs, calibration and validation methods. In addition, we requested model outputs from each of the models reviewed and carried out a quantitative assessment of model performance for tropical LULC predictions in the Brazilian Amazon. We highlight existing shortfalls in the discipline and uncover three key points that need addressing to improve the transparency, reliability and utility of tropical LULC change models: (1) a lack of openness with regard to describing and making available the model inputs and model code; (2) the difficulties of conducting appropriate model validations; and (3) the difficulty that users of tropical LULC models face in obtaining the model predictions to help inform their own analyses and policy decisions. We further draw comparisons between tropical LULC change models and the modelling approaches and paradigms in other disciplines, and suggest that recent changes in the climate change and species distribution modelling communities may provide a pathway that tropical LULC change modellers may emulate to further improve the discipline. Climate change models have exerted considerable influence over public perceptions of climate change and now impact policy decisions at all political levels. We suggest that tropical LULC change models have an equally high potential to influence public opinion and impact the development of land-use policies based on plausible future scenarios, but doing so reliably may require further improvements in the discipline.

  5. Modeling Energy & Reliability of a CNT based WSN on an HPC Setup

    Directory of Open Access Journals (Sweden)

    Rohit Pathak

    2010-07-01

    Full Text Available We have analyzed the effect of innovations in Nanotechnology on Wireless Sensor Networks (WSNs) and have modeled Carbon Nanotube (CNT) based sensor nodes from a device perspective. A WSN model has been programmed in Simulink-MATLAB and a library has been developed. Integration of CNT in WSN for various modules such as sensors, microprocessors and batteries has been shown. Average energy consumption for the system has also been formulated and its reliability has been shown holistically. A proposition has been put forward on the changes needed in the existing sensor node structure to improve its efficiency and to facilitate as well as enhance the assimilation of CNT-based devices in a WSN. Finally, we have commented on the challenges that exist in this technology and described the important factors that need to be considered for calculating reliability. This research will help in the practical implementation of CNT-based devices and the analysis of their key effects on the WSN environment. The work has been executed in Simulink and the Distributed Computing toolbox of MATLAB. The proposal has been compared to recent developments and past experimental results reported in this field. This attempt to derive the energy consumption and reliability implications will help in the development of real devices using CNT, which is a major hurdle in bringing the success from lab to commercial market. Recent research in CNT has been used to build an energy-efficient model, which will also lead to the development of CAD tools. The library for reliability and energy consumption includes analysis of the various parts of a WSN system constructed from CNT. Nano routing in a CNT system is also implemented with its dependencies. Finally, the computations were executed on an HPC setup and the model showed remarkable speedup.

  6. Reliability analysis for new technology-based transmitters

    Energy Technology Data Exchange (ETDEWEB)

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Charpentier, Dominique [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France)

    2011-02-15

    The reliability analysis of new technology-based transmitters has to deal with specific issues: various interactions between both material elements and functions, undefined behaviours under faulty conditions, several transmitted data, and little reliability feedback. To handle these particularities, a '3-step' model is proposed, based on goal tree-success tree (GTST) approaches to represent both the functional and material aspects, and includes the faults and failures as a third part for supporting reliability analyses. The behavioural aspects are provided by relationship matrices, also denoted master logic diagrams (MLD), with stochastic values which represent direct relationships between system elements. Relationship analyses are then proposed to assess the effect of any fault or failure on any material element or function. Taking these relationships into account, the probabilities of malfunction and failure modes are evaluated according to time. Furthermore, uncertainty analyses tend to show that even if the input data and system behaviour are not well known, these previous results can be obtained in a relatively precise way. An illustration is provided by a case study on an infrared gas transmitter. These properties make the proposed model and corresponding reliability analyses especially suitable for intelligent transmitters (or 'smart sensors').
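A minimal sketch of how an MLD-style relationship matrix can propagate element failure probabilities to functions, assuming independent element faults; the element names and probabilities are hypothetical, not the paper's gas-transmitter data:

```python
# Element failure probabilities over the mission time (hypothetical)
p_elem = {"sensor_cell": 0.02, "micro_controller": 0.01, "output_stage": 0.005}

# MLD-style relationship matrix: probability that a fault in an element
# propagates to the loss of each function (hypothetical values)
mld = {
    "gas_measurement": {"sensor_cell": 1.0, "micro_controller": 0.8, "output_stage": 0.0},
    "data_transmission": {"sensor_cell": 0.0, "micro_controller": 0.6, "output_stage": 1.0},
}

def function_failure_prob(func):
    """Combine element faults through the matrix, assuming independence."""
    ok = 1.0
    for elem, p in p_elem.items():
        ok *= 1.0 - mld[func].get(elem, 0.0) * p
    return 1.0 - ok

probs = {f: function_failure_prob(f) for f in mld}
```

Each matrix entry plays the role of a stochastic direct relationship between a material element and a function, as in the '3-step' GTST-MLD model.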

  7. 78 FR 38851 - Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards

    Science.gov (United States)

    2013-06-28

    ... either: Provide little protection for Bulk-Power System reliability or are redundant with other aspects... for retirement either: (1) Provide little protection for Bulk-Power System reliability or (2) are... to assure reliability of the Bulk-Power System and should be withdrawn. We have identified 41...

  8. Model case IRS-RWE for the determination of reliability data in practical operation

    Energy Technology Data Exchange (ETDEWEB)

    Hoemke, P; Krause, H

    1975-11-01

    Reliability and availability analyses are carried out to assess the safety of nuclear power plants. The paper deals in its first part with the accuracy requirements for the input data of such analyses, and in its second part with the prototype collection of reliability data, 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results.

  9. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors’ approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  10. A simulation model for reliability evaluation of Space Station power systems

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kumar, Mudit; Wagner, H.

    1988-01-01

    A detailed simulation model for the hybrid Space Station power system is presented which allows photovoltaic and solar dynamic power sources to be mixed in varying proportions. The model considers the dependence of reliability and storage characteristics during the sun and eclipse periods, and makes it possible to model the charging and discharging of the energy storage modules in a relatively accurate manner on a continuous basis.
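A toy minute-by-minute simulation of the charge/discharge cycle over sun and eclipse periods; the orbit timing, power levels, and storage figures below are hypothetical illustration values, not Space Station design data:

```python
# Hypothetical LEO orbit and power figures
SUN, ECLIPSE = 56, 36        # minutes of sunlight / eclipse per orbit
P_GEN, P_LOAD = 130.0, 60.0  # kW generated in sunlight; constant load
CAP = 50.0                   # usable storage capacity, kWh
EFF = 0.85                   # charge efficiency

def simulate(orbits, soc=CAP):
    """March minute by minute: store the surplus in sunlight,
    discharge to carry the load through eclipse."""
    end_of_orbit_soc = []
    for _ in range(orbits):
        for minute in range(SUN + ECLIPSE):
            if minute < SUN:
                soc = min(CAP, soc + (P_GEN - P_LOAD) * EFF / 60.0)
            else:
                soc = max(0.0, soc - P_LOAD / 60.0)
        end_of_orbit_soc.append(soc)
    return end_of_orbit_soc

history = simulate(10)
```

A reliability simulation would wrap this power-balance core with random component failures; a sustained state of charge above zero indicates the source/storage mix can carry the load through eclipse.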

  11. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.
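A rough Monte Carlo sketch of two of the three nonfatal-shock impacts described above (degradation jumps and a shift of the degradation rate once a shock-pattern condition is met); the threshold-reduction impact is omitted for brevity, and all parameters are hypothetical:

```python
import random

random.seed(7)

# Hypothetical parameters, for illustration only
LAMBDA = 0.5        # shock arrival rate (per year, Poisson process)
P_FATAL = 0.02      # chance a shock is fatal (instant hard failure)
DAMAGE = 0.4        # degradation jump added by each nonfatal shock
RATE0, RATE_UP = 1.0, 1.3  # base degradation rate; shifted-rate multiplier
H_SOFT = 20.0       # soft-failure threshold on cumulative degradation
N_TRIGGER = 5       # nonfatal shocks needed to trigger the rate shift

def lifetime(horizon=40.0, dt=0.02):
    x, rate, shocks, t = 0.0, RATE0, 0, 0.0
    while t < horizon:
        if random.random() < LAMBDA * dt:      # a shock arrives
            if random.random() < P_FATAL:
                return t                        # hard failure
            x += DAMAGE                         # impact 1: damage jump
            shocks += 1
            if shocks == N_TRIGGER:
                rate = RATE0 * RATE_UP          # impact 2: accelerated rate
        x += rate * dt                          # continuous degradation
        if x >= H_SOFT:
            return t                            # soft failure
        t += dt
    return horizon

mttf = sum(lifetime() for _ in range(1000)) / 1000
```

The simulation makes the dependence explicit: the same shock process both threatens hard failure and accelerates the soft-failure degradation path.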

  12. Reliability benefits of dispersed wind resource development

    International Nuclear Information System (INIS)

    Milligan, M.; Artig, R.

    1998-05-01

    Generating capacity that is available during the utility peak period is worth more than off-peak capacity. Wind power from a single location might not be available during enough of the peak period to provide sufficient value. However, if the wind power plant is developed over geographically dispersed locations, the timing and availability of wind power from these multiple sources could provide a better match with the utility's peak load than a single site. There are other issues that arise when considering dispersed wind plant development. Single-site development can result in economies of scale and might reduce the costs of obtaining multiple permits and multiple interconnections. However, dispersed development can result in cost efficiencies if interconnection can be accomplished at lower voltages or at locations closer to load centers. Several wind plants are in various stages of planning or development in the US. Although some of these are small-scale demonstration projects, significant wind capacity has been developed in Minnesota, with additional developments planned in Wyoming, Iowa and Texas. As these and other projects are planned and developed, there is a need to analyze the value of geographically dispersed sites for the reliability of the overall wind plant. This paper uses a production-cost/reliability model to analyze the reliability of several wind sites in the state of Minnesota. The analysis finds that the use of a model with traditional reliability measures does not produce consistent, robust results. An approach based on fuzzy set theory is applied in this paper, with improved results. Using such a model, the authors find that system reliability can be optimized with a mix of dispersed wind sites.

  13. DIRAC: reliable data management for LHCb

    International Nuclear Information System (INIS)

    Smith, A C; Tsaregorodtsev, A

    2008-01-01

    DIRAC, LHCb's Grid Workload and Data Management System, utilizes WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb's Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for these files in replica and bookkeeping catalogues, allowing dataset selection and localization. The DMS controls the movement of files in a redundant fashion whilst providing utilities for accessing all metadata. To do these tasks effectively the DMS requires complete self integrity between its components and external physical storage. The DMS provides highly redundant management of all LHCb data to leverage available storage resources and to manage transient errors in underlying services. It provides data driven and reliable distribution of files as well as reliable job output upload, utilizing VO Boxes at LHCb Tier1 sites to prevent data loss. This paper presents several examples of mechanisms implemented in the DMS to increase reliability, availability and integrity, highlighting successful design choices and limitations discovered

  14. Validity and reliability of an application review process using dedicated reviewers in one stage of a multi-stage admissions model.

    Science.gov (United States)

    Zeeman, Jacqueline M; McLaughlin, Jacqueline E; Cox, Wendy C

    2017-11-01

    With increased emphasis placed on non-academic skills in the workplace, a need exists to identify an admissions process that evaluates these skills. This study assessed the validity and reliability of an application review process involving three dedicated application reviewers in a multi-stage admissions model. A multi-stage admissions model was utilized during the 2014-2015 admissions cycle. After advancing through the academic review, each application was independently reviewed by two dedicated application reviewers utilizing a six-construct rubric (written communication, extracurricular and community service activities, leadership experience, pharmacy career appreciation, research experience, and resiliency). Rubric scores were extrapolated to a three-tier ranking to select candidates for on-site interviews. Kappa statistics were used to assess interrater reliability. A three-facet Many-Facet Rasch Model (MFRM) determined reviewer severity, candidate suitability, and rubric construct difficulty. The kappa statistic for candidates' tier rank score (n = 388 candidates) was 0.692 with a perfect agreement frequency of 84.3%. There was substantial interrater reliability between reviewers for the tier ranking (kappa: 0.654-0.710). Highest construct agreement occurred in written communication (kappa: 0.924-0.984). A three-facet MFRM analysis explained 36.9% of variance in the ratings, with 0.06% reflecting application reviewer scoring patterns (i.e., severity or leniency), 22.8% reflecting candidate suitability, and 14.1% reflecting construct difficulty. Utilization of dedicated application reviewers and a defined tiered rubric provided a valid and reliable method to effectively evaluate candidates during the application review process. These analyses provide insight into opportunities for improving the application review process among schools and colleges of pharmacy. Copyright © 2017 Elsevier Inc. All rights reserved.
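The interrater agreement statistic used in this study can be reproduced in a few lines. A sketch of Cohen's kappa for two reviewers' tier rankings, with hypothetical ratings (the observed agreement and chance-agreement terms follow the standard definition):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical three-tier rankings from two dedicated application reviewers
rev1 = [1, 1, 2, 2, 3, 1, 2, 3, 3, 2]
rev2 = [1, 1, 2, 3, 3, 1, 2, 3, 2, 2]
kappa = cohens_kappa(rev1, rev2)
```

Values around 0.6-0.8, like the 0.692 reported for the candidates' tier rank scores, are conventionally read as substantial agreement.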

  15. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  16. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models.
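Growth maximization in such models is commonly reduced to a one-dimensional search over the growth rate with a feasibility check at each candidate value. A sketch of that outer loop, with a toy feasibility oracle standing in for the ME model's growth-rate-dependent subproblem:

```python
def max_growth(feasible, lo=0.0, hi=2.0, tol=1e-6):
    """Bisection for the largest growth rate mu with feasible(mu) True.
    In a real ME model, feasible(mu) would solve an optimization problem
    whose dilution constraints depend on mu, which is what makes overall
    growth maximization nonlinear."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Toy stand-in oracle: pretend the model is feasible up to mu* = 0.7391 /h.
mu_star = max_growth(lambda mu: mu <= 0.7391)
```

The numerical reliability the paper targets lives inside the oracle (solving the large, multiscale subproblem precisely); the bisection shell itself is straightforward.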

  17. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Full Text Available Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic, since as testing time goes to infinity the cumulative test effort should also become infinite. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. It is possible to obtain many sets of weights that describe the past failure data equally well; we use a machine learning approach to select the set of weights that describes both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation compared with existing TEFs, and can be used for software release time determination as well.
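A sketch of a GO-type NHPP mean value function driven by an unbounded log-power test effort function; the specific functional form and all parameter values below are illustrative assumptions, not the paper's estimates:

```python
import math

# Illustrative parameters (not estimated from the paper's data sets)
a, b = 120.0, 0.05            # expected total faults; fault detectability
alpha, beta = 2.0, 1.5        # log-power test effort function parameters

def effort(t):
    """Infinite log-power TEF: W(t) grows without bound as t -> infinity."""
    return alpha * math.log(1.0 + t) ** beta

def mean_failures(t):
    """GO-type NHPP mean value function driven by cumulative test effort."""
    return a * (1.0 - math.exp(-b * effort(t)))

m10, m100 = mean_failures(10.0), mean_failures(100.0)
```

Because W(t) is unbounded, the expected number of detected faults approaches the total fault content a only in the limit of infinite testing, which is the realism argument the paper makes against finite TEFs.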

  18. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
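The pipeline leak-rate estimation mentioned above lends itself to a conjugate sketch: a Gamma prior on a Poisson event rate updates in closed form. The numbers are hypothetical:

```python
# Conjugate Gamma prior on a pipeline leak rate (events per year)
alpha0, beta0 = 2.0, 10.0     # prior: mean 0.2/yr, fairly diffuse
events, exposure = 3, 12.0    # observed: 3 leaks over 12 pipeline-years

# Poisson likelihood + Gamma prior => Gamma posterior (conjugate update)
alpha_post = alpha0 + events
beta_post = beta0 + exposure
post_mean = alpha_post / beta_post   # posterior mean leak rate, events/yr
```

The posterior mean sits between the prior mean (0.2/yr) and the observed rate (0.25/yr), weighted by the prior pseudo-exposure and the actual exposure, which is the essence of the Bayesian reliability updating covered in the course.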

  19. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has

  20. Creation and Reliability Analysis of Vehicle Dynamic Weighing Model

    Directory of Open Access Journals (Sweden)

    Zhi-Ling XU

    2014-08-01

    Full Text Available In this paper, the portable axle-load meter of a dynamic weighing system is modelled using ADAMS. Controlling a single variable, the weighing process is simulated to obtain weighing data at different speeds and weights; simultaneously, actual measurements are made with a portable weighing system with the same parameters. Comparative analysis of simulation and measurement under the same conditions shows that, at 30 km/h or less, the simulated and measured values differ by no more than 5 %. This not only verifies the reliability of the dynamic weighing model, but also makes it possible to improve the efficiency of algorithm studies by using the dynamic weighing model in simulation.

  1. Algorithms for Bayesian network modeling and reliability assessment of infrastructure systems

    International Nuclear Information System (INIS)

    Tien, Iris; Der Kiureghian, Armen

    2016-01-01

    Novel algorithms are developed to enable the modeling of large, complex infrastructure systems as Bayesian networks (BNs). These include a compression algorithm that significantly reduces the memory storage required to construct the BN model, and an updating algorithm that performs inference on compressed matrices. These algorithms address one of the major obstacles to widespread use of BNs for system reliability assessment, namely the exponentially increasing amount of information that needs to be stored as the number of components in the system increases. The proposed compression and inference algorithms are described and applied to example systems to investigate their performance compared to that of existing algorithms. Orders of magnitude savings in memory storage requirement are demonstrated using the new algorithms, enabling BN modeling and reliability analysis of larger infrastructure systems. - Highlights: • Novel algorithms developed for Bayesian network modeling of infrastructure systems. • Algorithm presented to compress information in conditional probability tables. • Updating algorithm presented to perform inference on compressed matrices. • Algorithms applied to example systems to investigate their performance. • Orders of magnitude savings in memory storage requirement demonstrated.
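For intuition about why storage grows exponentially with system size, consider exact inference by enumeration for a tiny series-parallel system; it is this 2^n enumeration over component states that the compression and inference algorithms above are designed to tame. Component probabilities are hypothetical:

```python
from itertools import product

p = {"A": 0.95, "B": 0.90, "C": 0.99}  # component survival probabilities

def system_up(state):
    # Series-parallel logic: C in series with the parallel pair (A, B)
    return state["C"] and (state["A"] or state["B"])

# Exact inference: enumerate all 2^n component states
p_sys = 0.0
for combo in product([True, False], repeat=3):
    state = dict(zip(["A", "B", "C"], combo))
    pr = 1.0
    for name, up in state.items():
        pr *= p[name] if up else 1.0 - p[name]
    if system_up(state):
        p_sys += pr
```

For three components the table has 8 rows; for a 50-component infrastructure system it would have about 10^15, which is why compressing the conditional probability tables, and running inference directly on the compressed form, matters.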

  2. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distribution using 16 failure time data sets collected in real software projects.

  3. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    Science.gov (United States)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.

  4. SIERRA - A 3-D device simulator for reliability modeling

    Science.gov (United States)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.

  5. A reliability model of a warm standby configuration with two identical sets of units

    International Nuclear Information System (INIS)

    Huang, Wei; Loman, James; Song, Thomas

    2015-01-01

    This article presents a new reliability model, and the development of its analytical solution, for a warm standby redundant configuration with units that are originally operated in active mode and then, upon turn-on of the originally standby units, are put into warm standby mode. These units can be used later if a unit that was switched from standby into active mode fails. Numerical results for an example configuration are presented and discussed, with comparison to other warm standby configurations and to Monte Carlo simulation results obtained from BlockSim software. Results show that the Monte Carlo simulation model gives virtually identical reliability values when the simulation uses a high number of replications, confirming the developed model. - Highlights: • A new reliability model is developed for warm standby redundancy with two sets of identical units. • The units are subject to state changes from active to standby and back to active mode. • A closed-form analytical solution is developed with the exponential distribution. • To validate the developed model, a Monte Carlo simulation of an exemplary configuration is performed.
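The cross-check between a closed-form solution and Monte Carlo simulation can be sketched for a basic two-unit warm standby configuration (a simpler system than the paper's, with assumed exponential failure rates and perfect switching):

```python
import math
import random

random.seed(3)

lam, lam_s = 0.01, 0.002   # active and warm-standby failure rates (per hour)
t_miss = 100.0             # mission time, hours

def reliability_analytic(t):
    """Closed form for one active unit backed by one warm standby unit:
    R(t) = e^{-lam t} [1 + (lam/lam_s)(1 - e^{-lam_s t})]."""
    return math.exp(-lam * t) * (1.0 + lam * (1.0 - math.exp(-lam_s * t)) / lam_s)

def reliability_mc(t, n=100_000):
    ok = 0
    for _ in range(n):
        t1 = random.expovariate(lam)           # active unit's failure time
        if t1 > t:
            ok += 1
            continue
        if random.expovariate(lam_s) < t1:     # standby failed while warm
            continue
        if t1 + random.expovariate(lam) > t:   # switched-in unit survives
            ok += 1
    return ok / n

r_an = reliability_analytic(t_miss)
r_mc = reliability_mc(t_miss)
```

As in the article, a high replication count drives the simulated reliability to the analytical value; the derivation relies on the memoryless property, so the standby unit's remaining life after switch-in is again exponential.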

  6. Reliability modeling of degradation of products with multiple performance characteristics based on gamma processes

    International Nuclear Information System (INIS)

    Pan Zhengqiang; Balakrishnan, Narayanaswamy

    2011-01-01

    Many highly reliable products usually have complex structure, with their reliability being evaluated by two or more performance characteristics. In certain physical situations, the degradation of these performance characteristics would be always positive and strictly increasing. In such a case, the gamma process is usually considered as a degradation process due to its independent and non-negative increments properties. In this paper, we suppose that a product has two dependent performance characteristics and that their degradation can be modeled by gamma processes. For such a bivariate degradation involving two performance characteristics, we propose to use a bivariate Birnbaum-Saunders distribution and its marginal distributions to approximate the reliability function. Inferential method for the corresponding model parameters is then developed. Finally, for an illustration of the proposed model and method, a numerical example about fatigue cracks is discussed and some computational results are presented.
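A sketch of the single-characteristic case: a gamma degradation process crossing a fixed threshold, with the first-passage reliability estimated by Monte Carlo and compared to the normal limit that underlies the Birnbaum-Saunders approximation. All parameters are hypothetical:

```python
import math
import random

random.seed(5)

# Hypothetical gamma degradation process:
# X(t) ~ Gamma(shape = v*t, scale = u); failure when X(t) >= D.
v, u, D, t = 2.0, 0.5, 12.0, 10.0

def phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Gamma paths are monotone increasing, so reliability at t is simply
# P(X(t) < D), estimated here by Monte Carlo ...
n = 50_000
r_mc = sum(random.gammavariate(v * t, u) < D for _ in range(n)) / n

# ... and approximated by the normal limit behind the Birnbaum-Saunders
# form (mean v*u*t, variance v*u*u*t).
r_approx = phi((D - v * u * t) / (u * math.sqrt(v * t)))
```

The bivariate case in the paper extends this idea: dependence between the two degradation processes is carried by a bivariate Birnbaum-Saunders distribution, whose marginals reduce to approximations like the one above.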

  7. Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

    Science.gov (United States)

    Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William

    2009-01-01

    This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining a risk-informed decision-making environment that is being sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).

  8. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

  9. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Alferidi

    2017-02-01

    Full Text Available The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV) systems contain solar cell panels, power electronic converters, high power switching and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These different designs are commonly adopted based on the scale of a PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes differ in terms of connecting the inverter to PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV integrated power system in order to assess the reliability and energy contribution of the solar system to meet overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and the capacity level of a PV system considering the three topologies.

  10. Calculating system reliability with SRFYDO

    Energy Technology Data Exchange (ETDEWEB)

    Morzinski, Jerome [Los Alamos National Laboratory; Anderson - Cook, Christine M [Los Alamos National Laboratory; Klamann, Richard M [Los Alamos National Laboratory

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
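    As a rough sketch of the kind of calculation such a tool automates, component-level Bayesian posteriors can be propagated to a series-system reliability estimate by Monte Carlo. The test counts and uniform priors below are hypothetical, and this omits SRFYDO's ageing and covariate modelling:

```python
import random

random.seed(0)

# Hypothetical component test data: (successes, trials) for a 3-component series system.
component_tests = [(48, 50), (95, 100), (29, 30)]

def sample_posterior_reliability(successes, trials):
    """One draw from a Beta(successes+1, failures+1) posterior (uniform prior)."""
    return random.betavariate(successes + 1, trials - successes + 1)

def system_reliability_samples(tests, n_samples=10000):
    """Series system: system reliability is the product of component reliabilities."""
    samples = []
    for _ in range(n_samples):
        r = 1.0
        for s, n in tests:
            r *= sample_posterior_reliability(s, n)
        samples.append(r)
    return samples

samples = sorted(system_reliability_samples(component_tests))
mean_r = sum(samples) / len(samples)
lo, hi = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"system reliability ~ {mean_r:.3f} (90% interval {lo:.3f}-{hi:.3f})")
```

    The sorted posterior samples give both a point estimate and the uncertainty interval that the abstract emphasises.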

  11. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters is presented, relevant design...... variables are identified, and an optimal system reliability formulation is presented. An illustrative example is given....

  12. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  13. Composite reliability evaluation for transmission network planning

    Directory of Open Access Journals (Sweden)

    Jiashen Teh

    2018-01-01

    Full Text Available As the penetration of wind power into the power system increases, the ability to assess the reliability impact of such interaction becomes more important. Composite reliability evaluations involving wind energy provide ample opportunities for assessing the benefits of different wind farm connection points. A connection to a weak area of the transmission network will require network reinforcement to absorb the additional wind energy. Traditionally, reinforcements are performed by constructing new transmission corridors. However, a state-of-the-art technology such as the dynamic thermal rating (DTR) system provides a new reinforcement strategy, and this requires a new reliability assessment method. This paper demonstrates a methodology for assessing the cost and the reliability of network reinforcement strategies that consider DTR systems when large-scale wind farms are connected to the existing power network. Sequential Monte Carlo simulations were performed, and all DTRs and wind speeds were simulated using the auto-regressive moving average (ARMA) model. Various reinforcement strategies were assessed from their cost and reliability aspects. Practical industrial standards are used as guidelines when assessing costs. Due to this, the proposed methodology is able to determine the optimal reinforcement strategies when both the cost and reliability requirements are considered.
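    A minimal sketch of the ARMA-based wind speed sampling used inside such sequential Monte Carlo studies follows. The ARMA(1,1) coefficients and site statistics are illustrative assumptions, not the paper's fitted values:

```python
import random

random.seed(1)

# Illustrative ARMA(1,1) coefficients and wind statistics (hypothetical values).
phi, theta, sigma = 0.8, 0.2, 0.5      # AR coefficient, MA coefficient, noise std
mu_w, sd_w = 7.0, 2.5                  # site mean and std of wind speed (m/s)

def simulate_wind(hours):
    """ARMA(1,1) series y_t = phi*y_{t-1} + e_t + theta*e_{t-1}, rescaled to wind speed."""
    y_prev, e_prev = 0.0, 0.0
    speeds = []
    for _ in range(hours):
        e = random.gauss(0.0, sigma)
        y = phi * y_prev + e + theta * e_prev
        speeds.append(max(0.0, mu_w + sd_w * y))   # wind speed cannot be negative
        y_prev, e_prev = y, e
    return speeds

speeds = simulate_wind(8760)               # one simulated year, hourly resolution
print(sum(speeds) / len(speeds))
```

    Each simulated year of hourly speeds is then passed through the turbine power curve and combined with line ratings in the sequential simulation.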

  14. Research on cognitive reliability model for main control room considering human factors in nuclear power plants

    International Nuclear Information System (INIS)

    Jiang Jianjun; Zhang Li; Wang Yiqun; Zhang Kun; Peng Yuyuan; Zhou Cheng

    2012-01-01

    To address the shortcomings of traditional cognitive factors and cognitive models, this paper presents a Bayesian network cognitive reliability model, taking the main control room as the reference background and human factors as the key points. The model mainly analyzes how cognitive reliability is affected by human factors, and for each cognitive node and the influence factors corresponding to it, a series of methods and function formulas to compute the node cognitive reliability are proposed. The model and corresponding methods can be applied to the evaluation of the cognitive process of nuclear power plant operators and have a certain significance for the prevention of safety accidents in nuclear power plants. (authors)

  15. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  16. Modelling of nuclear power plant control and instrumentation elements for automatic disturbance and reliability analysis

    International Nuclear Information System (INIS)

    Hollo, E.

    1985-08-01

    This Final Report summarizes the results of R/D work done within IAEA-VEIKI (Institute for Electrical Power Research, Budapest, Hungary) Research Contract No. 3210 during the 3-year period 01.08.1982-31.08.1985. Chapter 1 lists the main research objectives of the project. The main results obtained are summarized in Chapters 2 and 3. Outcomes from the development of failure modelling methodologies and their application to C/I components of WWER-440 units are as follows (Chapter 2): improvement of available "failure mode and effect analysis" methods and mini-fault tree structures usable for automatic disturbance (DAS) and reliability (RAS) analysis; general classification and determination of functional failure modes of WWER-440 NPP C/I components; set-up of logic models for motor-operated control valves and the rod control/drive mechanism. Results of the development of methods and their application to reliability modelling of NPP components and systems cover (Chapter 3): development of an algorithm (computer code COMPREL) for component-related failure and reliability parameter calculation; reliability analysis of the PAKS II NPP diesel system; and definition of functional requirements for a reliability data bank (RDB) in WWER-440 units, with determination of the RDB input/output data structure and data manipulation services. The methods used are a-priori failure mode and effect analysis, a combined fault tree/event tree modelling technique, structured computer programming, and application of probability theory to the nuclear field.

  17. Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

    OpenAIRE

    Alaa F. Sheta; Amal Abdel-Raouf

    2016-01-01

    In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of...

  18. Phoenix – A model-based Human Reliability Analysis methodology: Qualitative Analysis Procedure

    International Nuclear Information System (INIS)

    Ekanem, Nsimah J.; Mosleh, Ali; Shen, Song-Hua

    2016-01-01

    Phoenix method is an attempt to address various issues in the field of Human Reliability Analysis (HRA). Built on a cognitive human response model, Phoenix incorporates strong elements of current HRA good practices, leverages lessons learned from empirical studies, and takes advantage of the best features of existing and emerging HRA methods. Its original framework was introduced in previous publications. This paper reports on the completed methodology, summarizing the steps and techniques of its qualitative analysis phase. The methodology introduces the “Crew Response Tree” which provides a structure for capturing the context associated with Human Failure Events (HFEs), including errors of omission and commission. It also uses a team-centered version of the Information, Decision and Action cognitive model and “macro-cognitive” abstractions of crew behavior, as well as relevant findings from cognitive psychology literature and operating experience, to identify potential causes of failures and influencing factors during procedure-driven and knowledge-supported crew-plant interactions. The result is the set of identified HFEs and likely scenarios leading to each. The methodology itself is generic in the sense that it is compatible with various quantification methods, and can be adapted for use across different environments including nuclear, oil and gas, aerospace, aviation, and healthcare. - Highlights: • Produces a detailed, consistent, traceable, reproducible and properly documented HRA. • Uses “Crew Response Tree” to capture context associated with Human Failure Events. • Models dependencies between Human Failure Events and influencing factors. • Provides a human performance model for relating context to performance. • Provides a framework for relating Crew Failure Modes to its influencing factors.

  19. Assessment of the human factor in the quantification of technical system reliability taking into consideration cognitive-causal aspects. Partial project 2. Modeling of the human behavior for reliability considerations. Final report

    International Nuclear Information System (INIS)

    Jennerich, Marco; Imbsweiler, Jonas; Straeter, Oliver; Arenius, Marcus

    2015-03-01

    This report presents the findings of the project on the consideration of the human factor in the quantification of the reliability of technical systems, taking into account cognitive-causal aspects in the modeling of human behavior for reliability issues (funded by the Federal Ministry of Economics and Technology; grant number 15014328). This project is part of a joint project with the University of Applied Sciences Zittau/Goerlitz for assessing the human factor in the quantification of the reliability of technical systems. The concern of the University of Applied Sciences Zittau/Goerlitz is the mathematical modeling of human reliability by means of a fuzzy set approach (grant number 1501432A). The part of the project presented here provides the necessary data basis for the evaluation of the mathematical modeling using the fuzzy set approach. At the appropriate places in this report, the interfaces and data bases between the two projects are outlined accordingly. HRA (Human Reliability Analysis) methods are an essential component of analyzing the reliability of socio-technical systems. Various methods have been established and are used in different areas of application. The established HRA methods were checked for congruence; in particular, the underlying models and their parameters, such as performance-influencing factors and situational influences, were investigated. The elaborated parameters were combined into a hierarchical class structure. Cross-domain incidents were studied, and the specific performance-influencing factors were worked out and integrated into a cross-domain database. The dominant (critical) situational factors and their interactions within the event data were identified using the CAHR method (Connectionism Assessment of Human Reliability). Task-dependent cognitive load profiles were defined, and within these profiles qualitative and quantitative data on the possibility of the emergence of errors were acquired.

  20. Human reliability-based MC and A models for detecting insider theft

    International Nuclear Information System (INIS)

    Duran, Felicia Angelica; Wyss, Gregory Dane

    2010-01-01

    Material control and accounting (MC and A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC and A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC and A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC and A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries.
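    The core of the extended path analysis described above is combining independent detection probabilities along an adversary path, with MC and A activities contributing terms just like physical sensors. A minimal sketch, with hypothetical detection probabilities:

```python
def path_detection_probability(element_probs):
    """Probability that at least one protection element along the path detects the insider:
    P = 1 - product(1 - p_i), assuming independent detection opportunities."""
    p_miss = 1.0
    for p in element_probs:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# Hypothetical path: two physical sensors plus two MC and A checks whose
# detection probabilities were derived from HRA models of the procedure.
sensors = [0.70, 0.60]
mca_checks = [0.40, 0.30]

p_sensors_only = path_detection_probability(sensors)
p_with_mca = path_detection_probability(sensors + mca_checks)
print(p_sensors_only, p_with_mca)   # MC and A raises detection from 0.88 to ~0.95
```

    Even modest per-check detection probabilities raise the path-level effectiveness noticeably, which is the quantitative point of folding MC and A into the path model.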

  1. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that use of the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
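    The validation step above relies on Monte Carlo simulation with Weibull failure times. A minimal sketch for a 1-out-of-2 redundant non-repairable pair (the scale, shape, and mission time are illustrative assumptions, and repair is omitted for brevity):

```python
import math
import random

random.seed(2)

def weibull_sample(scale, shape):
    """Inverse-transform sample of a Weibull-distributed failure time."""
    u = random.random()
    return scale * (-math.log(u)) ** (1.0 / shape)

def mission_failure_probability(mission_time, scale, shape, n_trials=20000):
    """1-out-of-2 redundant, non-repairable pair: the system fails during the
    mission only if both components fail before mission_time."""
    failures = 0
    for _ in range(n_trials):
        t1 = weibull_sample(scale, shape)
        t2 = weibull_sample(scale, shape)
        if max(t1, t2) < mission_time:
            failures += 1
    return failures / n_trials

# Same characteristic life, different shapes: shape=1 is the exponential
# (Markov) special case; shape=3 represents wear-out behaviour.
p_expo = mission_failure_probability(1000.0, 5000.0, 1.0)
p_wear = mission_failure_probability(1000.0, 5000.0, 3.0)
print(p_expo, p_wear)
```

    The comparison makes the abstract's point concrete: the assumed failure time distribution, not just its mean, changes the estimated system failure probability by orders of magnitude.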

  2. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    Science.gov (United States)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those

  3. Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks

    International Nuclear Information System (INIS)

    Naimi, Yaghoob; Naimi, Mohammad

    2013-01-01

    We introduce a generalized rumor spreading model and investigate some of its properties on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α(1) and α(2) for SS and SR interactions, respectively. The effect of varying α(1) and α(2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor. (interdisciplinary physics and related areas of science and technology)
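    The effect of separating the two stifling rates can be sketched with a homogeneous-mixing mean-field approximation of the model (a simplification of the network dynamics the paper actually studies; the rates and Euler step below are illustrative assumptions):

```python
# Mean-field densities: ignorants i, spreaders s, stiflers r; spreading rate lam,
# spreader-spreader stifling rate alpha1, spreader-stifler stifling rate alpha2.
def final_stifler_density(lam, alpha1, alpha2, dt=0.001, steps=200000):
    i, s, r = 0.999, 0.001, 0.0
    for _ in range(steps):
        di = -lam * i * s
        ds = lam * i * s - alpha1 * s * s - alpha2 * s * r
        dr = alpha1 * s * s + alpha2 * s * r
        i, s, r = i + dt * di, s + dt * ds, r + dt * dr   # forward-Euler step
    return r

r_equal = final_stifler_density(1.0, 1.0, 1.0)      # classical case alpha1 = alpha2
r_weak_sr = final_stifler_density(1.0, 1.0, 0.2)    # weaker spreader-stifler damping
print(r_equal, r_weak_sr)
```

    Weakening the spreader-stifler interaction keeps spreaders active longer, so the rumor reaches a larger final stifler density, which is the kind of dependence on α(1) and α(2) the abstract describes.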

  4. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability), and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimates of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  5. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  6. Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid-Holocene

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2013-03-01

    Full Text Available Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the Paleoclimate Modelling Intercomparison Project (PMIP2) model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM, 21 ka BP). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene (6 ka BP) ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land–sea contrast and polar amplification, although the more detailed sub-continental scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One cause of this problem could be that the globally and annually averaged forcing anomaly is very weak at the mid-Holocene, and so the results are dominated by the more localised regional patterns in the parts of the globe for which data are available. The root cause of the model-data mismatch at these scales is unclear. If the proxy calibration is itself reliable, then representativity error in the data-model comparison and missing climate feedbacks in the models are other possible sources of error.

  7. Assessment of leg muscles mechanical capacities: Which jump, loading, and variable type provide the most reliable outcomes?

    Science.gov (United States)

    García-Ramos, Amador; Feriche, Belén; Pérez-Castilla, Alejandro; Padial, Paulino; Jaric, Slobodan

    2017-07-01

    This study aimed to explore the strength of the force-velocity (F-V) relationship of lower limb muscles and the reliability of its parameters (maximum force [F0], slope [a], maximum velocity [V0], and maximum power [P0]). Twenty-three men were tested in two different jump types (squat and countermovement jump: SJ and CMJ), performed under two different loading conditions (free weight and Smith machine: Free and Smith) with 0, 17, 30, 45, 60, and 75 kg loads. The maximum and averaged values of F and V were obtained for the F-V relationship modelling. All F-V relationships were strong and linear, whether observed from data averaged across the participants (r ≥ 0.98) or from individual data (r = 0.94-0.98), while their parameters were generally highly reliable (F0 [CV: 4.85%, ICC: 0.87], V0 [CV: 6.10%, ICC: 0.82], a [CV: 10.5%, ICC: 0.81], and P0 [CV: 3.5%, ICC: 0.93]). Both the strength of the F-V relationships and the reliability of their parameters were significantly higher for (1) the CMJ over the SJ, (2) the Free over the Smith loading type, and (3) the maximum over the averaged F and V variables. In conclusion, although the F-V relationships obtained from all the jumps tested were linear and generally highly reliable, the least appropriate choice for testing the F-V relationship would be the averaged F and V data obtained from the SJ performed either with free weights or in a Smith machine. Insubstantial differences exist among the other combinations tested.
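    The parameters above follow from a simple linear fit F(V) = F0 - a·V, with V0 = F0/a and P0 = F0·V0/4 at the apex of the parabolic power-velocity curve. A minimal sketch with illustrative (perfectly linear) numbers, not the study's data:

```python
# Least-squares fit of the linear force-velocity model F(V) = F0 - a*V.
def fit_fv(velocities, forces):
    n = len(velocities)
    mv = sum(velocities) / n
    mf = sum(forces) / n
    sxx = sum((v - mv) ** 2 for v in velocities)
    sxy = sum((v - mv) * (f - mf) for v, f in zip(velocities, forces))
    a = -sxy / sxx                 # slope magnitude (F decreases with V)
    f0 = mf + a * mv               # force intercept: maximum force
    v0 = f0 / a                    # velocity intercept: maximum velocity
    p0 = f0 * v0 / 4.0             # maximum power at the parabola's apex
    return f0, a, v0, p0

vel = [0.8, 1.2, 1.6, 2.0, 2.4]                  # mean jump velocities (m/s)
frc = [2100.0, 1850.0, 1600.0, 1350.0, 1100.0]   # mean forces (N), linear for illustration
f0, a, v0, p0 = fit_fv(vel, frc)
print(f0, a, v0, p0)   # 2600.0 625.0 4.16 2704.0
```

    With real jump data the fit is not perfect, and the CV/ICC statistics in the abstract quantify how stable these four fitted parameters are across sessions.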

  8. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue, and the experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity, and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.

  9. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.), (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units), and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses depends on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), i.e., extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects).
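    The two-parameter Weibull reliability function, combined with a voltage/temperature acceleration function, can be sketched as follows. The Prokopowicz-Vaskas form is a commonly used empirical acceleration model for MLCCs, but the exponent, activation energy, test conditions, and Weibull parameters below are illustrative assumptions, not the presentation's values:

```python
import math

K_BOLTZ_EV = 8.617e-5  # Boltzmann constant, eV/K

def weibull_reliability(t, eta, beta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def acceleration_factor(v_test, v_use, t_test_k, t_use_k, n=5.0, ea=1.2):
    """Empirical Prokopowicz-Vaskas form: voltage power law times an Arrhenius
    temperature term. Exponent n and activation energy ea (eV) are assumptions."""
    return (v_test / v_use) ** n * math.exp((ea / K_BOLTZ_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

# Accelerated life test at 50 V / 125 C, use conditions 10 V / 45 C (hypothetical).
af = acceleration_factor(50.0, 10.0, 398.15, 318.15)
eta_use = 1000.0 * af                      # scale a 1000-h test eta to use conditions
r10y = weibull_reliability(87600.0, eta_use, 1.2)   # reliability after 10 years
print(af, r10y)
```

    The same R(t) expression is then modified by the structural terms (N, d, r, S) the abstract lists to move from a single-layer dielectric to a full multilayer device.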

  10. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    It has been a critical issue to predict safety-critical software reliability in the nuclear engineering area. For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate reliability using the failure data collected during testing, assuming that the test environments represent the operational profile well. The user's interest, however, is in the operational reliability rather than the test reliability. Experience shows that the operational reliability is higher than the test reliability. With the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors comprising an aging factor and a coverage factor are developed in this paper and used to predict the ultimate operational reliability from the failure data of the testing phase, by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig

  11. Extended object-oriented Petri net model for mission reliability simulation of repairable PMS with common cause failures

    International Nuclear Information System (INIS)

    Wu, Xin-yang; Wu, Xiao-Yue

    2015-01-01

    Phased Mission Systems (PMS) have several phases with different success criteria. Generally, traditional analytical methods need to make some assumptions when they are applied for reliability evaluation and analysis of complex PMS, for example, that the components are non-repairable or are not subject to common cause failures (CCF). However, the evaluation and analysis results may be inapplicable when the assumptions do not agree with the practical situation. In this article, we propose an extended object-oriented Petri net (EOOPN) model for mission reliability simulation of repairable PMS with CCFs. Based on object-oriented Petri nets (OOPN), EOOPN defines four reusable sub-models to depict a PMS at the system, phase, or component level, logic transitions to depict complex component reliability logic in a more readable form, and a broadcast place to transmit shared information among components synchronously. After extension, EOOPN can conveniently deal with repairable PMS with both external and internal CCFs. The mission reliability modelling, simulation and analysis using EOOPN are illustrated by a PMS example. The results demonstrate that the proposed EOOPN model is effective. - Highlights: • The EOOPN model was effective in reliability simulation for repairable PMS with CCFs. • EOOPN has a modular and hierarchical structure. • New elements of EOOPN make the modelling process more convenient and friendlier. • EOOPN has better model reusability and readability than other PNs

  12. Knowledge modelling and reliability processing: presentation of the Figaro language and associated tools

    International Nuclear Information System (INIS)

    Bouissou, M.; Villatte, N.; Bouhadana, H.; Bannelier, M.

    1991-12-01

    EDF has for several years been developing an integrated set of knowledge-based and algorithmic tools for automating the reliability assessment of complex (especially sequential) systems. In this environment, the reliability expert has at his disposal powerful software tools for qualitative and quantitative processing; in addition, he has various means to generate the inputs for these tools automatically through the acquisition of graphical data. The development of these tools has been based on FIGARO, a specific language built to obtain homogeneous system modelling. Various compilers and interpreters translate a FIGARO model into conventional models such as fault trees, Markov chains, and Petri nets. In this report, we introduce the main basics of the FIGARO language, illustrating them with examples

  13. Reliability Assessment Of Wind Turbines

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2014-01-01

    Reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive with other energy sources. The turbine components should therefore be designed to have sufficient reliability but also not be too costly (and safe). This paper presents models for uncertainty modeling and reliability assessment of especially the structural components such as tower, blades, substructure and foundation. But since the function of a wind turbine is highly dependent on many electrical and mechanical components as well as a control system, reliability aspects of these components are also discussed, and it is described how their reliability influences the reliability of the structural components. Two illustrative examples are presented considering uncertainty modeling, reliability assessment and calibration of partial safety factors for structural wind turbine components exposed...

  14. A new lifetime estimation model for a quicker LED reliability prediction

    Science.gov (United States)

    Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.

    2014-09-01

    LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs have been aged for a reliability analysis. The LEDs were grouped following nine current-temperature stress conditions. The stress driving current was fixed between 350 mA and 1 A and the ambient temperature between 85°C and 120°C. Using integrating sphere and I(V) measurements, a cross study of the evolution of electrical and optical characteristics has been done. Results show two main failure mechanisms regarding lumen maintenance. The first one is the typically observed lumen depreciation, and the second one is a much quicker depreciation related to an increase in the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation have been built using electrical and optical measurements during the aging tests. The combination of those models allows a new method toward a quicker LED lifetime prediction. These two models have been used for lifetime predictions for LEDs.
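The typical lumen-depreciation mode is commonly modeled as an exponential decay (as in IES TM-21-style extrapolation to the L70 lifetime); the sketch below illustrates that convention with an assumed decay rate, not the paper's fitted model:

```python
import math

def lumen_maintenance(t, alpha, b=1.0):
    """TM-21-style exponential lumen decay: L(t)/L(0) = B * exp(-alpha * t)."""
    return b * math.exp(-alpha * t)

def l70_lifetime(alpha, b=1.0):
    """Hours until luminous flux drops to 70% of its initial value (L70)."""
    return math.log(b / 0.70) / alpha

alpha = 2.0e-5  # decay rate per hour, assumed fitted from aging-test data
print(l70_lifetime(alpha))  # projected L70 lifetime in hours
```

The paper's contribution is to combine such a lumen model with a leakage-resistance model, which shortens the test time needed before a prediction can be made.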

  15. The precision and reliability evaluation of 3-dimensional printed damaged bone and prosthesis models by stereo lithography appearance.

    Science.gov (United States)

    Zou, Yun; Han, Qing; Weng, Xisheng; Zou, Yongwei; Yang, Yingying; Zhang, Kesong; Yang, Kerong; Xu, Xiaolin; Wang, Chenyu; Qin, Yanguo; Wang, Jincheng

    2018-02-01

    Recently, clinical application of 3D printed models has been increasing. However, there has been no systematic study confirming the precision and reliability of 3D printed models, and some senior clinicians mistrust their reliability in clinical application. The purpose of this study was to evaluate the precision and reliability of stereolithography appearance (SLA) 3D printed models. Related parameters were selected to assess the reliability of SLA 3D printed models. The computed tomography (CT) data of bone/prosthesis and model were collected and 3D reconstructed. Anatomical parameters were measured and statistical analysis was performed; the intraclass correlation coefficient (ICC) was used to evaluate the similarity between the model and the real bone/prosthesis, and the absolute difference (mm) and relative difference (%) were computed. For the prosthesis model, the 3-dimensional error was measured. There was no significant difference in the anatomical parameters except the maximum height (MH) of the long bone. All the ICCs were greater than 0.990. The maximum absolute and relative differences were 0.45 mm and 1.10%. The 3-dimensional error analysis showed that the positive/negative deviations were 0.273 mm/0.237 mm. The application of SLA 3D printed models in the diagnosis and treatment of complex orthopedic disease is reliable and precise.
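One common single-measure consistency form of the ICC used in agreement studies, ICC(3,1), can be computed from a two-way ANOVA decomposition; the measurement values below are hypothetical, not the study's data, and the study may have used a different ICC variant:

```python
def icc_consistency(x, y):
    """Two-way mixed, single-measure consistency ICC(3,1) for two raters
    (e.g., printed-model vs. real-bone measurements of the same landmarks)."""
    n = len(x)
    k = 2
    grand = (sum(x) + sum(y)) / (n * k)
    subj_means = [(a + b) / 2 for a, b in zip(x, y)]
    rater_means = [sum(x) / n, sum(y) / n]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((v - grand) ** 2 for v in list(x) + list(y))
    ss_err = ss_total - ss_subj - ss_rater
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

model = [42.1, 38.7, 55.3, 47.9, 61.2]  # hypothetical landmark measurements, mm
bone  = [42.3, 38.5, 55.6, 47.8, 61.0]
print(round(icc_consistency(model, bone), 4))
```

An ICC above 0.990, as reported in the study, indicates near-perfect agreement between model and bone measurements.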

  16. Understanding software faults and their role in software reliability modeling

    Science.gov (United States)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analysis such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation.
The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the
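The LOC/Stmts collinearity can be illustrated with a plain correlation computation; the module metrics below are hypothetical numbers chosen to show the effect:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two metric vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-module metrics: LOC and statement counts move together.
loc   = [120, 340, 85, 510, 220, 760, 150]
stmts = [95, 280, 70, 420, 180, 640, 118]
r = pearson_r(loc, stmts)
print(f"r = {r:.3f}")  # near 1 -> regression coefficients become unstable
```

A correlation this close to 1 is exactly the situation the abstract warns about: the two predictors carry nearly the same information, so their individual regression coefficients cannot be estimated reliably.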

  17. Power Electronic Packaging Design, Assembly Process, Reliability and Modeling

    CERN Document Server

    Liu, Yong

    2012-01-01

    Power Electronic Packaging presents an in-depth overview of power electronic packaging design, assembly, reliability and modeling. Since there is a drastic difference between IC fabrication and power electronic packaging, the book systematically introduces typical power electronic packaging design, assembly, reliability and failure analysis and material selection so readers can clearly understand each task's unique characteristics. Power electronic packaging is one of the fastest growing segments in the power electronic industry, due to the rapid growth of power integrated circuit (IC) fabrication, especially for applications like portable, consumer, home, computing and automotive electronics. This book also covers how advances in both semiconductor content and power advanced package design have helped cause advances in power device capability in recent years. The author extrapolates the most recent trends in the book's areas of focus to highlight where further improvement in materials and techniques can d...

  18. A reliability model for interlayer dielectric cracking during fast thermal cycling

    NARCIS (Netherlands)

    Nguyen, Van Hieu; Salm, Cora; Krabbenborg, B.H.; Krabbenborg, B.H.; Bisschop, J.; Mouthaan, A.J.; Kuper, F.G.; Ray, Gary W.; Smy, Tom; Ohta, Tomohiro; Tsujimura, Manabu

    2003-01-01

    Interlayer dielectric (ILD) cracking can result in short circuits of multilevel interconnects. This paper presents a reliability model for ILD cracking induced by fast thermal cycling (FTC) stress. FTC tests have been performed under different temperature ranges (∆T) and minimum temperatures (Tmin).

  19. Analysis of Parking Reliability Guidance of Urban Parking Variable Message Sign System

    OpenAIRE

    Zhenyu Mei; Ye Tian; Dongping Li

    2012-01-01

    Operators of parking guidance and information systems (PGIS) often encounter difficulty in determining when and how to provide reliable car park availability information to drivers. Reliability has become a key factor to ensure the benefits of urban PGIS. The present paper is the first to define the guiding parking reliability of urban parking variable message signs (VMSs). By analyzing the parking choice under guiding and optional parking lots, a guiding parking reliability model was constru...

  20. Risk and reliability assessment for telecommunications networks

    Energy Technology Data Exchange (ETDEWEB)

    Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.

    1996-08-01

    Sandia National Laboratories has assembled an interdisciplinary team to explore the applicability of probabilistic logic modeling (PLM) techniques to model network reliability for a wide variety of communications network architectures. The authors have found that the reliability and failure modes of current generation network technologies can be effectively modeled using fault tree PLM techniques. They have developed a ``plug-and-play`` fault tree analysis methodology that can be used to model connectivity and the provision of network services in a wide variety of current generation network architectures. They have also developed an efficient search algorithm that can be used to determine the minimal cut sets of an arbitrarily-interconnected (non-hierarchical) network without the construction of a fault tree model. This paper provides an overview of these modeling techniques and describes how they are applied to networks that exhibit hybrid network structures (i.e., a network in which some areas are hierarchical and some areas are not hierarchical).
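A brute-force sketch of a minimal cut set search on a small non-hierarchical network (the authors' algorithm is an efficient search that avoids fault tree construction; this sketch only illustrates the concept and uses a toy topology):

```python
from itertools import combinations

def connected(edges, source, sink):
    """Depth-first search over an undirected edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, stack = {source}, [source]
    while stack:
        node = stack.pop()
        if node == sink:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def minimal_cut_sets(edges, source, sink):
    """Enumerate minimal edge cut sets by increasing size (fine for small nets)."""
    cuts = []
    for size in range(1, len(edges) + 1):
        for cand in combinations(edges, size):
            if any(set(c) <= set(cand) for c in cuts):
                continue  # a subset is already a cut, so cand is not minimal
            if not connected([e for e in edges if e not in cand], source, sink):
                cuts.append(cand)
    return cuts

# Toy network: two parallel source-sink paths, s-a-t and s-b-t
net = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
for cut in minimal_cut_sets(net, "s", "t"):
    print(cut)
```

For this bridge-free topology every minimal cut set contains one link from each parallel path, which a reliability model would then combine with per-link failure probabilities.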

  1. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Science.gov (United States)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator fault. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  2. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  3. THE SIMULATION DIAGNOSTIC METHODS AND REGENERATION WAYS OF REINFORCED-CONCRETE CONSTRUCTIONS OF BRIDGES IN PROVIDING THEIR OPERATING RELIABILITY AND LONGEVITY

    OpenAIRE

    B. V. Savchinskiy

    2010-01-01

    On the basis of an analysis of existing diagnostic methods and regeneration ways for reinforced-concrete bridge constructions, recommendations are offered on the introduction of modern technologies for the renewal of reinforced-concrete bridge constructions to provide their operating reliability and longevity.

  4. The model case IRS-RWE for the determination of reliability data in practical operation

    International Nuclear Information System (INIS)

    Hoemke, P.; Krause, H.

    1975-11-01

    Reliability and availability analyses are carried out to assess the safety of nuclear power plants. This paper deals in its first part with the accuracy requirements for the input data of such analyses and in its second part with the prototype collection of reliability data, 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results. (orig.) [de

  5. Using graph models for evaluating in-core monitoring system reliability by the method of imitating simulation

    International Nuclear Information System (INIS)

    Golovanov, M.N.; Zyuzin, N.N.; Levin, G.L.; Chesnokov, A.N.

    1987-01-01

    An approach for estimating the reliability factors of complex redundant systems at early stages of development using the method of imitating simulation is considered. Different types of models and their merits and drawbacks are given. Features of in-core monitoring systems and the advisability of applying graph models and graph theory elements for estimating the reliability of such systems are shown. The results of an investigation of the reliability factors of the reactor monitoring, control and core local protection subsystem are shown

  6. The value of reliability

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Karlström, Anders

    2010-01-01

    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless of the form of the standardised distribution of trip durations. This insight provides a unification of the scheduling model and models that include the standard deviation of trip duration directly as an argument in the cost or utility function. The results generalise approximately to the case where the mean...

  7. Reliability in automotive ethernet networks

    DEFF Research Database (Denmark)

    Soares, Fabio L.; Campelo, Divanilson R.; Yan, Ying

    2015-01-01

    This paper provides an overview of in-vehicle communication networks and addresses the challenges of providing reliability in automotive Ethernet in particular.

  8. Assessing Reliability of Cellulose Hydrolysis Models to Support Biofuel Process Design – Identifiability and Uncertainty Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Meyer, Anne S.; Gernaey, Krist

    2010-01-01

    The reliability of cellulose hydrolysis models is studied using the NREL model. An identifiability analysis revealed that only 6 out of 26 parameters are identifiable from the available data (typical hydrolysis experiments). Attempting to identify a higher number of parameters (as done in the ori...

  9. Designing reliable supply chain network with disruption risk

    Directory of Open Access Journals (Sweden)

    Ali Bozorgi Amiri

    2013-01-01

    Full Text Available Although supply chain disruptions rarely occur, their negative effects are prolonged and severe. In this paper, we propose a reliable capacitated supply chain network design (RSCND) model considering random disruptions in both distribution centers and suppliers. The proposed model determines the optimal location of the distribution centers (DCs) with the highest reliability, the best plan to assign customers to opened DCs, and assigns opened DCs to suitable suppliers with the lowest transportation cost. In this study, random disruption affects the location and capacity of the distribution centers (DCs) and suppliers. It is assumed that a disrupted DC and a disrupted supplier may lose a portion of their capacities, and that the rest of a disrupted DC's demand can be supplied by other DCs. In addition, we consider shortage in DCs, which can occur in either normal or disruption conditions, and DCs can support each other in such circumstances. Unlike other studies in the literature, we use a new approach to model the reliability of DCs: we consider a range of reliability instead of using binary variables. In order to solve the proposed model for real-world instances, a Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is applied. Preliminary results of testing the proposed model on several problems of different sizes seem to be promising.

  10. Analysis of information security reliability: A tutorial

    International Nuclear Information System (INIS)

    Kondakci, Suleyman

    2015-01-01

    This article presents a concise reliability analysis of network security abstracted from stochastic modeling, reliability, and queuing theories. Network security analysis is composed of threats, their impacts, and recovery of the failed systems. A unique framework with a collection of the key reliability models is presented here to guide the determination of system reliability based on the strength of malicious acts and the performance of the recovery processes. A unique model, called the Attack-obstacle model, is also proposed here for analyzing systems with immunity growth features. Most computer science curricula do not contain courses in reliability modeling applicable to different areas of computer engineering. Hence, the topic of reliability analysis is often too diffuse for most computer engineers and researchers dealing with network security. This work is thus aimed at shedding some light on this issue, which can be useful in identifying models, their assumptions and practical parameters for estimating the reliability of threatened systems and for assessing the performance of recovery facilities. It can also be useful for the classification of processes and states regarding the reliability of information systems. Systems with stochastic behaviors undergoing queue operations and random state transitions can also benefit from the approaches presented here. - Highlights: • A concise survey and tutorial in model-based reliability analysis applicable to information security. • A framework of key modeling approaches for assessing reliability of networked systems. • The framework facilitates quantitative risk assessment tasks guided by stochastic modeling and queuing theory. • Evaluation of approaches and models for modeling threats, failures, impacts, and recovery analysis of information systems

  11. Assessment of Electronic Circuits Reliability Using Boolean Truth Table Modeling Method

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.

    2011-01-01

    This paper explores the use of the Boolean Truth Table modeling Method (BTTM) in the analysis of qualitative data. It is widely used in certain fields, especially in electrical and electronic engineering. Our work focuses on the evaluation of power supply circuit reliability using the BTTM, which involves systematic attempts to falsify and identify hypotheses on the basis of truth tables constructed from qualitative data. Reliability parameters such as the system failure rate for the power supply case study are estimated. All possible state combinations (operating and failed states) of the major components in the circuit were listed and their effects on the overall system were studied
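The truth-table approach can be sketched as follows, assuming a hypothetical series-parallel power-supply structure (not the paper's actual circuit): every operating/failed combination of the components is enumerated, and the probabilities of the states in which the system function survives are summed.

```python
from itertools import product

def truth_table_reliability(rel, system_up):
    """Enumerate all operating/failed combinations of the components and sum
    the probability of every state in which the system is up."""
    names = list(rel)
    total = 0.0
    for states in product([True, False], repeat=len(names)):
        state = dict(zip(names, states))
        p = 1.0
        for name, up in state.items():
            p *= rel[name] if up else (1.0 - rel[name])
        if system_up(state):
            total += p
    return total

# Hypothetical stage: a rectifier in series with two parallel regulators
rel = {"rect": 0.95, "regA": 0.90, "regB": 0.90}
up = lambda s: s["rect"] and (s["regA"] or s["regB"])
print(truth_table_reliability(rel, up))  # 0.95 * (1 - 0.1*0.1) = 0.9405
```

The enumeration reproduces the closed-form series-parallel result, which is a useful sanity check before applying the method to circuits without a simple closed form.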

  12. A competing risk model for the reliability of cylinder liners in marine Diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Bocchetti, D. [Grimaldi Group, Naples (Italy); Giorgio, M. [Department of Aerospace and Mechanical Engineering, Second University of Naples, Aversa (Italy); Guida, M. [Department of Information Engineering and Electrical Engineering, University of Salerno, Fisciano (Italy); Pulcini, G. [Istituto Motori, National Research Council-CNR, Naples (Italy)], E-mail: g.pulcini@im.cnr.it

    2009-08-15

    In this paper, a competing risk model is proposed to describe the reliability of the cylinder liners of a marine Diesel engine. Cylinder liners present two dominant failure modes: wear degradation and thermal cracking. The wear process is described through a stochastic process, whereas the failure time due to thermal cracking is described by the Weibull distribution. The use of the proposed model allows performing goodness-of-fit tests and parameter estimation on the basis of both wear and failure data. Moreover, it enables reliability estimates of the state of the liners to be obtained and the hierarchy of the failure mechanisms to be determined for any given age and wear level of the liner. The model has been applied to a real data set: 33 cylinder liners of Sulzer RTA 58 engines, which equip twin ships of the Grimaldi Group. Estimates of the liner reliability and of other quantities of interest under the competing risk model are obtained, as well as the conditional failure probability and mean residual lifetime, given the survival age and the accumulated wear. Furthermore, the model has been used to estimate the probability that a liner fails due to one of the failure modes when both of these modes act.
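A minimal competing-risk sketch: the liner survives to time t only if neither mode has occurred, so with independence the survival probabilities multiply. The normal approximation for accumulated wear and all parameter values are assumptions for illustration; the paper's stochastic wear process is richer.

```python
import math

def weibull_surv(t, eta, beta):
    return math.exp(-((t / eta) ** beta))

def liner_reliability(t, eta_crack, beta_crack, wear_rate, wear_limit, wear_sd):
    """Competing risks: survival requires neither thermal cracking (Weibull
    time-to-failure) nor wear crossing its limit; modes assumed independent."""
    # Normal approximation of the accumulated-wear distribution at time t
    mean_wear = wear_rate * t
    z = (wear_limit - mean_wear) / (wear_sd * math.sqrt(t))
    p_wear_ok = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return weibull_surv(t, eta_crack, beta_crack) * p_wear_ok

# Illustrative (assumed) parameters: time in hours, wear in mm
print(liner_reliability(30000, eta_crack=60000, beta_crack=2.2,
                        wear_rate=1e-5, wear_limit=0.8, wear_sd=1e-3))
```

The same decomposition is what lets the paper rank the failure mechanisms at any given age and wear level: each factor reports how likely its mode is to have struck first.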

  13. Stochastic network interdiction optimization via capacitated network reliability modeling and probabilistic solution discovery

    International Nuclear Information System (INIS)

    Ramirez-Marquez, Jose Emmanuel; Rocco S, Claudio M.

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve stochastic network interdiction problems (SNIP). The network interdiction problem solved considers the minimization of the cost associated with an interdiction strategy such that the maximum flow that can be transmitted between a source node and a sink node for a fixed network design is greater than or equal to a given reliability requirement. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link and that such interdiction has a probability of being successful. This version of the SNIP is for the first time modeled as a capacitated network reliability problem allowing for the implementation of computation and solution techniques previously unavailable. The solution process is based on an evolutionary algorithm that implements: (1) Monte-Carlo simulation, to generate potential network interdiction strategies, (2) capacitated network reliability techniques to analyze strategies' source-sink flow reliability and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks are used throughout the paper to illustrate the approach

  14. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    Science.gov (United States)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to strike a balance between the computational load and the reliability of the estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the better the prediction, sometimes a "slight" worsening of the estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to draw some conjectures about the increase in uncertainty as the extrapolation time of the estimation extends. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.

  15. Cost-effective solutions to maintaining smart grid reliability

    Science.gov (United States)

    Qin, Qiu

    As aging power systems increasingly work closer to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide-area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous-state simulation and discrete-event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulty mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework, including a high-level recloser allocation scheme and a low-level recloser placement scheme, is presented. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete event

  16. Analysis of time-dependent reliability of degenerated reinforced concrete structure

    Directory of Open Access Journals (Sweden)

    Zhang Hongping

    2016-07-01

    Full Text Available Durability deterioration of a structure is a highly random process. The maintenance of a degenerated structure involves the calculation of the time-dependent reliability of the structure. This study introduced a reinforced concrete structure resistance decrease model and the related statistical parameters of uncertainty, analyzed the resistance decrease rules of the corroded bending element of a reinforced concrete structure, and finally calculated the time-dependent reliability of the corroded bending element, aiming to provide a specific theoretical basis for the application of time-dependent reliability theory.

  17. Influence Of Inspection Intervals On Mechanical System Reliability

    International Nuclear Information System (INIS)

    Zilberman, B.

    1998-01-01

    In this paper a methodology of reliability analysis of mechanical systems with latent failures is described. Reliability analysis of such systems must include appropriate usage of check intervals for latent failure detection. The methodology suggests that, based on system logic, the analyst decides at the beginning whether a system can fail actively or latently and propagates this approach through all system levels. All inspections are assumed to be perfect (all failures are detected and repaired, and no new failures are introduced as a result of the maintenance). Additional assumptions are that the mission time is much smaller than the check intervals and that all components have constant failure rates. Analytical expressions for reliability calculations are provided, based on fault tree and Markov modeling techniques (for two- and three-redundant systems with inspection intervals). The proposed methodology yields more accurate results than are obtained by not using check intervals or by using half of the check interval times. The conventional analysis, which assumes that at the beginning of each mission the system is as good as new, gives an optimistic prediction of system reliability. Some examples of reliability calculations of mechanical systems with latent failures and establishing optimum check intervals are provided
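For a constant failure rate and perfect inspections, the interval-averaged unavailability of a latently failing component has a standard closed form, which makes the trade-off against the check interval easy to see. The failure rate below is an assumed value, not one from the paper:

```python
import math

def mean_unavailability(lam, tau):
    """Average unavailability of a latently failing component inspected every
    tau hours (perfect inspections, constant failure rate lam):
    q = 1 - (1 - exp(-lam*tau)) / (lam*tau),
    which tends to lam*tau/2 when lam*tau << 1."""
    return 1.0 - (1.0 - math.exp(-lam * tau)) / (lam * tau)

lam = 1e-5  # failures/hour (assumed)
for tau in (720.0, 2190.0, 8760.0):  # monthly, quarterly, yearly checks
    print(f"tau={tau:7.0f} h  q={mean_unavailability(lam, tau):.5f}")
```

Lengthening the check interval increases the average unavailability roughly linearly, which is why ignoring check intervals (or halving them arbitrarily) distorts the reliability prediction as the abstract notes.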

  18. Reliability Analysis on NPP's Safety-Related Control Module with Field Data

    International Nuclear Information System (INIS)

    Lee, Sang Yong; Jung, Jae Hyun; Kim, Seong Hun

    2006-01-01

    The automatic control systems used in nuclear power plants (NPPs) consist of numerous control modules that can be considered a network of components interconnected in various complex ways. The control modules require higher reliability than industrial electronic products. Reliability prediction provides the rational basis of system designs and also indicates the safety significance of system operations. The aim of this paper is to minimize the deficiencies of the traditional reliability prediction methods by using available field return data, which makes a more realistic reliability assessment possible. SAMCHANG Enterprise Company (SEC) has established a database containing high-quality data at the module and component level from module maintenance in NPPs. On this basis, this paper compares the results of adding failure records (field data) to the Telcordia SR-332 reliability prediction model with MIL-HDBK-217F prediction results.

  19. Reliability Evaluation of Service-Oriented Architecture Systems Considering Fault-Tolerance Designs

    Directory of Open Access Journals (Sweden)

    Kuan-Li Peng

    2014-01-01

    strategies. Sensitivity analysis of SOA at both coarse and fine grain levels is also studied, which can be used to efficiently identify the critical parts within the system. Two SOA system scenarios based on real industrial practices are studied. Experimental results show that the proposed SOA model can be used to accurately depict the behavior of SOA systems. Additionally, a sensitivity analysis that quantifies the effects of system structure as well as fault tolerance on the overall reliability is also studied. On the whole, the proposed reliability modeling and analysis framework may help the SOA system service provider to evaluate the overall system reliability effectively and also make smarter improvement plans by focusing resources on enhancing reliability-sensitive parts within the system.

  20. A data-informed PIF hierarchy for model-based Human Reliability Analysis

    International Nuclear Information System (INIS)

    Groth, Katrina M.; Mosleh, Ali

    2012-01-01

    This paper addresses three problems associated with the use of Performance Shaping Factors in Human Reliability Analysis. (1) There are more than a dozen Human Reliability Analysis (HRA) methods that use Performance Influencing Factors (PIFs) or Performance Shaping Factors (PSFs) to model human performance, but there is not a standard set of PIFs used among the methods, nor is there a framework available to compare the PIFs used in various methods. (2) The PIFs currently in use are not defined specifically enough to ensure consistent interpretation of similar PIFs across methods. (3) There are few rules governing the creation, definition, and usage of PIF sets. This paper introduces a hierarchical set of PIFs that can be used for both qualitative and quantitative HRA. The proposed PIF set is arranged in a hierarchy that can be collapsed or expanded to meet multiple objectives. The PIF hierarchy has been developed with respect to a set of fundamental principles necessary for PIF sets, which are also introduced in this paper. This paper includes definitions of the PIFs to allow analysts to map the proposed PIFs onto current and future HRA methods. The standardized PIF hierarchy will allow analysts to combine different types of data and will therefore make the best use of the limited data in HRA. The collapsible hierarchy provides the structure necessary to combine multiple types of information without reducing the quality of the information.

  1. A general software reliability process simulation technique

    Science.gov (United States)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.

  2. THE SIMULATION DIAGNOSTIC METHODS AND REGENERATION WAYS OF REINFORCED - CONCRETE CONSTRUCTIONS OF BRIDGES IN PROVIDING THEIR OPERATING RELIABILITY AND LONGEVITY

    Directory of Open Access Journals (Sweden)

    B. V. Savchinskiy

    2010-03-01

    Full Text Available On the basis of an analysis of existing diagnostic methods and regeneration ways for reinforced-concrete bridge constructions, recommendations are offered on the introduction of new modern technologies for the renewal of reinforced-concrete bridge constructions to provide their operating reliability and longevity.

  3. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from systems of monitoring, telemetry, control, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. During the design and construction phases of the manufacturing of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Various types of components perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task performance and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, together with mathematical models based on this principle and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.

  4. Non-periodic preventive maintenance with reliability thresholds for complex repairable systems

    International Nuclear Information System (INIS)

    Lin, Zu-Liang; Huang, Yeu-Shiang; Fang, Chih-Chiang

    2015-01-01

    In general, a non-periodic condition-based PM policy with different condition variables is often more effective than a periodic age-based policy for deteriorating complex repairable systems. In this study, system reliability is estimated and used as the condition variable, and three reliability-based PM models are then developed with consideration of different scenarios, which can assist in evaluating the maintenance cost for each scenario. The proposed approach provides the optimal reliability thresholds and PM schedules in advance, by which the system availability and quality can be ensured and the organizational resources can be well prepared and managed. The results of the sensitivity analysis indicate that PM activities performed at a high reliability threshold can not only significantly improve the system availability but also efficiently extend the system lifetime, although such a PM strategy is more costly than that for a low reliability threshold. The optimal reliability threshold increases along with the number of PM activities to prevent future breakdowns caused by severe deterioration, and thus substantially reduces repair costs. - Highlights: • The PM problems for repairable deteriorating systems are formulated. • The structural properties of the proposed PM models are investigated. • The corresponding algorithms to find the optimal PM strategies are provided. • Imperfect PM activities are allowed to reduce the occurrences of breakdowns. • Managers are provided with insights about the critical factors in the planning stage.
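The reliability-threshold idea can be illustrated with a small sketch. Assuming a hypothetical Weibull(β, η) lifetime model and a simple age-reduction factor for imperfect PM (none of these numbers or modeling choices come from the paper), the PM times are the instants at which the reliability, evaluated at the effective age, drops to the threshold:

```python
import math

def pm_schedule(beta: float, eta: float, r_th: float, n_pm: int,
                restore: float = 0.8) -> list:
    """Sketch: clock times of condition-based PM actions triggered when
    Weibull(beta, eta) reliability at the effective age falls to r_th.
    Each imperfect PM removes the fraction `restore` of the effective
    age (restore=1.0 would be as-good-as-new). Values hypothetical."""
    t_th = eta * (-math.log(r_th)) ** (1.0 / beta)  # age where R(t) = r_th
    age, clock, times = 0.0, 0.0, []
    for _ in range(n_pm):
        clock += t_th - age            # run until effective age hits t_th
        times.append(clock)
        age = (1.0 - restore) * t_th   # imperfect PM: residual age remains
    return times

print(pm_schedule(beta=2.0, eta=1000.0, r_th=0.9, n_pm=3))
```

Because each PM leaves some residual age, every interval after the first is shorter than the initial one, mirroring the abstract's point that maintenance must intensify as deterioration accumulates.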

  5. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    Science.gov (United States)

    1981-06-01

    Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching...System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse...line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS

  6. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  7. Development of reliable pavement models.

    Science.gov (United States)

    2011-05-01

    The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit response surface, in plac...

  8. Customer-Provider Strategic Alignment: A Maturity Model

    Science.gov (United States)

    Luftman, Jerry; Brown, Carol V.; Balaji, S.

    This chapter presents a new model for assessing the maturity of a customer-provider relationship from a collaborative service delivery perspective: the Customer-Provider Strategic Alignment Maturity (CPSAM) Model. This model builds on recent research for effectively managing the customer-provider relationship in IT service outsourcing contexts and a validated model for assessing alignment across internal IT service units and their business customers within the same organization. After reviewing relevant literature by service science and information systems researchers, the six overarching components of the maturity model are presented: value measurements, governance, partnership, communications, human resources and skills, and scope and architecture. A key assumption of the model is that all of the components need to be addressed to assess and improve customer-provider alignment. Examples of specific metrics for measuring the maturity level of each component over the five levels of maturity are also presented.

  9. Reliable low precision simulations in land surface models

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
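The splitting strategy described above can be illustrated in miniature. This sketch (illustrative only, not the paper's soil-diffusion model) emulates single precision with a round-trip through IEEE-754 `float32`: tiny per-step increments are rounded away when accumulated directly into a large single-precision state, but recovered when only the small accumulator is kept in higher precision.

```python
import struct

def f32(x: float) -> float:
    """Round a Python float to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

steps, increment = 100_000, 1e-5   # hypothetical slow-process increments

# Direct accumulation in single precision: each increment is below half
# an ulp of the state (~3e-5 near 1000), so it is rounded to nothing.
state = f32(1000.0)
for _ in range(steps):
    state = f32(state + increment)

# Split strategy: large base stays in single precision, only the small
# accumulator is carried in double precision.
base, acc = f32(1000.0), 0.0
for _ in range(steps):
    acc += increment
result = base + acc

print(state, result)   # increments lost vs. recovered
```

The direct loop never moves off 1000.0, while the split version accumulates the full 1.0 of forcing, which is the essence of isolating a "small higher precision part" inside an otherwise low-precision model.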

  10. Modeling and simulation for microelectronic packaging assembly manufacturing, reliability and testing

    CERN Document Server

    Liu, Sheng

    2011-01-01

    Although there is increasing need for modeling and simulation in the IC package design phase, most assembly processes and various reliability tests are still based on the time-consuming "test and try out" method to obtain the best solution. Modeling and simulation can easily ensure virtual Design of Experiments (DoE) to achieve the optimal solution. This has greatly reduced the cost and production time, especially for new product development. Using modeling and simulation will become increasingly necessary for future advances in 3D package development. In this book, Liu and Liu allow people

  11. DIRAC reliable data management for LHCb

    CERN Document Server

    Smith, A C

    2008-01-01

    DIRAC, LHCb's Grid Workload and Data Management System, utilizes WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb's Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for these files in replica and bookkeeping catalogues, allowing dataset selection and localization. The DMS controls the movement of files in a redundant fashion whilst providing utilities for accessing all metadata. To do these tasks effectively the DMS requires complete self integrity between its components and external physical storage. The DMS provides highly redundant management of all LHCb data to leverage available storage resources and to manage transient errors in underlying services. It provides data driven and reliable distribution of files as well as reliable job output upload, utilizing VO Boxes at LHCb Tier1 sites ...

  12. Reliability of Wireless Sensor Networks

    Science.gov (United States)

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (thereby increasing the network lifetime) and to increase the reliability of the network (thereby improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on the routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of power consumption on the reliability of WSNs. PMID:25157553
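The reliability/energy trade-off of multipath routing can be sketched with textbook series-parallel formulas. This is a simplification of the paper's model, which also tracks battery level; the link reliabilities and hop counts below are hypothetical.

```python
def path_reliability(link_reliabilities: list) -> float:
    """Series structure: a path delivers only if every hop delivers."""
    r = 1.0
    for p in link_reliabilities:
        r *= p
    return r

def multipath_reliability(paths: list) -> float:
    """Parallel structure: delivery fails only if all disjoint paths
    fail (hop failures assumed independent)."""
    fail = 1.0
    for path in paths:
        fail *= 1.0 - path_reliability(path)
    return 1.0 - fail

# Two disjoint 3-hop paths with 0.95-reliable links: higher delivery
# probability than one path, at roughly double the transmission energy.
one = multipath_reliability([[0.95] * 3])
two = multipath_reliability([[0.95] * 3] * 2)
print(one, two)
```

The jump from one path to two buys a large reliability gain at the cost of duplicated transmissions, which is exactly the conflict the abstract describes.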

  13. Research on the Reliability Analysis of the Integrated Modular Avionics System Based on the AADL Error Model

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2018-01-01

    Full Text Available In recent years, the integrated modular avionics (IMA) concept has been introduced to replace traditional federated avionics. Different avionics functions are hosted on a shared IMA platform, and IMA adopts partition technologies to provide logical isolation among different functions. The IMA architecture can provide more sophisticated and powerful avionics functionality; meanwhile, the failure propagation patterns in IMA are more complex. The feature of resource sharing introduces some unintended interconnections among different functions, which makes the failure propagation modes more complex. Therefore, this paper proposes an Architecture Analysis and Design Language (AADL) based method to establish the reliability model of the IMA platform. The error behavior of single software and hardware components in the IMA system is modeled. The corresponding AADL error model of failure propagation among components, and between software and hardware, is given. Finally, the display function of the IMA platform is taken as an example to illustrate the effectiveness of the proposed method.

  14. Some areas of reliability technique which have been neglected to some extent - maintainability - human reliability - mechanical reliability - repairable systems

    International Nuclear Information System (INIS)

    Akersten, P.A.

    1985-01-01

    The present thesis consists of four papers, three of which are of an expository nature and one more theoretical. The first two papers have a natural coupling to the man-machine interface. The first paper is devoted to the concept of maintainability and the role of man as maintenance technician. The second paper discusses aspects of human reliability, mainly studying man as operator. However, maintenance tasks can be studied in the same manner. The third paper concerns reliability prediction for mechanical components. This is an area of vital importance for the reliability practitioner, who needs realistic and easy-to-use mathematical models for different failure modes. The fourth paper discusses mathematical models for repairable systems, especially the problem of testing whether a constant event intensity model is adequate or not. (author)

  15. A holistic framework of degradation modeling for reliability analysis and maintenance optimization of nuclear safety systems

    International Nuclear Information System (INIS)

    Lin, Yanhui

    2016-01-01

    Components of nuclear safety systems are in general highly reliable, which leads to a difficulty in modeling their degradation and failure behaviors due to the limited amount of data available. Besides, the complexity of such modeling task is increased by the fact that these systems are often subject to multiple competing degradation processes and that these can be dependent under certain circumstances, and influenced by a number of external factors (e.g. temperature, stress, mechanical shocks, etc.). In this complicated problem setting, this PhD work aims to develop a holistic framework of models and computational methods for the reliability-based analysis and maintenance optimization of nuclear safety systems taking into account the available knowledge on the systems, degradation and failure behaviors, their dependencies, the external influencing factors and the associated uncertainties. The original scientific contributions of the work are: (1) For single components, we integrate random shocks into multi-state physics models for component reliability analysis, considering general dependencies between the degradation and two types of random shocks. (2) For multi-component systems (with a limited number of components): (a) a piecewise-deterministic Markov process modeling framework is developed to treat degradation dependency in a system whose degradation processes are modeled by physics-based models and multi-state models; (b) epistemic uncertainty due to incomplete or imprecise knowledge is considered and a finite-volume scheme is extended to assess the (fuzzy) system reliability; (c) the mean absolute deviation importance measures are extended for components with multiple dependent competing degradation processes and subject to maintenance; (d) the optimal maintenance policy considering epistemic uncertainty and degradation dependency is derived by combining finite-volume scheme, differential evolution and non-dominated sorting differential evolution; (e) the

  16. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    Science.gov (United States)

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs: Leadership, Safety Culture, and Robust Process Improvement®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model: teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  17. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. The treatment draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. The book provides a history of human reliability analysis and includes examples of the application of the systems approach.

  18. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
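The single-loop treatment of both uncertainty types can be sketched with plain Monte Carlo. In this illustrative example (all numbers are hypothetical, and the paper's auxiliary-variable and FORM/SORM machinery is not reproduced), each sample first draws an epistemically uncertain distribution parameter, then the aleatory variable conditioned on it, so one loop covers both layers:

```python
import random

random.seed(0)

def failure_probability(n: int = 200_000) -> float:
    """Single-loop Monte Carlo sketch for a limit state g = capacity - load.
    Epistemic layer: the mean of the load distribution is itself uncertain.
    Aleatory layer: the load given that mean. Numbers are illustrative."""
    capacity = 10.0
    failures = 0
    for _ in range(n):
        mu = random.gauss(6.0, 0.5)    # epistemic: uncertain mean load
        load = random.gauss(mu, 1.0)   # aleatory: load given that mean
        if load > capacity:            # limit state violated
            failures += 1
    return failures / n

print(failure_probability())
```

Marginally the load is Normal(6, sqrt(1 + 0.25)), so the estimate should sit near the small tail probability beyond the capacity; the point of the single-loop form is that no nested inner simulation per epistemic sample is needed.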

  19. EDF/EPRI collaborative program on operator reliability experiments

    International Nuclear Information System (INIS)

    Villemeur, A.; Meslin, T.; Mosneron, F.; Worledge, D.H.; Joksimovich, V.; Spurgin, A.J.

    1988-01-01

    Electricite de France (EDF) and Electric Power Research Institute (EPRI) have been involved in human reliability studies over the last few years, in the context of improvements in human reliability assessment (HRA) methodologies, and have been following a systematic process since 1982 which consists of addressing the following five ingredients: - First, classify human interactions into a limited number of classes. - Second, introduce an acceptable framework to organize the application of HRA to PRA studies. - Third, select approach(es) to quantification. - Fourth, test promising models. - Fifth, establish an appropriate data base for tested model(s) with regard to specific applications. EPRI has just recently completed Phase I of the fourth topic. This primarily focused on testing the fundamental hypotheses behind the human cognitive reliability (HCR) correlation, using power plant simulators. EDF has been carrying out simulator studies since 1980, both for man-machine interface validation and HRA data collection. This background of experience provided a stepping stone for the EPRI project. On the other hand, before 1986, EDF had mainly been concentrating on getting qualitative insights from the tests and lacked experience in quantitative analysis and modeling, while EPRI had made advances in this latter area. Before the EPRI Operator Reliability Experiments (ORE) project was initiated, it was abundantly clear to EPRI and EDF that cooperation between the two could be useful and that both parties could gain from the cooperation.

  20. Human reliability data, human error and accident models--illustration through the Three Mile Island accident analysis

    International Nuclear Information System (INIS)

    Le Bot, Pierre

    2004-01-01

    Our first objective is to provide a panorama of Human Reliability data used in EDF's Safety Probabilistic Studies, and then, since these concepts are at the heart of Human Reliability and its methods, to go over the notion of human error and the understanding of accidents. We are not sure today that it is actually possible to provide in this field a foolproof and productive theoretical framework. Consequently, the aim of this article is to suggest potential paths of action and to provide information on EDF's progress along those paths which enables us to produce the most potentially useful Human Reliability analyses while taking into account current knowledge in Human Sciences. The second part of this article illustrates our point of view as EDF researchers through the analysis of the most famous civil nuclear accident, the Three Mile Island unit accident in 1979. Analysis of this accident allowed us to validate our positions regarding the need to move, in the case of an accident, from the concept of human error to that of systemic failure in the operation of systems such as a nuclear power plant. These concepts rely heavily on the notion of distributed cognition and we will explain how we applied it. These concepts were implemented in the MERMOS Human Reliability Probabilistic Assessment methods used in the latest EDF Probabilistic Human Reliability Assessment. Besides the fact that it is not very productive to focus exclusively on individual psychological error, the design of the MERMOS method and its implementation have confirmed two things: the significance of qualitative data collection for Human Reliability, and the central role held by Human Reliability experts in building knowledge about emergency operation, which in effect consists of Human Reliability data collection. The latest conclusion derived from the implementation of MERMOS is that, considering the difficulty in building 'generic' Human Reliability data in the field we are involved in, the best

  1. Reliability estimation of semi-Markov systems: a case study

    International Nuclear Information System (INIS)

    Ouhbi, Brahim; Limnios, Nikolaos

    1997-01-01

    In this article, we are concerned with the estimation of the reliability and the availability of a turbo-generator rotor using a set of data observed in a real engineering situation provided by Electricite de France (EDF). The rotor is modeled by a semi-Markov process, which is used to estimate the rotor's reliability and availability. To do this, we present a method for estimating the semi-Markov kernel from censored data.

  2. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as the plant protection system. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary of the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic.

  3. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as the plant protection system. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary of the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the dynamic safety system (DSS) shows that the estimated reliability of the system is quite reasonable and realistic. (author)
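The two-phase structure described in these abstracts, where subsystem reliabilities produced by a low-level analysis feed a high-level fault tree, can be sketched with basic gate formulas. The subsystem unreliabilities and the tree shape below are hypothetical, and independence of basic events is assumed:

```python
def and_gate(failure_probs: list) -> float:
    """Top event occurs only if all inputs fail (redundant subsystems)."""
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

def or_gate(failure_probs: list) -> float:
    """Top event occurs if any input fails (series subsystems),
    assuming independent basic events."""
    ok = 1.0
    for q in failure_probs:
        ok *= 1.0 - q
    return 1.0 - ok

# High-level phase sketch: subsystem unreliabilities (as would come from
# a low-level software analysis; values hypothetical) combined through a
# small fault tree: two redundant channels, in series with a shared voter.
channels = and_gate([1e-3, 1e-3])            # both channels must fail
system_failure = or_gate([channels, 1e-5])   # ...or the voter fails
print(1.0 - system_failure)                  # system reliability
```

Note how the shared voter dominates the top-event probability once the channels are redundant, which is the kind of insight the high-level phase is meant to expose.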

  4. Analysis of Parking Reliability Guidance of Urban Parking Variable Message Sign System

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2012-01-01

Full Text Available Operators of parking guidance and information systems (PGIS) often encounter difficulty in determining when and how to provide reliable car park availability information to drivers. Reliability has become a key factor in ensuring the benefits of urban PGIS. The present paper is the first to define the guiding parking reliability of urban parking variable message signs (VMSs). By analyzing parking choice between guided and optional parking lots, a guiding parking reliability model was constructed, and a mathematical program was formulated to determine the guiding parking reliability of a VMS. The procedures were applied to a numerical example, and the factors that affect guiding reliability were analyzed. Changes in the number of parking berths and the display conditions of the VMS were found to be the most important factors influencing guiding reliability. The parking guidance VMS achieved the best benefit when the parking supply was close to or less than the demand. The combination of a guiding parking reliability model and parking choice behavior offers potential for PGIS operators to reduce traffic congestion in central city areas.

  5. Reliability of Current Biokinetic and Dosimetric Models for Radionuclides: A Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Leggett, Richard Wayne [ORNL]; Eckerman, Keith F. [ORNL]; Meck, Robert A. [U.S. Nuclear Regulatory Commission]

    2008-10-01

    This report describes the results of a pilot study of the reliability of the biokinetic and dosimetric models currently used by the U.S. Nuclear Regulatory Commission (NRC) as predictors of dose per unit internal or external exposure to radionuclides. The study examines the feasibility of critically evaluating the accuracy of these models for a comprehensive set of radionuclides of concern to the NRC. Each critical evaluation would include: identification of discrepancies between the models and current databases; characterization of uncertainties in model predictions of dose per unit intake or unit external exposure; characterization of variability in dose per unit intake or unit external exposure; and evaluation of prospects for development of more accurate models. Uncertainty refers here to the level of knowledge of a central value for a population, and variability refers to quantitative differences between different members of a population. This pilot study provides a critical assessment of models for selected radionuclides representing different levels of knowledge of dose per unit exposure. The main conclusions of this study are as follows: (1) To optimize the use of available NRC resources, the full study should focus on radionuclides most frequently encountered in the workplace or environment. A list of 50 radionuclides is proposed. (2) The reliability of a dose coefficient for inhalation or ingestion of a radionuclide (i.e., an estimate of dose per unit intake) may depend strongly on the specific application. Multiple characterizations of the uncertainty in a dose coefficient for inhalation or ingestion of a radionuclide may be needed for different forms of the radionuclide and different levels of information of that form available to the dose analyst. 
(3) A meaningful characterization of variability in dose per unit intake of a radionuclide requires detailed information on the biokinetics of the radionuclide and hence is not feasible for many infrequently

  6. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    Science.gov (United States)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. The system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of the failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel so that suitable maintenance policies/strategies can be adopted and practised to enhance the performance of the urea synthesis system of the fertilizer plant.
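The solution scheme described above can be sketched for the simplest possible case: a single repairable subsystem with assumed exponential failure and repair rates (the plant's actual multi-subsystem model is far larger). The Chapman-Kolmogorov equations are integrated with classical fourth-order Runge-Kutta and the long-run availability is read off at the end.

```python
# Illustrative sketch (not the plant's actual model): one repairable
# subsystem with failure rate lam and repair rate mu.
# Chapman-Kolmogorov equations:
#   dP0/dt = -lam*P0 + mu*P1   (state 0 = up, state 1 = down)
#   dP1/dt =  lam*P0 - mu*P1

def derivs(p, lam, mu):
    p0, p1 = p
    return (-lam * p0 + mu * p1, lam * p0 - mu * p1)

def rk4_step(p, h, lam, mu):
    """One classical 4th-order Runge-Kutta step."""
    k1 = derivs(p, lam, mu)
    k2 = derivs((p[0] + h / 2 * k1[0], p[1] + h / 2 * k1[1]), lam, mu)
    k3 = derivs((p[0] + h / 2 * k2[0], p[1] + h / 2 * k2[1]), lam, mu)
    k4 = derivs((p[0] + h * k3[0], p[1] + h * k3[1]), lam, mu)
    return (p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

lam, mu = 0.01, 0.5          # assumed failure/repair rates (per hour)
p = (1.0, 0.0)               # start in the working state
for _ in range(20000):       # integrate to t = 2000 h with h = 0.1
    p = rk4_step(p, 0.1, lam, mu)

availability = p[0]
print(availability)          # long-run availability, approx mu/(lam + mu)
```

For this two-state model the numerical answer can be checked against the analytic steady state mu/(lam + mu); the numerical machinery is what carries over to systems too large for a closed form.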

  7. Assessing the reliability of predictive activity coefficient models for molecules consisting of several functional groups

    Directory of Open Access Journals (Sweden)

    R. P. Gerber

    2013-03-01

Full Text Available Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the class of models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model, employing experimental infinite dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44 as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.
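The error metric quoted above, mean absolute deviation in ln-units between predicted and experimental infinite-dilution activity coefficients, can be sketched in a few lines. The gamma values below are fabricated for illustration, not data from the study.

```python
import math

# Sketch of the comparison metric: mean absolute error in ln-units.
# Values are made up, not the paper's 2236-mixture data set.
gamma_exp  = [2.10, 5.30, 0.85, 12.0]   # "experimental" infinite-dilution gammas
gamma_pred = [2.40, 4.90, 0.80, 14.5]   # "model" predictions

mae = sum(abs(math.log(p) - math.log(e))
          for p, e in zip(gamma_pred, gamma_exp)) / len(gamma_exp)
print(round(mae, 3))
```

Working in ln-units puts over- and under-predictions of these strongly skewed quantities on a symmetric scale, which is why both models are scored this way.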

  8. Root cause analysis in support of reliability enhancement of engineering components

    International Nuclear Information System (INIS)

    Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.

    2014-01-01

Reliability-based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimate of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is essential to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failure, with particular attention to the identification of faults in component design, operation, surveillance, maintenance, training, procedures and policies which must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study complements the root cause analysis approach in event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)

  9. Reliability assessment of embedded digital system using multi-state function

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    2006-01-01

This work describes a combinatorial model for estimating the reliability of embedded digital systems by means of a multi-state function. The model includes a coverage model for the fault-handling techniques implemented in digital systems. These fault-handling techniques make it difficult for many types of components in a digital system to be treated as binary state, good or bad. The multi-state function provides a complete analysis of multi-state systems, as which digital systems can be regarded. Through the adaptation of the software operational profile flow to the multi-state function, the HW/SW interaction is also considered in estimating the reliability of the digital system. Using this model, we evaluate the reliability of one board controller in a digital system, the Interposing Logic System (ILS), which is installed in YGN nuclear power units 3 and 4. Since the proposed model is a generalized combinatorial model, its simplification yields the conventional model that treats the system as binary state. This modeling method is particularly attractive for embedded systems in which small-sized application software is implemented, since applying the method to systems with large software would require very laborious work.
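A minimal sketch of the coverage idea behind the multi-state treatment: each component is in one of three states (good, covered failure, uncovered failure) rather than two. The duplex-controller architecture, the failure probability and the coverage value below are illustrative assumptions, not figures from the ILS study.

```python
from itertools import product

# Hedged sketch of a coverage model: a duplex controller where each
# channel fails with probability q; a failure is "covered" (detected
# and handled) with coverage c. Any uncovered failure is fatal, and the
# system tolerates at most one covered failure. Numbers are assumed.
q, c = 1e-3, 0.95

# Per-channel state probabilities: a three-state (multi-state) component.
states = {"good": 1 - q, "covered": q * c, "uncovered": q * (1 - c)}

def system_works(s1, s2):
    if "uncovered" in (s1, s2):
        return False
    return (s1, s2).count("covered") <= 1

# System reliability: enumerate all multi-state combinations.
reliability = sum(states[s1] * states[s2]
                  for s1, s2 in product(states, states)
                  if system_works(s1, s2))
print(reliability)
```

Setting c = 1 collapses the model back to the conventional binary-state treatment, mirroring the simplification property claimed for the paper's generalized model.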

  10. [Reliability study in the measurement of the cusp inclination angle of a chairside digital model].

    Science.gov (United States)

    Xinggang, Liu; Xiaoxian, Chen

    2018-02-01

    This study aims to evaluate the reliability of the software Picpick in the measurement of the cusp inclination angle of a digital model. Twenty-one trimmed models were used as experimental objects. The chairside digital impression was then used for the acquisition of 3D digital models, and the software Picpick was employed for the measurement of the cusp inclination of these models. The measurements were repeated three times, and the results were compared with a gold standard, which was a manually measured experimental model cusp angle. The intraclass correlation coefficient (ICC) was calculated. The paired t test value of the two measurement methods was 0.91. The ICCs between the two measurement methods and three repeated measurements were greater than 0.9. The digital model achieved a smaller coefficient of variation (9.9%). The software Picpick is reliable in measuring the cusp inclination of a digital model.
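The repeatability statistic reported above can be reproduced in miniature. The sketch below computes a two-way consistency ICC, the ICC(3,1) form, on a fabricated 4 x 3 matrix of repeated cusp-angle measurements; the data are invented for illustration and the study may have used a different ICC variant.

```python
# Sketch of an ICC(3,1) computation from a two-way layout:
# rows = models (targets), columns = repeated measurements.
# The angles (degrees) are fabricated, not study data.
data = [[30, 31, 29],
        [25, 24, 26],
        [35, 36, 34],
        [20, 21, 19]]

n, k = len(data), len(data[0])
grand = sum(sum(row) for row in data) / (n * k)
row_means = [sum(row) / k for row in data]
col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # between targets
ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # between trials
ss_total = sum((x - grand) ** 2 for row in data for x in row)
ss_err = ss_total - ss_rows - ss_cols                     # residual

msr = ss_rows / (n - 1)
mse = ss_err / ((n - 1) * (k - 1))
icc = (msr - mse) / (msr + (k - 1) * mse)
print(round(icc, 3))  # values above 0.9 indicate excellent repeatability
```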

  11. Mission Reliability Estimation for Repairable Robot Teams

    Science.gov (United States)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the
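The redundancy-versus-repairability trade can be sketched in miniature. The two-module series robot, the per-module reliability, and the assumption that a spare module can always be swapped for a failed one are illustrative choices in the spirit of the study, not its actual model.

```python
# Hedged sketch: spare robot (redundancy) vs. spare parts (repairability).
# A robot needs two distinct modules, each with reliability r, in series.
r = 0.9
robot = r * r  # baseline: a single robot with no spares

# Option A: carry one complete spare robot (redundancy).
spare_robot = 1 - (1 - robot) ** 2

# Option B: carry one spare of each module (repairability), assuming a
# failed module can always be replaced by its spare.
spare_parts = (1 - (1 - r) ** 2) ** 2

print(robot, spare_robot, spare_parts)
```

Under these assumptions the spare-parts option beats the spare-robot option at the same part count, which echoes the study's finding that repairability can buy reliability more cheaply than whole-robot redundancy.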

  12. The reliability, accuracy and minimal detectable difference of a multi-segment kinematic model of the foot-shoe complex.

    Science.gov (United States)

    Bishop, Chris; Paul, Gunther; Thewlis, Dominic

    2013-04-01

Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when wearing shoes. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sufficiently sensitive to describe the kinematics of the foot-shoe complex and lower leg during walking gait. To achieve this, a new marker set was established, consisting of 25 markers applied to the shoe and skin surface, which informed a four-segment kinematic model of the foot-shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was good to excellent (ICC=0.75-0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC=0.68-0.99) than the inexperienced rater (ICC=0.38-0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint, MDD90=2.17-9.36°; tarsometatarsal joint, MDD90=1.03-9.29°; and metatarsophalangeal joint, MDD90=1.75-9.12°. The proposed thresholds are specific to the description of shod motion and can be used in future research aimed at comparing different footwear. Copyright © 2012 Elsevier B.V. All rights reserved.
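A common way to derive an MDD90 threshold from a reliability coefficient is SEM = SD * sqrt(1 - ICC) followed by MDD90 = SEM * sqrt(2) * 1.645; the paper's exact computation may differ, and the SD and ICC below are assumed values rather than study data.

```python
import math

# Hedged sketch of a standard MDD90 derivation (the study's exact
# procedure may differ). SD and ICC are assumed illustrative values.
sd, icc = 5.0, 0.90                      # between-trial SD (deg), reliability

sem = sd * math.sqrt(1 - icc)            # standard error of measurement
mdd90 = sem * math.sqrt(2) * 1.645       # 90% confidence change threshold
print(round(sem, 3), round(mdd90, 3))
```

The sqrt(2) accounts for comparing two measurements, each carrying measurement error, and 1.645 is the one-sided 90% normal quantile.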

  13. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)]

    2003-04-15

The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the three-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility in KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and preliminary design of the experiment has been carried out.

  14. Development of thermal hydraulic models for the reliable regulatory auditing code

    International Nuclear Information System (INIS)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.

    2003-04-01

The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the three-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility in KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and preliminary design of the experiment has been carried out.

  15. Single versus mixture Weibull distributions for nonparametric satellite reliability

    International Nuclear Information System (INIS)

    Castet, Jean-Francois; Saleh, Joseph H.

    2010-01-01

Long recognized as a critical design attribute for space systems, satellite reliability has not yet received the proper attention, as limited on-orbit failure data and statistical analyses can be found in the technical literature. To fill this gap, we recently conducted a nonparametric analysis of satellite reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we provide an advanced parametric fit, based on a mixture of Weibull distributions, and compare it with the single Weibull distribution model obtained with the Maximum Likelihood Estimation (MLE) method. We demonstrate that both parametric fits are good approximations of the nonparametric satellite reliability, but that the mixture Weibull distribution provides significantly better accuracy in capturing all the failure trends in the failure data, as evidenced by the analysis of the residuals and their quasi-normal dispersion.
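The two parametric forms being compared can be written down directly. All shape, scale and mixing parameters below are illustrative assumptions, not the paper's fitted values; the mixture pairs an infant-mortality component (shape < 1) with a wear-out component (shape > 1), which is the kind of behavior a single Weibull struggles to capture.

```python
import math

def weibull_R(t, beta, theta):
    """Single Weibull reliability: exp(-(t/theta)^beta)."""
    return math.exp(-((t / theta) ** beta))

def mixture_R(t, alpha, b1, t1, b2, t2):
    """Two-component Weibull mixture reliability."""
    return alpha * weibull_R(t, b1, t1) + (1 - alpha) * weibull_R(t, b2, t2)

t = 5.0  # years on orbit (illustrative)
single = weibull_R(t, 0.8, 100.0)                 # assumed single-Weibull fit
mixed = mixture_R(t, 0.05, 0.4, 0.5, 3.0, 30.0)   # assumed mixture fit
print(single, mixed)
```

In practice both parameter sets would be obtained by MLE against the nonparametric estimate; the sketch only shows the functional forms under comparison.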

  16. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    Science.gov (United States)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely the conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power supply of a control unit in a failure mode is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the absorbing-set results are plotted from the differential equations and verified. Through forward inference, the reliability value of the control unit is determined under different modes. Finally, weak nodes in the control unit are identified.

  17. A rule induction approach to improve Monte Carlo system reliability assessment

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.

    2003-01-01

A Decision Tree (DT) approach to building empirical models for use in Monte Carlo reliability evaluation is presented. The main idea is to develop an estimation algorithm by training a model on a restricted data set, and replacing the Evaluation Function (EF) with a simpler calculation which provides reasonably accurate model outputs. The proposed approach is illustrated with two systems of different sizes, represented by their equivalent networks. The robustness of the DT approach as an approximate method to replace the EF is also analysed. Excellent system reliability results are obtained by training a DT with a small amount of information.
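The core idea, replacing the evaluation function inside the Monte Carlo loop with a simpler learned rule, can be sketched on the classic five-edge bridge network. Here the "tree" is written out by hand as nested rules rather than induced from training data, and the edge reliability is an assumed value.

```python
import random

# Sketch: swap the exact evaluation function (EF) for a cheaper
# decision-tree-style surrogate inside a Monte Carlo reliability loop.
# Network: s-t bridge with edges e1(s-a), e2(s-b), e3(a-b), e4(a-t), e5(b-t).

def ef_exact(e):
    """Exact EF: s-t connectivity via the minimal path sets."""
    e1, e2, e3, e4, e5 = e
    return (e1 and e4) or (e2 and e5) or (e1 and e3 and e5) or (e2 and e3 and e4)

def ef_tree(e):
    """Decision-tree-style surrogate (exact for this tiny network;
    in general a trained DT is only approximately correct)."""
    e1, e2, e3, e4, e5 = e
    if e1:
        if e4:
            return True
        return (e3 and e5) or (e2 and e5)
    return e2 and (e5 or (e3 and e4))

random.seed(1)
p = 0.9        # per-edge reliability (assumed)
n = 20000      # Monte Carlo trials
hits = sum(ef_tree([random.random() < p for _ in range(5)]) for _ in range(n))
print(hits / n)  # Monte Carlo estimate of system reliability
```

The payoff is that each trial evaluates a few comparisons instead of a full connectivity search, which is where the speed-up comes from on large networks.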

  18. Weibull distribution in reliability data analysis in nuclear power plant

    International Nuclear Information System (INIS)

    Ma Yingfei; Zhang Zhijian; Zhang Min; Zheng Gangyang

    2015-01-01

Reliability is an important issue affecting each stage of the life cycle, from birth to death, of a product or a system. Reliability engineering includes equipment failure data processing, quantitative assessment of system reliability, maintenance, etc. Reliability data refers to the variety of data that describe the reliability of a system or component during its operation. These data may be in the form of numbers, graphics, symbols, texts and curves. Quantitative reliability assessment is the task of reliability data analysis: it provides the information needed to prevent, detect and correct defects in the reliability design. Reliability data analysis proceeds through the various stages of the product life cycle and the associated reliability activities. The reliability data of systems, structures and components (SSCs) in nuclear power plants is a key input to probabilistic safety assessment (PSA), reliability-centered maintenance and life cycle management. The Weibull distribution is widely used in reliability engineering, failure analysis and industrial engineering, for example to represent manufacturing and delivery times. It is commonly used to model time to failure, time to repair and material strength. In this paper, an improved Weibull distribution is introduced to analyze the reliability data of SSCs in nuclear power plants, and an example is given to present the result of the new method. The Weibull distribution has a very strong ability to fit the reliability data of mechanical equipment in nuclear power plants and is a widely used mathematical model for reliability analysis. The methods in common use are the two-parameter and three-parameter Weibull distributions. Through comparison and analysis, the three-parameter Weibull distribution fits the data better: it reflects the reliability characteristics of the equipment and is closer to the actual situation. (author)

  19. Investigation of reliability indicators of information analysis systems based on Markov’s absorbing chain model

    Science.gov (United States)

    Gilmanshin, I. R.; Kirpichnikov, A. P.

    2017-09-01

As a result of studying the algorithm of functioning of the early detection module for excessive losses, it is proven that the module can be modelled using absorbing Markov chains. Of particular interest is the study of the probability characteristics of the algorithm in order to identify the relationships between the reliability indicators of individual elements, or the probabilities of occurrence of certain events, and the likelihood of transmission of reliable information. The relations identified during the analysis allow thresholds to be set for the reliability characteristics of the system components.
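The absorbing-chain machinery implied above can be sketched with an assumed two-transient, two-absorbing structure; the module's real transition probabilities are not given in the record, so all numbers and state labels below are illustrative. The fundamental matrix N = (I - Q)^-1 yields the absorption probabilities B = NR.

```python
# Hedged sketch of an absorbing Markov chain analysis.
# Transient states: 0 = "signal detected", 1 = "signal being verified".
# Absorbing states: "reliable report issued" and "information lost".
Q = [[0.0, 0.8],    # transient-to-transient transition probabilities
     [0.1, 0.0]]
R = [[0.05, 0.15],  # transient-to-absorbing transition probabilities
     [0.85, 0.05]]

# Fundamental matrix N = (I - Q)^-1, inverted by hand for the 2x2 case.
det = (1 - Q[0][0]) * (1 - Q[1][1]) - Q[0][1] * Q[1][0]
N = [[(1 - Q[1][1]) / det, Q[0][1] / det],
     [Q[1][0] / det, (1 - Q[0][0]) / det]]

# B[i][j]: probability of ending in absorbing state j from transient state i.
B = [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
p_reliable = B[0][0]  # chance a detection ends in a reliable report
print(round(p_reliable, 4))
```

Tightening an element's transition probabilities and recomputing B is exactly the kind of threshold-setting exercise the abstract describes.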

  20. Improving machinery reliability

    CERN Document Server

    Bloch, Heinz P

    1998-01-01

    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  1. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

High reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator, one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution describes the forum and advertises its usage in the community.

  2. Using Evidence Credibility Decay Model for dependence assessment in human reliability analysis

    International Nuclear Information System (INIS)

    Guo, Xingfeng; Zhou, Yanhui; Qian, Jin; Deng, Yong

    2017-01-01

Highlights: • A new computational model is proposed for dependence assessment in HRA. • We combine the three factors “CT”, “TR” and “SP” within Dempster–Shafer theory. • The BBA of “SP” is reconstructed with a discounting rate based on the ECDM. • Simulation experiments illustrate the efficiency of the proposed method. - Abstract: Dependence assessment among human errors plays an important role in human reliability analysis. When dependence exists between two sequential tasks in human reliability analysis and the preceding task fails, the failure probability of the following task is higher than if the preceding task had succeeded. Typically, three major factors are considered: “Closeness in Time” (CT), “Task Relatedness” (TR) and “Similarity of Performers” (SP). Assuming TR is unchanged, both SP and CT influence the degree of dependence, and in this paper SP is discounted over time as the result of combining the two factors. A new computational model is proposed based on the Dempster–Shafer Evidence Theory (DSET) and the Evidence Credibility Decay Model (ECDM) to assess the dependence between tasks in human reliability analysis. First, the factors influencing human tasks are identified and the basic belief assignments (BBAs) of each factor are constructed based on expert evaluation. Then, the BBA of SP is discounted as the result of combining the two factors and reconstructed by using the ECDM, and the factors are integrated into a fused BBA. Finally, the dependence level is calculated based on the fused BBA. Experimental results demonstrate that the proposed model not only quantitatively describes how the input factors influence the dependence level, but also shows exactly how the dependence level changes under different configurations of the input factors.
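The two mechanisms named in the abstract, credibility discounting of a BBA and Dempster's rule of combination, can be sketched as follows. The frame {low, high} for the dependence level, the discounting rate, and all mass values are assumptions for illustration, not the paper's data.

```python
# Hedged sketch: Shafer discounting of the "SP" BBA by a time-driven
# factor, then Dempster's rule to fuse it with the "TR" BBA.
FRAME = frozenset({"low", "high"})   # assumed frame: dependence level

def discount(m, alpha):
    """Discount a BBA: keep fraction alpha of each mass, move the rest
    to total ignorance (the whole frame)."""
    out = {A: alpha * v for A, v in m.items() if A != FRAME}
    out[FRAME] = alpha * m.get(FRAME, 0.0) + (1 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination with conflict renormalisation."""
    fused, conflict = {}, 0.0
    for A, v in m1.items():
        for B, w in m2.items():
            inter = A & B
            if inter:
                fused[inter] = fused.get(inter, 0.0) + v * w
            else:
                conflict += v * w
    return {A: v / (1 - conflict) for A, v in fused.items()}

L, H = frozenset({"low"}), frozenset({"high"})
m_tr = {L: 0.6, H: 0.2, FRAME: 0.2}   # "Task Relatedness" evidence (assumed)
m_sp = {L: 0.1, H: 0.7, FRAME: 0.2}   # "Similarity of Performers" (assumed)
m_sp = discount(m_sp, 0.5)            # credibility decayed over elapsed time
fused = dempster(m_tr, m_sp)
print({tuple(sorted(A)): round(v, 3) for A, v in fused.items()})
```

Shrinking the discounting rate pushes the SP evidence toward ignorance, so the fused BBA leans increasingly on TR alone, which is the qualitative behavior the ECDM is meant to capture.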

  3. Reliability of Power Electronic Converter Systems

    DEFF Research Database (Denmark)

    ...for advancing the reliability, availability, system robustness, and maintainability of PECS at different levels of complexity. Drawing on the experience of an international team of experts, this book explores the reliability of PECS, covering topics including: an introduction to reliability engineering in power electronic converter systems; anomaly detection and remaining-life prediction for power electronics; reliability of DC-link capacitors in power electronic converters; reliability of power electronics packaging; modeling for life-time prediction of power semiconductor modules; minimization of DC-link capacitance in power electronic converter systems; wind turbine systems; smart control strategies for improved reliability of power electronics systems; lifetime modelling; power module lifetime test and state monitoring; tools for performance and reliability analysis of power electronics systems; fault...

  4. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    Science.gov (United States)

    Bavuso, S. J.

    1984-01-01

A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Discussed are the numerous factors that potentially have a degrading effect on system reliability, and the ways in which these factors, which are peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  5. Testing comparison models of DASS-12 and its reliability among adolescents in Malaysia.

    Science.gov (United States)

    Osman, Zubaidah Jamil; Mukhtar, Firdaus; Hashim, Hairul Anuar; Abdul Latiff, Latiffah; Mohd Sidik, Sherina; Awang, Hamidin; Ibrahim, Normala; Abdul Rahman, Hejar; Ismail, Siti Irma Fadhilah; Ibrahim, Faisal; Tajik, Esra; Othman, Norlijah

    2014-10-01

The 21-item Depression, Anxiety and Stress Scale (DASS-21) is frequently used in non-clinical research to measure mental health factors among adults. However, previous studies have concluded that the 21 items are not stable for use among the adolescent population. Thus, the aims of this study are to examine the factor structure and to report on the reliability of the refined version of the DASS, which consists of 12 items. A total of 2850 students (aged 13 to 17 years old) from the three major ethnic groups in Malaysia completed the DASS-21. The study was conducted at 10 randomly selected secondary schools in the northern state of Peninsular Malaysia. The study population comprised secondary school students (Forms 1, 2 and 4) from the selected schools. Based on the results of the EFA stage, 12 items were included in a final CFA to test the fit of the model. Using maximum likelihood procedures to estimate the model, the selected fit indices indicated a close model fit (χ(2)=132.94, df=57, p=.000; CFI=.96; RMR=.02; RMSEA=.04). Moreover, significant loadings of all the unstandardized regression weights implied acceptable convergent validity. Besides the convergent validity of the items, discriminant validity of the subscales was also evident from the moderate latent factor inter-correlations, which ranged from .62 to .75. The subscale reliability was further estimated using Cronbach's alpha, and adequate reliability of the subscales was obtained (Total=.76; Depression=.68; Anxiety=.53; Stress=.52). The new version of the 12-item DASS for adolescents in Malaysia (DASS-12) is reliable and has a stable factor structure, and thus it is a useful instrument for distinguishing between depression, anxiety and stress. Copyright © 2014 Elsevier Inc. All rights reserved.
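The subscale reliabilities quoted above are Cronbach's alpha values. The sketch below computes alpha for a fabricated five-respondent, three-item subscale; the response matrix is invented for illustration, not DASS data.

```python
# Hedged sketch of a Cronbach's alpha computation for one subscale.
# Rows: respondents, columns: subscale items (fabricated Likert scores).
scores = [[2, 3, 3],
          [4, 4, 5],
          [1, 2, 1],
          [3, 3, 4],
          [5, 4, 4]]

def pvar(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

k = len(scores[0])
item_vars = [pvar([row[j] for row in scores]) for j in range(k)]
total_var = pvar([sum(row) for row in scores])

# alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

Alpha rises when items co-vary strongly relative to their individual variances, which is why short, heterogeneous subscales (like the 4-item anxiety and stress scales above) tend to show lower values.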

  6. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability, and also, to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed
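The penalized bi-objective formulation can be sketched as follows. The component reliability distribution, penalty weight, and candidate designs are all assumed, and simple 1-out-of-n parallel redundancy stands in for the N-Version Programming and Recovery Block architectures (and failure correlation is ignored here).

```python
import random

# Hedged sketch: maximize the system reliability estimate while
# penalizing the variance of that estimate, with the component
# reliability treated as a random variable. All numbers are assumed.
random.seed(7)

def system_reliability(r_samples, n):
    """1-out-of-n parallel redundancy, per sampled component reliability."""
    return [1 - (1 - r) ** n for r in r_samples]

def score(samples, penalty=2.0):
    """Penalized objective: mean reliability minus penalty * variance."""
    m = sum(samples) / len(samples)
    v = sum((x - m) ** 2 for x in samples) / len(samples)
    return (m - penalty * v, m, v)

# Uncertain component reliability estimate, roughly 0.9 +/- 0.05.
r_hat = [min(max(random.gauss(0.9, 0.05), 0.0), 1.0) for _ in range(5000)]

# Pick the redundancy level with the best penalized score.
best = max((score(system_reliability(r_hat, n)) + (n,) for n in (1, 2, 3)),
           key=lambda t: t[0])
print(best)  # (penalized score, mean, variance, chosen redundancy n)
```

In the paper a genetic algorithm searches a much larger design space; the penalized objective above is the part this sketch illustrates.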

  7. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that, in the traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is underestimated because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
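The PageRank step can be sketched as a plain power iteration over the failure-propagation digraph (this is a generic sketch; the paper's exact coupling of the adjacency matrix with its transpose may differ):

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on a directed graph.

    adj[i][j] = 1 if a failure of component i can propagate to component j.
    Returns a score per component; higher scores mark components that many
    failure paths lead into.
    """
    n = len(adj)
    rank = [1.0 / n] * n
    out_deg = [sum(row) for row in adj]
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for i in range(n):
            if out_deg[i] == 0:
                # dangling node: redistribute its rank uniformly
                for j in range(n):
                    new[j] += damping * rank[i] / n
            else:
                for j in range(n):
                    if adj[i][j]:
                        new[j] += damping * rank[i] / out_deg[i]
        rank = new
    return rank
```

Running the iteration on the transposed adjacency matrix instead would rank components by how widely their own failures propagate, which is one plausible reading of the failure influenced degree.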

  8. Reliability of Degree-Day Models to Predict the Development Time of Plutella xylostella (L.) under Field Conditions.

    Science.gov (United States)

    Marchioro, C A; Krechemer, F S; de Moraes, C P; Foerster, L A

    2015-12-01

    The diamondback moth, Plutella xylostella (L.), is a cosmopolitan pest of brassicaceous crops occurring in regions with highly distinct climate conditions. Several studies have investigated the relationship between temperature and P. xylostella development rate, providing degree-day models for populations from different geographical regions. However, there are no data available to date to demonstrate the suitability of such models for making reliable projections of the development time of this species under field conditions. In the present study, 19 models available in the literature were tested regarding their ability to accurately predict the development time of two cohorts of P. xylostella under field conditions. Eleven of the 19 models tested accurately predicted the development time for the first cohort of P. xylostella, but only seven did so for the second cohort. Five models correctly predicted the development time for both cohorts evaluated. Our data demonstrate that the accuracy of the models available for P. xylostella varies widely, and the models should therefore be used with caution for pest management purposes.
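A degree-day model of this kind accumulates heat units above a lower developmental threshold until a species-specific total K is reached. A sketch with hypothetical threshold and K values (not taken from any of the 19 published models):

```python
def degree_days(daily_mean_temps, t_base):
    """Degree-days accumulated above the lower developmental threshold."""
    return sum(max(0.0, t - t_base) for t in daily_mean_temps)

def days_to_complete(daily_mean_temps, t_base, k_required):
    """First day on which accumulated degree-days reach K, else None."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(0.0, t - t_base)
        if total >= k_required:
            return day
    return None  # development not completed within the temperature series
```

The field test described above amounts to asking whether `days_to_complete`, fed with observed temperatures and each model's published threshold and K, matches the observed development time of each cohort.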

  9. Approach to reliability assessment

    International Nuclear Information System (INIS)

    Green, A.E.; Bourne, A.J.

    1975-01-01

    Experience has shown that reliability assessments can play an important role in the early design and subsequent operation of technological systems where reliability is at a premium. The approaches to and techniques for such assessments, which have been outlined in the paper, have been successfully applied in a variety of applications ranging from individual equipments to large and complex systems. The general approach involves the logical and systematic establishment of the purpose, performance requirements and reliability criteria of systems. This is followed by an appraisal of likely system achievement based on the understanding of different types of variational behavior. A fundamental reliability model emerges from the correlation between the appropriate Q and H functions for performance requirement and achievement. This model may cover the complete spectrum of performance behavior in all the system dimensions.

  10. Reliability modelling of repairable systems using Petri nets and fuzzy Lambda-Tau methodology

    International Nuclear Information System (INIS)

    Knezevic, J.; Odoom, E.R.

    2001-01-01

    A methodology is developed which uses Petri nets instead of the fault tree methodology and solves for reliability indices utilising the fuzzy Lambda-Tau method. Fuzzy set theory is used for representing the failure rate and repair time instead of the classical (crisp) set theory because fuzzy numbers allow expert opinions, linguistic variables, operating conditions, uncertainty and imprecision in reliability information to be incorporated into the system model. Petri nets are used because, unlike the fault tree methodology, they allow efficient simultaneous generation of minimal cut and path sets.

  11. Performance reliability prediction for thermal aging based on kalman filtering

    International Nuclear Information System (INIS)

    Ren Shuhong; Wen Zhenhua; Xue Fei; Zhao Wensheng

    2015-01-01

    The performance reliability of the nuclear power plant main pipeline that failed due to thermal aging was studied by the performance degradation theory. Firstly, through the data obtained from the accelerated thermal aging experiments, the degradation process of the impact strength and fracture toughness of austenitic stainless steel material of the main pipeline was analyzed. The time-varying performance degradation model based on the state space method was built, and the performance trends were predicted by using Kalman filtering. Then, the multi-parameter and real-time performance reliability prediction model for the main pipeline thermal aging was developed by considering the correlation between the impact properties and fracture toughness, and by using the stochastic process theory. Thus, the thermal aging performance reliability and reliability life of the main pipeline with multi-parameter were obtained, which provides the scientific basis for the optimization management of the aging maintenance decision making for nuclear power plant main pipelines. (authors)
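Tracking a degrading material property with a Kalman filter can be sketched with a scalar state-space model, here a linear degradation trend with process and measurement noise (the model structure and all values are illustrative, not those fitted in the study):

```python
def kalman_track(measurements, x0, p0, drift, q, r):
    """Scalar Kalman filter for a linearly degrading property.

    State model:  x_k = x_{k-1} - drift + w,  w ~ N(0, q)
    Observation:  z_k = x_k + v,              v ~ N(0, r)
    Returns the filtered state estimates, one per measurement.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # predict
        x_pred = x - drift
        p_pred = p + q
        # update
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (z - x_pred)
        p = (1.0 - k) * p_pred
        estimates.append(x)
    return estimates
```

Extrapolating the filtered trend to the point where the property crosses a failure threshold is one simple way to turn such predictions into a life expectancy.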

  12. A Combined Reliability Model of VSC-HVDC Connected Offshore Wind Farms Considering Wind Speed Correlation

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    and WTGs outage. The wind speed correlation between different WFs is included in the two-dimensional multistate WF model by using an improved k-means clustering method. Then, the entire system with two WFs and a three-terminal VSC-HVDC system is modeled as a multi-state generation unit. The proposed model is applied to the Roy Billinton test system (RBTS) for adequacy studies. Both the probability and frequency indices are calculated. The effectiveness and accuracy of the combined model are validated by comparing results with the sequential Monte Carlo simulation (MCS) method. The effects of the outage of the VSC-HVDC system and wind speed correlation on the system reliability were analyzed. Sensitivity analyses were conducted to investigate the impact of the repair time of the offshore VSC-HVDC system on system reliability.

  13. User's guide to the Reliability Estimation System Testbed (REST)

    Science.gov (United States)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  14. Monte Carlo simulation based reliability evaluation in a multi-bilateral contracts market

    International Nuclear Information System (INIS)

    Goel, L.; Viswanath, P.A.; Wang, P.

    2004-01-01

    This paper presents a time sequential Monte Carlo simulation technique to evaluate customer load point reliability in multi-bilateral contracts market. The effects of bilateral transactions, reserve agreements, and the priority commitments of generating companies on customer load point reliability have been investigated. A generating company with bilateral contracts is modelled as an equivalent time varying multi-state generation (ETMG). A procedure to determine load point reliability based on ETMG has been developed. The developed procedure is applied to a reliability test system to illustrate the technique. Representing each bilateral contract by an ETMG provides flexibility in determining the reliability at various customer load points. (authors)
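A time-sequential Monte Carlo evaluation of this kind samples alternating up/down durations for each generating unit and records the time during which available capacity falls short of the load. A simplified sketch with two-state units and exponential durations (the ETMG construction and bilateral-contract modelling of the paper are omitted; the unit data below are invented):

```python
import random

def simulate_unavailability(units, load, horizon, seed=1):
    """Fraction of simulated time that total available capacity < load.

    units: list of (capacity, MTTF, MTTR); up and down durations are drawn
    from exponential distributions with those means.
    """
    rng = random.Random(seed)
    up = [True] * len(units)
    t_next = [rng.expovariate(1.0 / mttf) for _, mttf, _ in units]
    t = deficit = 0.0
    while t < horizon:
        t_event = min(min(t_next), horizon)
        capacity = sum(cap for (cap, _, _), is_up in zip(units, up) if is_up)
        if capacity < load:
            deficit += t_event - t   # capacity deficit over [t, t_event)
        t = t_event
        for i, (_, mttf, mttr) in enumerate(units):
            if t_next[i] <= t:       # unit i changes state at this instant
                up[i] = not up[i]
                mean = mttf if up[i] else mttr
                t_next[i] = t + rng.expovariate(1.0 / mean)
    return deficit / horizon
```

For a single unit the long-run estimate should approach the analytic unavailability MTTR / (MTTF + MTTR), which is a convenient sanity check on the simulator.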

  15. The reliability of the Hendrich Fall Risk Model in a geriatric hospital.

    Science.gov (United States)

    Heinze, Cornelia; Halfens, Ruud; Dassen, Theo

    2008-12-01

    Aims and objectives.  The purpose of this study was to test the interrater reliability of the Hendrich Fall Risk Model, an instrument to identify patients in a hospital setting with a high risk of falling. Background.  Falls are a serious problem in older patients. Valid and reliable fall risk assessment tools are required to identify high-risk patients and to take adequate preventive measures. Methods.  Seventy older patients were independently and simultaneously assessed by six pairs of raters made up of nursing staff members. Consensus estimates were calculated using simple percentage agreement and consistency estimates using Spearman's rho and the intraclass correlation coefficient. Results.  Percentage agreement ranged from 0.70 to 0.92 between the six pairs of raters. Spearman's rho coefficients were between 0.54 and 0.80 and the intraclass correlation coefficients were between 0.46 and 0.92. Conclusions.  Whereas some pairs of raters obtained considerable interobserver agreement and internal consistency, the others did not. Therefore, it is concluded that the Hendrich Fall Risk Model is not a reliable instrument. The use of more unambiguously operationalized items is preferred. Relevance to clinical practice.  In practice, well operationalized fall risk assessment tools are necessary. Observer agreement should always be investigated after introducing a standardized measurement tool. © 2008 The Authors. Journal compilation © 2008 Blackwell Publishing Ltd.

  16. A quantitative approach to wind farm diversification and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Degeilh, Yannick; Singh, Chanan [Department of Electrical and Computer Engineering, Texas A and M University, College Station, TX 77843 (United States)

    2011-02-15

    This paper proposes a general planning method to minimize the variance of aggregated wind farm power output by optimally distributing a predetermined number of wind turbines over a preselected number of potential wind farming sites. The objective is to facilitate high wind power penetration through the search for steadier overall power output. Another optimization formulation that takes into account the correlations between wind power outputs and load is also presented. Three years of wind data from the recent NREL/3TIER study in the western US provides the statistics for evaluating each site upon their mean power output, variance and correlation with each other so that the best allocations can be determined. The reliability study reported in this paper investigates the impact of wind power output variance reduction on a power system composed of a virtual wind power plant and a load modeled from the 1996 IEEE RTS. Some traditional reliability indices such as the LOLP are calculated and it is eventually shown that configurations featuring minimal global power output variances generally prove the most reliable provided the sites are not significantly correlated with the modeled load. Consequently, the choice of uncorrelated/negatively correlated sites is favored. (author)
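The underlying optimization can be caricatured as a variance-minimizing allocation: given a per-turbine output covariance matrix estimated from site wind data, the aggregate output variance of an allocation is a quadratic form in the turbine counts, and small instances can be searched exhaustively. A toy sketch (not the authors' formulation or data; it also ignores the load-correlation extension):

```python
from itertools import product

def aggregate_variance(counts, cov):
    """Variance of total output for counts[i] turbines at site i.

    cov is the per-turbine covariance matrix between sites; turbines at the
    same site are treated as perfectly correlated, so variance scales with
    the squared count on the diagonal terms.
    """
    n = len(counts)
    return sum(counts[i] * counts[j] * cov[i][j]
               for i in range(n) for j in range(n))

def best_allocation(total, cov):
    """Exhaustive search over allocations of `total` turbines to the sites."""
    n = len(cov)
    best = None
    for alloc in product(range(total + 1), repeat=n):
        if sum(alloc) != total:
            continue
        v = aggregate_variance(alloc, cov)
        if best is None or v < best[1]:
            best = (alloc, v)
    return best
```

With uncorrelated sites the search spreads turbines evenly, and negatively correlated sites are favored even more strongly, matching the paper's qualitative conclusion.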

  17. A quantitative approach to wind farm diversification and reliability

    International Nuclear Information System (INIS)

    Degeilh, Yannick; Singh, Chanan

    2011-01-01

    This paper proposes a general planning method to minimize the variance of aggregated wind farm power output by optimally distributing a predetermined number of wind turbines over a preselected number of potential wind farming sites. The objective is to facilitate high wind power penetration through the search for steadier overall power output. Another optimization formulation that takes into account the correlations between wind power outputs and load is also presented. Three years of wind data from the recent NREL/3TIER study in the western US provides the statistics for evaluating each site upon their mean power output, variance and correlation with each other so that the best allocations can be determined. The reliability study reported in this paper investigates the impact of wind power output variance reduction on a power system composed of a virtual wind power plant and a load modeled from the 1996 IEEE RTS. Some traditional reliability indices such as the LOLP are calculated and it is eventually shown that configurations featuring minimal global power output variances generally prove the most reliable provided the sites are not significantly correlated with the modeled load. Consequently, the choice of uncorrelated/negatively correlated sites is favored. (author)

  18. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  19. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    Energy Technology Data Exchange (ETDEWEB)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical / molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N=1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔGobs to within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way for modelling phosphate hydrolysis in solution.

  20. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    Science.gov (United States)

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Design methodologies for reliability of SSL LED boards

    NARCIS (Netherlands)

    Jakovenko, J.; Formánek, J.; Perpiñà, X.; Jorda, X.; Vellvehi, M.; Werkhoven, R.J.; Husák, M.; Kunen, J.M.G.; Bancken, P.; Bolt, P.J.; Gasse, A.

    2013-01-01

    This work presents a comparison of various LED board technologies from a thermal, mechanical and reliability point of view, provided by accurate 3-D modelling. LED boards are proposed as a possible technology replacement for FR4 LED boards used in 400 lumen retrofit SSL lamps. Presented design

  2. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    Science.gov (United States)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
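The closed-form redundancy equations that a tool like CARE stores can be illustrated for a few classic schemes, assuming independent, identical units with reliability r (a textbook sketch, not CARE's actual model repository):

```python
from math import comb

def parallel(r, n):
    """n redundant units; the system works if at least one unit works."""
    return 1.0 - (1.0 - r) ** n

def k_of_n(r, k, n):
    """The system works if at least k of n independent identical units work."""
    return sum(comb(n, i) * r ** i * (1.0 - r) ** (n - i)
               for i in range(k, n + 1))

def tmr(r):
    """Triple modular redundancy with a perfect voter: 2-of-3 majority."""
    return k_of_n(r, 2, 3)
```

A modeling tool then interrelates such building blocks, substituting ground instances of r and composing the schemes to match the architecture under evaluation.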

  3. On Bayesian System Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen Ringi, M

    1995-05-01

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs.

  4. On Bayesian System Reliability Analysis

    International Nuclear Information System (INIS)

    Soerensen Ringi, M.

    1995-01-01

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs

  5. Comparative analysis among deterministic and stochastic collision damage models for oil tanker and bulk carrier reliability

    Directory of Open Access Journals (Sweden)

    A. Campanile

    2018-01-01

    Full Text Available The incidence of collision damage models on oil tanker and bulk carrier reliability is investigated considering the IACS deterministic model against GOALDS/IMO database statistics for collision events, substantiating the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of incremental-iterative method, to account for neutral axis rotation and equilibrium of horizontal bending moment, due to cross-section asymmetry after collision events. Reliability analysis is performed, to investigate the incidence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the incidence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.

  6. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    Science.gov (United States)

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.

  7. Prediction of software operational reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1995-01-01

    A number of software reliability models have been developed to estimate and to predict software reliability. However, there are no established standard models to quantify software reliability. Most models estimate the quality of software in reliability figures such as remaining faults, failure rate, or mean time to next failure at the testing phase, and they consider them ultimate indicators of software reliability. Experience shows that there is a large gap between predicted reliability during development and reliability measured during operation, which means that predicted reliability, or so-called test reliability, is not operational reliability. Customers prefer operational reliability to test reliability. In this study, we propose a method that predicts operational reliability rather than test reliability by introducing the testing environment factor that quantifies the changes in environments

  8. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.; Lee, S. W. [Korea Automic Energy Research Institute, Taejon (Korea, Republic of)

    2004-02-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the second step of the 3-year project, and the main research was focused on the development of a downcomer boiling model. During the current year, the bubble stream model of the downcomer has been developed and installed in the auditing code. The model sensitivity analysis has been performed for the APR1400 LBLOCA scenario using the modified code. The preliminary calculation has been performed for the experimental test facility using FLUENT and the MARS code. The facility for the air bubble experiment has been installed. The thermal hydraulic phenomena for VHTR and the supercritical reactor have been identified for future application and model development.

  9. Improvement of level-1 PSA computer code package - Modeling and analysis for dynamic reliability of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)

    1996-08-01

    The objective of this project is to develop a methodology of the dynamic reliability analysis for NPP. The first year's research was focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research was concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objectives of the third year's research are to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of the dynamic reliability analysis for nuclear power plants. The analysis of failure data of components and related researches for supporting the simulator must be preceded for providing proper input to the simulator. Thus this research is divided into three major parts. 1. Analysis of the time dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related researches for supporting the simulator : accelerated simulation analytic approach using PH-type distribution, analysis for dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)

  10. Hot Spot Temperature and Grey Target Theory-Based Dynamic Modelling for Reliability Assessment of Transformer Oil-Paper Insulation Systems: A Practical Case Study

    Directory of Open Access Journals (Sweden)

    Lefeng Cheng

    2018-01-01

    Full Text Available This paper develops a novel dynamic correction method for the reliability assessment of large oil-immersed power transformers. First, with the transformer oil-paper insulation system (TOPIS as the target of evaluation and the winding hot spot temperature (HST as the core point, an HST-based static ageing failure model is built according to the Weibull distribution and Arrhenius reaction law, in order to describe the transformer ageing process and calculate the winding HST for obtaining the failure rate and life expectancy of TOPIS. A grey target theory based dynamic correction model is then developed, combined with the data of Dissolved Gas Analysis (DGA in power transformer oil, in order to dynamically modify the life expectancy calculated by the built static model, such that the corresponding relationship between the state grade and life expectancy correction coefficient of TOPIS can be built. Furthermore, the life expectancy loss recovery factor is introduced to correct the life expectancy of TOPIS again. Lastly, a practical case study of an operating transformer has been undertaken, in which the failure rate curve after introducing dynamic corrections can be obtained for the reliability assessment of this transformer. The curve shows a better ability of tracking the actual reliability level of transformer, thus verifying the validity of the proposed method and providing a new way for transformer reliability assessment. This contribution presents a novel model for the reliability assessment of TOPIS, in which the DGA data, as a source of information for the dynamic correction, is processed based on the grey target theory, thus the internal faults of power transformer can be diagnosed accurately as well as its life expectancy updated in time, ensuring that the dynamic assessment values can commendably track and reflect the actual operation state of the power transformers.
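The static ageing model described above pairs a Weibull life distribution with the Arrhenius reaction law evaluated at the winding hot spot temperature (HST). A sketch with illustrative constants (the pre-factor and activation energy below are placeholders, not fitted transformer values):

```python
from math import exp

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def char_life(hst_kelvin, pre_factor, e_act):
    """Arrhenius law: Weibull characteristic life shrinks as HST rises."""
    return pre_factor * exp(e_act / (BOLTZMANN_EV * hst_kelvin))

def weibull_hazard(t, eta, beta):
    """Weibull failure rate at age t (eta: characteristic life, beta: shape)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def reliability(t, eta, beta):
    """Weibull survival function."""
    return exp(-((t / eta) ** beta))
```

Feeding a hotter HST into `char_life` shortens eta and raises the failure rate at every age, which is the mechanism the static model captures before the DGA-based grey target correction adjusts the resulting life expectancy.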

  11. A Dialogue about MCQs, Reliability, and Item Response Modelling

    Science.gov (United States)

    Wright, Daniel B.; Skagerberg, Elin M.

    2006-01-01

    Multiple choice questions (MCQs) are becoming more common in UK psychology departments and the need to assess their reliability is apparent. Having examined the reliability of MCQs in our department we faced many questions from colleagues about why we were examining reliability, what it was that we were doing, and what should be reported when…

  12. Approach for an integral power transformer reliability model

    NARCIS (Netherlands)

    Schijndel, van A.; Wouters, P.A.A.F.; Steennis, E.F.; Wetzer, J.M.

    2012-01-01

    In electrical power transmission and distribution networks power transformers represent a crucial group of assets both in terms of reliability and investments. In order to safeguard the required quality at acceptable costs, decisions must be based on a reliable forecast of future behaviour. The aim

  13. Providing Reliability Services through Demand Response: A Preliminary Evaluation of the Demand Response Capabilities of Alcoa Inc.

    Energy Technology Data Exchange (ETDEWEB)

    Starke, Michael R [ORNL; Kirby, Brendan J [ORNL; Kueck, John D [ORNL; Todd, Duane [Alcoa; Caulfield, Michael [Alcoa; Helms, Brian [Alcoa

    2009-02-01

    Demand response is the largest underutilized reliability resource in North America. Historic demand response programs have focused on reducing overall electricity consumption (increasing efficiency) and shaving peaks, but have not typically been used for immediate reliability response. Many of these programs have been successful, but demand response remains a limited resource. The Federal Energy Regulatory Commission (FERC) report 'Assessment of Demand Response and Advanced Metering' (FERC 2006) found that only five percent of customers are on some form of demand response program. Collectively they represent an estimated 37,000 MW of response potential. These programs reduce overall energy consumption, lower greenhouse gas emissions by allowing fossil fuel generators to operate at increased efficiency, and reduce stress on the power system during periods of peak loading. As the country continues to restructure energy markets with sophisticated marginal cost models that attempt to minimize total energy costs, the ability of demand response to create meaningful shifts in the supply and demand equations is critical to creating a sustainable and balanced economic response to energy issues. Restructured energy market prices are set by the cost of the next incremental unit of energy, so that as additional generation is brought into the market, the cost for the entire market increases. The benefit of demand response is that it reduces overall demand and shifts the entire market to a lower pricing level. This can be very effective in mitigating price volatility or scarcity pricing as the power system responds to changing demand schedules, loss of large generators, or loss of transmission. As a global producer of alumina, primary aluminum, and fabricated aluminum products, Alcoa Inc. has the capability to provide demand response services through its manufacturing facilities and, uniquely, through its aluminum smelting facilities. For a typical aluminum smelter

  14. Optimizing multiple reliable forward contracts for reservoir allocation using multitime scale streamflow forecasts

    Science.gov (United States)

    Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward

    2017-03-01

    Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multitime-scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, together with the desired reliability and pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting target end-of-period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea is useful to develop a real-world perspective and needs a suitable institutional environment.
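
    The reliability-constrained contract sizing can be illustrated with a simple quantile rule: a contract honored with probability r can promise at most the (1 - r) lower quantile of the forecast ensemble. The sketch below uses a hypothetical synthetic ensemble and ignores the storage dynamics, pricing, and flood constraints that the paper's optimization model includes.

```python
import numpy as np

def reliable_contract_size(ensemble_inflow, reliability, reserved=0.0):
    """Largest allocation deliverable with the given probability, taken as the
    (1 - reliability) lower quantile of the forecast ensemble, net of any reserve."""
    return max(np.quantile(ensemble_inflow, 1.0 - reliability) - reserved, 0.0)

rng = np.random.default_rng(0)
inflows = rng.gamma(shape=4.0, scale=250.0, size=1000)   # hypothetical seasonal ensemble
firm = reliable_contract_size(inflows, reliability=0.95)  # high-reliability (insurable) contract
spot = reliable_contract_size(inflows, reliability=0.50)  # median-reliability contract
```

    As expected, the firmer the reliability guarantee, the smaller the contract that can be offered.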

  15. Reliability analysis of software based safety functions

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1993-05-01

    The methods applicable to the reliability analysis of software-based safety functions are described in the report. Although the safety functions also include other components, the main emphasis is on the reliability analysis of software. Checklist-type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as software fault tree analysis. Safety analysis based on Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that recent software reliability models do not directly produce the estimates needed in PSA. Some recommendations and conclusions are drawn from the study: the need for formal methods in the analysis and development of software-based systems, the applicability of qualitative reliability engineering methods in connection with PSA, and the need to make the requirements for software-based systems and their analyses in the regulatory guides more precise. (orig.). (46 refs., 13 figs., 1 tab.)

  16. Lead free solder mechanics and reliability

    CERN Document Server

    Pang, John Hock Lye

    2012-01-01

    Lead-free solders are used extensively as interconnection materials in electronic assemblies and play a critical role in the global semiconductor packaging and electronics manufacturing industry. Electronic products such as smart phones, notebooks and high performance computers rely on lead-free solder joints to connect IC chip components to printed circuit boards. Lead Free Solder: Mechanics and Reliability provides in-depth design knowledge on lead-free solder elastic-plastic-creep and strain-rate dependent deformation behavior and its application in failure assessment of solder joint reliability. It includes coverage of advanced mechanics of materials theory and experiments, mechanical properties of solder and solder joint specimens, constitutive models for solder deformation behavior; numerical modeling and simulation of solder joint failure subject to thermal cycling, mechanical bending fatigue, vibration fatigue and board-level drop impact tests. This book also: Discusses the mechanical prope...

  17. Structural Reliability Methods for Wind Power Converter System Component Reliability Assessment

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Wind power converter systems are essential subsystems in both off-shore and on-shore wind turbines. The converter is the main interface between the generator and the grid connection. This system is affected by numerous stresses, where the main contributors might be defined as vibration and temperature loadings. The temperature variations induce time-varying stresses and thereby fatigue loads. A probabilistic model is used to model fatigue failure for an electrical component in the power converter system. This model is based on linear damage accumulation and physics-of-failure approaches, where a failure criterion is defined by the threshold model. The attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Structural Reliability approaches are used to incorporate model, physical and statistical uncertainties. Reliability estimation by means of structural…

  18. Reliability Assessment of Solder Joints in Power Electronic Modules by Crack Damage Model for Wind Turbine Applications

    Directory of Open Access Journals (Sweden)

    John D. Sørensen

    2011-12-01

    Full Text Available Wind turbine reliability is an important issue for wind energy cost minimization, especially through reduction of operation and maintenance costs for critical components and through increased wind turbine availability. To develop an optimal operation and maintenance plan for critical components, it is necessary to understand the physics of their failure and to be able to develop reliability prediction models. Such a model is proposed in this paper for an IGBT power electronic module. IGBTs are critical components in wind turbine converter systems. These are multilayered devices whose layers are soldered to each other and which operate in a thermal-power cycling environment. Temperature loadings affect the reliability of soldered joints by driving crack development and fatigue processes that eventually result in failure. Based on Miner's rule, a linear damage model that incorporates crack development and propagation processes is discussed. A statistical analysis is performed for appropriate model parameter selection. Based on the proposed model, a layout for component life prediction with crack movement is described in detail.
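
    Miner's-rule damage accumulation, on which this model is based, sums over stress bins the ratio of applied cycles to cycles-to-failure; failure is predicted when the sum reaches one. A toy sketch with hypothetical thermal-cycling numbers (the paper's model additionally tracks crack development and propagation):

```python
def miner_damage(cycle_counts, cycles_to_failure):
    """Linear (Miner's rule) damage: sum of applied cycles over capacity per stress bin."""
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

# Thermal-cycling histogram: applied cycles per Delta-T bin vs. capacity (illustrative).
n_applied = [2.0e5, 5.0e4, 1.0e4]        # cycles at small/medium/large temperature swing
N_capacity = [5.0e6, 4.0e5, 3.0e4]       # cycles to failure at those swings
D = miner_damage(n_applied, N_capacity)  # failure predicted when D reaches 1.0
```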

  19. Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis

    Science.gov (United States)

    Montgomery, Todd L.

    1995-01-01

    This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority-resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense; RMP refutes this. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently. This has allowed the verification to maintain a high fidelity between the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives into the product development.

  20. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    Science.gov (United States)

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, individual determination of the load-velocity relationship by a linear regression model can be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
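
    The recommended first-order (linear) model fits velocity against load and extrapolates to the velocity expected at 1RM. The sketch below uses made-up numbers, and the assumed velocity at 1RM (`v_1rm`) is protocol-dependent; it is not a value from this study.

```python
import numpy as np

def predict_1rm(loads_kg, velocities, v_1rm=0.17):
    """Fit a first-order load-velocity relationship and extrapolate to the load
    at which the velocity expected at 1RM would be reached."""
    slope, intercept = np.polyfit(loads_kg, velocities, deg=1)
    return (v_1rm - intercept) / slope

loads = [20, 35, 50, 65]                  # submaximal loads (kg), illustrative
mean_velocity = [1.30, 1.05, 0.80, 0.55]  # measured mean concentric velocity (m/s)
one_rm = predict_1rm(loads, mean_velocity)
```

    With these (perfectly linear) toy data the extrapolated 1RM is 87.8 kg; real data would scatter around the fitted line.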

  1. Design reliability engineering

    International Nuclear Information System (INIS)

    Buden, D.; Hunt, R.N.M.

    1989-01-01

    Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method to integrate into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance and operator training. 1 fig

  2. Uncertainties and reliability theories for reactor safety

    International Nuclear Information System (INIS)

    Veneziano, D.

    1975-01-01

    What makes the safety problem of nuclear reactors particularly challenging is the demand for high levels of reliability and the limitation of statistical information. The latter is an unfortunate circumstance, which forces deductive theories of reliability to use models and parameter values with weak factual support. The uncertainty about probabilistic models and parameters which are inferred from limited statistical evidence can be quantified and incorporated rationally into inductive theories of reliability. In such theories, the starting point is the information actually available, as opposed to an estimated probabilistic model. But, while the necessity of introducing inductive uncertainty into reliability theories has been recognized by many authors, no satisfactory inductive theory is presently available. The paper presents: a classification of uncertainties and of reliability models for reactor safety; a general methodology to include these uncertainties into reliability analysis; a discussion about the relative advantages and the limitations of various reliability theories (specifically, of inductive and deductive, parametric and nonparametric, second-moment and full-distribution theories). For example, it is shown that second-moment theories, which were originally suggested to cope with the scarcity of data, and which have been proposed recently for the safety analysis of secondary containment vessels, are the least capable of incorporating statistical uncertainty. The focus is on reliability models for external threats (seismic accelerations and tornadoes). As an application example, the effect of statistical uncertainty on seismic risk is studied using parametric full-distribution models

  3. Reliability analysis of operator's monitoring behavior in digital main control room of nuclear power plants and its application

    International Nuclear Information System (INIS)

    Zhang Li; Hu Hong; Li Pengcheng; Jiang Jianjun; Yi Cannan; Chen Qingqing

    2015-01-01

    In order to build a quantitative model for analyzing operators' monitoring behavior reliability in the digital main control room of nuclear power plants, and based on an analysis of the design characteristics of the digital main control room of a nuclear power plant, the operators' monitoring behavior, and the monitoring process, monitoring behavior reliability was divided into three parts: information transfer reliability among screens, inside-screen information sampling reliability, and information detection reliability. A quantitative calculation model of information transfer reliability among screens was established based on Senders's monitoring theory; the inside-screen information sampling reliability model was established based on the allocation theory of attention resources; and, considering performance shaping factor causality, a fuzzy Bayesian method was presented to quantify information detection reliability, with an application example given. The results show that the established model of monitoring behavior reliability gives an objective description of the monitoring process, can quantify monitoring reliability, and overcomes the shortcomings of traditional methods. It therefore provides theoretical support for operators' monitoring behavior reliability analysis in the digital main control room of nuclear power plants and improves the precision of human reliability analysis. (authors)
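
    If the three parts are assumed to fail independently, the overall monitoring reliability is simply their product. This independence assumption and the numbers below are illustrative simplifications, not the paper's fuzzy Bayesian treatment.

```python
def monitoring_reliability(r_transfer, r_sampling, r_detection):
    """Overall monitoring reliability as the product of the three decomposed stages:
    screen-to-screen transfer, inside-screen sampling, and detection
    (assumes the stages fail independently -- a simplifying assumption)."""
    return r_transfer * r_sampling * r_detection

r = monitoring_reliability(r_transfer=0.98, r_sampling=0.95, r_detection=0.90)
```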

  4. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    Science.gov (United States)

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades-off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
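
    The abstention policy described above can be sketched as a cost-sensitive decision rule: alert when the estimated event probability exceeds the Bayes-optimal cutoff implied by the costs, and abstain when the estimate lies too close to that cutoff. Function name, costs, and band width below are hypothetical; the paper derives the policy from the joint model's posterior rather than a fixed band.

```python
def event_decision(p_event, cost_false_alarm=1.0, cost_miss=5.0, abstain_band=0.15):
    """Cost-sensitive decision with abstention: alert when expected miss cost
    dominates, abstain when the estimate is within `abstain_band` of the cutoff."""
    threshold = cost_false_alarm / (cost_false_alarm + cost_miss)  # Bayes-optimal cutoff
    if abs(p_event - threshold) < abstain_band:
        return "abstain"   # confidence criterion not met: defer the decision
    return "alert" if p_event > threshold else "no-alert"
```

    With the default costs the cutoff is 1/6, so a 0.9 probability triggers an alert while a 0.05 probability, being near the cutoff, is deferred.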

  5. 75 FR 71613 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits

    Science.gov (United States)

    2010-11-24

    ... Interconnection to relieve overloads on the facilities modeled in the Interchange Distribution Calculator (IDC... for other SOLs. But the Functional Model assigns a much broader role to the reliability coordinator to...

  6. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    Science.gov (United States)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective, the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of their constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni

  7. Electronics reliability calculation and design

    CERN Document Server

    Dummer, Geoffrey W A; Hiller, N

    1966-01-01

    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  8. High level issues in reliability quantification of safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2012-01-01

    For the purpose of developing a consensus method for the reliability assessment of safety-critical digital instrumentation and control systems in nuclear power plants, several high level issues in reliability assessment of the safety-critical software based on Bayesian belief network modeling and statistical testing are discussed. Related to the Bayesian belief network modeling, the relation between the assessment approach and the sources of evidence, the relation between qualitative evidence and quantitative evidence, how to consider qualitative evidence, and the cause-consequence relation are discussed. Related to the statistical testing, the need of the consideration of context-specific software failure probabilities and the inability to perform a huge number of tests in the real world are discussed. The discussions in this paper are expected to provide a common basis for future discussions on the reliability assessment of safety-critical software. (author)

  9. Quantitative characterization of the reliability of simplex buses and stars to compare their benefits in fieldbuses

    International Nuclear Information System (INIS)

    Barranco, Manuel; Proenza, Julián; Almeida, Luís

    2015-01-01

    Fieldbuses targeted at highly dependable distributed embedded systems are shifting from bus to star topologies. Surprisingly, despite the efforts in this direction, engineers lack analyses that quantitatively characterize the system reliability achievable by buses and stars. Thus, to guide engineers in developing adequate bus and star fieldbuses, this work models, quantifies and compares the system reliability provided by simplex buses and stars for the case of the Controller Area Network (CAN). It clarifies how relevant dependability-related aspects affect reliability, refuting some intuitive ideas and revealing some previously unknown bus and star benefits. - Highlights: • SAN models that quantify the reliability of simplex buses/stars in fieldbuses. • Models cover system-relevant dependability-related features abstracted away in the literature. • Results refute intuitive ideas about buses and stars and show some unexpected effects. • Models and results can guide the design of reliable simplex bus/star fieldbuses
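
    One way to see why the bus/star comparison is not obvious is a per-pair series-reliability sketch: a bus exposes every pair to common-mode faults of the shared medium and to jamming by third nodes, while a star contains branch faults but adds the hub as a single point of failure. The model and all numbers below are illustrative assumptions, far simpler than the stochastic activity network (SAN) models used in the paper.

```python
def pair_reliability_bus(n_nodes, r_node, r_medium, r_jamfree):
    """Both endpoints, the shared medium, and every other node refraining from
    jamming the bus must survive (medium faults are common-mode)."""
    return r_node**2 * r_medium * r_jamfree**(n_nodes - 2)

def pair_reliability_star(r_node, r_link, r_hub):
    """Endpoints, their two dedicated links, and the central hub must survive;
    other nodes' faults are contained at the hub ports."""
    return r_node**2 * r_link**2 * r_hub

r_bus = pair_reliability_bus(10, r_node=0.99, r_medium=0.995, r_jamfree=0.999)
r_star = pair_reliability_star(r_node=0.99, r_link=0.998, r_hub=0.995)
```

    With these numbers the star wins, but a less reliable hub or more reliable nodes can flip the comparison, which is exactly why a quantitative model is needed.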

  10. Two-terminal reliability analyses for a mobile ad hoc wireless network

    International Nuclear Information System (INIS)

    Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel

    2007-01-01

    Reliability is one of the most important performance measures for emerging technologies. For these systems, shortcomings are often overlooked in early releases as the cutting-edge technology overshadows a fragile design. Currently, the mobile ad hoc wireless network (MAWN) is proliferating and moving from cutting edge to commodity, and thus reliable performance will be expected. Generally, ad hoc networking is applied for the flexibility and mobility it provides. As a result, military and first-responder organizations employ this network scheme, and the reliability of the network becomes paramount. To ensure reliability is achieved, one must first be able to analyze and calculate the reliability of the MAWN. This work describes the unique attributes of the MAWN and how the classical analysis of network reliability, where the network configuration is known a priori, can be adjusted to model and analyze this type of network. The methods developed acknowledge the dynamic and scalable nature of the MAWN along with its absence of infrastructure. Thus, the methods rely on a modeling approach that considers the probabilistic formation of different network configurations in a MAWN. Hence, this paper proposes reliability analysis methods that consider the effect of node mobility and the continuous changes in the network's connectivity.
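
    The idea of averaging over probabilistically formed configurations can be sketched with a Monte Carlo two-terminal reliability estimate: sample random network configurations in which each pairwise link forms independently with some probability, and count the fraction in which source and terminal are connected. This uniform link-formation model is a simplifying assumption standing in for the paper's mobility-driven configuration probabilities.

```python
import itertools
import random

def two_terminal_reliability(n_nodes, s, t, p_link, n_samples=20000, seed=1):
    """Monte Carlo estimate of P(s and t connected) when every pairwise link
    forms independently with probability p_link (simplified MAWN model)."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n_nodes), 2))
    hits = 0
    for _ in range(n_samples):
        adj = {v: [] for v in range(n_nodes)}
        for u, v in pairs:                      # sample one network configuration
            if rng.random() < p_link:
                adj[u].append(v)
                adj[v].append(u)
        seen, frontier = {s}, [s]               # breadth-first search from s
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
        hits += t in seen
    return hits / n_samples

r = two_terminal_reliability(6, s=0, t=5, p_link=0.5)
```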

  11. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem of designing structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem of minimizing a given cost function such that the reliability of the single elements satisfies given requirements, or such that the system's reliability satisfies a given requirement. For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system-reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally…
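
    For the simplest limit state g = R - S with independent normal resistance R and load S, the FORM reliability index has a closed form, and the failure probability is the standard normal tail at -beta. The numbers below are illustrative.

```python
from math import erf, sqrt

def form_beta(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index for the linear limit state g = R - S
    with independent normal resistance R and load S."""
    return (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)

def failure_probability(beta):
    """P_f = Phi(-beta), the standard normal tail probability."""
    return 0.5 * (1.0 - erf(beta / sqrt(2.0)))

beta = form_beta(mu_r=500.0, sigma_r=40.0, mu_s=300.0, sigma_s=30.0)  # e.g. kN
p_f = failure_probability(beta)
```

    Here beta = 4.0, giving a failure probability of about 3.2e-5; nonlinear limit states require the iterative FORM algorithm rather than this closed form.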

  12. Bayesian approach for the reliability assessment of corroded interdependent pipe networks

    International Nuclear Information System (INIS)

    Ait Mokhtar, El Hassene; Chateauneuf, Alaa; Laggoune, Radouane

    2016-01-01

    Pipelines under corrosion are subject to varying environmental conditions, and consequently it is difficult to build realistic corrosion models. In the present work, a Bayesian methodology is proposed that allows the corrosion model parameters to be updated according to the evolution of environmental conditions. For the reliability assessment of dependent structures, Bayesian networks are used, providing a useful qualitative and quantitative description of the information in the system. The qualitative contribution lies in modelling a complex system, composed of dependent pipelines, as a Bayesian network. The quantitative one lies in evaluating the dependencies between pipelines through a new method for generating conditional probability tables. The effectiveness of Bayesian updating is illustrated through an application in which the reliability of degraded (corroded) pipe networks is reassessed. - Highlights: • A methodology for Bayesian network modeling of pipe networks is proposed. • A Bayesian approach based on the Metropolis-Hastings algorithm is conducted for corrosion model updating. • The reliability of the corroded pipe network is assessed by considering the interdependencies between the pipelines.
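
    The Metropolis-Hastings updating mentioned in the highlights can be sketched for a deliberately simple corrosion model: linear depth growth d = k * t with a Gaussian likelihood and a flat prior on k > 0. The model form, data, and tuning constants are illustrative assumptions, not the paper's.

```python
import math
import random

def mh_update_corrosion_rate(depths_mm, ages_yr, sigma=0.2, n_iter=5000, seed=2):
    """Metropolis-Hastings sketch: posterior mean of the corrosion rate k in the
    linear model d = k * t, from inspection data (flat prior on k > 0)."""
    rng = random.Random(seed)

    def log_post(k):
        if k <= 0:
            return -math.inf
        return -sum((d - k * t) ** 2 for d, t in zip(depths_mm, ages_yr)) / (2 * sigma**2)

    k, samples = 0.1, []
    for _ in range(n_iter):
        prop = k + rng.gauss(0.0, 0.02)          # symmetric random-walk proposal
        if math.log(rng.random()) < log_post(prop) - log_post(k):
            k = prop                              # accept
        samples.append(k)
    tail = samples[n_iter // 2:]                  # discard first half as burn-in
    return sum(tail) / len(tail)

k_hat = mh_update_corrosion_rate(depths_mm=[0.5, 1.1, 1.4], ages_yr=[10, 20, 30])
```

    New inspection data would simply be appended to the likelihood, which is the sense in which the model parameters are "updated" as conditions evolve.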

  13. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Ionescu-Bujor, M.

    2008-01-01

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)
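
    The paradigm system here, a binary component alternating between 'up' and 'down', has a closed-form Markov solution in the simpler two-state repairable case (without the irreparable failure state treated in the paper). The rates below are illustrative.

```python
import math

def availability(t, lam, mu):
    """Instantaneous availability of a two-state Markov component
    (failure rate lam, repair rate mu), starting in the 'up' state:
    A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu) * t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

a_inst = availability(100.0, lam=1e-3, mu=1e-1)  # per-hour rates (illustrative)
a_ss = availability(1e9, lam=1e-3, mu=1e-1)      # approaches mu / (lam + mu)
```

    Sensitivities such as dA/d(lam), which the ASAP computes efficiently for large models, can be checked against this closed form by direct differentiation.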

  14. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, D-76021 Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)

  15. Nuclear plant reliability data system. 1979 annual reports of cumulative system and component reliability

    International Nuclear Information System (INIS)

    1979-01-01

    The primary purposes of the information in these reports are the following: to provide operating statistics of safety-related systems within a unit, which may be used to compare and evaluate reliability performance, and to provide failure mode and failure rate statistics on components, which may be used in failure mode effects analysis, fault hazard analysis, probabilistic reliability analysis, and so forth.

  16. Some remarks on software reliability

    International Nuclear Information System (INIS)

    Gonzalez Hernando, J.; Sanchez Izquierdo, J.

    1978-01-01

    The trend in modern NPPCI is toward broad use of programmable elements. Some aspects of the present status of programmable digital systems reliability are reported. Basic differences between the software and hardware concepts require a specific approach to all reliability topics concerning software systems. Software reliability theory was initially developed upon analogies with hardware models. At present this approach is changing and specific models are being developed. The growing use of programmable systems makes it necessary to emphasize the importance of more adequate regulatory requirements covering this technology in NPPCI. (author)

  17. Final Report: System Reliability Model for Solid-State Lighting (SSL) Luminaires

    Energy Technology Data Exchange (ETDEWEB)

    Davis, J. Lynn [RTI International, Research Triangle Park, NC (United States)

    2017-05-31

    The primary objective of this project was to develop and validate reliability models and accelerated stress testing (AST) methodologies for predicting the lifetime of integrated SSL luminaires. This study examined the likely failure modes for SSL luminaires, including abrupt failure, excessive lumen depreciation, unacceptable color shifts, and increased power consumption. Data on the relative distribution of these failure modes were acquired through extensive accelerated stress tests and combined with industry data and other sources of information on LED lighting. These data were compiled and used to build models of the aging behavior of key luminaire optical and electrical components.

  18. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  19. Modeling Market Shares of Competing (e)Care Providers

    Science.gov (United States)

    van Ooteghem, Jan; Tesch, Tom; Verbrugge, Sofie; Ackaert, Ann; Colle, Didier; Pickavet, Mario; Demeester, Piet

    In order to address the increasing costs of providing care to the growing group of elderly, efficiency gains through eCare solutions seem an obvious solution. Unfortunately, few techno-economic business models are available to evaluate the return on these investments. The intended application of the model is the construction of a business case for care for the elderly as they move through different levels of dependency, including the effect of introducing an eCare service. The simulation model presented in this paper allows the evolution of the market shares of competing care providers to be modeled. Four tiers, based on the dependency level of the elderly, are defined, and the market shares are determined for each. The model takes into account the available capacity of the different care providers, the in- and outflow distribution between tiers, and churn between providers within tiers.

  20. How many neurologists/epileptologists are needed to provide reliable descriptions of seizure types?

    NARCIS (Netherlands)

    van Ast, J. F.; Talmon, J. L.; Renier, W. O.; Hasman, A.

    2003-01-01

    We are developing seizure descriptions as a basis for decision support. Based on an existing dataset we used the Spearman-Brown prophecy formula to estimate how many neurologist/epileptologists are needed to obtain reliable seizure descriptions (rho = 0.9). By extending the number of participants to
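    The Spearman-Brown prophecy formula used here has a simple closed form, rho_n = n·rho_1 / (1 + (n - 1)·rho_1). A short sketch (with hypothetical single-rater reliabilities, not the study's data) searches for the smallest panel size that reaches a target such as rho = 0.9:

```python
def spearman_brown(rho1, n):
    """Predicted reliability of n pooled raters, given single-rater reliability rho1."""
    return n * rho1 / (1.0 + (n - 1) * rho1)

def raters_needed(rho1, target=0.9):
    """Smallest number of raters whose pooled reliability reaches the target.
    (Closed form: n = target*(1-rho1) / (rho1*(1-target)); the loop avoids
    floating-point edge cases at the boundary.)"""
    n = 1
    while spearman_brown(rho1, n) < target:
        n += 1
    return n
```

    For example, with a hypothetical single-rater reliability of 0.5, nine raters are predicted to reach rho = 0.9.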

  1. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz.,electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  2. Reliability of a Seven-Segment Foot Model with Medial and Lateral Midfoot and Forefoot Segments During Walking Gait.

    Science.gov (United States)

    Cobb, Stephen C; Joshi, Mukta N; Pomeroy, Robin L

    2016-12-01

    In-vitro and invasive in-vivo studies have reported relatively independent motion in the medial and lateral forefoot segments during gait. However, most current surface-based models have not defined medial and lateral forefoot or midfoot segments. The purpose of the current study was to determine the reliability of a 7-segment foot model that includes medial and lateral midfoot and forefoot segments during walking gait. Three-dimensional positions of marker clusters located on the leg and 6 foot segments were tracked as 10 participants completed 5 walking trials. To examine the reliability of the foot model, coefficients of multiple correlation (CMC) were calculated across the trials for each participant. Three-dimensional stance time series and range of motion (ROM) during stance were also calculated for each functional articulation. CMCs for all of the functional articulations were ≥ 0.80. Overall, the rearfoot complex (leg-calcaneus segments) was the most reliable articulation and the medial midfoot complex (calcaneus-navicular segments) was the least reliable. With respect to ROM, reliability was greatest for plantarflexion/dorsiflexion and least for abduction/adduction. Further, the stance ROM and time-series patterns were generally consistent between the current study and previous invasive in-vivo studies that have assessed actual bone motion.

  3. Human factors reliability benchmark exercise, report of the SRD participation

    International Nuclear Information System (INIS)

    Waters, Trevor

    1988-01-01

    Within the scope of the Human Factors Reliability Benchmark Exercise, organised by the Joint Research Centre, Ispra, Italy, the Safety and Reliability Directorate (SRD) team has performed analysis of human factors in two different activities - a routine test and a non-routine operational transient. For both activities, an 'FMEA-like' analysis addressed the task, potential errors, and the factors which affect performance. For analysis of the non-routine activity, which involved a significant amount of cognitive processing, such as diagnosis and decision making, a new approach for qualitative analysis has been developed. Modelling has been performed using both event trees and fault trees, and examples are provided. Human error probabilities were estimated using the methods Absolute Probability Judgement (APJ), Human Cognitive Reliability (HCR), Human Error Assessment and Reduction Technique (HEART), Success-Likelihood Index Method (SLIM), Tecnica Empirica Stima Errori Operatori (TESEO), and Technique for Human Error Rate Prediction (THERP). A discussion is provided of the lessons learnt in the course of the exercise and unresolved difficulties in the assessment of human reliability. (author)

  4. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

    Human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. First, errors reported in the data are normal deficiencies observed in human performance, whereas failures are events modeled in probabilistic risk assessments (PRAs); not all errors cause failures, and not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failure to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical.
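    A time-reliability relationship of the kind critiqued here is typically a crew response-time distribution compared against the time available. A generic lognormal sketch (parameters hypothetical, not any specific calibrated HCR or THERP curve) looks like:

```python
import math

def lognormal_cdf(t, median, sigma):
    """CDF of a lognormal crew response-time distribution."""
    return 0.5 * (1.0 + math.erf(math.log(t / median) / (sigma * math.sqrt(2.0))))

def nonresponse_probability(t_available, median, sigma):
    """Time-reliability curve: probability the crew has not acted within the
    time available. Median and sigma here are illustrative placeholders."""
    return 1.0 - lognormal_cdf(t_available, median, sigma)
```

    By construction, when the available time equals the median response time the non-response probability is 0.5, and it falls off rapidly as more time becomes available, which is precisely the structure whose connection to actual failure likelihood the paper questions.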

  5. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    International Nuclear Information System (INIS)

    Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng

    2015-01-01

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction is suffering due to the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under unpractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to

  6. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feifei [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Liu, Yong, E-mail: yongliu@pku.edu.cn [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Institute of Water Sciences, Peking University, Beijing 100871 (China); Su, Han [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Zou, Rui [Tetra Tech, Inc., 10306 Eaton Place, Ste 340, Fairfax, VA 22030 (United States); Yunnan Key Laboratory of Pollution Process and Management of Plateau Lake-Watershed, Kunming 650034 (China); Guo, Huaicheng [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China)

    2015-05-15

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction is suffering due to the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under unpractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
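    The reliability notion defined above (the probability that a decision scheme remains feasible under uncertainty) can be illustrated with a rough Monte Carlo sketch. The scheme, its per-unit effectiveness distribution, and all numbers below are invented for illustration; the paper's ROMO model is far richer.

```python
import random

def scheme_reliability(eff_mean, eff_sd, units, required_reduction,
                       n_samples=20000, seed=1):
    """Monte Carlo reliability of a load-reduction scheme: the probability
    that the achieved reduction meets the requirement when the per-unit
    effectiveness is a stochastic (normal) parameter."""
    rng = random.Random(seed)
    feasible = sum(
        1 for _ in range(n_samples)
        if rng.gauss(eff_mean, eff_sd) * units >= required_reduction
    )
    return feasible / n_samples
```

    Tightening the required reduction lowers the estimated reliability, mirroring the paper's finding that decisions may be unreliable under impractical load-reduction requirements.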

  7. A mid-layer model for human reliability analysis: understanding the cognitive causes of human failure events

    International Nuclear Information System (INIS)

    Shen, Song-Hua; Chang, James Y.H.; Boring, Ronald L.; Whaley, April M.; Lois, Erasmia; Langfitt Hendrickson, Stacey M.; Oxstrand, Johanna H.; Forester, John Alan; Kelly, Dana L.; Mosleh, Ali

    2010-01-01

    The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  8. A Mid-Layer Model for Human Reliability Analysis: Understanding the Cognitive Causes of Human Failure Events

    Energy Technology Data Exchange (ETDEWEB)

    Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring; James Y. H. Chang; Song-Hua Shen; Ali Mosleh; Johanna H. Oxstrand; John A. Forester; Dana L. Kelly; Erasmia L. Lois

    2010-06-01

    The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method’s middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  9. Optimization of reliability centered predictive maintenance scheme for inertial navigation system

    International Nuclear Information System (INIS)

    Jiang, Xiuhong; Duan, Fuhai; Tian, Heng; Wei, Xuedong

    2015-01-01

    The goal of this study is to propose a reliability centered predictive maintenance scheme for a complex structure Inertial Navigation System (INS) with several redundant components. GO Methodology is applied to build the INS reliability analysis model—GO chart. Component Remaining Useful Life (RUL) and system reliability are updated dynamically based on the combination of component lifetime distribution functions, stress samples, and the system GO chart. Considering the redundant design in INS, maintenance time is based not only on component RUL, but also (and mainly) on the timing of when system reliability fails to meet the set threshold. The definition of component maintenance priority balances three factors: a component's importance to the system, its risk degree, and its detection difficulty. The Maintenance Priority Number (MPN) is introduced, which provides quantitative maintenance priority results for all components. A maintenance unit time cost model is built based on component MPNs, the component RUL predictive model, and maintenance intervals for the optimization of maintenance scope. The proposed scheme can serve as a reference for INS maintenance. Finally, three numerical examples prove the proposed predictive maintenance scheme is feasible and effective. - Highlights: • A dynamic PdM with a rolling horizon is proposed for INS with redundant components. • GO Methodology is applied to build the system reliability analysis model. • A concept of MPN is proposed to quantify the maintenance sequence of components. • An optimization model is built to select the optimal group of maintenance components. • The optimization goal is minimizing the cost of maintaining system reliability
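    The Maintenance Priority Number is described as balancing component importance, risk degree, and detection difficulty. A minimal sketch, assuming (by analogy with a classic FMEA risk priority number) that the three factors are scored and multiplied, might look like this; the scoring scale and the product form are assumptions, not the paper's exact definition.

```python
def maintenance_priority_number(importance, risk, detectability):
    """Hypothetical MPN: product of a component's importance to the system,
    its risk degree, and its detection difficulty (each scored, e.g., 1-10)."""
    return importance * risk * detectability

def rank_components(components):
    """Order components by descending MPN to obtain a maintenance sequence."""
    return sorted(
        components,
        key=lambda c: maintenance_priority_number(
            c["importance"], c["risk"], c["detectability"]),
        reverse=True,
    )
```

    For instance, a hypothetical gyroscope scored (9, 7, 6) outranks an accelerometer scored (6, 5, 4), so it would be maintained first.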

  10. Reliability of supply in competitive electricity markets: The Nordic electricity Market

    International Nuclear Information System (INIS)

    Singh, Balbir

    2005-12-01

    An overview of the current regulation and performance of the network utilities with respect to the reliability of supply across Europe in general indicates wide variation. On the regional level, the situation in the Nordic market is no exception. Can the variation in reliability of supply in the Nordic region be explained by differences in regulatory frameworks in the Nordic countries, and is it possible to draw any best-practice lessons for other countries and regions? The Norwegian regulation and performance with respect to the reliability criterion is encouraging; however, it must be emphasized that the Norwegian experience with reliability regulation in its current form covers a period of 3 years, too short a period to evaluate the Norwegian model. A closer examination of the Norwegian model reveals a dynamic trade-off in reliability performance which, if permanent, may endanger the reliability of supply in the long run. Last, but not least important, is the criterion that the choice of regulation should be based on a careful social cost-benefit analysis of the regulatory model, in which both the costs incurred by the regulatory agencies and the compliance costs incurred by the regulated utilities are included. A preliminary analysis of regulatory agencies in the Nordic market indicates that the Norwegian model of network regulation is quite resource intensive. While it is premature to draw conclusions about the national regulatory mechanisms, the Nordic cross-border regulation through voluntary arrangements under the auspices of NORDEL provides a good example of an arrangement that is useful when implementation of a formal regulatory regime across different jurisdictions is not possible

  11. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. The method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects from a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is shown that this method is capable of accurately estimating the remaining number of software defects of the on-demand type that directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining it is proposed
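    An NHPP-based SRGM of the kind described can be sketched with the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)), where a is the expected total number of defects and b the detection rate. Whether this is the exact model variant used in the paper is not stated, so treat it as a representative example.

```python
import math

def expected_failures(a, b, t):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def remaining_defects(a, b, t):
    """Expected number of defects still latent after testing up to time t."""
    return a - expected_failures(a, b, t)

def conditional_reliability(a, b, t, x):
    """R(x | t): probability of no failure in (t, t + x], given testing to t,
    equal to exp(-(m(t + x) - m(t))) for an NHPP."""
    return math.exp(-(expected_failures(a, b, t + x) - expected_failures(a, b, t)))
```

    In practice a and b would be estimated from observed failure times (the paper does this with Bayesian inference); here they are free parameters for illustration.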

  12. Reliability analysis using network simulation

    International Nuclear Information System (INIS)

    Engi, D.

    1985-01-01

    The models that can be used to provide estimates of the reliability of nuclear power systems operate at many different levels of sophistication. The least sophisticated models treat failure processes that entail only time-independent phenomena (such as demand failure). More advanced models treat processes that also include time-dependent phenomena such as run failure and possibly repair. However, many of these dynamic models are deficient in some respects, either because they disregard the time-dependent phenomena that cannot be expressed in closed-form analytic terms or because they treat these phenomena in quasi-static terms. The next level of modeling requires a dynamic approach that incorporates not only procedures for treating all significant time-dependent phenomena but also procedures for treating these phenomena when they are conditionally linked or characterized by arbitrarily selected probability distributions. The required level of sophistication is provided by a dynamic, Monte Carlo modeling approach. A computer code that uses this approach is Q-GERT (Graphical Evaluation and Review Technique - with Queueing), and the present study has demonstrated the feasibility of using Q-GERT for modeling time-dependent, unconditionally and conditionally linked phenomena that are characterized by arbitrarily selected probability distributions
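    The dynamic Monte Carlo idea can be illustrated in miniature by sampling an arbitrarily selected failure-time distribution (here Weibull) and estimating mission unreliability, then comparing with the closed form. This is a toy stand-in for the approach, not Q-GERT's actual network modeling language.

```python
import math
import random

def weibull_sample(rng, shape, scale):
    """Inverse-transform sample of a Weibull(shape, scale) time to failure."""
    u = rng.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def mc_unreliability(shape, scale, mission, n=50000, seed=2):
    """Monte Carlo probability that the component fails before mission end."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if weibull_sample(rng, shape, scale) <= mission)
    return fails / n

def analytic_unreliability(shape, scale, mission):
    """Closed-form Weibull CDF, used here to cross-check the simulation."""
    return 1.0 - math.exp(-((mission / scale) ** shape))
```

    The Monte Carlo estimate converges on the closed form; the payoff of the simulation approach is that it keeps working when conditional links and repairs make a closed form unavailable.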

  13. Human reliability data collection and modelling

    International Nuclear Information System (INIS)

    1991-09-01

    The main purpose of this document is to review and outline the current state of the art of Human Reliability Assessment (HRA) as used for the quantitative assessment of the safe and economical operation of nuclear power plants. Another objective is to consider Human Performance Indicators (HPI), which can alert plant managers and regulators to departures from states of normal and acceptable operation. These two objectives are met in the three sections of this report. The first objective has been divided into two areas, based on the location of the human actions being considered. That is, the modelling and data collection associated with control room actions are addressed first, in chapter 1, while actions outside the control room (including maintenance) are addressed in chapter 2. Both chapters 1 and 2 present a brief outline of the current status of HRA for these areas and major outstanding issues. Chapter 3 discusses HPI. Such performance indicators can signal, at various levels, changes in factors which influence human performance. The final section of this report consists of papers presented by the participants of the Technical Committee Meeting. A separate abstract was prepared for each of these papers. Refs, figs and tabs

  14. Modeling patients' acceptance of provider-delivered e-health.

    Science.gov (United States)

    Wilson, E Vance; Lankton, Nancy K

    2004-01-01

    Health care providers are beginning to deliver a range of Internet-based services to patients; however, it is not clear which of these e-health services patients need or desire. The authors propose that patients' acceptance of provider-delivered e-health can be modeled in advance of application development by measuring the effects of several key antecedents to e-health use and applying models of acceptance developed in the information technology (IT) field. This study tested three theoretical models of IT acceptance among patients who had recently registered for access to provider-delivered e-health. An online questionnaire administered items measuring perceptual constructs from the IT acceptance models (intrinsic motivation, perceived ease of use, perceived usefulness/extrinsic motivation, and behavioral intention to use e-health) and five hypothesized antecedents (satisfaction with medical care, health care knowledge, Internet dependence, information-seeking preference, and health care need). Responses were collected and stored in a central database. All tested IT acceptance models performed well in predicting patients' behavioral intention to use e-health. Antecedent factors of satisfaction with provider, information-seeking preference, and Internet dependence uniquely predicted constructs in the models. Information technology acceptance models provide a means to understand which aspects of e-health are valued by patients and how this may affect future use. In addition, antecedents to the models can be used to predict e-health acceptance in advance of system development.

  15. Constructing the Best Reliability Data for the Job

    Science.gov (United States)

    Kleinhammer, R. K.; Kahn, J. C.

    2014-01-01

    Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality, and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models, to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis, and hopefully a better final decision.

  16. Constructing the "Best" Reliability Data for the Job

    Science.gov (United States)

    DeMott, D. L.; Kleinhammer, R. K.

    2014-01-01

    Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality, and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models, to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis, and hopefully a better final decision.
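    One simple way to combine generic sources into a composite analog, consistent in spirit with (but not necessarily identical to) the method described, is an applicability-weighted average of candidate failure rates. The weighting scheme below is an assumption for illustration only.

```python
def composite_failure_rate(sources):
    """Applicability-weighted composite of generic failure rates.
    Each source is a (failure_rate, applicability_weight) pair, where the
    weight reflects how closely the source equipment and environment match
    the item being assessed (hypothetical scale, e.g. 0-1)."""
    total_weight = sum(w for _, w in sources)
    return sum(rate * w for rate, w in sources) / total_weight
```

    Highly applicable sources then dominate the composite: weighting a close-analog rate of 1e-6/h three times as heavily as a loose-analog rate of 5e-6/h pulls the composite toward the close analog.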

  17. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
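    A minimal moment-based reliability calculation of the kind underlying such methods, a first-order second-moment index (far simpler than the paper's perturbation/Edgeworth machinery), can be sketched for a normal resistance-minus-load limit state:

```python
import math

def reliability_index(mu_r, sd_r, mu_s, sd_s):
    """FOSM reliability index beta for g = R - S with independent normal
    resistance R and load S: beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2)."""
    return (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)

def failure_probability(beta):
    """P_f = Phi(-beta), computed via the error function."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
```

    For hypothetical moments mu_R = 500, sd_R = 40, mu_S = 350, sd_S = 30 (any consistent stress units), beta = 3 and the failure probability is about 1.35e-3, which a Monte Carlo check of the same limit state would reproduce.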

  18. Influence of reliability of the relay protection to the whole reliability of electric power systems

    International Nuclear Information System (INIS)

    Stojanovski, Ljupcho I.

    2001-01-01

    The influence of the reliability of relay protection elements has rarely been taken into consideration in analyses of the reliability of electric power systems; in other words, such analyses assume that the reliability of the protection equals one. In this work, by modelling the individual types of protection applied to the elements of high-voltage systems, the influence of relay protection reliability on the total reliability of high-voltage systems is calculated. (Author)

  19. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    Science.gov (United States)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates, and some qualification of the data source's applicability to the current system should be made. Accurately characterizing the applicability and quality of reliability data under these circumstances is crucial to developing model estimates that support confident decisions on design changes and trade studies. This presentation demonstrates a data-source classification method that ranks reliability data according to its applicability and quality relative to a new launch vehicle. The method accounts for similarities and dissimilarities in source and application, as well as operating environments such as vibration, acoustic regime, and shock. The classification is followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  20. Reliability and radiation tolerance of robots for nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Lauridsen, K [Risoe National Lab. (Denmark); Decreton, M [SCK.CEN (Belgium); Seifert, C C [Siemens AG (Germany); Sharp, R [AEA Technology (United Kingdom)

    1996-10-01

    The reliability of a robot for nuclear applications will be affected by environmental factors such as dust, water, vibrations, heat and, in particular, ionising radiation. The present report describes the work carried out in a project addressing the reliability and radiation tolerance of such robots. A widely representative range of components and materials has been radiation tested, and the test results have been collated in a database along with data provided by the participants from earlier work and data acquired from other sources. A radiation effects guide has been written for use by designers of electronic equipment for robots. A generic reliability model has been set up together with generic failure strategies, forming the basis for specific reliability modelling carried out in other projects. Modelling tools have been examined and developed for the prediction of the performance of electronic circuits subjected to radiation. Reports have been produced dealing with the prediction and detection of upcoming failures in electronic systems. Operational experience from the use of robots in radiation work in various contexts has been compiled in a report, and another report has been written on cost/benefit considerations about the use of robots. The possible impact of robots on the safety of the surrounding plant has also been considered and reported. (au) 16 ills., 236 refs.

  1. Reliability and radiation tolerance of robots for nuclear applications

    International Nuclear Information System (INIS)

    Lauridsen, K.; Decreton, M.; Seifert, C.C.; Sharp, R.

    1996-10-01

    The reliability of a robot for nuclear applications will be affected by environmental factors such as dust, water, vibrations, heat and, in particular, ionising radiation. The present report describes the work carried out in a project addressing the reliability and radiation tolerance of such robots. A widely representative range of components and materials has been radiation tested, and the test results have been collated in a database along with data provided by the participants from earlier work and data acquired from other sources. A radiation effects guide has been written for use by designers of electronic equipment for robots. A generic reliability model has been set up together with generic failure strategies, forming the basis for specific reliability modelling carried out in other projects. Modelling tools have been examined and developed for the prediction of the performance of electronic circuits subjected to radiation. Reports have been produced dealing with the prediction and detection of upcoming failures in electronic systems. Operational experience from the use of robots in radiation work in various contexts has been compiled in a report, and another report has been written on cost/benefit considerations about the use of robots. The possible impact of robots on the safety of the surrounding plant has also been considered and reported. (au) 16 ills., 236 refs

  2. Reliability analysis based on the losses from failures.

    Science.gov (United States)

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with them. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed: branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
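
The linear-combination rule for the expected loss given failure can be sketched directly; the mode probabilities and loss values below are illustrative assumptions:

```python
def expected_loss_given_failure(mode_probs, mode_losses):
    """Expected loss given failure for mutually exclusive failure modes:
    a linear combination of the per-mode expected losses, scaled by the
    conditional probabilities with which each mode initiates failure."""
    assert abs(sum(mode_probs) - 1.0) < 1e-9, "modes must be exhaustive"
    return sum(p * c for p, c in zip(mode_probs, mode_losses))

# Three hypothetical failure modes: minor leak, rupture, catastrophic failure
loss = expected_loss_given_failure([0.6, 0.3, 0.1], [1000.0, 5000.0, 20000.0])
```

A system whose dominant failure mode carries a small loss can thus have lower expected losses than a more reliable system whose rare failures are expensive, which is the paper's central point.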

  3. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane

    DEFF Research Database (Denmark)

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B.

    2014-01-01

    Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the present study was … subjects in whom the hip, knee and ankle joints were analysed. Flexion/extension angles as well as hip abduction/adduction closely resembled those obtained from the marker-based system. However, the internal/external rotations, knee abduction/adduction and ankle inversion/eversion were less reliable.

  4. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance.

    Science.gov (United States)

    Raymond, G M; Bassingthwaighte, J B

    This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high-concentration set from aspirin overdoses; a medium-concentration set from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first-order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave a reliable estimate of the Michaelis constant only for the medium-level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model
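
Why model 1's first-order rates disagreed across data sets follows directly from Michaelis-Menten kinetics: the apparent rate constant depends on concentration. A minimal sketch, using the combined-fit Km = 18 mg/L quoted in the abstract and a placeholder Vmax:

```python
def apparent_rate_constant(vmax, km, c):
    """Apparent first-order rate constant under M-M elimination dC/dt = -Vmax*C/(Km+C)."""
    return vmax / (km + c)

def mm_elimination(c0, vmax, km, dt=0.01, t_end=24.0):
    """Euler simulation of M-M elimination; returns (time, concentration) pairs."""
    c, t, trace = c0, 0.0, [(0.0, c0)]
    while t < t_end:
        c += dt * (-vmax * c / (km + c))
        t += dt
        trace.append((t, c))
    return trace

KM = 18.0    # mg/L, combined-data estimate quoted in the abstract
VMAX = 30.0  # mg/L/h, placeholder value for illustration only
k_overdose = apparent_rate_constant(VMAX, KM, 300.0)  # high-concentration set
k_low_dose = apparent_rate_constant(VMAX, KM, 3.0)    # low-dose set
trace = mm_elimination(c0=300.0, vmax=VMAX, km=KM)
```

At overdose concentrations elimination is nearly zero-order (small apparent k), while at low doses it is nearly first-order with a much larger k, so first-order fits to each data set in isolation cannot agree.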

  5. Reliability Prediction Approaches For Domestic Intelligent Electric Energy Meter Based on IEC62380

    Science.gov (United States)

    Li, Ning; Tong, Guanghua; Yang, Jincheng; Sun, Guodong; Han, Dongjun; Wang, Guixian

    2018-01-01

    The reliability of the intelligent electric energy meter is a crucial issue considering its large-scale application and the safety of the national intelligent grid. This paper develops a reliability prediction procedure for the domestic intelligent electric energy meter according to IEC 62380, in particular the determination of model parameters under domestic working conditions. A case study is provided to show the effectiveness and validity of the approach.

  6. Joint interval reliability for Markov systems with an application in transmission line reliability

    International Nuclear Information System (INIS)

    Csenki, Attila

    2007-01-01

    We consider Markov reliability models whose finite state space is partitioned into the set of up states U and the set of down states D. Given a collection of k disjoint time intervals I_l = [t_l, t_l + x_l], l = 1, ..., k, the joint interval reliability is defined as the probability of the system being in U for all time instances in I_1 ∪ ... ∪ I_k. A closed-form expression is derived here for the joint interval reliability for this class of models. The result is applied to power transmission lines in a two-state fluctuating environment. We use the Linux versions of the free packages Maxima and Scilab in our implementation for symbolic and numerical work, respectively
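
A numerical version of this quantity can be sketched as follows, assuming NumPy/SciPy are available: between intervals the full generator Q drives the evolution, while within each interval only the sub-generator restricted to U applies, so that paths leaving U are discarded.

```python
import numpy as np
from scipy.linalg import expm

def joint_interval_reliability(Q, p0, up, intervals):
    """P(system is in an 'up' state for every t in the union of the intervals).

    Q: CTMC generator matrix; p0: initial distribution; up: indices of U;
    intervals: sorted list of (start, length) pairs, pairwise disjoint."""
    Q, p = np.asarray(Q, float), np.asarray(p0, float)
    now = 0.0
    for t, x in intervals:
        p = p @ expm(Q * (t - now))               # unrestricted evolution to t
        pU = p[up] @ expm(Q[np.ix_(up, up)] * x)  # survive within U on [t, t+x]
        p = np.zeros_like(p)
        p[up] = pU
        now = t + x
    return float(p.sum())

# Two-state line: failure rate 0.1/h, repair rate 1.0/h, initially up
Q = [[-0.1, 0.1], [1.0, -1.0]]
r1 = joint_interval_reliability(Q, [1.0, 0.0], [0], [(0.0, 2.0)])
r2 = joint_interval_reliability(Q, [1.0, 0.0], [0], [(0.0, 1.0), (2.0, 1.0)])
```

For a single interval starting at t = 0 this reduces to the ordinary reliability function, exp(-0.1·2) for the two-state example; allowing repairs during the gap between intervals (r2) still gives a smaller value than r1 here because up-time is demanded over a longer horizon.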

  7. Value-Added Models for Teacher Preparation Programs: Validity and Reliability Threats, and a Manageable Alternative

    Science.gov (United States)

    Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James

    2016-01-01

    High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…

  8. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waler, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1977-01-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure-rate parameter lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this paper is to present a methodology which can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate, lambda, simultaneously satisfies the probability statements P(lambda < 1.0×10⁻³) = 0.50 and P(lambda < 1.0×10⁻⁵) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure-rate percentiles illustrated above, one can use the induced negative-log gamma prior distribution which satisfies the probability statements P(R(t₀) < 0.99) = 0.50 and P(R(t₀) < 0.99999) = 0.95 for some operating time t₀. Also, the paper includes graphs for selected percentiles which assist an engineer in applying the methodology
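
Matching a gamma prior to two failure-rate percentiles reduces to a one-dimensional root search, because the percentile ratio depends only on the shape parameter. A sketch assuming SciPy is available, using the abstract's example percentiles:

```python
from scipy.stats import gamma
from scipy.optimize import brentq

def gamma_from_percentiles(x_lo, p_lo, x_hi, p_hi):
    """Find (shape, scale) so that P(lambda < x_lo) = p_lo and P(lambda < x_hi) = p_hi.
    The scale cancels in the percentile ratio, leaving a 1-D equation in the shape."""
    def ratio_err(a):
        return gamma.ppf(p_lo, a) / gamma.ppf(p_hi, a) - x_lo / x_hi
    a = brentq(ratio_err, 0.1, 10.0)  # bracket chosen to cover this example
    scale = x_lo / gamma.ppf(p_lo, a)
    return a, scale

# Engineer's belief: P(lambda < 1e-5) = 0.05 and P(lambda < 1e-3) = 0.50
a, scale = gamma_from_percentiles(1e-5, 0.05, 1e-3, 0.50)
```

The fitted (shape, scale) pair can then be used directly as the conjugate prior in the Bayesian update.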

  9. Two-terminal reliability of a mobile ad hoc network under the asymptotic spatial distribution of the random waypoint model

    International Nuclear Information System (INIS)

    Chen, Binchao; Phillips, Aaron; Matis, Timothy I.

    2012-01-01

    The random waypoint (RWP) mobility model is frequently used in describing the movement pattern of mobile users in a mobile ad hoc network (MANET). As the asymptotic spatial distribution of nodes under a RWP model exhibits central tendency, the two-terminal reliability of the MANET is investigated as a function of the source node location. In particular, analytical expressions for one and two hop connectivities are developed as well as an efficient simulation methodology for two-terminal reliability. A study is then performed to assess the effect of nodal density and network topology on network reliability.
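
The two-terminal reliability simulation can be sketched as a Monte Carlo over random geometric graphs. The central-tendency placement below is a crude rejection-sampling stand-in for the asymptotic RWP density, and all parameter values are illustrative:

```python
import random
from collections import deque

def two_terminal_reliability(n_relays, radio_range, src, dst, trials=500, seed=1):
    """Monte Carlo estimate of P(src and dst are connected) in a geometric MANET
    on the unit square, with relay nodes denser near the centre."""
    rng = random.Random(seed)

    def central_point():
        # Accept (x, y) with probability 1 - r^2, r = distance to the centre
        while True:
            x, y = rng.random(), rng.random()
            r2 = (2 * x - 1) ** 2 + (2 * y - 1) ** 2
            if rng.random() < max(0.0, 1.0 - r2):
                return (x, y)

    hits = 0
    for _ in range(trials):
        nodes = [src, dst] + [central_point() for _ in range(n_relays)]
        seen, queue = {0}, deque([0])
        while queue:  # BFS over the geometric graph (edge iff within radio range)
            i = queue.popleft()
            xi, yi = nodes[i]
            for j, (xj, yj) in enumerate(nodes):
                if j not in seen and (xi - xj) ** 2 + (yi - yj) ** 2 <= radio_range ** 2:
                    seen.add(j)
                    queue.append(j)
        hits += 1 in seen
    return hits / trials

rel = two_terminal_reliability(30, 0.3, src=(0.1, 0.5), dst=(0.9, 0.5))
```

Moving the source toward a corner, where the asymptotic density is low, lowers the estimate, which is the source-location effect the paper studies.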

  10. Reliability data book

    International Nuclear Information System (INIS)

    Bento, J.P.; Boerje, S.; Ericsson, G.; Hasler, A.; Lyden, C.O.; Wallin, L.; Poern, K.; Aakerlund, O.

    1985-01-01

    The main objective of the report is to improve failure data for reliability calculations forming part of safety analyses for Swedish nuclear power plants. The work is based primarily on evaluations of failure reports as well as information provided by the operation and maintenance staff of each plant. The report presents charts of reliability data for pumps, valves, control rods/rod drives, electrical components, and instruments. (L.E.)

  11. A probabilistic approach to safety/reliability of space nuclear power systems

    International Nuclear Information System (INIS)

    Medford, G.; Williams, K.; Kolaczkowski, A.

    1989-01-01

    An ongoing effort is investigating the feasibility of using probabilistic risk assessment (PRA) modeling techniques to construct a living model of a space nuclear power system. This is being done in conjunction with a traditional reliability and survivability analysis of the SP-100 space nuclear power system. The initial phase of the project consists of three major parts with the overall goal of developing a top-level system model and defining initiating events of interest for the SP-100 system. The three major tasks were performing a traditional survivability analysis, performing a simple system reliability analysis, and constructing a top-level system fault-tree model. Each of these tasks and their interim results are discussed in this paper. Initial results from the study support the conclusion that PRA modeling techniques can provide a valuable design and decision-making tool for space reactors. The ability of the model to rank and calculate relative contributions from various failure modes allows design optimization for maximum safety and reliability. Future efforts in the SP-100 program will see data development and quantification of the model to allow parametric evaluations of the SP-100 system. Current efforts have shown the need for formal data development and test programs within such a modeling framework

  12. Reliability prediction of large fuel cell stack based on structure stress analysis

    Science.gov (United States)

    Liu, L. F.; Liu, B.; Wu, C. W.

    2017-09-01

    The aim of this paper is to improve the reliability of the Proton Exchange Membrane Fuel Cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and contact stress. The component contact stress is a random variable because it is affected by many uncertain factors in the production and clamping processes. We have investigated the influence of the parameter variation coefficient on the probability distribution of contact stress using the equivalent stiffness model and the first-order second-moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained by the stress-strength interference model. To achieve this optimal contact stress between the contacting components, the component thickness and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of stack structures for a high reliability of the fuel cell stack.
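
For independent normal stress and strength, the stress-strength interference model has a closed form, R = Φ(β) with β = (μ_strength − μ_stress)/√(σ_strength² + σ_stress²). A sketch with illustrative contact-stress numbers, not values from the paper:

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference for independent normal variables:
    R = P(strength > stress) = Phi(beta)."""
    beta = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# Hypothetical gasket stress capability vs. applied clamping stress (MPa)
R = interference_reliability(mu_strength=2.0, sd_strength=0.2,
                             mu_stress=1.2, sd_stress=0.15)
```

Raising the stress variation coefficient (sd_stress/mu_stress) widens the interference region and lowers R, which is why the scatter of the clamping process matters for the stack design.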

  13. Structural reliability analysis based on the cokriging technique

    International Nuclear Information System (INIS)

    Zhao Wei; Wang Wei; Dai Hongzhe; Xue Guofeng

    2010-01-01

    Approximation methods are widely used in structural reliability analysis because they are simple to create and provide explicit functional relationships between the responses and variables instead of the implicit limit state function. Recently, the kriging method, a semi-parametric interpolation technique that can be used for deterministic optimization and structural reliability, has gained popularity. However, to fully exploit the kriging method, especially in high-dimensional problems, a large number of sample points must be generated to fill the design space, which can be very expensive and even impractical in practical engineering analysis. Therefore, in this paper a new method, cokriging, which is an extension of kriging, is proposed to calculate structural reliability. The cokriging approximation incorporates secondary information such as the values of the gradients of the function being approximated. This paper explores the use of the cokriging method for structural reliability problems by comparing it with the kriging method on some numerical examples. The results indicate that the cokriging procedure described in this work can generate approximation models with improved accuracy and efficiency for structural reliability problems and is a viable alternative to kriging.

  14. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)

  15. Characterization of reliability of spike timing in spinal interneurons during oscillating inputs

    DEFF Research Database (Denmark)

    Beierholm, Ulrik; Nielsen, Carsten D.; Ryge, Jesper

    2001-01-01

    The spike timing in rhythmically active interneurons in the mammalian spinal locomotor network varies from cycle to cycle. We tested the contribution from passive membrane properties to this variable firing pattern by measuring the reliability of spike timing, P, in interneurons in the isolated … In the analysis we used a leaky integrate and fire (LIF) model with a noise term added. The LIF model was able to reproduce the experimentally observed properties of P as well as the low-pass properties of the membrane. The LIF model enabled us to use the mathematical theory of nonlinear oscillators to analyze … that interneurons can respond with a high reliability of spike timing, but only by combining fast and slow oscillations is it possible to obtain a high reliability of firing during rhythmic locomotor movements. Theoretical analysis of the rotation number provided new insights into the mechanism for obtaining …
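
A leaky integrate-and-fire neuron with an added noise term, as used in the analysis, can be sketched as below. The oscillation frequency, noise level, and reliability measure (fraction of stimulus cycles containing at least one spike) are illustrative choices, not the paper's exact protocol:

```python
import math
import random

def lif_spike_times(freq_hz, amp, noise_sd, t_end=2.0, dt=1e-4, seed=0,
                    tau=0.02, v_th=1.0, v_reset=0.0, bias=0.9):
    """LIF neuron, tau*dV/dt = -V + bias + amp*sin(2*pi*f*t) + noise (Euler steps)."""
    rng = random.Random(seed)
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        drive = bias + amp * math.sin(2 * math.pi * freq_hz * t)
        v += (dt / tau) * (drive - v) + noise_sd * math.sqrt(dt / tau) * rng.gauss(0, 1)
        if v >= v_th:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

def cycle_reliability(spikes, freq_hz, t_end):
    """Fraction of stimulus cycles that contain at least one spike."""
    n_cycles = int(round(freq_hz * t_end))
    hit_cycles = {int(s * freq_hz) for s in spikes}
    return sum(c in hit_cycles for c in range(n_cycles)) / n_cycles

spikes = lif_spike_times(freq_hz=5.0, amp=0.3, noise_sd=0.05)
p = cycle_reliability(spikes, 5.0, 2.0)
```

With a subthreshold bias, spikes occur only near the depolarizing peaks of the oscillation, so their timing is locked to the stimulus and the reliability stays high despite the noise.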

  16. An Open Modelling Approach for Availability and Reliability of Systems - OpenMARS

    CERN Document Server

    Penttinen, Jussi-Pekka; Gutleber, Johannes

    2018-01-01

    This document introduces and gives the specification for OpenMARS, an open modelling approach for availability and reliability of systems. It supports the most common risk assessment and operation modelling techniques. Uniquely, OpenMARS allows combining and connecting models defined with different techniques, which ensures that a modeller has a high degree of freedom to describe the modelled system accurately, without the limitations imposed by any individual technique. Here the OpenMARS model definition is specified with a tool-independent tabular format, which supports managing models developed in a collaborative fashion. Our research originates in the Future Circular Collider (FCC) study, where we developed the unique features of our concept to model the availability and luminosity production of particle colliders. We were motivated to describe our approach in detail as we see potential further applications in performance and energy efficiency analyses of large scientific infrastructures or industrial processe...

  17. Structural reliability analysis applied to pipeline risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, M. [GL Industrial Services, Loughborough (United Kingdom); Mendes, Renato F.; Donato, Guilherme V.P. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Quantitative Risk Assessment (QRA) of pipelines requires two main components to be provided. These are models of the consequences that follow from some loss of containment incident, and models for the likelihood of such incidents occurring. This paper describes how PETROBRAS have used Structural Reliability Analysis for the second of these, to provide pipeline- and location-specific predictions of failure frequency for a number of pipeline assets. This paper presents an approach to estimating failure rates for liquid and gas pipelines, using Structural Reliability Analysis (SRA) to analyze the credible basic mechanisms of failure such as corrosion and mechanical damage. SRA is a probabilistic limit state method: for a given failure mechanism it quantifies the uncertainty in parameters to mathematical models of the load-resistance state of a structure and then evaluates the probability of load exceeding resistance. SRA can be used to benefit the pipeline risk management process by optimizing in-line inspection schedules, and as part of the design process for new construction in pipeline rights of way that already contain multiple lines. A case study is presented to show how the SRA approach has recently been used on PETROBRAS pipelines and the benefits obtained from it. (author)
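
The SRA idea of evaluating the probability of load exceeding resistance for a credible degradation mechanism can be sketched as a Monte Carlo on a corrosion limit state. All distributions and numbers below are illustrative assumptions, not PETROBRAS data:

```python
import random

def corrosion_failure_probability(years, trials=20_000, seed=42):
    """Monte Carlo on the limit state g = 0.8*wall - (d0 + rate*years):
    failure when a defect grows through 80% of the wall thickness."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        wall = rng.gauss(10.0, 0.3)        # wall thickness, mm
        d0 = abs(rng.gauss(1.0, 0.5))      # initial defect depth, mm
        rate = abs(rng.gauss(0.15, 0.05))  # corrosion growth rate, mm/year
        if d0 + rate * years > 0.8 * wall:
            failures += 1
    return failures / trials

pf_10 = corrosion_failure_probability(10)  # early in life
pf_40 = corrosion_failure_probability(40)  # late in life
```

Location-specific inputs (coating condition, soil corrosivity, in-line inspection findings) would replace the generic distributions, giving the pipeline- and location-specific failure frequencies the QRA needs.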

  18. Software reliability studies

    Science.gov (United States)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  19. Reliability analysis techniques in power plant design

    International Nuclear Information System (INIS)

    Chang, N.E.

    1981-01-01

    An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)

  20. Usage models in reliability assessment of software-based systems

    Energy Technology Data Exchange (ETDEWEB)

    Haapanen, P.; Pulkkinen, U. [VTT Automation, Espoo (Finland); Korhonen, J. [VTT Electronics, Espoo (Finland)

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.).
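
Generating statistically significant test cases from a usage model amounts to random walks over a Markov chain of user states. A minimal sketch with a hypothetical operator-interface profile (states and probabilities are invented for illustration):

```python
import random

def generate_test_cases(usage_model, start, end, n_cases, seed=7):
    """Draw test sequences from a Markov usage model.
    usage_model[state] = [(next_state, probability), ...], probabilities sum to 1."""
    rng = random.Random(seed)
    cases = []
    for _ in range(n_cases):
        state, path = start, [start]
        while state != end:
            r, acc = rng.random(), 0.0
            for nxt, prob in usage_model[state]:
                acc += prob
                if r < acc:
                    break
            state = nxt  # falls back to the last option on round-off
            path.append(state)
        cases.append(path)
    return cases

# Hypothetical usage profile of a plant operator interface
profile = {
    "Idle":      [("Monitor", 0.7), ("Configure", 0.3)],
    "Monitor":   [("Alarm", 0.1), ("Idle", 0.5), ("Exit", 0.4)],
    "Configure": [("Idle", 1.0)],
    "Alarm":     [("Monitor", 0.6), ("Exit", 0.4)],
}
cases = generate_test_cases(profile, "Idle", "Exit", n_cases=5)
```

Frequently used paths are exercised in proportion to their usage probability, which is what makes the resulting reliability estimate statistically meaningful.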