#### Sample records for reliability comparative analysis

1. Comparative Study of Reliability Analysis Methods for Discrete Bimodal Information

International Nuclear Information System (INIS)

The distribution of a response usually depends on the distribution of a variable. When the distribution of a variable has two different modes, the response also follows a distribution with two different modes. Most reliability analysis methods are insensitive to the number of modes but not to the type of distribution. However, because information in actual problems is often provided with two or more modes, it is important to estimate distributions with two or more modes. Recently, some reliability analysis methods have been suggested for bimodal distributions. In this paper, we review methods such as the Akaike information criterion (AIC) and the maximum entropy principle (ME) and compare them with Monte Carlo simulation (MCS) using mathematical examples with two different modes.

2. Comparative Study of Reliability Analysis Methods for Discrete Bimodal Information

Energy Technology Data Exchange (ETDEWEB)

Lim, Woochul; Jang, Junyong; Lee, Taehee [Hanyang Univ., Seoul (Korea, Republic of)]

2013-07-15

The distribution of a response usually depends on the distribution of a variable. When the distribution of a variable has two different modes, the response also follows a distribution with two different modes. Most reliability analysis methods are insensitive to the number of modes but not to the type of distribution. However, because information in actual problems is often provided with two or more modes, it is important to estimate distributions with two or more modes. Recently, some reliability analysis methods have been suggested for bimodal distributions. In this paper, we review methods such as the Akaike information criterion (AIC) and the maximum entropy principle (ME) and compare them with Monte Carlo simulation (MCS) using mathematical examples with two different modes.
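
As a point of reference for how an AIC comparison distinguishes unimodal from bimodal candidate distributions, here is a minimal Python sketch. The data are synthetic, and the mixture parameters are assumed known (standing in for an EM fit, which the paper does not spell out); nothing here comes from the study itself.

```python
import math, random

random.seed(0)
# Synthetic bimodal sample: equal-weight mixture of N(-3,1) and N(3,1)
data = [random.gauss(-3, 1) if random.random() < 0.5 else random.gauss(3, 1)
        for _ in range(500)]

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Model A: single normal with maximum-likelihood parameters
mu = sum(data) / len(data)
sd = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))
ll_single = sum(math.log(norm_pdf(x, mu, sd)) for x in data)

# Model B: two-component mixture (parameters assumed known here,
# in place of an EM fit, to keep the sketch short)
ll_mix = sum(math.log(0.5 * norm_pdf(x, -3, 1) + 0.5 * norm_pdf(x, 3, 1))
             for x in data)

# AIC = 2k - 2 ln L; lower is better
aic_single = 2 * 2 - 2 * ll_single   # k = 2 (mean, sd)
aic_mix    = 2 * 5 - 2 * ll_mix      # k = 5 (2 means, 2 sds, 1 weight)
print(aic_single, aic_mix)
```

Despite its larger parameter count, the mixture model scores a lower AIC on clearly bimodal data, which is the mechanism the paper relies on when ranking candidate distributions.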

3. Small nuclear power reactor emergency electric power supply system reliability comparative analysis

International Nuclear Information System (INIS)

This work presents an analysis of the reliability of the emergency power supply system of a small nuclear power reactor. Three different configurations are investigated and their reliability analyzed. The fault tree method is used as the main tool of analysis. The work includes a bibliographic review of emergency diesel generator reliability and a discussion of the design requirements applicable to emergency electrical systems. The influence of common cause failures is considered using the beta factor model. Operator action is considered using human failure probabilities. A parametric analysis shows the strong dependence between reactor safety and the loss of offsite electric power supply. It is also shown that common cause failures can be a major contributor to system unreliability. (author)
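
To illustrate why the beta factor model makes common cause failures dominant in redundant systems, here is a minimal sketch. The demand failure probability and beta value are invented for illustration and are not taken from the study.

```python
# Beta-factor common-cause model for a 2-train redundant system
# (illustrative numbers, not from the study)
q = 0.01      # failure-on-demand probability of one diesel generator
beta = 0.1    # fraction of failures assumed to be common cause

q_indep = (1 - beta) * q          # independent part of each train's failures
q_ccf   = beta * q                # common-cause part (fails both trains at once)

# 1-out-of-2 system fails if both trains fail independently,
# or if a single common-cause event takes out both
q_system = q_indep ** 2 + q_ccf
print(q_system)
```

With these numbers the common-cause term (1e-3) is more than an order of magnitude larger than the independent double-failure term (8.1e-5), which is exactly the effect the abstract reports.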

4. Orbiter Autoland reliability analysis

Science.gov (United States)

Welch, D. Phillip

1993-01-01

The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.
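
The reliability block diagram technique mentioned above reduces to two combination rules: series blocks multiply reliabilities, parallel (redundant) blocks multiply unreliabilities. The sketch below applies them to a hypothetical landing chain; the stage names and reliability values are invented, not drawn from the report.

```python
def series(*r):
    """All blocks in the chain must work."""
    p = 1.0
    for x in r:
        p *= x
    return p

def parallel(*r):
    """At least one redundant block must work."""
    p = 1.0
    for x in r:
        p *= (1 - x)
    return 1 - p

# Hypothetical chain: sensors -> redundant flight computer pair -> actuators
r_system = series(0.99, parallel(0.95, 0.95), 0.98)
print(r_system)
```

Note how the redundant pair (each 0.95) contributes 0.9975 to the product, so the single-string sensor and actuator blocks end up limiting the system, mirroring the report's observation that the manual mode is hardware-limited.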

5. Power electronics reliability analysis.

Energy Technology Data Exchange (ETDEWEB)

Smith, Mark A.; Atcitty, Stanley

2009-12-01

This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
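
The first approach described above, computing reliability metrics directly from field maintenance data, can be sketched in a few lines. The log entries, observation window, and failure causes below are all invented for illustration.

```python
# Toy maintenance log: (cause, downtime_hours) per failure over an
# observation window -- illustrative data, not from the report
window_hours = 8760.0  # one year of operation
events = [("fan", 4.0), ("IGBT", 36.0), ("fan", 6.0), ("control board", 12.0)]

downtime = sum(h for _, h in events)
uptime = window_hours - downtime
mtbf = uptime / len(events)            # mean time between failures
availability = uptime / window_hours

# Downtime attributed per cause, to rank improvement targets
by_cause = {}
for cause, h in events:
    by_cause[cause] = by_cause.get(cause, 0.0) + h
print(mtbf, availability, by_cause)
```

Ranking the per-cause downtime totals is the step that tells a utility or manufacturer where a reliability improvement buys the most, which is the report's stated motivation for collecting field data in the first place.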

6. Human reliability analysis

International Nuclear Information System (INIS)

The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. The treatment draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It provides a history of human reliability analysis and includes examples of the application of the systems approach.

7. Multidisciplinary System Reliability Analysis

Science.gov (United States)

Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

2001-01-01

The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.
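
The equivalence the study exploits can be made concrete with a toy example: for a 1D chain of two-node elements, the element coefficient is c/L with c = E·A for an axial bar and c = k·A for heat conduction, so one solver serves both disciplines. Everything below (values, element counts) is an illustrative assumption, not the NESSUS formulation.

```python
# One solve routine reused across disciplines: a fixed-free 1D chain of
# two-node elements loaded only at the free end carries the same internal
# "flow" (force or heat) through every element, so each node value is the
# previous one plus load / (c/L). Illustrative values only.

def solve_chain(c_over_L, load_at_free_end):
    u = [0.0]  # node 0 is fixed (zero displacement / reference temperature)
    for c in c_over_L:
        u.append(u[-1] + load_at_free_end / c)
    return u

# Structural reading: c/L = E*A/L per element, end force F = 1000
disp = solve_chain([2.0e6, 2.0e6], load_at_free_end=1.0e3)
# Thermal reading: c/L = k*A/L per element, end heat flow Q = 200
temp = solve_chain([50.0, 50.0], load_at_free_end=200.0)
print(disp, temp)
```

The same routine returns end displacement 1e-3 in the structural reading and end temperature rise 8.0 in the thermal reading, which is the "same matrices, different physical labels" idea behind reusing structural routines for other disciplines.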

8. Risk analysis and reliability

International Nuclear Information System (INIS)

Mathematical foundations of risk analysis are addressed. The importance of having the same probability space in order to compare different experiments is pointed out. Then the following topics are discussed: consequences as random variables with infinite expectations; the phenomenon of rare events; series-parallel systems and different kinds of randomness that could be imposed on such systems; and the problem of consensus of estimates of expert opinion

9. Integrating reliability analysis and design

International Nuclear Information System (INIS)

This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems

10. Reliability analysis of containment strength

International Nuclear Information System (INIS)

The Sequoyah and McGuire ice condenser containment vessels were designed to withstand pressures in the range of 12 to 15 psi. Since pressures of the order of 28 psi were recorded during the Three Mile Island incident, a need exists to more accurately define the strength of these vessels. A best estimate and uncertainty assessment of the strength of the containments were performed by applying the second moment reliability method. Material and geometric properties were supplied by the plant owners. A uniform static internal pressure was assumed. Gross deformation was taken as the failure criterion. Both approximate and finite element analyses were performed on the axisymmetric containment structure and the penetrations. The predicted strength for the Sequoyah vessel is 60 psi with a standard deviation of 8 psi. For McGuire, the mean and standard deviations are 84 psi and 12 psi, respectively. In an Addendum, results by others are summarized and compared and a preliminary dynamic analysis is presented
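
Using the second moment quantities the abstract reports for Sequoyah (mean strength 60 psi, standard deviation 8 psi), a back-of-envelope failure probability against an assumed deterministic 28 psi demand (the TMI-recorded level) can be sketched as follows. The normality assumption and the treatment of the demand as deterministic are ours, not the study's.

```python
import math

# Second-moment estimate of P(strength < demand) for the Sequoyah vessel,
# assuming normally distributed strength and a fixed 28 psi demand
mu_R, sd_R = 60.0, 8.0   # predicted strength, psi (from the abstract)
demand = 28.0            # psi, assumed deterministic here

beta = (mu_R - demand) / sd_R                 # reliability (safety) index
p_f = 0.5 * math.erfc(beta / math.sqrt(2))    # = Phi(-beta) for a normal
print(beta, p_f)
```

A reliability index of 4 corresponds to a failure probability of roughly 3e-5 under these assumptions, which is why the study can report the vessels as comfortably stronger than the TMI pressure despite the design-basis figures of 12 to 15 psi.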

11. Reliability Analysis of Wind Turbines

DEFF Research Database (Denmark)

Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

2008-01-01

In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced ...

12. Reliability analysis of Angra I safety systems

International Nuclear Information System (INIS)

An extensive reliability analysis of some safety systems of Angra I is presented. The fault tree technique, which has been successfully used in most reliability studies of nuclear safety systems performed to date, is employed. Results of a quantitative determination of the unavailability of the accumulator and containment spray injection systems are presented. These results are also compared to those reported in WASH-1400. (E.G.)

13. Analysis on reliability aspects of wind power

Energy Technology Data Exchange (ETDEWEB)

Mabel, M. Carolin [Department of Electrical and Electronics Engineering, St. Xavier's Catholic College of Engineering, Chunkankadai, Tamilnadu 629003 (India); Raj, R. Edwin [Department of Mechanical Engineering, St. Xavier's Catholic College of Engineering, Chunkankadai, Tamilnadu 629003 (India); Fernandez, E. [Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee, Uttranchal 247667 (India)

2011-02-15

The analysis of reliability aspects of wind power carries more significance than in conventional power generation systems. In spite of the intermittent and variable nature of wind energy, it can be usefully tapped to generate electrical power for meeting part of the energy demand of the population. The present paper undertakes an analysis of the reliability aspects of wind energy conversion systems and applies it to seven wind farms in the Muppandal region in India. For the purpose of analysis, two reliability indices are used: one is the period during which the expected wind energy is not supplied, and the other is the loss of load expectation index, which analyzes the degree of matching of wind farm power generation with the load model. The study also investigates the effect of increasing the hub height of wind energy conversion systems. (author)
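
The two indices named above can be sketched over an hourly profile: count the hours in which expected wind output falls short of load (a loss-of-load-expectation style index) and sum the shortfall energy. The hourly values below are invented and the paper's exact index definitions may differ.

```python
# Hourly expected wind farm output vs. load model -- illustrative values
wind_mw = [12, 30, 45, 8, 0, 22, 50, 18]
load_mw = [20, 25, 40, 25, 15, 20, 35, 30]

# Hours in which expected wind energy is not fully supplied
lole_hours = sum(1 for w, l in zip(wind_mw, load_mw) if w < l)
# Expected energy not supplied over the window, MWh
eens_mwh = sum(max(l - w, 0) for w, l in zip(wind_mw, load_mw))
print(lole_hours, eens_mwh)
```

Raising the hub height shifts the wind_mw profile upward (stronger winds aloft), so both indices would fall, which is presumably why the study examines that design change.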

14. Computational methods for efficient structural reliability and reliability sensitivity analysis

Science.gov (United States)

Wu, Y.-T.

1993-01-01

This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
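
A plain (non-adaptive) importance-sampling estimate of a small failure probability shows the core mechanism: sample from a density centred on the failure region and reweight by the ratio of densities. This is a simplified stand-in for the adaptive scheme described in the paper; the target P(X > 3) for a standard normal X is chosen only because it has a known answer.

```python
import math, random

random.seed(1)
N = 200_000
est = 0.0
for _ in range(N):
    x = random.gauss(3.0, 1.0)            # sample from N(3,1), centred on failure
    if x > 3.0:
        # weight = phi(x)/q(x) = exp(-x^2/2) / exp(-(x-3)^2/2) = exp(4.5 - 3x)
        est += math.exp(4.5 - 3.0 * x)
est /= N

exact = 0.5 * math.erfc(3.0 / math.sqrt(2))   # Phi(-3), about 1.35e-3
print(est, exact)
```

Crude Monte Carlo would see only about 270 failures in 200 000 draws; shifting the sampling density makes roughly half the draws land in the failure region, which is the over-sampling reduction the AIS method pursues adaptively.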

15. Analysis tools for reliability databases

International Nuclear Information System (INIS)

This report outlines the work performed at Risoe, under contract with the Swedish Nuclear Power Inspectorate, with the goal of developing analysis tools for reliability databases that suit the information needs of the users of the TUD (Reliability/Maintenance/Operation) database, used at 12 nuclear power plants in Sweden and 2 in Finland. The TUD database stores operating experience data from failure reports that describe failures and repairs on a large part of the equipment of the plants. Furthermore, the TUD contains background data on operating conditions, design, maintenance and test programs for the equipment, and registers the changes in operating modes of each plant. Since 1993 the TUD has been structured as a multi-user relational database. The analysis tools developed in this work are the result of the following analysis steps: 1. investigate and select data; 2. make simple plots of the data; 3. analyze the data with statistical methods, including analysis of trend and dependency; 4. combine and implement these three steps in a prototype RDB with a simple user interface. The resulting user interface of the prototype RDB guides the user through the following steps: 4a. build a population of sockets (sub-component or component level); 4b. select the time window and the failure events; 4c. select the analysis tools to be incorporated in the report; 4d. adjust the default report and print it. The prototype RDB developed in this work shows that, when the proper analysis tool is installed, the TUD database can help its users identify possible common cause failures and trends in reliability and costs of a population of component sockets.
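
One standard tool for the trend-analysis step on failure-report timestamps is the Laplace trend test; whether the Risoe prototype used exactly this test is not stated in the record, so treat it as a representative example. The timestamps below are invented.

```python
import math

def laplace_u(times, T):
    """Laplace trend statistic for failure times observed in (0, T].

    U near 0: no trend (homogeneous Poisson process);
    clearly positive: failures arriving faster (reliability decay);
    clearly negative: reliability growth.
    """
    n = len(times)
    return (sum(times) - n * T / 2) / (T * math.sqrt(n / 12.0))

# Four failures clustered late in a 10 000-hour window -- illustrative data
u = laplace_u([6000, 7000, 8000, 9000], T=10_000)
print(u)
```

Here U comes out positive (about 1.73, i.e. sqrt(3)), flagging a possible deterioration trend in the component population, exactly the kind of signal step 3 of the analysis chain is meant to surface.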

16. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

Science.gov (United States)

Rais-Rohani, Masoud

2001-01-01

This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.

17. Software reliability analysis in probabilistic risk analysis

International Nuclear Information System (INIS)

Probabilistic Risk Analysis (PRA) is a tool which can reveal shortcomings of the NPP design in general. PRA analysts have not had sufficient guiding principles for modelling malfunctions of particular digital components. Digital I and C systems are mostly analysed simply, and the software reliability estimates are engineering judgments often lacking a proper justification. The OECD/NEA Working Group RISK task DIGREL is developing a taxonomy of failure modes of digital I and C systems. The EU FP7 project HARMONICS is developing a software reliability estimation method based on an analytic approach and a Bayesian belief network. (author)

18. Human Reliability Analysis: session summary

International Nuclear Information System (INIS)

The use of Human Reliability Analysis (HRA) to identify and resolve human factors issues has significantly increased over the past two years. Today, utilities, research institutions, consulting firms, and the regulatory agency have found a common application of HRA tools and Probabilistic Risk Assessment (PRA). The "1985 IEEE Third Conference on Human Factors and Power Plants" devoted three sessions to the discussion of these applications and a review of the insights so gained. This paper summarizes the three sessions and presents those common conclusions that were discussed during the meeting. The paper concludes that session participants supported the use of an adequately documented "living PRA" to address human factors issues in design and procedural changes, regulatory compliance, and training, and that the techniques can produce cost-effective qualitative results that are complementary to more classical human factors methods.

19. Reliability Analysis of Adhesive Bonded Scarf Joints

DEFF Research Database (Denmark)

Kimiaeifar, Amin; Toft, Henrik Stensgaard; Lund, Erik; Thomsen, Ole Thybo; Sørensen, John Dalsgaard

2012-01-01

A probabilistic model for the reliability analysis of adhesive bonded scarfed lap joints subjected to static loading is developed. It is representative for the main laminate in a wind turbine blade subjected to flapwise bending. The structural analysis is based on a three dimensional (3D) finite element analysis (FEA). For the reliability analysis a design equation is considered which is related to a deterministic code-based design equation where reliability is secured by partial safety factors ...

20. How to assess and compare inter-rater reliability, agreement and correlation of ratings: an exemplary analysis of mother-father and parent-teacher expressive vocabulary rating pairs

Directory of Open Access Journals (Sweden)

Margarita Stolarova

2014-06-01

Full Text Available This report has two main purposes. First, we combine well-known analytical approaches to conduct a comprehensive assessment of agreement and correlation of rating pairs and to disentangle these often confused concepts, providing a best-practice example on concrete data and a tutorial for future reference. Second, we explore whether a screening questionnaire developed for use with parents can be reliably employed with daycare teachers when assessing early expressive vocabulary. A total of 53 vocabulary rating pairs (34 parent-teacher and 19 mother-father pairs) collected for two-year-old children (12 bilingual) are evaluated. First, inter-rater reliability both within and across subgroups is assessed using the intra-class correlation coefficient (ICC). Next, based on this analysis of reliability and on the test-retest reliability of the employed tool, inter-rater agreement is analyzed; magnitude and direction of rating differences are considered. Finally, Pearson correlation coefficients of standardized vocabulary scores are calculated and compared across subgroups. The results underline the necessity to distinguish between reliability measures, agreement and correlation. They also demonstrate the impact of the employed reliability on agreement evaluations. This study provides evidence that parent-teacher ratings of children's early vocabulary can achieve agreement and correlation comparable to those of mother-father ratings on the assessed vocabulary scale. Bilingualism of the evaluated child decreased the likelihood of raters' agreement. We conclude that future reports of agreement, correlation and reliability of ratings will benefit from better definition of terms and stricter methodological approaches. The methodological tutorial provided here holds the potential to increase comparability across empirical reports and can help improve research practices and knowledge transfer to educational and therapeutic settings.
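
The distinction between correlation and agreement that the study insists on can be demonstrated in a few lines: two raters with a constant offset correlate perfectly yet never agree. The toy scores below are invented, not the study's data, and this illustrates only the concept, not the ICC computation itself.

```python
# Two raters whose scores correlate perfectly yet never agree:
# a systematic +5 bias leaves Pearson r at 1.0 while no pair matches.
rater_a = [10, 14, 18, 22, 26]
rater_b = [x + 5 for x in rater_a]   # same ordering, constant offset

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

r = pearson(rater_a, rater_b)
exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b))
print(r, exact_agreement)
```

Correlation is blind to the systematic offset while an agreement measure is not, which is why reporting only one of the two can misrepresent inter-rater consistency.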

1. Reliable statistical fuel rod analysis

International Nuclear Information System (INIS)

In the fuel rod design process it must be shown that the fuel rod response to the physical processes occurring in the rod during irradiation is limited to allowable values. The design can either be performed in a deterministic, and hence conservative, way, or with statistical methods. Using the former, the permissible operation time may be underestimated, impeding a full exploitation of the burn-up capability of the fuel rod. In contrast, by adequately heeding the scattering of the parameters influencing the fuel rod response, or reactions, the statistical method allows a margin for extending the operation period. A special variant of the statistical methods is described and validated. The method is called semi-statistical as it encompasses a statistical and a deterministic branch. It starts with computing rank correlation coefficients from Monte Carlo simulations, proceeds with converting them into quantiles and extracting specific design input values from the parameter distributions, and eventually terminates by applying the sets of parameter values to deterministic fuel rod computations. The benefits of this reliable and straightforward analysis procedure are demonstrated.
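
The first step of the procedure, computing a rank correlation between an input parameter and the Monte Carlo response, can be sketched with Spearman's coefficient (the record does not say which rank correlation is used, so this is an assumed choice). The parameter names and sample values are invented.

```python
# Spearman rank correlation between an input parameter and the computed
# response across Monte Carlo runs -- illustrative, ties-free data
def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))   # formula for data without ties

# Hypothetical inputs/outputs: pellet-clad gap (microns) vs. computed
# fuel temperature (K); a wider gap conducts heat worse, so temperature rises
gap  = [18.0, 22.5, 19.4, 25.1, 20.8]
temp = [1350, 1405, 1380, 1440, 1392]
rho = spearman(gap, temp)
print(rho)
```

A coefficient near +1 or -1 flags the parameters whose scatter dominates the response, and those are the ones fed forward into the quantile-extraction and deterministic-computation branches of the method.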

2. Power system reliability analysis using fault trees

International Nuclear Information System (INIS)

The power system reliability analysis method is developed from the aspect of reliable delivery of electrical energy to customers. The method is developed based on the fault tree analysis, which is widely applied in the Probabilistic Safety Assessment (PSA). The method is adapted for the power system reliability analysis. The method is developed in a way that only the basic reliability parameters of the analysed power system are necessary as an input for the calculation of reliability indices of the system. The modeling and analysis were performed on an example power system consisting of eight substations. The results include the level of reliability of current power system configuration, the combinations of component failures resulting in a failed power delivery to loads, and the importance factors for components and subsystems. (author)
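
The gate arithmetic behind such a fault tree can be sketched as follows. The two-path power system and all failure probabilities are invented for illustration, and independence of the basic events is assumed.

```python
# Minimal fault tree for "no power at load bus": the top event is the AND
# of losing both supply paths; each path fails by the OR of its component
# failures. Illustrative probabilities; basic events assumed independent.
def OR(*p):
    """Gate output fails if any input fails."""
    q = 1.0
    for x in p:
        q *= (1 - x)
    return 1 - q

def AND(*p):
    """Gate output fails only if all inputs fail."""
    q = 1.0
    for x in p:
        q *= x
    return q

path_a = OR(0.002, 0.001)   # line A failure, breaker A failure
path_b = OR(0.003, 0.001)   # line B failure, breaker B failure
top = AND(path_a, path_b)
print(top)
```

Evaluating which basic-event combinations drive the top event (here, any line-or-breaker pairing across the two paths) is what yields the failure combinations and importance factors the abstract reports.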

3. A comparative method for improving the reliability of brittle components

International Nuclear Information System (INIS)

Calculating the absolute reliability built in a product is often an extremely difficult task because of the complexity of the physical processes and physical mechanisms underlying the failure modes, the complex influence of the environment and the operational loads, the variability associated with reliability-critical design parameters and the non-robustness of the prediction models. Predicting the probability of failure of loaded components with complex shape, for example, is associated with uncertainty related to: the type of existing flaws initiating fracture; the size distributions of the flaws; the locations and orientations of the flaws; and the microstructure and its local properties. Capturing these types of uncertainty, necessary for a correct prediction of the reliability of components, is a formidable task which does not need to be addressed if a comparative reliability method is employed, especially if the focus is on reliability improvement. The new comparative method for improving the resistance to failure initiated by flaws proposed here is based on an assumed failure criterion, an equation linking the probability that a flaw will be critical with the probability of failure associated with the component and a finite element solution for the distribution of the principal stresses in the loaded component. The probability that a flaw will be critical is determined directly, after a finite number of steps equal to the number of finite elements into which the component is divided. An advantage of the proposed comparative method for improving the resistance to failure initiated by flaws is that it does not rely on a Monte Carlo simulation and does not depend on knowledge of the size distribution of the flaws and the material properties. This essentially eliminates uncertainty associated with the material properties and the population of flaws.
On the basis of a theoretical analysis we also show that, contrary to the common belief, in general, for non-interacting flaws randomly located in a stressed volume, the distribution of the minimum failure stress is not necessarily described by a Weibull distribution. For the simple case of a single group of flaws all of which become critical beyond a particular threshold value for example, the Weibull distribution fails to predict correctly the probability of failure. If in a particular load range, no new critical flaws are created by increasing the applied stress, the Weibull distribution also fails to predict correctly the probability of failure of the component. In these cases however, the probability of failure is correctly predicted by the suggested alternative equation. The suggested equation is the correct mathematical formulation of the weakest-link concept related to random flaws in a stressed volume. The equation does not require any assumption concerning the physical nature of the flaws and the physical mechanism of failure and can be applied in any situation of locally initiated failure by non-interacting entities
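
For reference, the generic weakest-link formulation for non-interacting flaws whose number in a stressed volume V follows a homogeneous Poisson distribution with density lambda can be written as (notation ours, not drawn from the paper):

```latex
P_f(\sigma) \;=\; 1 - \exp\bigl(-\lambda V \, F_c(\sigma)\bigr)
```

where F_c(sigma) is the probability that a single flaw is critical at load sigma. The classical Weibull form is recovered only under the particular choice lambda V F_c(sigma) = (V/V_0)(sigma/sigma_0)^m; the passage's argument is precisely that F_c(sigma) need not have this power-law shape, for instance when all flaws of a group become critical beyond a threshold stress, so the Weibull distribution is not the general case.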

4. Reliability analysis of software based safety functions

International Nuclear Information System (INIS)

The methods applicable to the reliability analysis of software based safety functions are described in the report. Although the safety functions also include other components, the main emphasis in the report is on the reliability analysis of software. Checklist-type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as software fault tree analysis. Safety analysis based on Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that recent software reliability models do not directly produce the estimates needed in PSA. As a result of the study, some recommendations and conclusions are drawn; among them are the need for formal methods in the analysis and development of software based systems, the applicability of qualitative reliability engineering methods in connection with PSA, and the need to make the requirements for software based systems and their analyses in the regulatory guides more precise. (orig.). (46 refs., 13 figs., 1 tab.)

5. Reliability Analysis of Money Habitudes

Science.gov (United States)

Delgadillo, Lucy M.; Bushman, Brittani S.

2015-01-01

Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…
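
The Cronbach's alpha procedure named above has a compact closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The 4-item, 6-respondent score matrix below is made up for illustration and is unrelated to the Money Habitudes data.

```python
def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    n = len(items[0])

    def var(v):  # population variance; the n/(n-1) factor cancels in the ratio
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Invented 4-item domain scored by 6 respondents on a 1-5 scale
items = [[3, 4, 4, 5, 2, 4],
         [2, 4, 3, 5, 2, 4],
         [3, 5, 4, 4, 2, 3],
         [3, 4, 4, 5, 1, 4]]
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a domain; the items above move together across respondents, so alpha comes out high.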

6. Integrated Methodology for Software Reliability Analysis

Directory of Open Access Journals (Sweden)

Marian Pompiliu CRISTESCU

2012-01-01

Full Text Available The most used techniques to ensure safety and reliability of systems are applied together as a whole, and in most cases the software components are overlooked or too little analyzed. The present paper describes the application of fault tree analysis to software systems, defined as Software Fault Tree Analysis (SFTA); the fault trees are evaluated using binary decision diagrams, all of these being integrated and used with the help of a Java reliability library.

7. Analysis of information security reliability: A tutorial

International Nuclear Information System (INIS)

This article presents a concise reliability analysis of network security abstracted from stochastic modeling, reliability, and queuing theories. Network security analysis is composed of threats, their impacts, and recovery of the failed systems. A unique framework with a collection of the key reliability models is presented here to guide the determination of system reliability based on the strength of malicious acts and the performance of the recovery processes. A unique model, called the Attack-obstacle model, is also proposed here for analyzing systems with immunity growth features. Most computer science curricula do not contain courses in reliability modeling applicable to different areas of computer engineering. Hence, the topic of reliability analysis is often too diffuse for most computer engineers and researchers dealing with network security. This work is thus aimed at shedding some light on this issue, which can be useful in identifying models, their assumptions and practical parameters for estimating the reliability of threatened systems and for assessing the performance of recovery facilities. It can also be useful for the classification of processes and states regarding the reliability of information systems. Systems with stochastic behaviors undergoing queue operations and random state transitions can also benefit from the approaches presented here.
Highlights:
• A concise survey and tutorial in model-based reliability analysis applicable to information security.
• A framework of key modeling approaches for assessing the reliability of networked systems.
• The framework facilitates quantitative risk assessment tasks guided by stochastic modeling and queuing theory.
• Evaluation of approaches and models for modeling threats, failures, impacts, and recovery analysis of information systems.

8. How useful and reliable are disaster databases in the context of climate and global change? A comparative case study analysis in Peru

Directory of Open Access Journals (Sweden)

C. Huggel

2014-06-01

Full Text Available Loss and damage caused by weather- and climate-related disasters have increased over the past decades, and growing exposure and wealth have been identified as main drivers of this increase. Disaster databases are a primary tool for the analysis of disaster characteristics and trends at global or national scales, and support disaster risk reduction and climate change adaptation. However, the quality, consistency and completeness of different disaster databases are highly variable. Even though such variation critically influences the outcome of any study, comparative analyses of different disaster databases are still rare to date. Furthermore, there is an unequal geographic distribution of current disaster trend studies, with developing countries being under-represented. Here, we analyze three different disaster databases for the developing country context of Peru: a global database (EM-DAT), a regional Latin American database (DesInventar) and a national database (SINPAD). The analysis is performed across three dimensions: (1) spatial scales, from local to regional (provincial) and national scale; (2) time scales, from single events to decadal trends; and (3) disaster categories and metrics, including the number of disaster occurrences and damage metrics such as people killed and affected. Results show limited changes in disaster occurrence in the Cusco and Apurímac regions in southern Peru over the past four decades, but strong trends in people affected at the national scale. We furthermore found large variations of the disaster parameters studied over different spatial and temporal scales, depending on the disaster database analyzed. We conclude and recommend that the type, method and source of documentation should be carefully evaluated for any analysis of disaster databases; reporting criteria should be improved and documentation efforts strengthened.

9. How useful and reliable are disaster databases in the context of climate and global change? A comparative case study analysis in Peru

Science.gov (United States)

Huggel, C.; Raissig, A.; Rohrer, M.; Romero, G.; Diaz, A.; Salzmann, N.

2015-03-01

Damage caused by weather- and climate-related disasters have increased over the past decades, and growing exposure and wealth have been identified as main drivers of this increase. Disaster databases are a primary tool for the analysis of disaster characteristics and trends at global or national scales, and they support disaster risk reduction and climate change adaptation. However, the quality, consistency and completeness of different disaster databases are highly variable. Even though such variation critically influences the outcome of any study, comparative analyses of different databases are still rare to date. Furthermore, there is an unequal geographic distribution of current disaster trend studies, with developing countries being underrepresented. Here, we analyze three different disaster databases in the developing-country context of Peru: a global database (Emergency Events Database: EM-DAT), a multinational Latin American database (DesInventar) and a national database (Peruvian National Information System for the Prevention of Disasters: SINPAD). The analysis is performed across three dimensions: (1) spatial scales, from local to regional (provincial) and national scale; (2) timescales, from single events to decadal trends; and (3) disaster categories and metrics, including the number of single disaster event occurrences, or people killed and affected. Results show limited changes in disaster occurrence in the Cusco and Apurímac regions in southern Peru over the past four decades but strong positive trends in people affected at the national scale. We furthermore found large variations of the disaster metrics studied over different spatial and temporal scales, depending on the disaster database analyzed. We conclude and recommend that the type, method and source of documentation should be carefully evaluated for any analysis of disaster databases; reporting criteria should be improved and documentation efforts strengthened.

10. Space Mission Human Reliability Analysis (HRA) Project

Data.gov (United States)

National Aeronautics and Space Administration — The purpose of this project is to extend current ground-based Human Reliability Analysis (HRA) techniques to a long-duration, space-based tool to more effectively...

11. A comparative study on the HW reliability assessment methods for digital I and C equipment

Energy Technology Data Exchange (ETDEWEB)

Jung, Hoan Sung; Sung, T. Y.; Eom, H. S.; Park, J. K.; Kang, H. G.; Lee, G. Y. [Korea Atomic Energy Research Institute, Taejeon (Korea); Kim, M. C. [Korea Advanced Institute of Science and Technology, Taejeon (Korea); Jun, S. T. [KHNP, Taejeon (Korea)

2002-03-01

It is necessary to predict or evaluate the reliability of electronic equipment for the probabilistic safety analysis of digital instrumentation and control equipment. However, most reliability-prediction databases have no data for up-to-date equipment, and the failure modes are not classified. The prediction results for a specific component differ according to the method and database used, and the same holds for boards and systems. This study predicts the reliability of the PDC system of Wolsong NPP1 as a piece of digital I and C equipment. Various reliability prediction methods and failure databases are used in calculating the reliability, in order to compare the sensitivity and accuracy of each model and database. Many considerations for the reliability assessment of digital systems are derived from the results of this study. 14 refs., 19 figs., 15 tabs. (Author)
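
The parts-count style of prediction that such handbook methods share can be sketched briefly: sum the part failure rates on a board, invert to get MTBF, and apply the exponential model for mission reliability. The part list and rates below are hypothetical placeholders, not values from any handbook the study compares.

```python
import math

# Hypothetical part failure rates, in failures per 1e6 hours,
# for a single digital I&C board (parts-count sketch).
part_failure_rates = {
    "cpu": 0.50,
    "dram": 0.20,
    "adc": 0.15,
    "power_reg": 0.30,
    "connectors": 0.10,
}

board_lambda = sum(part_failure_rates.values())  # failures / 1e6 h
mtbf_hours = 1e6 / board_lambda

# Reliability over one year of continuous operation (8760 h),
# assuming a constant failure rate (exponential model).
r_one_year = math.exp(-board_lambda * 8760 / 1e6)
print(round(mtbf_hours), round(r_one_year, 4))
```

Swapping in rates from different databases changes board_lambda directly, which is exactly the sensitivity the study examines.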

12. Production Facility System Reliability Analysis Report

Energy Technology Data Exchange (ETDEWEB)

Dale, Crystal Buchanan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2015-10-06

This document describes the reliability, maintainability, and availability (RMA) modeling of the Los Alamos National Laboratory (LANL) design for the Closed Loop Helium Cooling System (CLHCS) planned for the NorthStar accelerator-based 99Mo production facility. The current analysis incorporates a conceptual helium recovery system, beam diagnostics, and prototype control system into the reliability analysis. The results from the 1000 hr blower test are addressed.

13. Reliability Generalization (RG) Analysis: The Test Is Not Reliable

Science.gov (United States)

Warne, Russell

2008-01-01

Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

14. Reliability analysis of redundant-path interconnection networks

Energy Technology Data Exchange (ETDEWEB)

Varma, A. (International Business Machines Corp., Yorktown Heights, NY (USA). Thomas J. Watson Research Center); Raghavendra, C.S. (Univ. of Southern Calif., Los Angeles, CA (US))

1989-04-01

The reliability of some redundant-path multistage interconnection networks is characterized. The classes of networks are: the Generalized Indra Networks, Merged Delta Networks, and Augmented C-Networks. The reliability measures are: terminal reliability and broadcast reliability. Symbolic expressions are derived for these reliability measures in terms of component reliabilities. The results are useful in comparing network designs for a given reliability requirement.
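To illustrate the shape of such symbolic expressions, the sketch below computes terminal reliability for an idealized network with fully disjoint redundant paths, each a series of identical switching elements. Real redundant-path multistage networks share stages between paths, so the published expressions are more involved; the parameters here are hypothetical.

```python
def terminal_reliability(r, k, paths):
    """Terminal reliability between one source-destination pair for
    `paths` disjoint paths, each a series of k identical switching
    elements with reliability r. A path works with probability r**k;
    the terminal fails only if every path fails."""
    path_ok = r ** k
    return 1.0 - (1.0 - path_ok) ** paths

# Hypothetical: 3-stage paths, element reliability 0.99, 2 disjoint paths.
print(round(terminal_reliability(0.99, 3, 2), 6))
```

Comparing the single-path value r**k with the redundant value quantifies the design trade-off the paper's expressions are meant to support.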

15. Integrating Reliability Analysis with a Performance Tool

Science.gov (United States)

Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

1995-01-01

A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

16. Comparing two reliable multicast protocols for mobile computing

Scientific Electronic Library Online (English)

Mateus de Freitas, Ribeiro; Markus, Endler.

2003-04-01

Full Text Available As networks with mobile devices becorne commonplace, many new applications for those networks arisc, including some that require coordination among groups of mobile clients. One basic tool for implementing coordination is reliable multicast, where delivery of a multicast message is atomic, i.e. cith [...] er all or none of the group members deliver the message. While several multicast protocols have been proposed for mobile networks, only a few works have considered reliable multicats. In this paper we present and compare two protocols based on Two-Phase-Commit that implement reliable multicast for structured mobile networks. Protocol iAM²C is a variant of protocol AM2C that employs a two-level hierarchical location management scheme to locate and route messages to the mobile hosts addressed by a multicast. Although hierarchical location management is not new in the context of mobile and cellular networks, we are unaware of any other work which combines hierarchical location management with protocols for reliable multicast. We have prototyped, simulated and evaluated both protocols using the MobiCS simulation enviromment. Our experiments indicate that despite some overhead incurred by the location management and the additional level of message redirection, iAM2C is more efficient than the AM² C protocol and scales well with the size of the wired network infra-structure.

17. Multi-Disciplinary System Reliability Analysis

Science.gov (United States)

1997-01-01

The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics and electrical circuits, without considerable programming effort specific to each discipline. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.
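
The quantity all such analyses target is the probability that a limit-state function goes negative. The sketch below estimates it by plain Monte Carlo for a hypothetical capacity-minus-demand limit state with normal variables; NESSUS-style fast probability integration reaches the same quantity far more cheaply, but plain sampling keeps the example self-contained.

```python
import random

def failure_probability(n=200_000, seed=1):
    """Monte Carlo estimate of P(g < 0) for limit state g = R - S,
    with capacity R ~ Normal(10, 1) and demand S ~ Normal(7, 1),
    both hypothetical. Since g ~ Normal(3, sqrt(2)), the exact
    answer is Phi(-3/sqrt(2)) ~= 0.017, a useful cross-check."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(10, 1) - rng.gauss(7, 1) < 0)
    return fails / n

pf = failure_probability()
print(round(pf, 3))
```

For multi-disciplinary problems only the limit-state function changes (a thermal or circuit response replaces the structural one), which is the equivalence the study exploits.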

18. Reliability Analysis of the MSC System

Science.gov (United States)

Kim, Young-Soo; Lee, Do-Kyoung; Lee, Chang-Ho; Woo, Sun-Hee

2003-09-01

MSC (Multi-Spectral Camera) is the payload of KOMPSAT-2, which is being developed for earth imaging in the optical and near-infrared region. The design of the MSC is complete and its reliability has been assessed from part level to the MSC system level. The reliability was analyzed in the worst case, and the analysis results showed that the value complies with the required value of 0.9. In this paper, a calculation method of reliability for the MSC system is described, and the assessment result is presented and discussed.

19. Swimming pool reactor reliability and safety analysis

International Nuclear Information System (INIS)

A reliability and safety analysis of the Swimming Pool Reactor at the China Institute of Atomic Energy is performed using the event/fault tree technique. The paper briefly describes the analysis model, the analysis code and the main results. It also describes the impact of unassigned operation states on safety, the estimated effectiveness of defense tactics in maintenance against common cause failure, the effect of recovery actions on system reliability, and a comparison of core damage frequencies obtained using generic and specific data

20. RHR system reliability analysis of Krsko NPP

International Nuclear Information System (INIS)

In this paper system reliability analysis is applied to the Residual Heat Removal System of the Krsko NPP. The fault tree method is used. Qualitative analysis of the fault tree was performed using the FTAP-2 computer code, and quantitative analysis using the IMPORT code. Results are evaluated and their possible applications are given. (author)
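
The quantitative step of such an analysis typically evaluates the top-event probability from the minimal cut sets found in the qualitative step. The sketch below applies the standard rare-event (min cut set) upper bound to a hypothetical cut-set list; the events and probabilities are illustrative, not from the Krsko analysis.

```python
# Hypothetical minimal cut sets for an RHR-style fault tree; each cut
# set is a tuple of independent basic events with given probabilities.
basic = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4, "power": 2e-4}
cut_sets = [("pump_a", "pump_b"), ("valve",), ("power",)]

def cut_probability(cs):
    p = 1.0
    for event in cs:
        p *= basic[event]
    return p

def top_upper_bound():
    """Min cut set upper bound: P(top) <= 1 - prod(1 - P(cut_i)),
    exact when cut sets are disjoint, conservative otherwise."""
    q = 1.0
    for cs in cut_sets:
        q *= 1.0 - cut_probability(cs)
    return 1.0 - q

print(f"{top_upper_bound():.6e}")
```

Here the single-event cut sets dominate the redundant pump pair, the kind of insight importance measures formalize.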

1. Comparative risk analysis

International Nuclear Information System (INIS)

In this paper, the risks of various energy systems are discussed considering severe accidents analysis, particularly the probabilistic safety analysis, and probabilistic safety criteria, and the applications of these criteria and analysis. The comparative risk analysis has demonstrated that the largest source of risk in every society is from daily small accidents. Nevertheless, we have to be more concerned about severe accidents. The comparative risk analysis of five different energy systems (coal, oil, gas, LWR and STEC (Solar)) for the public has shown that the main sources of risks are coal and oil. The latest comparative risk study of various energy has been conducted in the USA and has revealed that the number of victims from coal is 42 as many than victims from nuclear. A study for severe accidents from hydro-dams in United States has estimated the probability of dam failures at 1 in 10,000 years and the number of victims between 11,000 and 260,000. The average occupational risk from coal is one fatal accident in 1,000 workers/year. The probabilistic safety analysis is a method that can be used to assess nuclear energy risks, and to analyze the severe accidents, and to model all possible accident sequences and consequences. The 'Fault tree' analysis is used to know the probability of failure of the different systems at each point of accident sequences and to calculate the probability of risks. After calculating the probability of failure, the criteria for judging the numerical results have to be developed, that is the quantitative and qualitative goals. To achieve these goals, several systems have been devised by various countries members of AIEA. The probabilistic safety ana-lysis method has been developed by establishing a computer program permit-ting to know different categories of safety related information. 19 tabs. (author)

2. Reliability modelling and analysis of thermal MEMS

International Nuclear Information System (INIS)

This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behaviour in specific use conditions and applications. The methodology is demonstrated on the U-shaped micro electro thermal actuator used as test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MatLab and VHDL-AMS. A best practice FMEA (Failure Mode and Effect Analysis) is applied on the thermal MEMS to investigate and assess the failure mechanisms. Reliability study is performed by injecting the identified defaults into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behavior of these MEMS as a function of the number of cycles of operation and specific operational conditions

3. A methodology for reliability analysis in health networks.

Science.gov (United States)

Spyrou, Stergiani; Bamidis, Panagiotis D; Maglaveras, Nicos; Pangalos, George; Pappas, Costas

2008-05-01

A reliability model for the health care domain, based on requirement analysis at the early design stage of a regional health network (RHN), is introduced. RHNs are considered as systems supporting the services provided by health units, hospitals, and the regional authority. Reliability assessment in the health care domain constitutes a field of quality assessment for RHNs. A novel approach for predicting system reliability in the early stage of designing RHN systems is presented in this paper. The uppermost scope is to identify the critical processes of an RHN system prior to its implementation. In the methodology, Unified Modeling Language activity diagrams are used to identify megaprocesses at the regional level, and the customer behavior model graph (CBMG) is used to describe the state transitions of the processes. The CBMG is annotated with: 1) the reliability of each component state and 2) the transition probabilities between states within the scope of the life cycle of the process. A stochastic reliability model (Markov model) is applied to predict the reliability of the business process, as well as to identify the critical states and compare them with other processes to reveal the most critical ones. The ultimate benefit of the applied methodology is the design of more reliable components in an RHN system. The innovation of this approach to reliability modeling lies in the analysis of severity classes of failures and the application of stochastic modeling using discrete-time Markov chains in RHNs. PMID:18693505
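
A minimal sketch of the discrete-time Markov chain idea, assuming a hypothetical annotated CBMG: transient process states route probability mass into two absorbing states, "done" and "failed", and the mass absorbed in "done" is the process reliability. The state names and transition probabilities are invented for illustration.

```python
# Hypothetical CBMG for one RHN process: transient states with
# transition probabilities that end in "done" or "failed" (absorbing).
transitions = {
    "register": {"lookup": 0.98, "failed": 0.02},
    "lookup":   {"update": 0.90, "register": 0.05, "failed": 0.05},
    "update":   {"done": 0.97, "failed": 0.03},
}

def process_reliability(start="register", n_steps=200):
    """Propagate the state distribution step by step; with absorbing
    states, the mass in 'done' converges to the absorption probability,
    i.e. the reliability of the business process."""
    dist = {start: 1.0, "done": 0.0, "failed": 0.0}
    for s in transitions:
        dist.setdefault(s, 0.0)
    for _ in range(n_steps):
        new = {"done": dist["done"], "failed": dist["failed"]}
        for s in transitions:
            new.setdefault(s, 0.0)
        for s, out in transitions.items():
            for target, p in out.items():
                new[target] = new.get(target, 0.0) + dist[s] * p
        dist = new
    return dist["done"]

print(round(process_reliability(), 4))
```

Rerunning with each state's failure probability perturbed identifies which states are critical, mirroring the paper's use of the model.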

4. Culture Representation in Human Reliability Analysis

Energy Technology Data Exchange (ETDEWEB)

David Gertman; Julie Marble; Steven Novack

2006-12-01

Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.

5. How useful and reliable are disaster databases in the context of climate and global change? A comparative case study analysis in Peru

OpenAIRE

Huggel, Christian; Raissig, Annik; Rohrer, Mario; Romero, Gilberto; Diaz, Alfonso; Salzmann, Nadine

2014-01-01

Loss and damage caused by weather and climate related disasters have increased over the past decades, and growing exposure and wealth have been identified as main drivers of this increase. Disaster databases are a primary tool for the analysis of disaster characteristics and trends at global or national scales, and support disaster risk reduction and climate change adaptation. However, the quality, consistency and completeness of different disaster databases are highly variable. Even though s...

6. How useful and reliable are disaster databases in the context of climate and global change? A comparative case study analysis in Peru

OpenAIRE

Huggel, C.; Raissig, A.; Rohrer, M; Romero, G.; Diaz, A.; Salzmann, N.

2014-01-01

Loss and damage caused by weather and climate related disasters have increased over the past decades, and growing exposure and wealth have been identified as main drivers of this increase. Disaster databases are a primary tool for the analysis of disaster characteristics and trends at global or national scales, and support disaster risk reduction and climate change adaptation. However, the quality, consistency and completeness of different disaster databases are highl...

7. A Comparative Analysis

OpenAIRE

Nowak, Eric

1998-01-01

Germany and the United States are generally seen as the two competing systems of corporate governance. In search for a comparative welfare analysis of the financial systems, we are interested in (i) the aggregate value-added of corporate investments in the two countries and in (ii) the interaction of investment and financing decisions. This paper investigates the impact of financing, investment, and dividend decisions on the value of stock corporations in Germany and the US. The methodology i...

8. Event/Time/Availability/Reliability-Analysis Program

Science.gov (United States)

Viterna, L. A.; Hoffman, D. J.; Carr, Thomas

1994-01-01

ETARA is an interactive, menu-driven program that performs simulations for the analysis of reliability, availability, and maintainability. It was written to evaluate the performance of the electrical power system of Space Station Freedom, but the methodology and software apply to any system represented by a block diagram. The program is written in IBM APL.
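
The block-diagram evaluation such a tool automates can be sketched analytically for steady-state availability: components get an availability from MTBF and MTTR, and series/parallel blocks combine them. The power-channel layout and numbers below are hypothetical, not from the Space Station Freedom model.

```python
def availability(mtbf, mttr):
    """Steady-state availability of one component."""
    return mtbf / (mtbf + mttr)

def series(*avails):
    """All blocks must be up."""
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(*avails):
    """At least one block must be up."""
    q = 1.0
    for a in avails:
        q *= 1.0 - a
    return 1.0 - q

# Hypothetical power channel: a source feeding two redundant
# converters, then a distribution unit (hours for MTBF/MTTR).
source = availability(mtbf=5000, mttr=10)
converter = availability(mtbf=2000, mttr=24)
distrib = availability(mtbf=8000, mttr=8)
system = series(source, parallel(converter, converter), distrib)
print(round(system, 6))
```

A simulation tool like ETARA estimates the same quantity (plus time-dependent behavior) by sampling failure and repair events against the block diagram.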

9. Reliability Analysis of Structural Timber Systems

DEFF Research Database (Denmark)

Sørensen, John Dalsgaard; Hoffmeyer, P.

2000-01-01

Structural systems like timber trussed rafters and roof elements made of timber can be expected to have some degree of redundancy and nonlinear/plastic behaviour when the loading consists of, for example, snow or imposed load. In this paper this system effect is modelled and the statistical characteristics of the load-bearing capacity are estimated in the form of a characteristic value and a coefficient of variation. These two values are of primary importance for codes of practice based on the partial safety factor format, since the partial safety factor is closely related to the coefficient of variation. In the paper a stochastic model is described for the strength of a single piece of timber, taking into account the stochastic variation of the strength and stiffness with length. Stochastic models for different types of loads are also formulated. First, simple representative systems with different types of redundancy and non-linearity are considered. The statistical characteristics of the load-bearing capacity are determined by reliability analysis. Next, more complex systems are considered, modelling the mechanical behaviour of timber roof elements / stressed skin panels made of timber. Using the above stochastic models, statistical characteristics (distribution function, 5% quantile and coefficient of variation) are determined. Generally, the results show that when the system effects are taken into account the characteristic load-bearing capacity can be increased and the partial safety factor decreased, compared to the values obtained if the system effects are not considered.
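
The two code-relevant statistics named in the abstract, the 5% quantile (characteristic value) and the coefficient of variation, can be estimated by simulation. The sketch below uses a deliberately crude system model, the sum of a few lognormal member strengths, with invented parameters; the paper's actual stochastic models for timber strength and length effects are far richer.

```python
import math
import random
import statistics

def simulate_capacity(n_members=4, seed=7, n_sim=20000):
    """Hypothetical load-sharing system: capacity = sum of member
    strengths, each lognormal with mean ~30 MPa and COV 25%.
    Returns the sampled system capacities."""
    rng = random.Random(seed)
    cov = 0.25
    sigma_ln = math.sqrt(math.log(1 + cov ** 2))
    mu_ln = math.log(30) - 0.5 * sigma_ln ** 2
    return [sum(rng.lognormvariate(mu_ln, sigma_ln)
                for _ in range(n_members))
            for _ in range(n_sim)]

caps = sorted(simulate_capacity())
char_value = caps[int(0.05 * len(caps))]              # 5% quantile
cov_sys = statistics.stdev(caps) / statistics.mean(caps)
print(round(char_value, 1), round(cov_sys, 3))
```

Even this crude model shows the system effect the paper reports: the system COV (about half the 25% member COV here) is smaller than the member COV, which supports a higher characteristic value.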

10. A reliability analysis for the grinding process

OpenAIRE

Tolvanen, Pekka

2011-01-01

This Bachelor’s thesis was made in collaboration with the Service Product Center Espoo of Outotec (Finland) Oy during the spring semester 2011. The main objectives of this thesis were to create a reliability analysis of the mineral enrichment process grinding circuit and to examine the possibilities for the analysis as a company’s new service product. The scope for this thesis was limited by the mandator. As the machinery of the process industry is getting older, the role of maintenance ...

11. Small nuclear power reactor emergency electric power supply system reliability comparative analysis; Analise da confiabilidade do sistema de suprimento de energia eletrica de emergencia de um reator nuclear de pequeno porte

Energy Technology Data Exchange (ETDEWEB)

Bonfietti, Gerson

2003-07-01

This work presents an analysis of the reliability of the emergency power supply system of a small nuclear power reactor. Three different configurations are investigated and their reliability analyzed. The fault tree method is used as the main tool of analysis. The work includes a bibliographic review of emergency diesel generator reliability and a discussion of the design requirements applicable to emergency electrical systems. The influence of common cause failures is considered using the beta factor model. Operator action is considered using human failure probabilities. A parametric analysis shows the strong dependence between reactor safety and the loss of the offsite electric power supply. It is also shown that common cause failures can be a dominant contributor to system unreliability. (author)
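
The beta-factor point can be made numerically: a fraction beta of each diesel generator's failure probability is treated as common cause, defeating all redundant units at once. The failure probability and beta value below are hypothetical, chosen only to show how the common cause term can dominate.

```python
def edg_system_unavailability(q_total, beta, n_redundant=2):
    """Beta-factor model for a set of n redundant emergency diesel
    generators: a fraction `beta` of each unit's failure probability
    q_total is common cause (fails all units together), the rest is
    independent. First-order system unavailability:
        Q_sys = ((1 - beta) * q_total) ** n + beta * q_total
    (cross terms neglected)."""
    q_independent = (1.0 - beta) * q_total
    q_ccf = beta * q_total
    return q_independent ** n_redundant + q_ccf

# Hypothetical values: q = 2e-2 per demand, beta = 0.05.
print(f"{edg_system_unavailability(2e-2, 0.05):.3e}")
```

With these numbers the independent term is 3.6e-4 while the common cause term is 1e-3, so CCF dominates even at a modest beta, which is the qualitative conclusion the abstract reports.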

12. Reflections on the perspectives of reliability analysis

International Nuclear Information System (INIS)

Current reliability technology and the availability of reliability data enable meaningful analyses to be carried out for the quantified reliability and safety values of many types of systems. This does not necessarily mean that precise numeric values can be obtained and interpreted. It rather means that answers can be produced within an acceptable degree of accuracy and used as a basis for decision making. Improvements in existing techniques and data are still required and further validation is necessary. However, there appear to be problems in the grey area between deterministic analysis and statistically meaningful probabilistic analysis. These problems relate to what may be described as 'rare' events. Such events may require important considerations both within and outside the practical systems being studied. Three main facets seem to need further attention: methods for recognizing rare events of significance, techniques for describing and enumerating the patterns of rare event occurrences, and means for communicating the results of such enumerations. Even with current techniques, it is often the lessons learnt about system behaviour which arise from the attempt to quantify reliability, and the discipline of thought which comes out of the process of analysis, which are more important than the attainment of a precise numeric answer at the end of the day

13. Comparative reliability of cheiloscopy and palatoscopy in human identification

Directory of Open Access Journals (Sweden)

Sharma Preeti

2009-01-01

Full Text Available Background: Establishing a person's identity in postmortem scenarios can be a very difficult process. Dental records, fingerprint and DNA comparisons are probably the most common techniques used in this context, allowing fast and reliable identification processes. However, under certain circumstances they cannot always be used; sometimes it is necessary to apply different and less known techniques. In forensic identification, lip prints and palatal rugae patterns can lead us to important information and help in a person's identification. This study aims to ascertain the use of lip prints and the palatal rugae pattern in identification and sex differentiation. Materials and Methods: A total of 100 subjects, 50 males and 50 females, were selected from among the students of Subharti Dental College, Meerut. The materials used to record lip prints were lipstick, bond paper, cellophane tape, a brush for applying the lipstick, and a magnifying lens. To study palatal rugae, alginate impressions were taken and the dental casts analyzed for their various patterns. Results: Statistical analysis (applying the Z-test for proportions) showed a significant difference for the type I, I′, IV and V lip patterns (P < 0.05) between males and females, while no significant difference was observed for the same in the palatal rugae patterns (P > 0.05). Conclusion: This study not only showed that palatal rugae and lip prints are unique to an individual, but also that lip prints are more reliable for recognition of the sex of an individual.
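
The Z-test for proportions used in the abstract can be sketched directly. The counts below are hypothetical (the record does not give the per-type frequencies); only the test mechanics are shown.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def two_sided_p(z):
    """Two-sided p-value from the normal approximation, via erf."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical counts: 20/50 males vs 8/50 females show a given lip-print type.
z = two_proportion_z(20, 50, 8, 50)
print(round(z, 2), round(two_sided_p(z), 4))
```

A |z| above 1.96 corresponds to P < 0.05 two-sided, the significance threshold the study applies.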

14. Human reliability analysis of control room operators

Energy Technology Data Exchange (ETDEWEB)

Santos, Isaac J.A.L.; Carvalho, Paulo Victor R.; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)

2005-07-01

Human reliability is the probability that a person correctly performs some system-required action in a required time period and performs no extraneous action that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. Significant progress has been made in the HRA field during the last years, mainly in the nuclear area. Some first-generation HRA methods were developed, such as THERP (Technique for Human Error Rate Prediction). Now, an array of so-called second-generation methods is emerging as an alternative, for instance ATHEANA (A Technique for Human Event Analysis). The ergonomics approach has as its tool the ergonomic work analysis. It focuses on the study of operators' activities in physical and mental form, considering at the same time the observed characteristics of the operators and the elements of the work environment as they are presented to and perceived by the operators. The aim of this paper is to propose a methodology to analyze the human reliability of the operators of an industrial plant control room, using a framework that includes the approaches used by ATHEANA, THERP and ergonomic work analysis. (author)
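
To make the quantification concrete, here is a deliberately simplified aggregation of subtask human error probabilities into a task failure probability. The HEP values and recovery factor are hypothetical and are not taken from THERP's tables; the independence assumption is exactly what THERP's dependence model exists to relax.

```python
# Illustrative only: values below are hypothetical, not THERP data.
subtask_heps = [3e-3, 1e-3, 5e-3]  # nominal human error probabilities
unrecovered = 0.1                  # fraction of errors not recovered

def task_failure_probability(heps, unrecovered_fraction):
    """Task fails if any subtask error occurs and goes unrecovered;
    subtasks are treated as independent (a strong assumption)."""
    p_ok = 1.0
    for hep in heps:
        p_ok *= 1.0 - hep * unrecovered_fraction
    return 1.0 - p_ok

print(f"{task_failure_probability(subtask_heps, unrecovered):.3e}")
```

Second-generation methods like ATHEANA focus instead on the error-forcing contexts behind such numbers, which simple multiplication cannot capture.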

15. A comparative study on the reliability of open cluster parameters

CERN Document Server

Netopil, M; Carraro, G

2015-01-01

Open clusters are known as excellent tracers of the structure and chemical evolution of the Galactic disk; however, the accuracy and reliability of open cluster parameters are poorly known. In recent years, several studies have aimed to present homogeneous open cluster parameter compilations, which are based on different approaches and photometric data. These catalogues are excellent sources to facilitate testing of the actual accuracy of open cluster parameters. We compare seven cluster parameter compilations statistically and with an external sample, which comprises the mean results of individual studies. Furthermore, we selected the objects IC 4651, NGC 2158, NGC 2383, NGC 2489, NGC 2627, NGC 6603, and Trumpler 14, with the main aim of highlighting differences in the fitting solutions. We derived correction terms for each cluster parameter using the external calibration sample. Most results by the compilations are reasonably scaled, but there are trends or constant offsets of different degrees. We also identif...

16. Advances in human reliability analysis in Mexico

International Nuclear Information System (INIS)

Human Reliability Analysis (HRA) is a very important part of Probabilistic Risk Analysis (PRA), and constant work is dedicated to improving methods, guidance and data in order to approach realism in the results, as well as to looking for ways to use these to reduce accident frequency at plants. Further, in order to advance in these areas, several HRA studies are being performed globally. Mexico has participated in the International HRA Empirical Study, with the objective of "benchmarking" HRA methods by comparing HRA predictions to actual crew performance in a simulator, as well as in the empirical study on a US nuclear power plant currently in progress. The focus of the first study was to develop an understanding of how methods are applied by various analysts, and to characterize the methods' capability to guide analysts in identifying potential human failures and associated causes and performance shaping factors. The HRA benchmarking study has been performed using the Halden simulator, 14 European crews, and 15 HRA teams (NRC, EPRI, and foreign teams using different HRA methods). This effort in Mexico is reflected in the work being performed to update the Laguna Verde PRA to comply with the ASME PRA standard. In order for the HRA to be considered technically adequate, that is, capability category II for risk-informed applications, the methodology used for the HRA in the original PRA is not considered sufficiently detailed, and it had to be upgraded. The HCR/CBDT/THERP method was chosen, since it is used in many nuclear plants of similar design. The HRA update includes identification and evaluation of human errors that can occur during testing and maintenance, as well as human errors that can occur during an accident when using the Emergency Operating Procedures.
The review of procedures for maintenance, surveillance and operation is a necessary step in HRA and provides insight into the possible mechanisms for human error at the plant. (Author)

17. Sensitivity analysis in a structural reliability context

International Nuclear Information System (INIS)

This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that makes it possible to reproduce a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, quantifying the impact of the uncertainty of each input parameter on the output may be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but rarely within the reliability scope. This thesis' aim is to test existing sensitivity analysis methods and to propose more efficient original ones. A bibliographical review, of sensitivity analysis on the one hand and of the estimation of small failure probabilities on the other, is first presented. This review raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one makes use of binary classifiers (random forests). The second one measures, at each step of a subset method, the departure between each input's original density and its density conditional on the subset reached. A more general and original methodology, reflecting the impact of input density modification on the failure probability, is then explored. The proposed methods are applied to the CWNR case, which motivates this thesis. (author)

18. Bridging Resilience Engineering and Human Reliability Analysis

Energy Technology Data Exchange (ETDEWEB)

Ronald L. Boring

2010-06-01

There has been strong interest in the new and emerging field called resilience engineering. This field has been quick to align itself with many existing safety disciplines, but it has also distanced itself from the field of human reliability analysis. To date, the discussion has been somewhat one-sided, with much discussion about the new insights afforded by resilience engineering. This paper presents an attempt to address resilience engineering from the perspective of human reliability analysis (HRA). It is argued that HRA shares much in common with resilience engineering and that, in fact, it can help strengthen nascent ideas in resilience engineering. This paper seeks to clarify and ultimately refute the arguments that have served to divide HRA and resilience engineering.

19. ZERBERUS - the code for reliability analysis of crack containing structures

International Nuclear Information System (INIS)

A brief description of the First- and Second-Order Reliability Methods, which form the theoretical background of the code, is given. The code structure is described in detail, with special emphasis on the new application fields. The numerical example investigates the failure probability of steam generator tubing affected by stress corrosion cracking. The changes necessary to accommodate this analysis within the ZERBERUS code are explained. Analysis results are compared with different Monte Carlo techniques. (orig./HP)

20. Probability techniques for reliability analysis of composite materials

Science.gov (United States)

Wetherhold, Robert C.; Ucci, Anthony M.

1994-01-01

Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures, since strengths and stresses may be random variables. This report examines and compares methods used to evaluate the reliability of composite laminae. The two types of methods evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods evaluated are: the first-order, second-moment FPI method; the second-order, second-moment FPI method; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
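
As a rough illustration of the Monte Carlo side of such a comparison, the sketch below (all names and the normal strength/stress numbers are hypothetical) estimates P(strength < stress) by sampling and checks it against the closed-form answer that exists when both variables are normal:

```python
import math
import random

def mc_failure_probability(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=1):
    """Crude Monte Carlo estimate of P(strength R < stress S)."""
    rng = random.Random(seed)
    return sum(
        1 for _ in range(n)
        if rng.gauss(mu_r, sd_r) < rng.gauss(mu_s, sd_s)
    ) / n

def exact_pf(mu_r, sd_r, mu_s, sd_s):
    """Closed form for two independent normals: Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.sqrt(sd_r**2 + sd_s**2)
    return 0.5 * math.erfc(beta / math.sqrt(2))

# hypothetical lamina: strength ~ N(600, 60) MPa, stress ~ N(400, 40) MPa
pf_mc = mc_failure_probability(600.0, 60.0, 400.0, 40.0)
pf_ex = exact_pf(600.0, 60.0, 400.0, 40.0)
```

FPI methods earn their keep precisely because such closed forms vanish for nonlinear failure surfaces, while Monte Carlo stays applicable at the cost of sample size.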

1. Infusing Reliability Techniques into Software Safety Analysis

Science.gov (United States)

Shi, Ying

2015-01-01

Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.

2. The quantitative failure of human reliability analysis

Energy Technology Data Exchange (ETDEWEB)

Bennett, C.T.

1995-07-01

This philosophical treatise examines the merits of Human Reliability Analysis (HRA) in the context of the nuclear power industry. The author argues that historic and current HRA has failed to inform policy makers who make decisions based on the risk that humans contribute to systems performance. He argues for an HRA based on Bayesian (fact-based) inferential statistics, one that advocates a systems analysis process employing cogent heuristics when using opinion, and tempering itself with a rational debate over the weight given to subjective and empirical probabilities.
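
The Bayesian, fact-based HRA the author advocates can be caricatured in a few lines: treat the error probability as a Beta-distributed quantity and update it with observed evidence. The prior and trial counts below are purely illustrative:

```python
def posterior_hep(alpha, beta, errors, trials):
    """Beta-Binomial update: posterior mean of a human error
    probability after observing `errors` in `trials` task attempts."""
    a = alpha + errors
    b = beta + (trials - errors)
    return a / (a + b)

# hypothetical: expert opinion encoded as Beta(1, 99), prior mean HEP 0.01,
# then zero errors observed in 50 simulator trials
prior_mean = 1 / (1 + 99)
post_mean = posterior_hep(1, 99, 0, 50)  # evidence pulls the HEP downward
```

The weight the prior carries relative to the data (here 100 pseudo-observations against 50 real ones) is exactly the subjective-versus-empirical trade-off the treatise wants debated openly.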

3. Reliability Analysis of Brittle, Thin Walled Structures

Energy Technology Data Exchange (ETDEWEB)

Jonathan A Salem and Lynn Powers

2007-02-09

One emerging application for ceramics is diesel particulate filters, being used in order to meet EPA regulations going into effect in 2008. Diesel particulates are known to be carcinogenic and thus need to be minimized. Current systems use filters made from ceramics such as mullite and cordierite. The filters are brittle and must operate at very high temperatures during a burn-out cycle used to remove the soot buildup. Thus the filters are subjected to thermal shock stresses, and lifetime reliability analysis is required. NASA GRC has developed reliability-based design methods and test methods for such applications, such as CARES/Life and American Society for Testing and Materials (ASTM) C1499, "Standard Test Method for Equibiaxial Strength of Ceramics."

4. Subset simulation for structural reliability sensitivity analysis

International Nuclear Information System (INIS)

Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of Subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of a basic variable is transformed into a set of RSs of conditional failure probabilities with respect to that parameter. Using the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae of the RS estimator, its variance and its coefficient of variation are derived in detail. The illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and structural systems with single and multiple failure modes
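
The core idea of Subsim, writing a rare failure probability as a product of larger conditional probabilities over nested events, can be checked numerically. The toy below uses exact standard normal tails in place of the MCMC-generated conditional samples; the levels are arbitrary:

```python
import math

def tail(b):
    """P(X > b) for a standard normal X."""
    return 0.5 * math.erfc(b / math.sqrt(2))

# nested failure events F_i = {X > b_i}; the product of conditional
# probabilities telescopes back to the rare-event probability
levels = [1.0, 2.0, 3.0]
p = tail(levels[0])                 # P(F_1)
for b_prev, b in zip(levels, levels[1:]):
    p *= tail(b) / tail(b_prev)     # P(F_i | F_{i-1})
```

In the real algorithm each conditional factor is estimated from samples (MCMC or IS), which is exactly where the paper's sensitivity estimators attach.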

5. Reliability analysis of containment isolation systems

Energy Technology Data Exchange (ETDEWEB)

Pelto, P.J.; Ames, K.R.; Gallucci, R.H.

1985-06-01

This report summarizes the results of the Reliability Analysis of Containment Isolation System Project. Work was performed in five basic areas: design review, operating experience review, related research review, generic analysis and plant specific analysis. Licensee Event Reports (LERs) and Integrated Leak Rate Test (ILRT) reports provided the major sources of containment performance information used in this study. Data extracted from LERs were assembled into a computer data base. Qualitative and quantitative information developed for containment performance under normal operating conditions and design basis accidents indicate that there is room for improvement. A rough estimate of overall containment unavailability for relatively small leaks which violate plant technical specifications is 0.3. An estimate of containment unavailability due to large leakage events is in the range of 0.001 to 0.01. These estimates are dependent on several assumptions (particularly on event duration times) which are documented in the report.

6. Reliability analysis of containment isolation systems

International Nuclear Information System (INIS)

This report summarizes the results of the Reliability Analysis of Containment Isolation System Project. Work was performed in five basic areas: design review, operating experience review, related research review, generic analysis and plant specific analysis. Licensee Event Reports (LERs) and Integrated Leak Rate Test (ILRT) reports provided the major sources of containment performance information used in this study. Data extracted from LERs were assembled into a computer data base. Qualitative and quantitative information developed for containment performance under normal operating conditions and design basis accidents indicate that there is room for improvement. A rough estimate of overall containment unavailability for relatively small leaks which violate plant technical specifications is 0.3. An estimate of containment unavailability due to large leakage events is in the range of 0.001 to 0.01. These estimates are dependent on several assumptions (particularly on event duration times) which are documented in the report

7. Reliability Analysis Techniques for Communication Networks in Nuclear Power Plant

International Nuclear Information System (INIS)

The objective of this project is to investigate and study existing reliability analysis techniques for communication networks in order to develop reliability analysis models for a nuclear power plant's safety-critical networks. It is necessary to make a comprehensive survey of current methodologies for communication network reliability. The major outputs of this study are design characteristics of safety-critical communication networks, efficient algorithms for quantifying the reliability of communication networks, and preliminary models for assessing the reliability of safety-critical communication networks
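
For intuition on what "quantifying reliability of communication networks" involves, here is a brute-force two-terminal reliability computation; node and edge names are invented, and exact enumeration only scales to small networks, which is why such studies need efficient algorithms:

```python
from itertools import product

def two_terminal_reliability(edges, probs, s, t):
    """Exact s-t reliability by enumerating all edge up/down states.
    edges: list of (node, node); probs: per-edge working probability."""
    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        pr = 1.0
        up = []
        for e, p, st in zip(edges, probs, states):
            pr *= p if st else 1 - p
            if st:
                up.append(e)
        # reachability over working edges only
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for a, b in up:
                for nxt in ((b,) if a == u else (a,) if b == u else ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
        if t in seen:
            total += pr
    return total

# two parallel links vs. two links in series, each 90% reliable
parallel = two_terminal_reliability([("s", "t"), ("s", "t")], [0.9, 0.9], "s", "t")
series = two_terminal_reliability([("s", "a"), ("a", "t")], [0.9, 0.9], "s", "t")
```

The enumeration is O(2^edges); practical algorithms prune it with cut/path decompositions, which is the kind of efficiency the abstract refers to.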

8. Enertech High Reliability prototype vibration analysis

Science.gov (United States)

Sexton, J. H.

Modal analysis techniques were experimentally applied to study the dynamic interaction between a wind turbine generator and its support tower. Details of the techniques applied and the corresponding results are discussed. Results of vibration tests indicate that the Enertech High-Reliability wind turbine generator (WTG)/support structure second-mode bending frequency was 13.2 Hz, while the blades' first-mode bending frequencies were 12.4 Hz for blade two and 14.6 Hz for blade one. Significant WTG/tower response was observed and recorded during WTG operation, which was traced to this system response characteristic.

9. Bayesian networks with applications in reliability analysis

OpenAIRE

Langseth, Helge

2002-01-01

A common goal of the papers in this thesis is to propose, formalize and exemplify the use of Bayesian networks as a modelling tool in reliability analysis. The papers span work in which Bayesian networks are merely used as a modelling tool (Paper I), work where models are specially designed to utilize the inference algorithms of Bayesian networks (Paper II and Paper III), and work where the focus has been on extending the applicability of Bayesian networks to very large domains (Paper IV and ...

10. A comparative study on the reliability of open cluster parameters

Science.gov (United States)

Netopil, M.; Paunzen, E.; Carraro, G.

2015-10-01

Context. Open clusters are known as excellent tracers of the structure and chemical evolution of the Galactic disk; however, the accuracy and reliability of open cluster parameters is poorly known. Aims: In recent years, several studies aimed to present homogeneous open cluster parameter compilations, which are based on different approaches and photometric data. These catalogues are excellent sources to facilitate testing of the actual accuracy of open cluster parameters. Methods: We compare seven cluster parameter compilations statistically and with an external sample, which comprises the mean results of individual studies. Furthermore, we selected the objects IC 4651, NGC 2158, NGC 2383, NGC 2489, NGC 2627, NGC 6603, and Trumpler 14, with the main aim of highlighting differences in the fitting solutions. Results: We derived correction terms for each cluster parameter, using the external calibration sample. Most results by the compilations are reasonably scaled, but there are trends or constant offsets of varying degree. We also identified one data set, which appears too erroneous to allow adjustments. After the correction, the mean intrinsic errors amount to about 0.2 dex for the age, 0.08 mag for the reddening, and 0.35 mag for the distance modulus. However, there is no study that characterises the cluster morphologies of all test cases in a correct and consistent manner. Furthermore, we found that the largest compilations probably include at least 20 percent of problematic objects, for which the parameters differ significantly. These could be, among others, doubtful or unlikely open clusters that do not facilitate an unambiguous fitting solution.

11. Structural reliability analysis based on the cokriging technique

International Nuclear Information System (INIS)

Approximation methods are widely used in structural reliability analysis because they are simple to create and provide explicit functional relationships between the responses and variables instead of the implicit limit state function. Recently, the kriging method, a semi-parametric interpolation technique that can be used for deterministic optimization and structural reliability, has gained popularity. However, to fully exploit the kriging method, especially in high-dimensional problems, a large number of sample points must be generated to fill the design space, which can be very expensive and even impractical in practical engineering analysis. Therefore, in this paper a new method, the cokriging method, which is an extension of kriging, is proposed to calculate structural reliability. The cokriging approximation incorporates secondary information such as the values of the gradients of the function being approximated. This paper explores the use of the cokriging method for structural reliability problems by comparing it with the kriging method on some numerical examples. The results indicate that the cokriging procedure described in this work can generate approximation models with improved accuracy and efficiency for structural reliability problems and is a viable alternative to kriging.
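
A minimal sketch of the surrogate idea behind kriging: fit a cheap interpolator to a few evaluations of the expensive limit-state function, then query the interpolator instead. This is a plain Gaussian-kernel interpolator, without the regression trend or the gradient information that distinguishes cokriging, and the limit-state function is invented:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (pure stdlib)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_surrogate(xs, ys, theta=1.0):
    """Interpolating surrogate with a Gaussian correlation kernel,
    the core ingredient of a simple kriging predictor."""
    K = [[math.exp(-theta * (a - b) ** 2) for b in xs] for a in xs]
    w = solve(K, ys)
    def predict(x):
        return sum(wi * math.exp(-theta * (x - xi) ** 2)
                   for wi, xi in zip(w, xs))
    return predict

def g(x):                       # stand-in for an expensive limit state
    return x ** 3 - x

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ghat = rbf_surrogate(xs, [g(x) for x in xs])
```

Cokriging additionally conditions the weights on gradient observations dg/dx at the sample points, which is how it squeezes more accuracy out of the same number of expensive evaluations.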

12. Comparison of Methods for Dependency Determination between Human Failure Events within Human Reliability Analysis

International Nuclear Information System (INIS)

The human reliability analysis (HRA) is a highly subjective evaluation of human performance and an input to probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. A further objective is to identify the key features which may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with a focus on dependency determination, comparing the Institute Jozef Stefan human reliability analysis (IJS-HRA) and standardized plant analysis risk human reliability analysis (SPAR-H) methods. Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by developing more detailed guidelines for human reliability analysis, with many practical examples for all steps of the process of evaluating human performance
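
For concreteness, the THERP dependence model that underlies such dependency comparisons maps a nominal HEP to a conditional HEP through fixed formulas per dependence level, as tabulated in the THERP handbook (NUREG/CR-1278); sketched here from those published formulas:

```python
def conditional_hep(hep, level):
    """THERP conditional HEP given failure of the preceding task,
    by dependence level (zero/low/moderate/high/complete)."""
    formulas = {
        "zero":     lambda p: p,
        "low":      lambda p: (1 + 19 * p) / 20,
        "moderate": lambda p: (1 + 6 * p) / 7,
        "high":     lambda p: (1 + p) / 2,
        "complete": lambda p: 1.0,
    }
    return formulas[level](hep)
```

A nominal HEP of 0.01 becomes 0.505 under high dependence, a fifty-fold jump driven entirely by the analyst's choice of level, which is the subjectivity the paper quantifies.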

13. Comparison of methods for dependency determination between human failure events within human reliability analysis

International Nuclear Information System (INIS)

The Human Reliability Analysis (HRA) is a highly subjective evaluation of human performance and an input to probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. A further objective is to identify the key features which may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with a focus on dependency determination, comparing the Institute Jozef Stefan - Human Reliability Analysis (IJS-HRA) and Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H) methods. Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by developing more detailed guidelines for human reliability analysis, with many practical examples for all steps of the process of evaluating human performance. (author)

14. RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM

Science.gov (United States)

Bowerman, P. N.

1994-01-01

RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. 
RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
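
The two group-evaluation algorithms named in the RELAV description can be sketched as follows: a cumulative binomial for equal item reliabilities, and a dynamic-programming recursion in the spirit of Barlow & Heidtmann for unequal ones. This is an outside reconstruction for illustration, not RELAV's actual source:

```python
from math import comb

def k_out_of_n_equal(k, n, p):
    """Success probability of a k-out-of-n group, all items with
    reliability p (cumulative binomial)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def k_out_of_n(k, ps):
    """Unequal item reliabilities: dynamic programming over the
    distribution of the number of working items."""
    dist = [1.0]                        # P(j of the first m items work)
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for j, pr in enumerate(dist):
            new[j] += pr * (1 - p)      # this item fails
            new[j + 1] += pr * p        # this item works
        dist = new
    return sum(dist[k:])
```

RELAV's "folding" then simply feeds each group's result back in as the reliability of a single item one level up, until only the system-level group remains.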

15. CRAX/Cassandra Reliability Analysis Software

Energy Technology Data Exchange (ETDEWEB)

Robinson, D.

1999-02-10

Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf components (COTS). In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CRAX (Cassandra Exoskeleton) reliability analysis software.

16. Integrated Reliability and Risk Analysis System (IRRAS)

International Nuclear Information System (INIS)

The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance
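
The cut set quantification step that fault-tree tools like IRRAS perform can be illustrated with the standard minimal-cut-set upper bound; the event names and probabilities below are invented:

```python
def top_event_probability(cut_sets, p):
    """Minimal-cut-set upper bound used in fault tree quantification:
    P(top) <= 1 - prod_i (1 - P(cut_i)), assuming independent
    basic events."""
    survive = 1.0
    for cs in cut_sets:
        p_cut = 1.0
        for event in cs:
            p_cut *= p[event]           # AND of basic events in one cut
        survive *= 1.0 - p_cut          # no cut set occurs
    return 1.0 - survive

# top = A OR (B AND C): minimal cut sets {A} and {B, C}
p_top = top_event_probability([["A"], ["B", "C"]],
                              {"A": 0.01, "B": 0.1, "C": 0.2})
```

The bound is tight when cut set probabilities are small, which is the usual regime for accident sequences; exact quantification must additionally handle shared events across cut sets.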

17. Advancing Usability Evaluation through Human Reliability Analysis

International Nuclear Information System (INIS)

This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues
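
A sketch of the SPAR-H-style quantification this paper adapts: a nominal error probability is scaled by performance shaping factor multipliers (here, usability heuristics would supply the multipliers), with the standard correction that keeps the result a valid probability when several PSFs are negative. The numbers are illustrative:

```python
def adjusted_hep(nhep, psf_multipliers):
    """Nominal HEP scaled by PSF multipliers. With three or more
    negative (>1) PSFs, apply the SPAR-H adjustment factor so the
    composite stays <= 1."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(1.0, nhep * composite)

# hypothetical usability evaluation: nominal 0.001, two violated
# heuristics mapped to multipliers 10 and 2
uep = adjusted_hep(0.001, [10, 2])
```

Multiplying the resulting UEP by a consequence weight then gives the prioritization score the paper describes.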

18. SURE reliability analysis: Program and mathematics

Science.gov (United States)

Butler, Ricky W.; White, Allan L.

1988-01-01

The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

19. Integrated Reliability and Risk Analysis System (IRRAS)

Energy Technology Data Exchange (ETDEWEB)

Russell, K.D.; McKay, M.K.; Sattison, M.B.; Skinner, N.L.; Wood, S.T. [EG and G Idaho, Inc., Idaho Falls, ID (United States)]; Rasmuson, D.M. [Nuclear Regulatory Commission, Washington, DC (United States)]

1992-01-01

The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance.

20. Advancing Usability Evaluation through Human Reliability Analysis

Energy Technology Data Exchange (ETDEWEB)

Ronald L. Boring; David I. Gertman

2005-07-01

This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.

1. Estimating Reliability of Power Factor Correction Circuits: A Comparative Study

Directory of Open Access Journals (Sweden)

P. Srinivas

2015-04-01

Reliability plays an important role in power supplies, as the power supply is the very heart of every piece of electronic equipment. For other electronic equipment, a certain failure mode, at least for a part of the total system, can often be tolerated without serious (critical) after-effects. However, for the power supply no such condition can be accepted, since very high demands on reliability must be met. At higher power levels, the CCM boost converter is the preferred topology for implementing a front end with PFC. As a result, significant efforts have been made to improve the performance of the boost converter. This paper is one such effort, improving the performance of the converter from the reliability point of view. In this paper a boost power factor correction converter is simulated with a single switch and with the interleaving technique in CCM, DCM and CRM modes under different output power ratings, and the results of the converter are explored from the reliability point of view.
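
Reliability comparisons of converter topologies typically start from a parts-count model: in a non-redundant (series) system the part failure rates add, giving the MTBF and mission reliability directly. A sketch with invented FIT rates for the main PFC-stage parts:

```python
import math

def series_reliability(failure_rates_fit, hours):
    """Parts-count model for a non-redundant system: failure rates
    (in FIT, failures per 1e9 hours) add. Returns (MTBF in hours,
    reliability over the given mission time)."""
    lam = sum(failure_rates_fit) / 1e9      # failures per hour
    return 1.0 / lam, math.exp(-lam * hours)

# hypothetical FIT rates: switch, diode, inductor, bulk cap, controller
mtbf, r_one_year = series_reliability([120, 80, 15, 200, 50], 8760)
```

Interleaving changes such a comparison in both directions: it redistributes current stress across two switch/diode pairs (lowering per-part rates) while raising the parts count, which is why simulation under each mode and power rating is needed to settle the trade-off.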

2. Reliability Analysis of a System Using Intuitionistic Fuzzy Sets

OpenAIRE

Sharma, M. K.; Vintesh Sharma; Rajesh Dangwal

2012-01-01

In general, fuzzy sets are used to analyze system reliability. The present paper attempts to review the fuzzy/possibility tools for dealing with the reliability of series-parallel network systems. Various issues of reasoning-based approaches in this framework are reviewed, discussed and compared with the standard approaches to reliability. To analyze fuzzy system reliability, the reliability of each component of the system is considered as a trapezoidal intuitionistic fuzzy number. Trapezoidal...
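As a rough sketch of the trapezoidal approach, a series system's fuzzy reliability can be approximated by componentwise multiplication of trapezoidal numbers whose points lie in [0, 1]. This is a common simplification; the full intuitionistic treatment also carries a second (non-membership) trapezoid, omitted here for brevity, and the component values are invented.

```python
def fuzzy_multiply(t1, t2):
    """Approximate product of two trapezoidal fuzzy numbers (a, b, c, d)
    with all points in [0, 1], as commonly used for reliabilities."""
    return tuple(x * y for x, y in zip(t1, t2))

def series_fuzzy_reliability(components):
    """Series system: fuzzy system reliability is the fuzzy product of
    the component reliabilities."""
    r = (1.0, 1.0, 1.0, 1.0)
    for c in components:
        r = fuzzy_multiply(r, c)
    return r

comp = (0.80, 0.85, 0.90, 0.95)   # hypothetical component reliability
system = series_fuzzy_reliability([comp, comp])
```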

3. Comparative assessment of selected PWR auxiliary feedwater system reliability analyses

International Nuclear Information System (INIS)

This paper presents a sample of results obtained in reviewing utility submittals of Auxiliary Feedwater System reliability studies. These results are then used to illustrate a few general points regarding such studies. The submittals and reviews for operating license applications are quite significant in that they represent an application of probabilistic risk assessment techniques in the licensing process

4. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES

Energy Technology Data Exchange (ETDEWEB)

Ronald L. Boring; David I. Gertman; Katya Le Blanc

2011-09-01

This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

5. Human Reliability Analysis for Small Modular Reactors

Energy Technology Data Exchange (ETDEWEB)

Ronald L. Boring; David I. Gertman

2012-06-01

Because no human reliability analysis (HRA) method was specifically developed for small modular reactors (SMRs), the application of any current HRA method to SMRs represents tradeoffs. A first-generation HRA method like THERP provides clearly defined activity types, but these activity types do not map to the human-system interface or concept of operations confronting SMR operators. A second-generation HRA method like ATHEANA is flexible enough to be used for SMR applications, but there is currently insufficient guidance for the analyst, requiring considerably more first-of-a-kind analyses and extensive SMR expertise in order to complete a quality HRA. Although no current HRA method is optimized to SMRs, it is possible to use existing HRA methods to identify errors, incorporate them as human failure events in the probabilistic risk assessment (PRA), and quantify them. In this paper, we provide preliminary guidance to assist the human reliability analyst and reviewer in understanding how to apply current HRA methods to the domain of SMRs. While it is possible to perform a satisfactory HRA using existing HRA methods, ultimately it is desirable to formally incorporate SMR considerations into the methods. This may require the development of new HRA methods. More practicably, existing methods need to be adapted to incorporate SMRs. Such adaptations may take the form of guidance on the complex mapping between conventional light water reactors and small modular reactors. While many behaviors and activities are shared between current plants and SMRs, the methods must adapt if they are to perform a valid and accurate analysis of plant personnel performance in SMRs.

6. Human reliability analysis in Third Qinshan nuclear power plant

International Nuclear Information System (INIS)

Human reliability analysis (HRA) is an important component of probabilistic safety assessment (PSA). The design HRA was conducted by AECL, and the technique was oversimplified. In order to make the HRA represent the operational state of Third Qinshan Nuclear Power Plant more realistically, the HRA was re-analyzed, with dependence between events added. On the basis of a comparison of internationally prevailing HRA techniques, a standardized THERP+HCR technique was adopted. The updated analysis is basically consistent with the AECL analysis, while its rationality and accuracy are obviously improved and the results are more credible. (authors)

7. Reliability test and failure analysis of high power LED packages

International Nuclear Information System (INIS)

A new type of application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared. It is found that test methods and failure criteria are quite different, and rapid reliability assessment standards are urgently needed by the LED industry. An 85 °C/85% RH test at 700 mA is used to test our LED modules alongside those of three other vendors for 1000 h; our modules show no visible degradation in optical performance, while those of two other vendors show significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT and optical microscopy are used on the LED packages. Failure mechanisms such as delaminations and cracks are detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for the failure analysis and reliability design of LED packaging. One example shows that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing. (semiconductor devices)

8. Safety and reliability assessment by using dynamic reliability analysis method

International Nuclear Information System (INIS)

DYLAM and its related applications are reviewed in detail and found to have many favorable characteristics. Concerning human factor analysis, the study demonstrates that the DYLAM methodology represents an appropriate tool for studying man-machine behavior, provided that DYLAM is used to model the machine behavior and an appropriate operator-interface human factor model is included. A hybrid model, which is a synthesis of the DYLAM model, a system thermodynamic simulation model and a neural network predictive model, is implemented and used to dynamically analyze the CANDU pressurizer system

9. Probabilistic risk assessment course documentation. Volume 3. System reliability and analysis techniques, Session A - reliability

International Nuclear Information System (INIS)

This course in System Reliability and Analysis Techniques focuses on the quantitative estimation of reliability at the systems level. Various methods are reviewed, but the structure provided by the fault tree method is used as the basis for system reliability estimates. The principles of fault tree analysis are briefly reviewed. Contributors to system unreliability and unavailability are reviewed, models are given for quantitative evaluation, and the requirements for both generic and plant-specific data are discussed. Also covered are issues of quantifying component faults that relate to the systems context in which the components are embedded. All reliability terms are carefully defined. 44 figs., 22 tabs
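The fault tree quantification the course builds on can be illustrated with a minimal evaluator. The gate formulas assume independent basic events (shared events would require minimal cut sets); the example tree and its probabilities are invented for illustration.

```python
def ft_prob(node):
    """Evaluate a small fault tree given as nested tuples:
    ('AND', child, ...) / ('OR', child, ...) / float basic-event
    probability.  Basic events are assumed independent."""
    if isinstance(node, float):
        return node
    gate, *children = node
    probs = [ft_prob(c) for c in children]
    if gate == 'AND':            # all inputs must fail
        p = 1.0
        for q in probs:
            p *= q
        return p
    if gate == 'OR':             # any input failing suffices
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p
    raise ValueError(gate)

# Hypothetical top event: system fails if both redundant pumps fail,
# or the isolation valve fails.
top = ('OR', ('AND', 1e-2, 1e-2), 1e-3)
p_top = ft_prob(top)
```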

10. Travel time Reliability Analysis Using Entropy

OpenAIRE

Neveen Shlayan; Vidhya Kumaresan; Pushkin Kachroo; Brian Hoeft

2013-01-01

Travel time reliability is a measure that is commonly extracted from travel time measurements. It has served as a vital indicator of the transportation system's performance, making the concept of obtaining reliability from travel time data very useful. Travel time is a good indicator of the performance of a particular highway segment; however, it does not convey all aspects of the overall performance of the transportation system. Travel Time Reliability is defined as the consistency of traffic...
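One simple way to turn travel time measurements into a reliability indicator, in the entropy spirit of the title, is the Shannon entropy of binned travel times: lower entropy means travel times cluster in few bins, i.e. a more consistent segment. The bin width and the sample values used in testing are arbitrary choices, not the paper's.

```python
import math
from collections import Counter

def travel_time_entropy(times, bin_width=5.0):
    """Shannon entropy (bits) of travel times grouped into fixed-width
    bins; 0 bits means perfectly consistent travel times."""
    bins = Counter(int(t // bin_width) for t in times)
    n = len(times)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```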

11. Reliability Analysis of an Offshore Structure

DEFF Research Database (Denmark)

Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Rackwitz, R.; Bryla, P.

1992-01-01

A jacket type offshore structure from the North Sea is considered. The time variant reliability is estimated for failure defined as brittie fradure and crack through the tubular roerober walls. The stochastic modeiling is described. The hot spot stress speetral moments as fundion of the stochastic variables are desenbed using spline fundion response surfaces. A Laplace integral expansion is used to estimate the time variant reliability. Parameter studies are performed for the reliability estimat...

12. Meta-Analysis of Scale Reliability Using Latent Variable Modeling

Science.gov (United States)

Raykov, Tenko; Marcoulides, George A.

2013-01-01

A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…

13. Mathematical Methods in Survival Analysis, Reliability and Quality of Life

CERN Document Server

Huber, Catherine; Mesbah, Mounir

2008-01-01

Reliability and survival analysis are important applications of stochastic mathematics (probability, statistics and stochastic processes) that are usually covered separately in spite of the similarity of the involved mathematical theory. This title aims to redress this situation: it includes 21 chapters divided into four parts: Survival analysis, Reliability, Quality of life, and Related topics. Many of these chapters were presented at the European Seminar on Mathematical Methods for Survival Analysis, Reliability and Quality of Life in 2006.

14. Mechanical Properties for Reliability Analysis of Structures in Glassy Carbon

CERN Document Server

Garion, Cédric

2014-01-01

Despite its good physical properties, the glassy carbon material is not widely used, especially for structural applications. Nevertheless, its transparency to particles and temperature resistance are interesting properties for applications to vacuum chambers and components in high energy physics. For example, it has been proposed for a fast shutter valve in a particle accelerator [1][2]. The mechanical properties have to be carefully determined to assess the reliability of structures in such a material. In this paper, mechanical tests have been carried out to determine the elastic parameters, strength and toughness of commercial grades. A statistical approach, based on the Weibull distribution, is used to characterize the material both in tension and compression. The results are compared to the literature and the difference in properties between these two loading cases is shown. Based on a finite element analysis, a statistical approach is applied to define the reliability of a structural component in gl...
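The two-parameter Weibull strength model behind such statistical characterizations can be sketched as follows. The shape and scale values in the example are hypothetical, and the volume or area scaling of the Weibull stress that a full structural analysis would include is omitted.

```python
import math

def weibull_failure_probability(stress, sigma0, m):
    """Two-parameter Weibull strength model for a brittle material:
    P_f = 1 - exp(-(sigma/sigma0)**m), with scale sigma0 and shape m.
    Volume/area scaling of the Weibull stress is omitted for brevity."""
    return 1.0 - math.exp(-((stress / sigma0) ** m))

# Hypothetical grade: shape m = 10, characteristic strength 100 MPa.
p_f_60 = weibull_failure_probability(60.0, 100.0, 10.0)
```

By construction the failure probability at the characteristic strength sigma0 is 1 - 1/e, regardless of the shape parameter.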

15. Issues in benchmarking human reliability analysis methods : a literature review.

Energy Technology Data Exchange (ETDEWEB)

Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

2008-04-01

There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

16. Failure analysis and reliability improvement of small turbine engine blades

Science.gov (United States)

Shee, H. K.; Chen, H. Y.; Yang, T. W.

1992-07-01

Analysis and testing procedures for small turbine engines are presented which are aimed at verifying the critical failure modes and improving the performance, reliability, and safety of operating engine blades. These procedures include metallographic examination; chemical ingredient, vibration, modal, and stress analyses; fatigue life prediction; and modal testing with and without coating. It is demonstrated that, for the small turbine engine under consideration, the most probable failure mode is fatigue fracture rather than creep fracture. An approach based on reducing the number of stators from 17 to 14 is found to be the most beneficial for improving the fatigue performance and reliability of the engine blades, as compared to the surface coating and high-strength material approaches. This approach moves the vibrational frequencies of the turbine engine away from the operating frequencies, thus significantly reducing the vibration level of the engine blades.

17. Optimization Based Efficiencies in First Order Reliability Analysis

Science.gov (United States)

2003-01-01

This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done with a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
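A minimal sketch of the idea, assuming a limit state already expressed in standard normal space: finite differences are used only for the initial gradient, and Broyden's rank-one formula maintains it thereafter inside an HL-RF (Hasofer-Lind/Rackwitz-Fiessler) iteration. A linear limit state is chosen so the exact answer, beta = 3/sqrt(2), is known; this is an illustration of the mechanics, not the paper's implementation.

```python
import math

def bform(g, u0, tol=1e-8, h=1e-6, max_iter=50):
    """HL-RF first-order reliability iteration in standard normal space.
    The gradient is computed by finite differences once, at the start,
    and then maintained by Broyden rank-one updates."""
    u = list(u0)
    gu = g(u)
    grad = []
    for i in range(len(u)):        # one-time finite-difference gradient
        up = list(u)
        up[i] += h
        grad.append((g(up) - gu) / h)
    for _ in range(max_iter):
        norm2 = sum(c * c for c in grad)
        step = (sum(gi * ui for gi, ui in zip(grad, u)) - gu) / norm2
        u_new = [step * gi for gi in grad]          # HL-RF update
        g_new = g(u_new)
        du = [a - b for a, b in zip(u_new, u)]
        du2 = sum(d * d for d in du)
        if du2 > 0.0:                               # Broyden rank-one update
            corr = (g_new - gu - sum(gi * di for gi, di in zip(grad, du))) / du2
            grad = [gi + corr * di for gi, di in zip(grad, du)]
        u, gu = u_new, g_new
        if abs(gu) < tol and du2 < tol:
            break
    return math.sqrt(sum(c * c for c in u)), u

# Linear limit state g(u) = 3 - u1 - u2: exact beta = 3 / sqrt(2).
beta, u_star = bform(lambda u: 3.0 - u[0] - u[1], [0.0, 0.0])
```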

18. Reliability analysis for components using Fuzzy Membership functions

OpenAIRE

Chavan P.R.; Bajaj V. H.

2009-01-01

This paper presents reliability analysis for components using fuzzy operations. Probability Assumption and Fuzzy State assumption (PROFUST) reliability theory is used to find the reliability of each system component. Here, the reliability of each system component is represented by a trapezoidal fuzzy number. The proposed method simplifies the fuzzy arithmetic operations on fuzzy numbers. Finally, a numerical example is given to verify the efficiency of the operations used.

19. PC-PRAISE, BWR Piping Reliability Analysis

International Nuclear Information System (INIS)

1 - Description of program or function: PC-PRAISE is a probabilistic fracture mechanics computer code developed for IBM or IBM compatible personal computers to estimate probabilities of leak and break in nuclear power plant cooling piping. 2 - Method of solution: PC-PRAISE considers the initiation and/or growth of crack-like defects in piping weldments. The initiation analyses are based on the results of laboratory studies and field observations in austenitic piping material operating under boiling water reactor conditions. The considerable scatter in such results is quantified and incorporated into a probabilistic model. The crack growth analysis is based on (deterministic) fracture mechanics principles, in which some of the inputs (such as initial crack size) are considered to be random variables. Monte Carlo simulation, with stratified sampling on initial crack size, is used to generate weldment reliability results. 3 - Restrictions on the complexity of the problem: There is essentially no limitation with PC-PRAISE, but for a large number of replications in the Monte Carlo simulation scheme, the computation time may become prohibitive
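The stratified-sampling idea can be shown with a deliberately simplified stand-in for PC-PRAISE's models: an exponentially distributed initial crack depth, deterministic exponential growth, and a leak criterion on wall thickness. Stratifying the inverse-CDF probability into equal slices forces one sample per stratum, which sharply reduces variance for this monotone indicator. All parameter values are invented.

```python
import math
import random

def stratified_leak_probability(n_strata, scale, thickness, k, years, seed=0):
    """Monte Carlo with stratified sampling on initial crack depth a0
    (exponential with mean `scale`, sampled by inverse CDF with one
    draw per equal-probability stratum).  The weld 'leaks' if the
    crack, grown exponentially at rate k, exceeds the wall thickness."""
    rng = random.Random(seed)
    hits = 0
    for j in range(n_strata):
        p = (j + rng.random()) / n_strata      # uniform within stratum j
        a0 = -scale * math.log(1.0 - p)        # inverse exponential CDF
        if a0 * math.exp(k * years) > thickness:
            hits += 1
    return hits / n_strata

# Hypothetical: mean initial depth 1 mm, 10 mm wall, 5%/yr growth, 40 yr.
p_leak = stratified_leak_probability(10000, 1.0, 10.0, 0.05, 40.0)
```

Because the leak indicator is monotone in the stratified probability, only the single stratum straddling the threshold contributes sampling error, so the estimate matches the analytic value exp(-10 e^-2) to within about 1/n_strata.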

20. Individual Differences in Human Reliability Analysis

Energy Technology Data Exchange (ETDEWEB)

Jeffrey C. Joe; Ronald L. Boring

2014-06-01

While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.

1. Space Mission Human Reliability Analysis (HRA) Project

Science.gov (United States)

Boyer, Roger

2014-01-01

The purpose of the Space Mission Human Reliability Analysis (HRA) Project is to extend current ground-based HRA risk prediction techniques to a long-duration, space-based tool. Ground-based HRA methodology has been shown to be a reasonable tool for short-duration space missions, such as Space Shuttle and lunar fly-bys. However, longer-duration deep-space missions, such as asteroid and Mars missions, will require the crew to be in space for 400 to 900 days, with periods of extended autonomy and self-sufficiency. Current indications are that higher risk due to fatigue, physiological effects of extended low-gravity environments, and other factors may impact HRA predictions. For this project, Safety & Mission Assurance (S&MA) will work with Human Health & Performance (HH&P) to establish what is currently used to assess human reliability for human space programs, identify human performance factors that may be sensitive to long-duration space flight, collect available historical data, and update current tools to account for performance shaping factors believed to be important to such missions. This effort will also contribute data to the Human Performance Data Repository and influence the Space Human Factors Engineering research risks and gaps (part of the HRP Program). An accurate risk predictor mitigates Loss of Crew (LOC) and Loss of Mission (LOM). The end result will be an updated HRA model that can effectively predict risk on long-duration missions.

2. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

Science.gov (United States)

Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

2005-01-01

This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters, and present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight check-outs and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
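The engine-count and engine-out drivers can be explored with a simple binomial model: the stage succeeds if no engine fails catastrophically and at least k of n engines keep running. The failure probability and catastrophic fraction below are hypothetical, and the model ignores the switching-reliability and shutdown-fraction terms the paper also lists.

```python
from math import comb

def stage_reliability(n, k, engine_fail_p, catastrophic_fraction):
    """Engine-out model: each engine runs with probability 1-q, fails
    benignly with probability q*(1-cf), or fails catastrophically (loss
    of stage) with probability q*cf.  Success = no catastrophic failure
    and at most n-k benign failures."""
    q_benign = engine_fail_p * (1.0 - catastrophic_fraction)
    p_ok = 1.0 - engine_fail_p
    return sum(comb(n, f) * q_benign ** f * p_ok ** (n - f)
               for f in range(n - k + 1))

# Hypothetical: 5 engines, 1% failure probability each, 20% of
# failures catastrophic; mission tolerates one engine out.
r_engine_out = stage_reliability(5, 4, 0.01, 0.2)
```

With no engine-out capability (k = n) the expression collapses to (1-q)^n, so the gain from tolerating one benign failure can be read off directly.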

3. Probabilistic safety analysis and human reliability analysis. Proceedings. Working material

International Nuclear Information System (INIS)

An international meeting on Probabilistic Safety Assessment (PSA) and Human Reliability Analysis (HRA) was jointly organized by Electricite de France - Research and Development (EDF DER) and SRI International in co-ordination with the International Atomic Energy Agency. The meeting was held in Paris 21-23 November 1994. A group of international and French specialists in PSA and HRA participated at the meeting and discussed the state of the art and current trends in the following six topics: PSA Methodology; PSA Applications; From PSA to Dependability; Incident Analysis; Safety Indicators; Human Reliability. For each topic a background paper was prepared by EDF/DER and reviewed by the international group of specialists who attended the meeting. The results of this meeting provide a comprehensive overview of the most important questions related to the readiness of PSA for specific uses and areas where further research and development is required. Refs, figs, tabs

4. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

Energy Technology Data Exchange (ETDEWEB)

Ronald Laurids Boring

2010-11-01

This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

5. System reliability analysis techniques and their relationship to mechanical/structural reliability problems

International Nuclear Information System (INIS)

The paper gives a brief review of the techniques used in the reliability analysis of functional systems. It also considers some of the aspects arising in applying similar techniques to the reliability analysis of mechanical and structural components. The paper concludes that further data acquisition is of prime importance. Additionally, it is suggested that it may be worthwhile to pay increased attention to the on-line monitoring of deterioration in mechanical and structural elements. (orig.)

6. A methodology to incorporate organizational factors into human reliability analysis

International Nuclear Information System (INIS)

A new holistic methodology for Human Reliability Analysis (HRA) is proposed to model the effects of organizational factors on human reliability. Firstly, a conceptual framework is built and used to analyze the causal relationships between organizational factors and human reliability. Then, the inference model for Human Reliability Analysis is built by combining the conceptual framework with Bayesian networks, and is used to execute causal and diagnostic inference of human reliability. Finally, a case example is presented to demonstrate the application of the proposed methodology. The results show that the proposed methodology of combining the conceptual model with Bayesian networks can not only easily model the causal relationship between organizational factors and human reliability, but also, in a given context, quantitatively measure human operational reliability and identify the most likely root causes, or prioritization of root causes, of human error. (authors)
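A toy version of the combined approach, assuming a three-node chain (organizational factor, training adequacy, human error) with invented conditional probabilities, shows both the causal and the diagnostic inference by direct enumeration of the joint distribution.

```python
# Hypothetical chain: Organization poor? -> Training bad? -> Error?
p_org_poor = 0.2
p_train_bad = {True: 0.7, False: 0.1}    # P(bad training | org poor?)
p_error = {True: 0.05, False: 0.005}     # P(human error | bad training?)

def joint(org_poor, train_bad, error):
    """Joint probability of one full assignment of the three nodes."""
    p = p_org_poor if org_poor else 1.0 - p_org_poor
    p *= p_train_bad[org_poor] if train_bad else 1.0 - p_train_bad[org_poor]
    p *= p_error[train_bad] if error else 1.0 - p_error[train_bad]
    return p

# Causal inference: marginal probability of a human error.
p_e = sum(joint(o, t, True) for o in (True, False) for t in (True, False))
# Diagnostic inference: given an error occurred, was the organization poor?
p_poor_given_e = sum(joint(True, t, True) for t in (True, False)) / p_e
```

Real applications would use a Bayesian network library with many more nodes; enumeration is shown only to make the two inference directions concrete.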

7. Affordable reliability engineering life-cycle cost analysis for sustainability & logistical support

CERN Document Server

Wessels, William R

2015-01-01

How Can Reliability Analysis Impact Your Company's Bottom Line? While reliability investigations can be expensive, they can also add value to a product that far exceeds its cost. Affordable Reliability Engineering: Life-Cycle Cost Analysis for Sustainability & Logistical Support shows readers how to achieve the best cost for design, development, testing and evaluation, and how to compare options for minimizing costs while keeping reliability above specifications. The text is based on the premise that all system sustainment costs result from part failure. It examines part failure in the design and sustain

8. Reliability analysis of RC containment structures under combined loads

International Nuclear Information System (INIS)

This paper discusses a reliability analysis method and load combination design criteria for reinforced concrete containment structures under combined loads. The probability based reliability analysis method is briefly described. For load combination design criteria, derivations of the load factors for accidental pressure due to a design basis accident and safe shutdown earthquake (SSE) for three target limit state probabilities are presented

9. Reliability analysis of pipe whip impacts

International Nuclear Information System (INIS)

A probabilistic analysis of a group distribution header (GDH) guillotine break and the damage resulting from the failed GDH impacting against a neighbouring wall was carried out for the Ignalina RBMK-1500 reactor. The NEPTUNE software system was used for the deterministic transient analysis of a GDH guillotine break. Many deterministic analyses were performed using different values of the random variables that were specified by the ProFES software. All the deterministic results were transferred to the ProFES system, which then performed probabilistic analyses of piping failure and wall damage. The Monte Carlo Simulation (MCS) method was used to study the sensitivity of the response variables and the effect of uncertainties in material properties and geometry parameters on the probability of limit states. The First Order Reliability Method (FORM) was used to study the probability of failure of the impacted wall and the support wall. The Response Surface (RS/MCS) method was used to express failure probability as a function of impact load and to investigate the dependence between impact load and failure probability.
The results of the probability analyses for a whipping GDH impacting onto an adjacent wall show that: (i) there is a 0.982 probability that after a GDH guillotine break contact between GDH and wall will occur; (ii) there is a probability of 0.013 that the ultimate tensile strength of concrete at the impact location will be reached, and a through-crack may open; (iii) there is a probability of 0.0126 that the ultimate compressive strength of concrete at the GDH support location will be reached, and the concrete may fail; (iv) at the impact location in the adjacent wall, there is a probability of 0.327 that the ultimate tensile strength of the rebars in the first layer will be reached and the rebars will fail; (v) at the GDH support location, there is a probability of 0.11 that the ultimate stress of the rebars in the first layer will be reached and the rebars will fail. It can be concluded that after a GDH guillotine break, the GDH reinforced concrete support wall and the impacted wall will develop a through-crack or crush with a probability of about 0.013. Only the first layer of rebars, however, will fail in either the impacted wall or the support wall, with probabilities of 0.327 and 0.11, respectively

10. Application of Support Vector Machine to Reliability Analysis of Engine Systems

OpenAIRE

Zhang Xinfeng; Zhao Yan

2013-01-01

Reliability analysis plays a very important role in assessing the performance and making maintenance plans of engine systems. This research presents a comparative study of the predictive performance of support vector machines (SVM), least squares support vector machines (LSSVM) and neural network time series models for forecasting failures and reliability in engine systems. Further, the reliability indexes of engine systems are computed by the Weibull probability paper programmed with Matlab...
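The Weibull probability paper technique mentioned at the end can be reproduced in a few lines: plot ln(-ln(1-F)) against ln(t) using Bernard's median ranks and fit an ordinary least-squares line; the slope is the Weibull shape and exp(-intercept/slope) the scale. The test data are synthetic, generated from a Weibull with known parameters, not the paper's engine data.

```python
import math

def weibull_paper_fit(failure_times):
    """Weibull probability-paper fit: least-squares line through
    (ln t_i, ln(-ln(1 - F_i))) with Bernard's median ranks F_i.
    Returns (shape, scale)."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)                 # Bernard's median rank
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    intercept = ym - slope * xm
    return slope, math.exp(-intercept / slope)

# Synthetic check: times generated from a Weibull(shape=2, scale=100).
n = 10
times = [100.0 * (-math.log(1.0 - (i - 0.3) / (n + 0.4))) ** 0.5
         for i in range(1, n + 1)]
shape, scale = weibull_paper_fit(times)
```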

11. POSSIBILITY AND EVIDENCE-BASED RELIABILITY ANALYSIS AND DESIGN OPTIMIZATION

Directory of Open Access Journals (Sweden)

Hong-Zhong Huang

2013-01-01

Full Text Available Engineering design under uncertainty has gained considerable attention in recent years. A great multitude of new design optimization methodologies and reliability analysis approaches have been put forth with the aim of accommodating various uncertainties. Uncertainties in practical engineering applications are commonly classified into two categories, i.e., aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty arises from unpredictable variation in the performance and processes of systems; it is irreducible even when more data or knowledge are added. Epistemic uncertainty, on the other hand, stems from lack of knowledge of the system due to limited data, measurement limitations, or simplified approximations in modeling system behavior, and it can be reduced by obtaining more data or knowledge. More specifically, aleatory uncertainty is naturally represented by a statistical distribution whose parameters can be characterized by sufficient data. If, however, the data is limited and cannot be quantified in a statistical sense, treatments of epistemic uncertainty can be considered as an alternative in such a situation. Of the several optional treatments for epistemic uncertainty, possibility theory and evidence theory have proved to be the most computationally efficient and stable for reliability analysis and engineering design optimization. This study first attempts to provide a better understanding of uncertainty in engineering design by giving a comprehensive overview of its classifications, theories and design considerations. Then a review is conducted of general topics such as the foundations and applications of possibility theory and evidence theory. This overview includes the most recent results from theoretical research, computational developments and performance improvement of possibility theory and evidence theory, with an emphasis on revealing the capability and characteristics of quantifying uncertainty from different perspectives.
Possibility and evidence theory-based reliability methods have many advantages for practical engineering when compared with traditional probability-based reliability methods: they can work well with limited data, whereas the latter need large amounts of information, often more than is available in engineering practice due to aleatory and epistemic uncertainties. Possible directions for future work are summarized.
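As a minimal illustration of possibility-based reliability under limited data, assume the safety margin M = R - S has a triangular possibility distribution; the possibility and necessity of the failure event {M < 0} then have closed forms (the possibility is the supremum of the distribution over negative margins, the necessity is one minus its supremum over non-negative margins). The margin bounds used here are invented.

```python
def failure_possibility(a, b, c):
    """Possibility and necessity of the failure event {M < 0} for a
    safety margin M with a triangular possibility distribution:
    support [a, c], fully plausible value b (a <= b <= c)."""
    # Possibility of failure: sup of pi(m) over m < 0.
    if b <= 0:
        pos = 1.0
    elif a >= 0:
        pos = 0.0
    else:
        pos = -a / (b - a)
    # Necessity of failure: 1 - sup of pi(m) over m >= 0.
    if b >= 0:
        sup_safe = 1.0
    elif c <= 0:
        sup_safe = 0.0
    else:
        sup_safe = c / (c - b)
    return pos, 1.0 - sup_safe

# Margin believed to lie in [-1, 5] with most plausible value 2.
pos_f, nec_f = failure_possibility(-1.0, 2.0, 5.0)
```

The (necessity, possibility) pair brackets where a precise failure probability would lie if more data were available, which is exactly the appeal of these methods under scarce information.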

12. Reliability demonstration test planning using bayesian analysis

International Nuclear Information System (INIS)

In Nuclear Power Plants, the reliability of all the safety systems is very critical from the safety viewpoint, and it is essential that the required reliability requirements be met while satisfying the design constraints. From practical experience, it is found that the reliability of complex systems such as the Safety Rod Drive Mechanism is of the order of 10^-4 with an uncertainty factor of 10. Demonstrating the reliability of such systems is prohibitive in terms of cost and time, as the number of tests needed is very large. The purpose of this paper is to develop a Bayesian reliability demonstration testing procedure for exponentially distributed failure times with a gamma prior distribution on the failure rate, which can be easily and effectively used to demonstrate component/subsystem/system reliability conformance to stated requirements. The important questions addressed in this paper are: with zero failures, how long should one perform the tests, and how many components are required, to conclude with a given degree of confidence that the component under test meets the reliability requirement? The procedure is explained with an example. It can also be extended to demonstrations with a greater number of failures. The approach presented is applicable for deriving test plans for demonstrating component failure rates of nuclear power plants, as failure data for similar components are becoming available in existing plants elsewhere. The advantages of this procedure are that the criterion upon which it is based is simple and pertinent, that the fitting of the prior distribution is an integral part of the procedure and is based on information regarding two percentiles of this distribution, and that the procedure is straightforward and easy to apply in practice. (author)
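The zero-failure test-time question in this abstract can be sketched numerically. Assuming a gamma prior with integer shape a and rate b (in hours) on the exponential failure rate, the posterior after T failure-free test hours is Gamma(a, b + T), whose CDF has the closed Erlang form used below. The prior parameters, target failure rate and search step are illustrative assumptions, not values from the paper:

```python
import math

def erlang_cdf(x, shape, rate):
    """CDF of Gamma(shape, rate) for integer shape (Erlang distribution):
    P(lambda <= x) = 1 - exp(-rate*x) * sum_{k<shape} (rate*x)^k / k!"""
    s = sum((rate * x) ** k / math.factorial(k) for k in range(shape))
    return 1.0 - math.exp(-rate * x) * s

def demo_test_time(lam_req, confidence, a, b, step=1000.0):
    """Smallest cumulative failure-free test time T (searched in `step`
    increments) such that the Gamma(a, b) prior updates to satisfy
    P(lambda <= lam_req | 0 failures in T) >= confidence."""
    t = 0.0
    while erlang_cdf(lam_req, a, b + t) < confidence:
        t += step
    return t

# Illustrative: prior Gamma(a=2, b=5000 h); demonstrate 1e-4 /h at 90% confidence.
T = demo_test_time(1e-4, 0.90, 2, 5000.0)
```

Under the exponential assumption T is cumulative component-hours, so the same demonstration can be split across many units tested in parallel, which is how the "how many components" question in the abstract is answered in practice.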

13. Reliability Analysis of Uniaxially Ground Brittle Materials

Science.gov (United States)

Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.

1995-01-01

The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
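The Weibull size effect demonstrated in this study can be illustrated with the standard two-parameter scaling relation between specimens of different effective volume; the strength, volume and modulus values below are hypothetical, not the paper's silicon carbide data:

```python
import math

def weibull_failure_probability(sigma, sigma0, m, v, v0=1.0):
    """Two-parameter Weibull: P_f = 1 - exp(-(V/V0) * (sigma/sigma0)^m)."""
    return 1.0 - math.exp(-(v / v0) * (sigma / sigma0) ** m)

def size_scaled_strength(sigma1, v1, v2, m):
    """Equal-failure-probability strength of a specimen with effective
    volume v2, given strength sigma1 at volume v1: sigma1*(v1/v2)^(1/m)."""
    return sigma1 * (v1 / v2) ** (1.0 / m)

# A specimen with 8x the effective volume is predicted to be weaker
# (illustrative numbers: 400 MPa, Weibull modulus m = 10):
s2 = size_scaled_strength(400.0, 1.0, 8.0, 10.0)
```

This is the mechanism behind the abstract's prediction of larger plate strength from flexure-bar data: more stressed volume means more chances of sampling a severe machining flaw.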

14. Development of Reliability Analysis Toolkit for Analysing Plant Maintenance Data

Directory of Open Access Journals (Sweden)

Hussin Hilmi

2014-07-01

Full Text Available Plant failure and maintenance data can be found in abundance; however, their utilization as a basis for improvement action is not fully optimized. This happens because many reliability analyses based on plant data are tedious and time consuming due to the non-standardized nature of the data being recorded. To overcome this issue, this study aims to develop a computer-based reliability analysis toolkit to facilitate proper analysis of plant data. The toolkit can be used to perform both exploratory and inferential analysis. The developed toolkit has been demonstrated to be capable of assisting data gathering and analysis as well as producing estimates of reliability measures.

15. Reliability analysis of redundant disk arrays

Directory of Open Access Journals (Sweden)

?. ?. ???????

2013-07-01

Full Text Available Redundant disk arrays for fault-tolerant data storage and reliability models of repairable systems are discussed. Simplified formulas for mean time to data loss (MTTDL) assessment, taking fault and repair specifics into consideration, are provided along with calculation examples.
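A common form of the simplified MTTDL estimate this abstract refers to, for a single-parity array (RAID-5 style) of n independent disks with exponential failures and repairs, is sketched below; the disk parameters are illustrative assumptions:

```python
def mttdl_single_parity(n_disks, mttf, mttr):
    """Data is lost when a second disk fails during the repair window of the
    first; for mttr << mttf this gives MTTDL ~ MTTF^2 / (n*(n-1)*MTTR)."""
    return mttf ** 2 / (n_disks * (n_disks - 1) * mttr)

# Illustrative: 8 disks, 1,000,000 h MTTF each, 24 h rebuild time.
mttdl = mttdl_single_parity(8, 1.0e6, 24.0)  # in hours
```

The formula makes the repair-rate dependence explicit: halving the rebuild time doubles the estimated MTTDL, which is why rebuild speed dominates practical array design.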

16. Bypassing BDD Construction for Reliability Analysis

DEFF Research Database (Denmark)

Williams, Poul Frederick; Nikolskaia, Macha; Rauzy, Antoine

2000-01-01

In this note, we propose a Boolean Expression Diagram (BED)-based algorithm to compute the minimal p-cuts of Boolean reliability models such as fault trees. BEDs make it possible to bypass the Binary Decision Diagram (BDD) construction, which is the main cost of fault tree assessment.
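For readers unfamiliar with the object being computed: the minimal cut sets of a fault tree are the smallest sets of basic events that trigger the top event. The brute-force sketch below works only for small trees (the whole point of the BED/BDD techniques in the paper is to scale far beyond this); the two-gate tree is a hypothetical example:

```python
from itertools import combinations

def minimal_cut_sets(top, basic_events):
    """Brute-force minimal cut sets of a fault tree given as a Boolean
    function `top` over a dict {event: failed?}. Enumerates event subsets
    by increasing size and skips supersets of already-found cuts."""
    cuts = []
    for k in range(1, len(basic_events) + 1):
        for combo in combinations(basic_events, k):
            if any(set(c) <= set(combo) for c in cuts):
                continue  # not minimal: contains a known cut set
            state = {e: e in combo for e in basic_events}
            if top(state):
                cuts.append(combo)
    return cuts

# Hypothetical tree: TOP = A AND (B OR C)
top = lambda s: s["A"] and (s["B"] or s["C"])
mcs = minimal_cut_sets(top, ["A", "B", "C"])
```

The subset-skip makes the result minimal by construction, mirroring the minimality property the BED algorithm guarantees symbolically.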

17. Digital Processor Module Reliability Analysis of Nuclear Power Plant

International Nuclear Information System (INIS)

Systems used in plants, military equipment, satellites, etc. consist of many electronic parts serving as control modules, which require relatively higher reliability than commercial electronic products. In particular, a nuclear power plant, with its radiation safety implications, requires high safety and reliability, so most parts are qualified to Military-Standard level. Reliability prediction provides a rational basis for system design and also establishes the safety significance of system operations. Various reliability prediction tools have been developed in recent decades; among them, the MIL-HDBK-217 method has been widely used as a powerful prediction tool. In this work, reliability analysis for the Digital Processor Module (DPM, the control module of SMART) is performed by the Parts Stress Method based on MIL-HDBK-217F Notice 2. We use Relex 7.6 from Relex Software Corporation, because the reliability analysis process requires extensive parts libraries and data for failure rate calculation

18. Reliability analysis of self-actuated shutdown system

International Nuclear Information System (INIS)

An analytical study was performed for the reliability of a self-actuated shutdown system (SASS) under the unprotected loss of flow (ULOF) event in a typical loop-type liquid metal fast breeder reactor (LMFBR) by the use of the response surface Monte Carlo analysis method. Dominant parameters for the SASS, such as Curie point characteristics, subassembly outlet coolant temperature, electromagnetic surface condition, etc., were selected and their probability density functions (PDFs) were determined by the design study information and experimental data. To get the response surface function (RSF) for the maximum coolant temperature, transient analyses of ULOF were performed by utilizing the experimental design method in the determination of analytical cases. Then, the RSF was derived by the multi-variable regression analysis. The unreliability of the SASS was evaluated as the probability that the maximum coolant temperature exceeded an acceptable level, employing the Monte Carlo calculation using the above PDFs and RSF. In this study, sensitivities to the dominant parameters were compared. The dispersion of subassembly outlet coolant temperature near the SASS was found to be one of the most sensitive parameters. Fault tree analysis was performed using this value for the SASS in order to evaluate the shutdown system reliability. As a result of this study, the effectiveness of the SASS on the reliability improvement in the LMFBR shutdown system was analytically confirmed. This study has been performed as a part of joint research and development projects for DFBR under the sponsorship of the nine Japanese electric power companies, Electric Power Development Company and the Japan Atomic Power Company. (author)
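The response-surface Monte Carlo procedure described above can be sketched generically: fit a cheap surrogate (the RSF) to a few expensive transient analyses, then estimate the exceedance probability by sampling the surrogate. The quadratic surface, standardized parameter distributions and temperature limit below are invented stand-ins for the SASS quantities, not the paper's fitted values:

```python
import random

def rsf_max_temp(x1, x2):
    """Hypothetical fitted response surface for peak coolant temperature in
    two standardized parameters (e.g. Curie-point and outlet-temperature
    deviations); coefficients are illustrative."""
    return 600.0 + 20.0 * x1 + 5.0 * x2 + 2.0 * x1 * x2

def exceedance_probability(limit, n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(T_max > limit) with x1, x2 ~ N(0, 1)."""
    rng = random.Random(seed)
    hits = sum(
        rsf_max_temp(rng.gauss(0, 1), rng.gauss(0, 1)) > limit
        for _ in range(n_samples)
    )
    return hits / n_samples

p_fail = exceedance_probability(650.0)  # unreliability estimate
```

The design choice is the one the abstract describes: the expensive physics runs are spent only on fitting the RSF, after which a hundred thousand Monte Carlo samples cost essentially nothing.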

19. Non-intrusive finite element reliability analysis methods

OpenAIRE

Papaioannou, Iason

2014-01-01

This thesis focuses on the modeling of uncertainties in structural systems and on strategies for the reliability assessment of structures analysed by finite element programs. New concepts are introduced for the numerical treatment of spatially varied uncertain quantities through the discretization of the relevant random fields as well as for robust and efficient finite element reliability analysis and updating of the reliability in light of new information. The methods have been implemented i...

20. Strength Reliability Analysis of Turbine Blade Using Surrogate Models

OpenAIRE

Wei Duan; Liqiang An; Zhangqi Wang

2014-01-01

There are many stochastic parameters that have an effect on the reliability of steam turbine blades performance in practical operation. In order to improve the reliability of blade design, it is necessary to take these stochastic parameters into account. In this study, a variable cross-section twisted blade is investigated and geometrical parameters, material parameters and load parameters are considered as random variables. A reliability analysis method as a combination of a Finite Element M...

1. Design and Analysis for Reliability of Wireless Sensor Network

Directory of Open Access Journals (Sweden)

Yongxian Song

2012-12-01

Full Text Available Reliability is an important performance indicator of wireless sensor networks; for application fields with high reliability demands, it is particularly important to ensure the reliability of the network. At present there is a substantial body of research on wireless sensor network reliability at home and abroad, but it mainly improves network reliability through network topology, reliable protocols, application-layer fault correction and so on; work that considers reliability comprehensively from both the hardware and software aspects is much rarer. This paper adopts bionic hardware to implement bionic reconfiguration of wireless sensor network nodes, so that the nodes are able to change their structure and behavior autonomously and dynamically when part of the hardware fails, realizing bionic self-healing. Secondly, a Markov state diagram and probability analysis are adopted to solve a functional reliability model, establish the relationship between reliability and the characteristic parameters of sink nodes, and analyze the sink node reliability model, so as to determine reasonable model parameters and ensure the reliability of sink nodes.
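In its simplest form, the Markov-state-diagram step for a self-healing sink node reduces to the classic two-state (up/down) model; the failure and repair rates below are illustrative assumptions, not values from the paper:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Two-state continuous-time Markov chain (up <-> down): solving the
    balance equation lambda * P_up = mu * P_down with P_up + P_down = 1
    gives the steady-state availability A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

# Illustrative sink node: fails once per 1000 h on average, and bionic
# self-healing restores it in 10 h on average.
a = steady_state_availability(1e-3, 1e-1)
```

The characteristic-parameter relationship the abstract mentions is visible directly: availability improves either by lowering the failure rate (better hardware) or by raising the repair rate (faster self-healing).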

2. A software tool for advanced reliability and safety analysis

International Nuclear Information System (INIS)

A knowledge-based approach to systems safety and reliability analysis, to be implemented in an intelligent software tool (STARS: Software Tool for Advanced Reliability and Safety), is presented. The tool will offer intelligent and powerful support in performing qualitative hazard analysis, logic modelling (fault tree and event tree construction) and probabilistic analysis for large and complex systems as found in chemical process plants and the nuclear industry. (author)

3. Reliability Analysis Using Fault Tree Analysis: A Review

Directory of Open Access Journals (Sweden)

Ahmed Ali Baig

2013-06-01

Full Text Available This paper reviews the literature published on the modifications made in the field of risk assessment using Fault Tree Analysis (FTA) in the last decade. This method was developed in the 1960s for the evaluation and estimation of system reliability and safety. In this paper we present the general procedure for FTA, its application in various fields, and the modifications that have been made over time to overcome the inadequacies of the method. In the last section some future work is also discussed with a simplified methodology.

4. Reliability analysis of PLC safety equipment

International Nuclear Information System (INIS)

FMEA for the nuclear safety grade PLC; failure rate prediction for the nuclear safety grade PLC; sensitivity analysis of component failure rates of the nuclear safety grade PLC; and unavailability analysis support for the nuclear safety system

5. Reliability Analysis of an Offshore Structure

DEFF Research Database (Denmark)

Sørensen, John Dalsgaard; Rackwitz, R.; Thoft-Christensen, Palle; Lebas, G.

1992-01-01

For an offshore structure in the North Sea it is assumed that information from measurements and inspections is available. As illustrations measurements of the significant wave height and the marine growth and different inspection and repair results are considered. It is shown how the reliability estimates of the structure can be updated using Bayesian techniques. By minimizing the total expected costs including inspection, repair and failure costs during the lifetime an optimal inspection and re...

6. Some problems with collection, analysis and use of reliability data

International Nuclear Information System (INIS)

Typical problems with the collection, analysis and use of reliability data are discussed. It is argued that the collection of reliability data has to be selective, and that insufficient attention to this selectiveness is responsible for the majority of problems with the collection of data. The collection of reliability data must be carefully planned and undertaken by dedicated, well-trained and well-motivated staff. The reliability data must be analyzed, tested and used as carefully and cautiously, and under the same discipline, as other engineering parameters. (author)

7. Reliability analysis of wind turbines exposed to dynamic loads

DEFF Research Database (Denmark)

Sørensen, John Dalsgaard

2014-01-01

Wind turbines are exposed to highly dynamic loads that cause fatigue and extreme load effects which are subject to significant uncertainties. Further, reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive with other energy sources. Therefore the turbine components should be designed to have sufficient reliability with respect to both extreme and fatigue loads, while also not being too costly (i.e. overly safe). This paper presents models for uncertainty modeling and reliability assessment of especially the structural components, such as tower, blades, substructure and foundation, considering especially fatigue loads. The function of a wind turbine is highly dependent on many electrical and mechanical components as well as a control system; reliability aspects of these components are also discussed, and it is described how their reliability influences the reliability of the structural components. Illustrative examples are presented considering uncertainty modeling and reliability assessment for structural wind turbine components exposed to extreme loads and fatigue, respectively.

8. Reliability, Validity, Comparability and Practical Utility of Cybercrime-Related Data, Metrics, and Information

Directory of Open Access Journals (Sweden)

Nir Kshetri

2013-02-01

Full Text Available With an increasing pervasiveness, prevalence and severity of cybercrimes, various metrics, measures and statistics have been developed and used to measure various aspects of this phenomenon. Cybercrime-related data, metrics, and information, however, pose important and difficult dilemmas regarding the issues of reliability, validity, comparability and practical utility. While many of the issues of the cybercrime economy are similar to other underground and underworld industries, this economy also has various unique aspects. For one thing, this industry also suffers from a problem partly rooted in the incredibly broad definition of the term “cybercrime”. This article seeks to provide insights and analysis into this phenomenon, which is expected to advance our understanding into cybercrime-related information.

9. Data base on NPP reliability analysis

International Nuclear Information System (INIS)

The paper poses some software requirements for the Local Data Base (LDB) intended to collect, process and analyze data about NPP equipment reliability. Five specific conditions for the software have been formulated and on their basis the system REVELATION, an MC adaptation of PRIME INFORMATION, has been pointed out as more suitable than previously implemented DBASE 3+. The file structure of the LDB based on REVELATION is described. The LDB will be used in the Kozloduy NPP by all operation and management levels. 1 fig, 7 refs

10. PSA applications and piping reliability analysis: where do we stand?

International Nuclear Information System (INIS)

This paper reviews a recently proposed framework for piping reliability analysis. The framework was developed to promote critical interpretation of operational data on pipe failures, and to support application-specific parameter estimation

11. Reliability analysis of digital safety systems at nuclear power plants

International Nuclear Information System (INIS)

Reliability analysis of digital reactor protection systems built on the basis of TELEPERM XS is described, and experience gained by the Slovak RELKO company during the past 20 years in this domain is highlighted. (orig.)

12. Reliability analysis of digital I and C systems at KAERI

International Nuclear Information System (INIS)

This paper provides an overview of the ongoing research activities on a reliability analysis of digital instrumentation and control (I and C) systems of nuclear power plants (NPPs) performed by the Korea Atomic Energy Research Institute (KAERI). The research activities include the development of a new safety-critical software reliability analysis method by integrating the advantages of existing software reliability analysis methods, a fault coverage estimation method based on fault injection experiments, and a new human reliability analysis method for computer-based main control rooms (MCRs) based on human performance data from the APR-1400 full-scope simulator. The research results are expected to be used to address various issues such as the licensing issues related to digital I and C probabilistic safety assessment (PSA) for advanced digital-based NPPs. (author)

13. Simulation Approach to Mission Risk and Reliability Analysis Project

Data.gov (United States)

National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

14. Reliability-Based Robustness Analysis for a Croatian Sports Hall

DEFF Research Database (Denmark)

Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Rajčić, Vlatka

2011-01-01

This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness and a simplified mechanical system modelling of a timber truss system. A complex timber structure with a large number of failure modes is modelled with only a few dominant failure modes. First, a component based robustness analysis is performed based on the reliability indices of...

15. Reliability

OpenAIRE

Keller-McNulty, Sallie; Wilson, Alyson; Anderson-Cook, Christine

2007-01-01

This special volume of Statistical Sciences presents some innovative, if not provocative, ideas in the area of reliability, or perhaps more appropriately named, integrated system assessment. In this age of exponential growth in science, engineering and technology, the capability to evaluate the performance, reliability and safety of complex systems presents new challenges. Today's methodology must respond to the ever-increasing demands for such evaluations to provide key inf...

16. The PAWS and STEM reliability analysis programs

Science.gov (United States)

Butler, Ricky W.; Stevenson, Philip H.

1988-01-01

The PAWS and STEM programs are new design/validation tools. These programs provide a flexible, user-friendly, language-based interface for the input of Markov models describing the behavior of fault-tolerant computer systems. These programs produce exact solutions of the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. PAWS uses a Pade approximation as a solution technique; STEM uses a Taylor series as a solution technique. Both programs have the capability to solve numerically stiff models. PAWS and STEM possess complementary properties with regard to their input space; and, an additional strength of these programs is that they accept input compatible with the SURE program. If used in conjunction with SURE, PAWS and STEM provide a powerful suite of programs to analyze the reliability of fault-tolerant computer systems.

17. ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS

Science.gov (United States)

Viterna, L. A.

1994-01-01

The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures is tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function.
ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
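The Monte Carlo RAM approach ETARA implements can be sketched in miniature: draw exponential failure and repair durations for each block, then sample whether a series system is up at random instants. This is a from-scratch illustration, not ETARA's APL2 code, and the MTTF/MTTR values are assumptions:

```python
import random

def block_down_intervals(mttf, mttr, horizon, rng):
    """Alternating up/down timeline for one block: exponential up durations
    (mean mttf) and repair durations (mean mttr). Returns down intervals."""
    t, downs = 0.0, []
    while t < horizon:
        t += rng.expovariate(1.0 / mttf)      # up duration
        repair = rng.expovariate(1.0 / mttr)  # down duration
        downs.append((t, t + repair))
        t += repair
    return downs

def series_availability(blocks, horizon=1e5, probes=20_000, seed=1):
    """Fraction of sampled instants at which every block in a series
    diagram is up; `blocks` is a list of (mttf, mttr) pairs."""
    rng = random.Random(seed)
    timelines = [block_down_intervals(f, r, horizon, rng) for f, r in blocks]
    up = 0
    for _ in range(probes):
        t = rng.uniform(0.0, horizon)
        if all(not any(a <= t < b for a, b in downs) for downs in timelines):
            up += 1
    return up / probes

# Two blocks in series, MTTF 100 h and MTTR 5 h each; the analytic
# steady-state availability is (100/105)^2, about 0.907.
a = series_availability([(100.0, 5.0), (100.0, 5.0)])
```

Parallel and M-of-N arrangements, which ETARA also supports, change only the final `all(...)` success criterion over the block states.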

18. The Reliability of Content Analysis of Computer Conference Communication

Science.gov (United States)

Rattleff, Pernille

2007-01-01

The focus of this article is the reliability of content analysis of students' computer conference communication. Content analysis is often used when researching the relationship between learning and the use of information and communications technology in educational settings. A number of studies where content analysis is used and classification…

19. Discrete Event Simulation and Petri net Modeling for Reliability Analysis

Directory of Open Access Journals (Sweden)

2012-05-01

Full Text Available Analytical methods in reliability analysis are useful for studying simple problems. For complex networks with cross-linked (non-series/parallel component configurations, it is difficult to use mathematical reliability analysis. Powerful methods for reliability analysis of such systems have been developed using discrete event simulation. The main drawback of these methods is that they are computer time intensive. In this paper, the main idea behind these methods is further explored and modified in order to reduce the computational loads. The modified approach presented here leads to a great time saving which is very important for reliability analysis of large scale systems. This modified method is then modeled by Petri net, which is a powerful modeling tool. The network reliability modeling technique developed in the paper has two main advantages. First, it can be easily implemented through a systematic and standard approach. Second, the developed model will greatly help solving the reliability analysis problem since it is simple and graphical.

20. BANK RATING. A COMPARATIVE ANALYSIS

Directory of Open Access Journals (Sweden)

Batrancea Ioan

2015-07-01

Full Text Available Banks in Romania offer their customers a wide range of products, which, however, also involves taking on risk. Therefore researchers seek to build rating models to help bank managers assess the risk of non-recovery of loans and interest. In the following we highlight the ratings of Raiffeisen Bank, BCR-ERSTE Bank and Transilvania Bank, based on the CAAMPL and Stickney models, making a comparative analysis of the two rating models.

1. Comparison of Methods for Dependency Determination between Human Failure Events within Human Reliability Analysis

Directory of Open Access Journals (Sweden)

Marko Čepin

2008-07-01

Full Text Available The human reliability analysis (HRA is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features, which may decrease subjectivity of human reliability analysis. Human reliability methods are compared with focus on dependency comparison between Institute Jožef Stefan human reliability analysis (IJS-HRA and standardized plant analysis risk human reliability analysis (SPAR-H. Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by development of more detailed guidelines for human reliability analysis with many practical examples for all steps of the process of evaluation of human performance.

2. Reliability Analysis of OMEGA Network and Its Variants

Directory of Open Access Journals (Sweden)

Suman Lata

2012-04-01

Full Text Available The performance of a computer system depends directly on the time required to perform a basic operation and the number of these basic operations that can be performed concurrently. High performance computing systems can be designed using parallel processing. Parallel processing is achieved by using more than one processor or computer; they communicate with each other to solve a given problem. MINs provide a better way for the communication between different processors or memory modules with less complexity, fast communication, good fault tolerance, high reliability and low cost. Reliability of a system is the probability that it will successfully perform its intended operations for a given time under stated operating conditions. From the reliability analysis it has been observed that the addition of one stage to Omega networks provides higher reliability in terms of terminal reliability than the addition of two stages in the corresponding network.
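The terminal-reliability comparison in this abstract can be illustrated with a simplified switch-level model: in a regular Omega network for N = 2^n terminals a request traverses n switching elements in series, while an extra-stage variant offers (in this idealized sketch) two redundant paths through the inner stages. These are textbook-style simplifications, not the paper's exact expressions:

```python
def omega_terminal_reliability(r_switch, n_stages):
    """All n switches on the unique source-destination path must work."""
    return r_switch ** n_stages

def extra_stage_terminal_reliability(r_switch, n_stages):
    """Simplified extra-stage model: first and last switches in series with
    two redundant sub-paths through the remaining n-1 inner stages (so a
    path crosses n+1 switches in total)."""
    inner = 1.0 - (1.0 - r_switch ** (n_stages - 1)) ** 2
    return r_switch * r_switch * inner

r = 0.95                                        # per-switch reliability (assumed)
base = omega_terminal_reliability(r, 3)         # 8x8 Omega: 3 stages
extra = extra_stage_terminal_reliability(r, 3)  # one extra stage added
```

Even though the extra-stage path crosses more switches, the redundant inner routing wins, which is the qualitative effect the abstract reports.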

3. 14 CFR Appendix N to Part 25 - Fuel Tank Flammability Exposure and Reliability Analysis

Science.gov (United States)

2010-01-01

...Flammability Exposure and Reliability Analysis N Appendix N TO Part...Flammability Exposure and Reliability Analysis N25.1General. ...time period assumed in the reliability analysis (60 flight hours must...

4. 78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard

Science.gov (United States)

2013-07-29

...to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal...approves Modeling, Data, and Analysis (MOD) Reliability Standard MOD- 028-2...to Modeling, Data, and Analysis Reliability Standard, Notice of...

5. Analysis of MAC Protocol for Reliable Broadcast

Directory of Open Access Journals (Sweden)

Savita Savita

2013-02-01

Full Text Available In wireless communication, it is important to find a reliable broadcasting protocol that is especially designed for optimum performance of public-safety and data-travelling-related applications. Using RSU and OBU, four novel ideas are presented in this research work, namely choosing the nearest following node as the network probe node, headway-based segmentation, non-uniform segmentation and application adaptivity. The integration of these ideas results in a protocol that possesses minimum latency, minimum probability of collision in the acknowledgment messages and unique robustness at different speeds and traffic volumes. Wireless communications are becoming the dominant form of transferring information, and the most active research field. In this dissertation, we present one of the most applicable forms of Ad-Hoc networks: the Vehicular Ad-Hoc Networks (VANETs). VANET is the technology of building a robust Ad-Hoc network between mobile vehicles, and between mobile vehicles and roadside units.

7. Architecture based Reliability Analysis of Continuously Running Concurrent Software Applications

OpenAIRE

Rehab A. El Kharboutly; Reda. Ammar; Swapna S. Gokhale

2008-01-01

The objective of this paper is to describe a reliability and availability analysis methodology for a continuously running, concurrent application. We propose a state space approach to represent the architecture of a concurrent application, which is then mapped to an irreducible discrete time Markov chain (DTMC) to obtain architectural statistics. We discuss how the application architecture can be extracted from profile data to facilitate the use of our methodology to analyze the reliability of...

8. Register File Reliability Analysis Through Cycle-Accurate Thermal Emulation

OpenAIRE

Ayala Rodrigo, José Luis; Garcia del Valle, Pablo; Atienza Alonso, David

2008-01-01

Continuous transistor scaling due to improvements in CMOS devices and manufacturing technologies is increasing processor power densities and temperatures, thus creating challenges in maintaining manufacturing yield rates and devices that remain reliable throughout their lifetime. New microarchitectures require new reliability-aware design methods that can face these challenges without significant cost and performance penalties. In this paper we present a complete analysis of re...

9. Reliability analysis framework for computer-assisted medical decision systems

International Nuclear Information System (INIS)

10. The practical approach to the reliability analysis of the software architecture of a complex company control system

Science.gov (United States)

Kovalev, I. V.; Zelenkov, P. V.; Ognerubov, S.

2015-10-01

The practical aspects of the implementation of reliability analysis of the architecture of a complex company control system are considered in this article. A comparative analysis of two variants of software architecture using different factors is presented, and the relations between the reliability characteristics and the number of system architecture components and their interconnections are defined.

11. Recent advances in computational structural reliability analysis methods

Science.gov (United States)

Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

1993-01-01

The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
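
The single-mode failure probabilities discussed above are commonly estimated by sampling a limit-state function; the following is a minimal Monte Carlo sketch, in which the normal resistance/load model and all numbers are hypothetical, not taken from the paper:

```python
import random

def mc_failure_probability(limit_state, sample, n=100_000, seed=42):
    """Estimate P(g(X) <= 0) by direct Monte Carlo sampling."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0)
    return failures / n

# Hypothetical limit state: resistance R ~ N(10, 1) versus load S ~ N(6, 1);
# failure occurs when the safety margin g = R - S drops to zero or below.
def sample(rng):
    return rng.gauss(10.0, 1.0), rng.gauss(6.0, 1.0)

def g(x):
    r, s = x
    return r - s

p_f = mc_failure_probability(g, sample, n=200_000)
# The exact answer for this toy model is Phi(-4 / sqrt(2)) ~ 2.3e-3.
print(f"estimated P_f = {p_f:.4f}")
```

Direct sampling like this becomes expensive for very small failure probabilities, which is precisely what motivates the advanced (e.g. FORM/SORM and importance-sampling) methods the paper surveys.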

12. RELIABILITY ANALYSIS OF RING, AGENT AND CLUSTER BASED DISTRIBUTED SYSTEMS

Directory of Open Access Journals (Sweden)

R.SEETHALAKSHMI

2011-08-01

Full Text Available The introduction of pervasive devices and mobile devices has led to immense growth of real-time distributed processing. In such a context, the reliability of the computing environment is very important. Reliability is the probability that the devices, links, processes, programs and files work efficiently for the specified period of time and under the specified conditions. Distributed systems are available as conventional ring networks, clusters and agent-based systems, and this paper focuses on the reliability of such systems. These networks are heterogeneous and scalable in nature. There are several factors which are to be considered for reliability estimation. These include application-related factors like algorithms, data-set sizes, memory usage pattern, input-output, communication patterns, task granularity and load-balancing. They also include hardware-related factors like processor architecture, memory hierarchy, input-output configuration and network. The software-related factors concerning reliability are operating systems, compilers, communication protocols, libraries and preprocessor performance. In estimating the reliability of a system, performance estimation is an important aspect. Reliability analysis is approached using probability.

13. Reliability analysis of cluster-based ad-hoc networks

International Nuclear Information System (INIS)

The mobile ad-hoc wireless network (MAWN) is a new and emerging network scheme that is being employed in a variety of applications. The MAWN varies from traditional networks because it is a self-forming and dynamic network. The MAWN is free of infrastructure and, as such, only the mobile nodes comprise the network. Pairs of nodes communicate either directly or through other nodes. To do so, each node acts, in turn, as a source, destination, and relay of messages. The virtue of a MAWN is the flexibility this provides; however, this unique feature also brings about a challenge for reliability analyses. The variability and volatility of the MAWN configuration make typical reliability methods (e.g. reliability block diagrams) inappropriate because no single structure or configuration represents all manifestations of a MAWN. For this reason, new methods are being developed to analyze the reliability of this new networking technology. Newly published methods adapt to this feature by treating the configuration probabilistically or by inclusion of embedded mobility models. This paper joins both methods together and expands upon these works by modifying the problem formulation to address the reliability analysis of a cluster-based MAWN. The cluster-based MAWN is deployed in applications with constraints on networking resources such as bandwidth and energy. This paper presents the problem's formulation, a discussion of applicable reliability metrics for the MAWN, and an illustration of a Monte Carlo simulation method through the analysis of several example networks
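
The kind of Monte Carlo network-reliability analysis described can be sketched as a two-terminal connectivity estimate; the four-node topology, link reliability, and sample count below are invented for illustration:

```python
import random
from collections import deque

def two_terminal_reliability(nodes, links, s, t, p_link, n=50_000, seed=1):
    """Monte Carlo estimate of the probability that s can reach t
    when each link independently works with probability p_link."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        up = [e for e in links if rng.random() < p_link]  # sample link states
        adj = {v: [] for v in nodes}
        for a, b in up:
            adj[a].append(b)
            adj[b].append(a)
        seen, queue = {s}, deque([s])                     # BFS from s
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        hits += t in seen
    return hits / n

# Toy 4-node cluster: two disjoint 2-hop paths from s to t.
nodes = ["s", "a", "b", "t"]
links = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
r = two_terminal_reliability(nodes, links, "s", "t", p_link=0.9)
# Exact value for this topology: 1 - (1 - 0.9**2)**2 = 0.9639
print(f"R(s,t) ~ {r:.3f}")
```

A mobility model would additionally resample the topology itself on every trial, which is the extension the paper pursues for cluster-based MAWNs.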

14. A comparative analysis of reliability, maintainability and availability for two alternatives of the production submarine systems: ANM and submarine ducts versus BOP and a subsea well testing tree; Analise comparativa da confiabilidade, mantenabilidade e disponibilidade para duas alternativas de sistemas submarino de producao: ANM e dutos submarinos versus BOP e arvore submarina de teste

Energy Technology Data Exchange (ETDEWEB)

Souza, Arlindo Antonio de; Polillo Filho, Adolfo; Santos, Otto Luiz Alcantara [PETROBRAS, Rio de Janeiro, RJ (Brazil)

2004-07-01

This technical article presents a study using the concepts of reliability engineering and risk analysis, with the objective of comparatively evaluating the reliability of two alternative production systems for a marine well: one composed of a wet christmas tree (ANM) producing through underwater ducts (flow lines), and another, usually used in long-duration tests, using a subsea BOP and a subsea well testing tree (AST). The central point of the work was the evaluation of the probability of occurrence of an event considered critical, denominated 'critical failure', during the well's producing life. The work uses one of the procedures and methodologies adopted in well construction engineering, GERISK, together with four computer applications for data treatment, generation of failure and repair-time distribution curves, modelling, and Monte Carlo simulation. The adopted strategy was to start from the existing reports, assume an interval for the possible real value of the relevant parameters, and then establish scenarios (most probable, optimistic and pessimistic). Based on those scenarios, the considered premises, the modelling and the reliabilities obtained for each of the variables, the simulations were run. As results, the mean availability, the MTTFF (Mean Time To First Failure), the number of failures and the expected costs are presented. The work also displays a sensitivity analysis with respect to the production time of the well. (author)

15. The Reliability of Content Analysis of Computer Conference Communication

DEFF Research Database (Denmark)

Rattleff, Pernille

2007-01-01

The focus of this article is the reliability of content analysis of students' computer conference communication. Content analysis is often used when researching the relationship between learning and the use of information and communications technology in educational settings. A number of studies where content analysis is used and classification systems are developed are presented and discussed, along with the author's own development and use of a classification system. However, the question of the reliability of content analysis is not often addressed in the literature. On examining the reliability of classifications in an empirical study of study groups' academic discussions in computer conferences in a distance education course, the present author found the reliability to be extraordinarily low. For some classifications the deviation was as high as 13% when the same person (coder) classified the same computer conference message at two different times; when two different coders classified the same computer conference messages, the deviation was as high as 27%. This low reliability, and the lack of discussion of this crucial matter in the literature, has profound implications not just for the author's own research but for all studies and results based upon content analysis of computer conference communication. Therefore, this issue needs to be addressed. A possible solution, where each computer conference message can be classified as having both one and/or other kinds of information, is proposed. This might not solve the problem of low reliability of content analysis and the use of classification systems, but it does shed light on the problem and goes some way towards reducing it.
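
The percent-deviation figures reported above are raw coder agreement; a minimal sketch of how such agreement can be quantified follows, with Cohen's kappa added as a standard chance-corrected companion statistic (the message codes below are invented, and kappa is not necessarily the measure the author used):

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of items on which two coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected inter-coder agreement for nominal codes."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)  # expected chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes two coders assigned to the same ten messages.
coder1 = ["Q", "Q", "A", "A", "A", "S", "S", "Q", "A", "S"]
coder2 = ["Q", "A", "A", "A", "S", "S", "S", "Q", "A", "Q"]
print(f"observed agreement: {percent_agreement(coder1, coder2):.0%}")  # -> 70%
print(f"Cohen's kappa:      {cohens_kappa(coder1, coder2):.2f}")       # -> 0.55
```

Kappa discounts the agreement two coders would reach by chance alone, which is why it is often reported alongside (or instead of) raw percent agreement in content-analysis studies.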

16. Some Aspects in High Quality Reliability Data Collection and Analysis

International Nuclear Information System (INIS)

Living probabilistic safety assessment (PSA) of a nuclear power plant requires quantitative reliability parameters. To obtain high-quality reliability data in complicated systems such as a nuclear power plant, one needs to understand the hardware (plant, systems and components), consider the software aspects (culture, human factors, management and organization), and understand the plant life cycle (design, installation, operation and maintenance) in a wholly integrated manner. A new reliability database system is now being established in Korea for living PSA in the near future. A few brief but important cases, experienced during the very initial phase of reliability data collection and analysis for the sample plant and components, are introduced here. (author)

17. Distribution-level electricity reliability: Temporal trends using statistical analysis

International Nuclear Information System (INIS)

This paper helps to address the lack of comprehensive, national-scale information on the reliability of the U.S. electric power system by assessing trends in U.S. electricity reliability based on the information reported by the electric utilities on power interruptions experienced by their customers. The research analyzes up to 10 years of electricity reliability information collected from 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. We find that reported annual average duration and annual average frequency of power interruptions have been increasing over time at a rate of approximately 2% annually. We find that, independent of this trend, installation or upgrade of an automated outage management system is correlated with an increase in the reported annual average duration of power interruptions. We also find that reliance on IEEE Standard 1366-2003 is correlated with higher reported reliability compared to reported reliability not using the IEEE standard. However, we caution that we cannot attribute reliance on the IEEE standard as having caused or led to higher reported reliability because we could not separate the effect of reliance on the IEEE standard from other utility-specific factors that may be correlated with reliance on the IEEE standard. - Highlights: - We assess trends in electricity reliability based on the information reported by the electric utilities. - We use rigorous statistical techniques to account for utility-specific differences. - We find modest declines in reliability analyzing interruption duration and frequency experienced by utility customers. - Installation or upgrade of an OMS is correlated with an increase in reported duration of power interruptions. - We find reliance on IEEE Standard 1366 is correlated with higher reported reliability.
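
An annual percentage trend of the kind reported (~2% per year) is typically estimated with a log-linear least-squares fit; here is a sketch on a synthetic SAIDI-like series, fabricated to grow at exactly 2% per year purely for illustration:

```python
import math

def annual_trend(years, values):
    """OLS slope of ln(value) on year; exp(slope) - 1 is the average
    annual rate of change."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(math.log(v) for v in values) / n
    sxy = sum((x - xbar) * (math.log(v) - ybar) for x, v in zip(years, values))
    sxx = sum((x - xbar) ** 2 for x in years)
    return math.exp(sxy / sxx) - 1

# Hypothetical SAIDI-like series (minutes/year) growing exactly 2% a year.
years = list(range(2000, 2010))
saidi = [120 * 1.02 ** (y - 2000) for y in years]
print(f"estimated annual change: {annual_trend(years, saidi):+.1%}")  # -> +2.0%
```

The study itself uses regression models with utility-specific controls; the sketch only shows where an "approximately 2% annually" figure comes from mathematically.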

18. How to assess and compare inter-rater reliability, agreement and correlation of ratings: an exemplary analysis of mother-father and parent-teacher expressive vocabulary rating pairs

OpenAIRE

Stolarova, Margarita; Wolf, Corinna; Rinker, Tanja; Brielmann, Aenne

2014-01-01

This report has two main purposes. First, we combine well-known analytical approaches to conduct a comprehensive assessment of agreement and correlation of rating pairs and to disentangle these often confused concepts, providing a best-practice example on concrete data and a tutorial for future reference. Second, we explore whether a screening questionnaire developed for use with parents can be reliably employed with daycare teachers when assessing early expressive vocabulary. A total of 53 ...

19. How to assess and compare inter-rater reliability, agreement and correlation of ratings: an exemplary analysis of mother-father and parent-teacher expressive vocabulary rating pairs

OpenAIRE

Margarita Stolarova; Tanja Rinker

2014-01-01

This report has two main purposes. First, we combine well-known analytical approaches to conduct a comprehensive assessment of agreement and correlation of rating pairs and to disentangle these often confused concepts, providing a best-practice example on concrete data and a tutorial for future reference. Second, we explore whether a screening questionnaire developed for use with parents can be reliably employed with daycare teachers when assessing early expressive vocabulary. A total of 53...

20. Notes on numerical reliability of several statistical analysis programs

Science.gov (United States)

1999-01-01

This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
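
The computational-correctness failures that NASTY-style benchmarks expose can be reproduced directly: the one-pass variance formula loses precision on data with a large mean, while the two-pass formula does not (the data below are illustrative, not taken from the ANASTY set):

```python
def var_naive(xs):
    """One-pass textbook formula (sum of squares minus squared sum):
    numerically unstable when the mean is large relative to the spread."""
    n = len(xs)
    s, s2 = sum(xs), sum(x * x for x in xs)
    return (s2 - s * s / n) / (n - 1)

def var_two_pass(xs):
    """Two-pass formula (subtract the mean first): numerically stable."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

# NASTY-style stress case: a tiny spread around a huge mean.
# The exact sample variance of the offsets 1..5 is 2.5.
xs = [1e9 + d for d in (1.0, 2.0, 3.0, 4.0, 5.0)]
print("naive:   ", var_naive(xs))     # typically far from 2.5
print("two-pass:", var_two_pass(xs))  # -> 2.5
```

Benchmarks like ANASTY flag exactly this class of defect: a program can be statistically "correct" in its formulas yet numerically wrong in its implementation.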

1. Reliability analysis of I-section steel columns designed according to new Brazilian building codes

Scientific Electronic Library Online (English)

André T., Beck; André S., Dória.

2008-06-01

Full Text Available This paper presents an evaluation of the safety of I-section steel columns designed according to the new revision of the Brazilian code for design of steel buildings (NBR8800) and to the code for loads and safety of structures (NBR8681). The safety evaluation is based on structural reliability analysis of columns designed to comply with these codes, and on advanced (FE-based) analysis of actual column resistance. The effects of geometrical imperfections and residual stresses on column resistance are taken into account. The uncertainty in yield stress, elasticity modulus, geometrical imperfections and dead and live loads is considered in the reliability evaluation. Reliability indexes are obtained for several column configurations. These indexes reflect the safety of the columns designed according to the two building codes. Reliability indexes are compared with target reliability indexes used in calibration of the ANSI code and with indexes proposed in the new EUROCODE.

2. Comparing the Reliability of Regular Topologies on a Backbone Network. A Case Study

DEFF Research Database (Denmark)

Cecilio, Sergio Labeage; Gutierrez Lopez, Jose Manuel

2009-01-01

The aim of this paper is to compare the reliability of regular topologies on a backbone network. The study is focused on a large-scale fiber-optic network. Different regular topological solutions such as single ring, double ring or 4-regular grid are applied to the case study and compared in terms of degree, diameter, average distance, economical cost and availability. Furthermore, other non-quantitative parameters such as expandability, embeddability and algorithmic support are introduced.

3. Reliability Analysis of Dynamic Stability in Waves

DEFF Research Database (Denmark)

Søborg, Anders Veldt

2004-01-01

The assessment of a ship's intact stability is traditionally based on a semi-empirical deterministic concept that evaluates the characteristics of the ship's calm water restoring lever arm curves. Today the ship is considered safe with respect to dynamic stability if its calm water lever arm curves exhibit sufficient characteristics with respect to slope at zero heel (GM value), maximum lever arm, positive range of stability and area below the lever arm curve. The rule-based requirements on calm water lever arm curves are entirely based on experience obtained from vessels in operation and recorded accidents in the past. The rules therefore leave only little room for evaluation and improvement of the safety of a ship's dynamic stability. A few studies have evaluated the probability of ship stability loss in waves using Monte Carlo simulations. However, since this probability may be in the order of 10^-4 per ship year, such brute-force Monte Carlo simulations are not always feasible due to the required computational resources. Previous studies of dynamic stability of ships in waves typically focused on the capsizing event. In this study the objective is to establish a procedure that can identify "critical wave patterns" that most likely will lead to the occurrence of a considered adverse event. Examples of such adverse events are stability loss, loss of maneuverability, cargo damage, and seasickness. The adverse events related to dynamic stability are considered as functions of the roll angle, the roll velocity, and the roll acceleration. This study will therefore describe how the considered adverse events can be combined into a single utility function whose scale expresses different magnitudes of the criticality (or assessed consequences) of the adverse events. It will be illustrated how the distribution of the exceedance probability may be established by an estimation of the out-crossing rate of the "safe set" defined by the utility function.
This out-crossing rate will be established using the so-called Madsen's formula. A by-product of this analysis is a set of short wave time series that at different exceedance levels may be used in a codified evaluation of a vessel's intact stability in waves.
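
The out-crossing rate at the core of this procedure can also be estimated empirically by counting level upcrossings in a sampled response; the toy sketch below uses a sinusoidal roll signal (signal, level and sampling are invented, and the study itself uses Madsen's formula rather than direct counting):

```python
import math

def upcrossing_rate(series, level, dt):
    """Empirical rate of upcrossings of `level` in a sampled time series."""
    ups = sum(1 for a, b in zip(series, series[1:]) if a < level <= b)
    return ups / (dt * (len(series) - 1))

# Hypothetical roll signal: a 5-degree, 0.1 Hz sinusoid sampled at 10 Hz.
dt = 0.1
roll = [5.0 * math.sin(2 * math.pi * 0.1 * i * dt) for i in range(10_000)]
nu = upcrossing_rate(roll, level=4.0, dt=dt)
# A 0.1 Hz sinusoid upcrosses any level inside its range once per cycle,
# so the estimate should be close to 0.1 per second.
print(f"upcrossing rate of the 4-degree level: {nu:.3f} /s")
```

For rare events the analytic out-crossing rate (Rice/Madsen-type formulas) replaces this direct counting, precisely because brute-force simulation at 10^-4 per ship year is infeasible.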

4. Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

Science.gov (United States)

Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William

2009-01-01

This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision making environment that is being sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
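
A typical building block of the data evaluation such guidelines describe is conjugate Bayesian updating of a failure probability; here is a minimal sketch using a Jeffreys Beta prior (the failure counts are hypothetical, not from the document):

```python
def beta_update(alpha, beta, failures, demands):
    """Conjugate update of a Beta(alpha, beta) prior on a per-demand
    failure probability after observing `failures` in `demands` demands."""
    return alpha + failures, beta + demands - failures

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

# Jeffreys prior Beta(0.5, 0.5); hypothetical evidence: 2 failures in 500 demands.
a, b = beta_update(0.5, 0.5, failures=2, demands=500)
print(f"posterior Beta({a}, {b}), mean p = {beta_mean(a, b):.5f}")  # -> 0.00499
```

The conjugate Beta-binomial pair keeps the posterior in closed form, which is why it is a standard starting point before moving to the more general (e.g. MCMC-based) methods such handbooks cover.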

5. Comparative analysis of collaboration networks

International Nuclear Information System (INIS)

In this paper we carry out a comparative analysis of the word network as a collaboration network based on M. Bulgakov's novel 'Master and Margarita', the synonym network of the Russian language, and the Russian movie actor network. We have constructed one-mode projections of these networks, defined their degree distributions and calculated their main characteristics. A generation algorithm for collaboration networks is offered which allows one to generate networks statistically equivalent to the studied ones. It lets us reveal a structural correlation between the word network, the synonym network and the movie actor network. We show that the degree distributions of all analyzed networks are described by a q-type distribution.
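
A one-mode projection and its degree distribution can be computed directly from group-membership data; the following sketch uses an invented actor-collaboration example (the real networks in the paper are far larger):

```python
from collections import Counter
from itertools import combinations

def one_mode_projection(memberships):
    """Project a bipartite collaboration structure onto actors:
    two actors are linked if they share at least one group."""
    edges = set()
    for group in memberships:
        for a, b in combinations(sorted(group), 2):
            edges.add((a, b))
    return edges

def degree_distribution(edges):
    """Map each degree value to the number of nodes having it."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return Counter(deg.values())

# Hypothetical "movies" with their casts.
movies = [{"ann", "bob", "cat"}, {"bob", "cat", "dan"}, {"dan", "eve"}]
edges = one_mode_projection(movies)
print(sorted(edges))
print(degree_distribution(edges))
```

For the toy data this yields six distinct edges and the degree histogram {3: 3, 2: 1, 1: 1}; fitting a q-type (Tsallis) form to such a histogram is then a separate estimation step.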

6. Comparative Analysis of Classifier Fusers

Directory of Open Access Journals (Sweden)

Marcin Zmyslony

2012-06-01

Full Text Available There are many methods of decision making by an ensemble of classifiers. The most popular are methods that have their origin in voting, where the decision of the combined classifier is a combination of the individual classifiers' outputs. This work presents a comparative analysis of some classifier fusion methods based on weighted voting of classifiers' responses and on combination of classifiers' discriminant functions. We discuss different methods of producing combined classifiers based on weights. We show that it is not possible to obtain a classifier better than the abstract model of a committee known as an Oracle if the fuser is based only on weighted voting, but models based on discriminant functions or classifiers using feature values and class numbers could outperform the Oracle. The delivered conclusions are confirmed by the results of computer experiments carried out on benchmark and computer-generated data.
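
The weighted-voting fuser and the Oracle model being compared can be sketched as follows; the ensemble outputs, weights and labels are invented, and the sketch illustrates why the Oracle (right whenever any member is right) upper-bounds any label-level voting fuser:

```python
def weighted_vote(predictions, weights):
    """Fuse class labels from several classifiers by weighted voting."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

def oracle_correct(predictions, truth):
    """The abstract Oracle committee is right whenever ANY member is right."""
    return truth in predictions

# Hypothetical ensemble of three classifiers evaluated on four samples.
preds = [("a", "a", "b"), ("b", "a", "b"), ("b", "b", "a"), ("c", "c", "a")]
truth = ["a", "b", "b", "a"]
weights = (0.5, 0.3, 0.2)

fused = [weighted_vote(p, weights) for p in preds]
oracle = [oracle_correct(p, t) for p, t in zip(preds, truth)]
print(fused)
print(sum(f == t for f, t in zip(fused, truth)), "of 4 correct by weighted voting")
print(sum(oracle), "of 4 recoverable by the Oracle")
```

On the last sample the voter is bound by the majority label even though one member is right, so voting scores 3 of 4 while the Oracle scores 4 of 4; fusers operating on discriminant-function values have more information to exploit, which is the paper's point.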

7. Architecture based Reliability Analysis of Continuously Running Concurrent Software Applications

Directory of Open Access Journals (Sweden)

Rehab A. El Kharboutly

2008-01-01

Full Text Available The objective of this paper is to describe a reliability and availability analysis methodology for a continuously running, concurrent application. We propose a state space approach to represent the architecture of a concurrent application, which is then mapped to an irreducible discrete time Markov chain (DTMC) to obtain architectural statistics. We discuss how the application architecture can be extracted from profile data to facilitate the use of our methodology to analyze the reliability of practical software applications. We illustrate the methodology using a case study of a MRSS news ticker application. The state space explosion issue which may arise in the practical application of the methodology is then discussed and methods to alleviate the issue are suggested. To the best of our knowledge, this research is one of the first steps in pushing the state of the art in architecture-based software reliability analysis from sequential to concurrent software applications.
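
The mapping from architecture to reliability can be sketched in the style of classic DTMC-based (Cheung-type) models, where control transfers between components and each visit succeeds with the component's reliability; the three-component pipeline below is hypothetical and much simpler than the concurrent model the paper develops:

```python
def architecture_reliability(P, R, start, end):
    """Cheung-style sketch: components are DTMC states, P[i][j] is the
    control-transfer probability, R[i] the component reliability.
    Propagates 'still working' probability mass until it dies out and
    returns the probability of reaching `end` with every visit succeeding."""
    n = len(P)
    mass = [0.0] * n
    mass[start] = 1.0
    success = 0.0
    while sum(mass) > 1e-15:
        nxt = [0.0] * n
        for i in range(n):
            if mass[i] == 0.0:
                continue
            working = mass[i] * R[i]     # this visit succeeds
            if i == end:
                success += working       # absorbed in the success state
                continue
            for j in range(n):
                nxt[j] += working * P[i][j]
        mass = nxt
    return success

# Hypothetical pipeline 0 -> 1 -> 2, with a 20% loop back from 1 to 0.
P = [[0.0, 1.0, 0.0],
     [0.2, 0.0, 0.8],
     [0.0, 0.0, 0.0]]
R = [0.99, 0.98, 0.97]
rel = architecture_reliability(P, R, 0, 2)
print(f"system reliability ~ {rel:.4f}")
```

For this toy chain the closed form is R0*R1*0.8*R2 / (1 - 0.2*R0*R1) ≈ 0.9341; a concurrent application replaces the scalar state with sets of simultaneously active components, which is where the state space explosion discussed above arises.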

8. Human reliability analysis of Lingao Nuclear Power Station

International Nuclear Information System (INIS)

The necessity of human reliability analysis (HRA) for the Lingao Nuclear Power Station is analyzed, and the method and operating procedures of HRA are briefly described. One of the human factors events (HFEs) is analyzed in detail and some questions concerning HRA are discussed. The authors present the analytical results for 61 HFEs and briefly introduce the contribution of HRA to the Lingao Nuclear Power Station

9. Intra-rater reliability of the posture analysis tool kit

Scientific Electronic Library Online (English)

Ronette, Hough; Riette, Nel.

2013-04-01

Full Text Available BACKGROUND: Health care professionals mainly assess posture through qualitative observation of the relationship between a plumb line and specified anatomical landmarks. However, quantitative assessments of spinal alignment are mostly done by biophotogrammetry and are limited to laboratory environments. The Posture Analysis Toolkit (PAT), a photogrammetric measurement instrument, was developed in 2009 to assess standing posture. AIM: The aim of this study was to test the intra-rater reliability of the Posture Analysis Toolkit. METHODOLOGY: A prospective, cross-sectional study was conducted. Fourteen participants were required to make three measurements of the posture of a single subject using the PAT. Photographs of the anterior and left lateral upright standing posture were taken once, and imported three times for computerised analysis. Reliability was determined using descriptive statistics per session, confidence intervals for the median difference between sessions, 95% limits of agreement and Spearman correlations. RESULTS: In this study the intra-rater reliability of the PAT between sessions was good. CONCLUSION: The Posture Analysis Toolkit was tested and proved to be reliable for use as an instrument for the assessment of standing postural alignment. Recommendations are suggested for the development of the PAT.

10. Reliability analysis - systematic approach based on limited data

International Nuclear Information System (INIS)

The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate, and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas and the maximum use of limited reliability data. (author)

11. Reliability analysis - a systematic approach based on limited data

International Nuclear Information System (INIS)

The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas, and the maximum use of limited reliability data. (author)

12. Failures Analysis and Reliability Calculation for Power Transformers

Directory of Open Access Journals (Sweden)

M. Mirzai

2006-03-01

Full Text Available Failures of transformers in sub-transmission systems not only reduce the reliability of the power system but also have significant effects on power quality, since reliability is one of the important components of any system's quality. To enhance utility reliability, failures and their rates, origins and physical damage causes must be studied. This paper describes a case study of the reliability of the sub-transmission transformers (63/20 kV) installed in Mazandaran province and operated in the sub-transmission system; the information was obtained from the Mazandaran Regional Electric Company. The results of the study and analysis of 60 substations, including more than 110 transformers installed in the sub-transmission system, show that the failure modes of the transformers can be represented by a Weibull distribution. Weibull statistics have been widely used and accepted as a successful mathematical method to predict the remaining lifetime of equipment. Useful conclusions are presented both for power system operators and manufacturers for improving the reliability of transformers.
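
The Weibull representation of failure behavior can be illustrated with a maximum-likelihood fit; the failure ages below are invented, and the profile-equation/bisection approach is one standard way to obtain the MLE, not necessarily the authors' procedure:

```python
import math

def weibull_mle(times, tol=1e-10):
    """Maximum-likelihood Weibull fit for complete (uncensored) failure
    times: bisect the profile equation for the shape k, then recover
    the scale."""
    n = len(times)
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / n

    def profile(k):  # increasing in k; its root is the MLE shape
        tk = [t ** k for t in times]
        return sum(x * l for x, l in zip(tk, logs)) / sum(tk) - 1 / k - mean_log

    lo, hi = 1e-3, 100.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if profile(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = (lo + hi) / 2
    scale = (sum(t ** k for t in times) / n) ** (1 / k)
    return k, scale

# Hypothetical failure ages (years) of a transformer fleet.
ages = [3.1, 5.4, 6.2, 8.0, 9.7, 11.5, 12.1, 14.8, 16.0, 19.3]
k, lam = weibull_mle(ages)
print(f"shape k = {k:.2f}, scale = {lam:.1f} years (k > 1 suggests wear-out)")
```

A shape parameter above 1 indicates an increasing hazard rate (wear-out), which is the kind of diagnosis that feeds remaining-lifetime predictions for transformer fleets.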

13. Reliability analysis of the WWER-440 Type 213 safety systems

International Nuclear Information System (INIS)

The design and function of two systems included in the localization of design basis accidents in the Czechoslovak nuclear power plant at Mochovce (four WWER-440 units of type 213) are described. These accidents are represented by a LOCA (I.D. 500) with loss of off-site power and the maximum rated earthquake (intensity 6 on the MSK-64 scale). The passive pressure accumulator system and the low pressure injection system (LPIS) of the ECCS were chosen for the reliability (fault tree) analysis; attention was centered primarily on the effect of human error. An uncertainty analysis of the two systems was carried out as the next step. The results of the reliability analysis are presented in graphical and tabulated forms. The analysis revealed the great importance of a correct and verified selection of the top event of the fault tree (for the passive system) and stressed the need for periodic testing of the LPIS during refueling. (author). 14 figs., 1 tab., 3 refs

14. Beyond reliability, multi-state failure analysis of satellite subsystems: A statistical approach

International Nuclear Information System (INIS)

Reliability is widely recognized as a critical design attribute for space systems. In recent articles, we conducted nonparametric analyses and Weibull fits of satellite and satellite subsystems reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we extend our investigation of failures of satellites and satellite subsystems beyond the binary concept of reliability to the analysis of their anomalies and multi-state failures. In reliability analysis, the system or subsystem under study is considered to be either in an operational or failed state; multi-state failure analysis introduces 'degraded states' or partial failures, and thus provides more insights through finer resolution into the degradation behavior of an item and its progression towards complete failure. The database used for the statistical analysis in the present work identifies five states for each satellite subsystem: three degraded states, one fully operational state, and one failed state (complete failure). Because our dataset is right-censored, we calculate the nonparametric probability of transitioning between states for each satellite subsystem with the Kaplan-Meier estimator, and we derive confidence intervals for each probability of transitioning between states. We then conduct parametric Weibull fits of these probabilities using the Maximum Likelihood Estimation (MLE) approach. After validating the results, we compare the reliability versus multi-state failure analyses of three satellite subsystems: the thruster/fuel; the telemetry, tracking, and control (TTC); and the gyro/sensor/reaction wheel subsystems. The results are particularly revealing of the insights that can be gleaned from multi-state failure analysis and the deficiencies, or blind spots, of the traditional reliability analysis. 
In addition to the specific results provided here, which should prove particularly useful to the space industry, this work highlights the importance of conducting, beyond the traditional reliability analysis, multi-state failure analysis of any engineering system when seeking to understand its failure behavior.
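The nonparametric step described in this record, estimating transition probabilities from right-censored lifetime data, uses the Kaplan-Meier estimator. A minimal pure-Python sketch (illustrative, not the authors' code; data are invented) is:

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate of the survivor function S(t).

    durations: time to event or to censoring for each unit
    observed:  1 if the transition (failure) was observed, 0 if right-censored
    Returns a list of (t, S(t)) steps at each distinct event time.
    """
    data = sorted(zip(durations, observed))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # count observed events and total exits (events + censorings) at time t
        events = sum(1 for d, o in data if d == t and o == 1)
        ties = sum(1 for d, _ in data if d == t)
        if events:
            s *= 1.0 - events / n_at_risk
            curve.append((t, s))
        n_at_risk -= ties
        i += ties
    return curve

# Five units: events at t=2, 3, 5; censored at t=3 and t=8.
print(kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0]))
```

A parametric Weibull fit, as in the paper, would then be performed on these nonparametric estimates by maximum likelihood.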

15. Risk Analysis for Critical Systems with Reliability Block Diagrams

OpenAIRE

Weyns, Kim; Höst, Martin

2012-01-01

Governmental organisations are becoming more critically dependent on IT systems such as communication systems or patient data systems, both for their everyday tasks and for their role in crisis relief activities. Therefore, it is important for the organisation to analyse the reliability of these systems as part of the organisation’s risk and vulnerability analysis process. This paper presents a practical risk analysis method for critical, large-scale IT systems in an organisation. The method is ba...

16. Human Reliability Analysis for Digital Human-Machine Interfaces

Energy Technology Data Exchange (ETDEWEB)

Ronald L. Boring

2014-06-01

This paper addresses the fact that existing human reliability analysis (HRA) methods do not provide guidance on digital human-machine interfaces (HMIs). Digital HMIs are becoming ubiquitous in nuclear power operations, whether through control room modernization or new-build control rooms. Legacy analog technologies like instrumentation and control (I&C) systems are costly to support, and vendors no longer develop or support analog technology, which is considered technologically obsolete. Yet, despite the inevitability of digital HMI, no current HRA method provides guidance on how to treat human reliability considerations for digital technologies.

17. Analysis on Reliability of Wine Tasters’ Evaluation Results Based on the Analysis of Variance

Directory of Open Access Journals (Sweden)

Wang Yufei

2013-10-01

Full Text Available Based on the wine tasters’ evaluation scores provided in the 2012 CUMCM, this study first adopts the confidence interval method to eliminate the effect of wine tasters’ personal differences. Then, using analysis of variance, we perform a test of significance on the evaluation results of the wine tasters from Groups A and B at the 0.05 significance level. Results show that there is no significant difference between the sensory evaluation results of the two groups. By comparing the variances of the comprehensive scores given by the two groups, we determine which group’s evaluation results are more reliable. The model shows that the variances of the evaluation results from Group B are all smaller than those of Group A, indicating that the evaluation results of Group B are more reliable.
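The significance test this record describes is a one-way ANOVA. A minimal sketch of the F statistic (illustrative function and data, not the study's own scores) is:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across the given groups of scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Example: two tasting groups with identical spread but shifted means.
f = one_way_anova_f([1, 2, 3], [2, 3, 4])
print(f)   # 1.5
```

The computed F would then be compared with the critical value of the F distribution at the 0.05 level, as in the paper.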

18. Accident Sequence Evaluation Program: Human reliability analysis procedure

International Nuclear Information System (INIS)

This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs

19. Reliability analysis of rotor blades of tidal stream turbines

International Nuclear Information System (INIS)

20. Strength Reliability Analysis of Turbine Blade Using Surrogate Models

Directory of Open Access Journals (Sweden)

Wei Duan

2014-05-01

Full Text Available There are many stochastic parameters that have an effect on the reliability of steam turbine blade performance in practical operation. In order to improve the reliability of blade design, it is necessary to take these stochastic parameters into account. In this study, a variable cross-section twisted blade is investigated, and geometrical parameters, material parameters and load parameters are considered as random variables. A reliability analysis method combining a Finite Element Method (FEM), a surrogate model and Monte Carlo Simulation (MCS) is applied to solve the blade reliability analysis. Based on the blade finite element parametric model and the experimental design, two kinds of surrogate models, Polynomial Response Surface (PRS) and Artificial Neural Network (ANN), are applied to construct approximate analytical expressions between the blade responses (including maximum stress and deflection) and the random input variables, which act as a surrogate for the finite element solver to drastically reduce the number of simulations required. The surrogate is then used for most of the samples needed in the Monte Carlo method, and the statistical parameters and cumulative distribution functions of the maximum stress and deflection are obtained by Monte Carlo simulation. Finally, a probabilistic sensitivity analysis, which combines the magnitude of the gradient and the width of the scatter range of the random input variables, is applied to evaluate how much the maximum stress and deflection of the blade are influenced by the random nature of the input parameters.
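The surrogate-plus-MCS workflow in this record can be illustrated with a toy one-variable version. Everything here is an assumption for illustration: the "FEM solve" is a hypothetical closed form, and the design points, load distribution and allowable stress are invented, not values from the paper.

```python
import random

def fit_quadratic(xs, ys):
    """Quadratic response surface through three design points (Lagrange form)."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def surrogate(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return surrogate

def expensive_solver(load):
    """Stand-in for the FEM stress solve (hypothetical closed form, MPa)."""
    return 120.0 + 35.0 * load + 4.0 * load ** 2

# Build the surrogate from three "FEM" runs at the design points,
# then run the Monte Carlo simulation on the cheap surrogate instead.
design = [0.5, 1.0, 1.5]
surrogate = fit_quadratic(design, [expensive_solver(x) for x in design])

random.seed(0)
limit = 170.0                 # assumed allowable stress, MPa
n = 100_000
failures = sum(surrogate(random.gauss(1.0, 0.2)) > limit for _ in range(n))
print(f"P(failure) ~ {failures / n:.4f}")
```

In the paper the surrogate is fitted to multi-variable FEM responses (PRS or ANN); the Monte Carlo step is the same in spirit.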

1. Accident Sequence Evaluation Program: Human reliability analysis procedure

Energy Technology Data Exchange (ETDEWEB)

Swain, A.D.

1987-02-01

This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

2. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

Science.gov (United States)

Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

2004-01-01

A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods, i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
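The MPP search this record refers to is commonly performed with the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration. A minimal sketch on a classic two-variable limit state (an illustrative example, not one from the paper) is:

```python
import math

def hlrf(g, grad_g, n=2, tol=1e-8, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler search for the MPP in standard normal space."""
    u = [0.0] * n
    for _ in range(max_iter):
        gu = g(u)
        grad = grad_g(u)
        norm2 = sum(c * c for c in grad)
        # linearize g at u and project the origin onto the linearized surface
        scale = (sum(c * ui for c, ui in zip(grad, u)) - gu) / norm2
        u_new = [scale * c for c in grad]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))   # reliability index
    return u, beta

# Classic test limit state with known beta = 2.5:
# g(u) = 0.1*(u1 - u2)^2 - (u1 + u2)/sqrt(2) + 2.5
g = lambda u: 0.1 * (u[0] - u[1]) ** 2 - (u[0] + u[1]) / math.sqrt(2) + 2.5
grad = lambda u: [0.2 * (u[0] - u[1]) - 1 / math.sqrt(2),
                  -0.2 * (u[0] - u[1]) - 1 / math.sqrt(2)]
mpp, beta = hlrf(g, grad)
print(round(beta, 4))   # 2.5
```

In RIA this beta is the quantity being minimized; PMA inverts the search, optimizing the performance measure at a fixed target beta.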

3. A framework for intelligent reliability centered maintenance analysis

International Nuclear Information System (INIS)

To improve the efficiency of reliability-centered maintenance (RCM) analysis, case-based reasoning (CBR), a kind of artificial intelligence (AI) technology, was successfully introduced into the RCM analysis process, and a framework for intelligent RCM analysis (IRCMA) was studied. The idea behind IRCMA is that the historical records of RCM analysis on similar items can be referenced and used for the current RCM analysis of a new item. Because many common or similar items may exist in the analyzed equipment, the repeated tasks of RCM analysis can be considerably simplified or avoided by revising similar cases when conducting RCM analysis. Based on previous theoretical studies, an intelligent RCM analysis system (IRCMAS) prototype was developed. This research focuses on the definition, basic principles, and framework of IRCMA, and discusses the critical techniques involved. Finally, the IRCMAS prototype is presented based on a case study

4. Reliability analysis of two unit parallel repairable industrial system

Directory of Open Access Journals (Sweden)

Mohit Kumar Kakkar

2015-09-01

Full Text Available The aim of this work is to present a reliability and profit analysis of a two-dissimilar-parallel-unit system under the assumptions that the operative unit cannot fail after post-repair inspection and replacement and that there is only one repair facility. Failure and repair times of each unit are assumed to be uncorrelated. Using the regenerative point technique, various reliability characteristics are obtained which are useful to system designers and industrial managers. Graphical behaviors of the mean time to system failure (MTSF) and the profit function have also been studied. In this paper, some important measures of reliability characteristics of a two non-identical unit standby system model with repair, inspection and post repair are obtained using the regenerative point technique.
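For a simpler cousin of the system in this record, two identical parallel units with one repair facility and exponential times, the MTSF has a closed form that a small Markov sketch can verify (an illustrative model, not the paper's regenerative-point analysis of dissimilar units):

```python
def mtsf_two_unit_parallel(lam, mu):
    """Mean time to system failure for two identical parallel units with one
    repair facility (Markov model; closed form (3*lam + mu) / (2*lam**2)).

    Expected absorption times from 'both up' (T2) and 'one up' (T1) satisfy:
        T2 = 1/(2*lam) + T1
        T1 = 1/(lam + mu) + (mu / (lam + mu)) * T2
    Solving the pair gives T1 = (2*lam + mu) / (2*lam**2).
    """
    t1 = (2 * lam + mu) / (2 * lam ** 2)
    t2 = 1 / (2 * lam) + t1
    return t2

# Example: failure rate 0.01/h, repair rate 0.5/h.
print(mtsf_two_unit_parallel(0.01, 0.5))   # 2650.0 hours
```

The regenerative point technique used in the paper generalizes this idea to non-exponential, dissimilar units by conditioning on regeneration epochs.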

5. Reliability Analysis of Free Jet Scour Below Dams

Directory of Open Access Journals (Sweden)

Chuanqi Li

2012-12-01

Full Text Available Current formulas for calculating scour depth below a free overfall are mostly deterministic in nature and do not adequately consider the uncertainties of the various scouring parameters. A reliability-based assessment of scour, taking into account the uncertainties of the parameters and coefficients involved, should therefore be performed. This paper studies the reliability of a dam foundation under the threat of scour. A model for calculating the reliability of scour and estimating the probability of failure of the dam foundation subjected to scour is presented. The Maximum Entropy Method is applied to construct the probability density function (PDF) of the performance function subject to the moment constraints. Monte Carlo simulation (MCS) is applied for uncertainty analysis. An example is considered, the reliability of its scour is computed, and the influence of various random variables on the probability of failure is analyzed.
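The MCS step in this record can be sketched as follows. The Veronese-type scour formula is used only as an illustrative deterministic core, and the input distributions and allowable depth are assumptions, not values from the paper:

```python
import random

def scour_depth(q, h):
    """Veronese-type scour formula (illustrative): depth grows with
    unit discharge q (m^2/s) and head drop h (m)."""
    return 1.9 * q ** 0.54 * h ** 0.225

random.seed(1)
d_allow = 6.0     # assumed allowable scour depth, m
n = 50_000
fails = 0
for _ in range(n):
    q = random.lognormvariate(1.0, 0.15)   # uncertain unit discharge
    h = random.gauss(10.0, 1.0)            # uncertain head drop
    # performance function Z = d_allow - scour_depth; failure when Z < 0
    fails += scour_depth(q, h) > d_allow
print(f"P(scour exceeds allowable depth) ~ {fails / n:.3f}")
```

The paper pairs this sampling with a Maximum Entropy reconstruction of the performance-function PDF from its moments; the sketch above shows only the plain MCS estimate.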

6. Maintenance management of railway infrastructures based on reliability analysis

International Nuclear Information System (INIS)

Railway infrastructure maintenance plays a crucial role for rail transport. It aims at guaranteeing safety of operations and availability of railway tracks and related equipment for traffic regulation. Moreover, it is one major cost for rail transport operations. Thus, the increased competition in traffic market is asking for maintenance improvement, aiming at the reduction of maintenance expenditures while keeping the safety of operations. This issue is addressed by the methodology presented in the paper. The first step of the methodology consists of a family-based approach for the equipment reliability analysis; its purpose is the identification of families of railway items which can be given the same reliability targets. The second step builds the reliability model of the railway system for identifying the most critical items, given a required service level for the transportation system. The two methods have been implemented and tested in practical case studies, in the context of Rete Ferroviaria Italiana, the Italian public limited company for railway transportation.

7. Improvement in check valve reliability by integrity analysis of internals

International Nuclear Information System (INIS)

The Electric Power Research Institute (EPRI) reports that valve unreliability is a major cause of plant downtime. The Institute of Nuclear Power Operations (INPO) issued Significant Operating Experience Report (SOER) No. 86-03, which provides plant owners guidance on check valve surveillance (inspection and testing) to improve reliability. The material condition of internal parts plays a key role in assuring reliability. The Service Water System at Comanche Peak operated to meet system functional needs for approximately seven years before the plant received an operating license. The failure of a cast 17-4 PH stainless steel disc pin hinge (swing arm) in a valve installed in this system resulted in the recently issued NRC Information Notice 90-03. This paper summarizes work completed to assure the reliability of similar swing check valves at TU Electric's Comanche Peak Steam Electric Station (CPSES). Suitability for corrosive service was evaluated; linear elastic fracture mechanics established acceptance criteria; and surface inspection and in-place metallography were employed to screen defective cast material. Retrospective statistical analysis of the inspection results was used to quantify the success of the inspection and estimate the improvement in valve reliability. Check valves of the same material with similar operating conditions have been installed at other plants. Other components having sand-cast 17-4 PH stainless steel parts may also be affected. A strategy is proposed for minimizing the impact of material defects and age-related degradation on valve reliability

8. Development of a human reliability analysis methodology during PHWR outages

International Nuclear Information System (INIS)

The quality of probabilistic safety assessment (PSA) has become more important with the emphasis on risk-informed regulation and application (RIR and A) in the nuclear power industry. Human reliability analysis (HRA) directly affects the quality of PSA because human errors have been identified as major contributors to risk. According to the NRC's Office for Analysis and Evaluation of Operational Data (AEOD), 82% of reactor trips and accidents during outages are caused by events related to human error. There is, however, no universally accepted HRA method, and HRA during PHWR outages has not been performed elsewhere in the world. It is therefore necessary to perform HRA during PHWR outages in order to improve the quality of PHWR PSA. In this study, Event Trees for PHWR outages were developed, and 10 human actions to be quantified were derived from them. A new HRA model for PHWR outages was also developed, and the human actions were quantified with this model. The quantified values were compared with the values from the 'Generic CANDU Probabilistic Safety Assessment' and with values estimated using ASEP. A Core Damage Frequency (CDF) of 6.96 x 10^-5 was estimated using the Event Trees and the HRA model, 17% higher than the CDF estimated using AECL data. The differences between the HEPs for OPHTS are considered to make the CDF higher; a complementary study re-estimating the HEP for OPHTS in detail is therefore required to improve the quality of the HRA and PSA. The Event Trees and the HRA model developed in this study can serve as important references for future PSA during PHWR outages
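Quantifying an event tree, as this record describes, amounts to multiplying each accident sequence's initiating-event frequency by the failure probabilities of the branches taken. A minimal sketch with invented numbers (not the study's HEPs or frequencies):

```python
from math import prod

def sequence_frequency(init_freq, branch_failure_probs):
    """Frequency of one event-tree accident sequence: the initiating-event
    frequency times the failure probability of each branch taken."""
    return init_freq * prod(branch_failure_probs)

# Invented outage sequence: loss of shutdown cooling (2e-2/yr), operator fails
# to recover (HEP 5e-3 from the HRA model), backup cooling fails on demand (1e-2).
cdf_contribution = sequence_frequency(2e-2, [5e-3, 1e-2])
print(cdf_contribution)   # ~1e-6 per year
```

The total CDF is the sum of such contributions over all core-damage sequences, which is where the HEPs produced by the HRA model enter.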

9. Reliability, Validity, Comparability and Practical Utility of Cybercrime-Related Data, Metrics, and Information

OpenAIRE

Nir Kshetri

2013-01-01

With an increasing pervasiveness, prevalence and severity of cybercrimes, various metrics, measures and statistics have been developed and used to measure various aspects of this phenomenon. Cybercrime-related data, metrics, and information, however, pose important and difficult dilemmas regarding the issues of reliability, validity, comparability and practical utility. While many of the issues of the cybercrime economy are similar to other underground and underworld industries, this economy ...

10. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)

Science.gov (United States)

Butler, R. W.

1994-01-01

SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. 
A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. 
STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running
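The STEM approach described above, a scaled Taylor series for the matrix exponential, can be sketched for a tiny Markov model. This is an illustrative re-implementation of the idea in pure Python, not NASA's code, and the two-state component is an invented example:

```python
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(a, t, scaling=10, terms=12):
    """exp(A*t) via Taylor series with scaling and squaring (STEM-style):
    compute exp(A*t / 2**scaling) by a short Taylor sum, then square it back up."""
    n = len(a)
    m = 2 ** scaling
    b = [[a[i][j] * t / m for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms + 1):
        term = [[v / k for v in row] for row in mat_mul(term, b)]   # B^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(scaling):            # exp(A*t) = (exp(A*t/m))^m
        result = mat_mul(result, result)
    return result

# Two-state repairable component: up -(lam)-> down, down -(mu)-> up.
lam, mu = 0.01, 0.1
q = [[-lam, lam], [mu, -mu]]            # generator matrix
p = expm(q, 50.0)                       # transition probabilities at t = 50
print(f"P(up at t=50) = {p[0][0]:.4f}")
```

Scaling keeps the Taylor argument small so the short series stays accurate even for the numerically stiff generators mentioned in the abstract.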

11. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)

Science.gov (United States)

Butler, R. W.

1994-01-01

SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. 
A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. 
STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running

12. Reliability analysis of maintenance operations for railway tracks

International Nuclear Information System (INIS)

Railway engineering is confronted with problems due to the degradation of the railway network, which requires substantial and costly maintenance work. However, because of the lack of knowledge of the geometrical and mechanical parameters of the track, it is difficult to optimize maintenance management. In this context, this paper presents a new methodology to analyze the behavior of railway tracks. It combines new diagnostic devices, which make it possible to obtain a large amount of data and thus to compute statistics on the geometric and mechanical parameters, with a non-intrusive stochastic approach which can be coupled with any mechanical model. Numerical results show the possibilities of this methodology for the reliability analysis of different maintenance operations. In the future, this approach will give important information to railway managers for optimizing maintenance operations using reliability analysis

13. Reliability Analysis of Metro Door System Based on FMECA

OpenAIRE

Xiaoqing Cheng; Zongyi Xing; Yong Qin; Yuan Zhang; Shaohuang Pang; Jun Xia

2013-01-01

The metro door system is one of the high failure rate subsystems of metro trains. The Failure Mode, Effects and Criticality Analysis (FMECA) method is applied to analyze the reliability of metro door system in this paper. Firstly, failure components of the door are statistically analyzed, and the major failure components are determined. Secondly, failures are classified according to their impacts on operation, and methods of calculating failure mode criticality and the related coefficients ar...
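Failure mode criticality in FMECA is commonly computed as the MIL-STD-1629A criticality number. A sketch with invented door-subsystem numbers (the paper's actual coefficients are not given here) is:

```python
def failure_mode_criticality(beta, alpha, lam, t):
    """MIL-STD-1629A style criticality number Cm = beta * alpha * lambda_p * t:
    conditional probability of the severity effect (beta), failure mode ratio
    (alpha), part failure rate (lambda_p), and operating time (t)."""
    return beta * alpha * lam * t

# Hypothetical door-lock failure mode: 60% of lock failures, 30% chance
# the mode causes a service-stopping effect, over 5000 operating hours.
cm = failure_mode_criticality(beta=0.3, alpha=0.6, lam=2e-5, t=5000)
print(cm)   # 0.018
```

Summing Cm over all failure modes of an item gives its item criticality, which supports the ranking of major failure components described in the abstract.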

14. Evaluation of the reliability of computerized profile cephalometric analysis

OpenAIRE

Ferreira José Tarcísio Lima; Telles Carlos de Souza

2002-01-01

The use of computers as an auxiliary instrument for case evaluation and procedures in health sciences is not new, and their advantages are well known. A growing number of orthodontists are using computerized systems for cephalometric analysis. Thus, this study evaluated the reliability of both computerized and manual methods used for creating profile cephalograms. Fifty profile radiographs were selected from the files of the Post-Graduate Course in Orthodontics at the Dental School of the Fed...

15. Reliability analysis of the ventilation control system in CEFR

International Nuclear Information System (INIS)

This paper describes the function and structure of the ventilation control system in CEFR. Based on FMEA and FTA, the reliability of the ventilation control system of CEFR is analyzed. A quantitative analysis and calculation of the fault tree is carried out, and the failure rate of the fault tree and the minimal cut sets are obtained. The data obtained are useful to support the management of the ventilation system of CEFR. (author)
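Given minimal cut sets, the fault-tree top-event probability is commonly approximated by the first-order (rare-event) sum over cut sets. A sketch with a hypothetical ventilation fault tree, not CEFR data:

```python
from math import prod

def top_event_probability(cut_sets, p):
    """First-order (rare-event) approximation of the fault-tree top event:
    sum over minimal cut sets of the product of basic-event probabilities."""
    return sum(prod(p[e] for e in cs) for cs in cut_sets)

# Hypothetical fault tree: fan failure alone is a single-event cut set;
# the controller and its backup must both fail to form the second cut set.
p = {"fan": 1e-3, "controller": 5e-4, "backup": 2e-2}
cuts = [{"fan"}, {"controller", "backup"}]
print(top_event_probability(cuts, p))   # ~1.01e-3: the single-event cut dominates
```

The approximation is accurate when basic-event probabilities are small; exact inclusion-exclusion would subtract the (here negligible) overlap terms.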

16. Reliability analysis of crack propagation behavior of reactor components

International Nuclear Information System (INIS)

A reliability analysis was carried out on a circumferential weld in the main coolant loop of a PWR with the aim of estimating the probability of a leak or break occurring within the planned life cycle of the plant. To establish a basis for the reliability analysis, the following influence factors were examined more closely: the initial crack extent, the load spectrum including the emergency 'earthquake' situation, and the crack growth characteristics. For the reliability analysis itself, a computer program was developed which took the individual input data, in accordance with their statistical parameters, into account in a simulation calculation using the Monte Carlo Method. The Forman formula was used to estimate the fatigue crack growth caused by the sequence of load events. The result was that the fatigue crack growth, even in the case of large initial cracks, was negligibly small. The probability that, in the case of very deep initial cracks, a one-off high quasi-static load, e.g. during an earthquake, could cause a locally limited through-crack was estimated to be about 5 x 10^-6 in forty years. (orig./HP)

17. Fiber Access Networks: Reliability Analysis and Swedish Broadband Market

Science.gov (United States)

Wosinska, Lena; Chen, Jiajia; Larsen, Claus Popp

Fiber access network architectures such as active optical networks (AONs) and passive optical networks (PONs) have been developed to support the growing bandwidth demand. Whereas Swedish operators in particular prefer AON, this may not be the case for operators in other countries. The choice depends on a combination of technical requirements, practical constraints, business models, and cost. Due to the increasing importance of reliable access to network services, connection availability is becoming one of the most crucial issues for access networks, and this should be reflected in the network owner's architecture decision. In many cases protection against failures is realized by adding backup resources. However, there is a trade-off between the cost of protection and the level of service reliability, since improving reliability performance by duplicating network resources (and capital expenditures, CAPEX) may be too expensive. In this paper we present the evolution of fiber access networks and compare reliability performance in relation to investment and management cost for some representative cases. We consider both standard and novel architectures for deployment in both sparsely and densely populated areas. While some recent works have focused on PON protection schemes with reduced CAPEX, the current and future effort should be put on minimizing the operational expenditures (OPEX) during the access network lifetime.
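Connection availability, the quantity the comparison above turns on, follows from the mean time between failures and mean time to repair. A sketch with assumed figures (not values from the paper) showing the effect of 1+1 protection:

```python
def availability(mtbf_h, mttr_h):
    """Steady-state availability from MTBF and MTTR (both in hours)."""
    return mtbf_h / (mtbf_h + mttr_h)

# Assumed figures: feeder fiber MTBF 100,000 h, repair time 24 h.
a = availability(100_000, 24)
# 1+1 protection: the connection is down only if both paths are down
# (independent failures assumed), so unavailability is squared.
a_protected = 1 - (1 - a) ** 2
print(round(a, 5), round(a_protected, 7))
```

The squared-unavailability gain is what duplication of network resources buys, at the CAPEX cost the abstract weighs against it.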

18. Solid Rocket Booster Large Main and Drogue Parachute Reliability Analysis

Science.gov (United States)

Clifford, Courtenay B.; Hengel, John E.

2009-01-01

The parachutes on the Space Transportation System (STS) Solid Rocket Booster (SRB) are the means for decelerating the SRB and allowing it to impact the water at a nominal vertical velocity of 75 feet per second. Each SRB has one pilot, one drogue, and three main parachutes. About four minutes after SRB separation, the SRB nose cap is jettisoned, deploying the pilot parachute. The pilot chute then deploys the drogue parachute. The drogue chute provides initial deceleration and proper SRB orientation prior to frustum separation. At frustum separation, the drogue pulls the frustum from the SRB and allows the main parachutes that are mounted in the frustum to unpack and inflate. These chutes are retrieved, inspected, cleaned, repaired as needed, and returned to the flight inventory and reused. Over the course of the Shuttle Program, several improvements have been introduced to the SRB main parachutes. A major change was the replacement of the small (115 ft. diameter) main parachutes with the larger (136 ft. diameter) main parachutes. Other modifications were made to the main parachutes, main parachute support structure, and SRB frustum to eliminate failure mechanisms, improve damage tolerance, and improve deployment and inflation characteristics. This reliability analysis is limited to the examination of the SRB Large Main Parachute (LMP) and drogue parachute failure history to assess the reliability of these chutes. From the inventory analysis, 68 Large Main Parachutes were used in 651 deployments, and 7 chute failures occurred in the 651 deployments. Logistic regression was used to analyze the LMP failure history, and it showed that reliability growth has occurred over the period of use resulting in a current chute reliability of R = .9983. This result was then used to determine the reliability of the 3 LMPs on the SRB, when all must function. There are 29 drogue parachutes that were used in 244 deployments, and no in-flight failures have occurred. 
Since there are no observed drogue chute failures, Jeffreys Prior was used to calculate a reliability of R = .998. Based on these results, it is concluded that the LMP and drogue parachutes on the Shuttle SRB are suited to their mission and that the changes made over their service life have improved parachute reliability.
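
The two estimates quoted above can be reproduced in closed form. For x failures in n demands, the Jeffreys Beta(1/2, 1/2) prior on the per-demand failure probability gives a posterior mean failure probability of (x + 0.5)/(n + 1). A minimal sketch (the three-main-chute figure below assumes the chutes fail independently, which the abstract does not state):

```python
def jeffreys_reliability(n_trials, n_failures=0):
    """Posterior-mean reliability under a Jeffreys Beta(1/2, 1/2) prior
    on the per-demand failure probability."""
    mean_failure_prob = (n_failures + 0.5) / (n_trials + 1.0)
    return 1.0 - mean_failure_prob

# Drogue: 244 deployments, zero failures -> R ~ 0.998 (matches the abstract)
r_drogue = jeffreys_reliability(244, 0)

# Three main parachutes, all required, assumed independent (not stated above):
r_single_lmp = 0.9983               # logistic-regression estimate from the text
r_three_lmp = r_single_lmp ** 3     # series-system reliability

print(round(r_drogue, 4), round(r_three_lmp, 4))  # → 0.998 0.9949
```

The agreement with the reported R = .998 is exact to the stated precision, which suggests this (or an equivalent Beta posterior) is the calculation behind the drogue figure.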

19. Reliability engineering analysis of ATLAS data reprocessing campaigns

International Nuclear Information System (INIS)

During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing on the Grid, their benefits come at a cost, introducing delays that make performance prediction difficult. Reliability Engineering provides a framework for fundamental understanding of Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability Engineering approach supported continuous improvements in data reprocessing throughput during LHC data taking. The throughput doubled in 2011 vs. 2010 reprocessing, then quadrupled in 2012 vs. 2011 reprocessing. We present the Reliability Engineering analysis of ATLAS data reprocessing campaigns, providing the foundation needed to scale up Big Data processing technologies beyond the petascale.

20. Maximizing personnel performance in plantwide reliability-centered maintenance analysis

International Nuclear Information System (INIS)

Techniques have been developed and proven effective that can be used to reduce the staffing requirements for implementation of a full-scale plantwide reliability-centered maintenance (RCM) program. The multiphase projects discussed in this paper integrate RCM into a broad-based preventive maintenance program concept that includes several programmatic functions, such as system reliability modeling, computerized RCM data bases, and quantitative and qualitative cost/benefit analyses. Through the use of computerized data bases, system models, equipment failure tracking, and preventive maintenance program effectiveness evaluations, the RCM programs can create a living reliability-based preventive maintenance program. The completed RCM data can be maintained and updated with current plant design and equipment performance history without large staffs of RCM analysts. The goal of any RCM project is the development of a structured and well-justified preventive maintenance program that ensures the reliability of plant components is maximized to the extent that an appropriate cost/benefit is achieved in return for the maintenance dollars expended. The methods discussed were developed for two full-scale RCM projects and are useful when implementing the program on a plantwide basis. The typical approach to RCM has been to establish a process on one pilot system and then continue the RCM process on selected systems periodically. By implementing an RCM program on a plantwide basis, benefits can be realized from an economy of scale and through the implementation of labor-saving analysis aids.

1. Analysis and Reliability Performance Comparison of Different Facial Image Features

Directory of Open Access Journals (Sweden)

2014-11-01

Full Text Available This study performs reliability analysis on different facial features, with weighted retrieval accuracy, on facial databases of increasing size. Many methods have been analyzed in existing papers using constant facial databases, as mentioned in the literature review, but little work has been carried out to study performance in terms of reliability, or how a method performs as the size of the database increases. In this study, certain feature extraction methods were analyzed on the regular performance measure, and the performance measures were also modified to fit real-time requirements by giving weightings to the closer matches. Four facial feature extraction methods are evaluated: DWT with PCA, LWT with PCA, HMM with SVD and Gabor wavelet with HMM. The reliability of these methods is analyzed and reported; among them, Gabor wavelet with HMM gives higher reliability than the other three methods. Experiments are carried out to evaluate the proposed approach on the Olivetti Research Laboratory (ORL) face database.

2. High-Reliable PLC RTOS Development and RPS Structure Analysis

International Nuclear Information System (INIS)

One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports the development of POSAFE-Q through the development of a highly reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety-critical software is essential to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework covering the software development itself. In other words, through V and V, the reliability and safety of a system can be improved, and development activities like software requirement specifications, software design specifications, component tests, integration tests, and system tests shall be appropriately documented for V and V.

3. Analysis of emergency diesel generators for improved reliability

International Nuclear Information System (INIS)

Nuclear generating station emergency diesel generators are among the most critical safeguards systems because of their need to operate as designed in the event of a loss of off-site power and to be operational to permit nuclear unit operation. This paper will detail the need for analysis of diesel engines to ensure reliability, performance, and availability of the diesel generator and nuclear unit. The requirements for a state-of-the-art analysis program will be given, showing the benefits derived from digital data collection and computer aided diagnostics. These benefits include more frequent analysis, improved scheduling of tests and historical comparison and trending of data. Commonwealth Edison operates twenty-four emergency diesel generators at six nuclear generating stations. Case studies of actual malfunctions detected will be used to illustrate analysis methods and the capabilities of their engine analysis program

4. Sociological analysis and comparative education

Science.gov (United States)

Woock, Roger R.

1981-12-01

It is argued that comparative education is essentially a derivative field of study, in that it borrows theories and methods from academic disciplines. After a brief humanistic phase, in which history and philosophy were central for comparative education, sociology became an important source. In the mid-50's and 60's, sociology in the United States was characterised by Structural Functionalism as a theory, and Social Survey as a dominant methodology. Both were incorporated into the development of comparative education. Increasingly in the 70's, and certainly today, the new developments in sociology are characterised by an attack on Positivism, which is seen as the philosophical position underlying both functionalism and survey methods. New or re-discovered theories with their attendant methodologies included Marxism, Phenomenological Sociology, Critical Theory, and Historical Social Science. The current relationship between comparative education and social science is one of uncertainty, but since social science is seen to be returning to its European roots, the hope is held out for the development of an integrated social theory and method which will provide a much stronger basis for developments in comparative education.

5. Some developments in human reliability analysis approaches and tools

International Nuclear Information System (INIS)

Since human actions have been recognized as an important contributor to safety of operating plants in most industries, research has been performed to better understand and account for the way operators interact during accidents through the control room and equipment interface. This paper describes the integration of a series of research projects sponsored by the Electric Power Research Institute to strengthen the methods for performing the human reliability analysis portion of the probabilistic safety studies. It focuses on the analytical framework used to guide the analysis, the development of the models for quantifying time-dependent actions, and simulator experiments used to validate the models. (author)

6. Structural Reliability Analysis and Optimization: Use of Approximations

Science.gov (United States)

Grandhi, Ramana V.; Wang, Liping

1999-01-01

This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. Two approximations are involved in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated.
The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.
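
The MPP search described above is commonly carried out with the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration, which repeatedly linearizes the limit state and projects the origin onto the tangent plane. A minimal sketch in standard normal space, using an illustrative linear limit state so the exact safety index (3/sqrt(2)) is known in advance:

```python
import numpy as np
from math import erfc, sqrt

def hlrf(g, grad_g, n_dim, tol=1e-10, max_iter=100):
    """HL-RF iteration for the design point (MPP) in standard normal space.
    Returns the safety index beta = ||u*|| and the MPP u*."""
    u = np.zeros(n_dim)
    for _ in range(max_iter):
        grad = grad_g(u)
        # Linearize g at u; project the origin onto the tangent hyperplane
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# Illustrative linear limit state g(u) = u1 + u2 + 3, failure when g < 0
g = lambda u: u[0] + u[1] + 3.0
grad_g = lambda u: np.array([1.0, 1.0])

beta, mpp = hlrf(g, grad_g, 2)
pf = 0.5 * erfc(beta / sqrt(2.0))   # FORM estimate: Pf = Phi(-beta)
```

For a linear limit state, FORM is exact and HL-RF converges in one step; for nonlinear limit states it iterates, and the SORM/HORM corrections discussed in the report refine the resulting Pf.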

7. Damage tolerance reliability analysis of automotive spot-welded joints

International Nuclear Information System (INIS)

This paper develops a damage tolerance reliability analysis methodology for automotive spot-welded joints under multi-axial and variable amplitude loading history. The total fatigue life of a spot weld is divided into two parts, crack initiation and crack propagation. The multi-axial loading history is obtained from transient response finite element analysis of a vehicle model. A three-dimensional finite element model of a simplified joint with four spot welds is developed for static stress/strain analysis. A probabilistic Miner's rule is combined with a randomized strain-life curve family and the stress/strain analysis result to develop a strain-based probabilistic fatigue crack initiation life prediction for spot welds. Afterwards, the fatigue crack inside the base material sheet is modeled as a surface crack. Then a probabilistic crack growth model is combined with the stress analysis result to develop a probabilistic fatigue crack growth life prediction for spot welds. Both methods are implemented with MSC/NASTRAN and MSC/FATIGUE software, and are useful for reliability assessment of automotive spot-welded joints against fatigue and fracture
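
The "probabilistic Miner's rule" step can be sketched with a small Monte Carlo model in which the Miner's sum at failure is treated as a lognormal random variable rather than exactly 1. All numerical values below are invented for illustration; in the actual methodology the per-block damage comes from the randomized strain-life curve family and the finite element stress/strain results:

```python
import math
import random

random.seed(1)

# Hypothetical deterministic damage accrued by one repeat of the
# variable-amplitude load block (sum of n_i / N_i over its cycles).
DAMAGE_PER_BLOCK = 2.5e-4

def sample_life():
    """One Monte Carlo realization of blocks-to-crack-initiation.
    The damage capacity at failure, Delta, is lognormal with median 1,
    a common probabilistic generalization of Miner's Delta = 1."""
    delta = math.exp(random.gauss(0.0, 0.3))  # lognormal capacity
    return delta / DAMAGE_PER_BLOCK

lives = sorted(sample_life() for _ in range(10_000))
b10_life = lives[len(lives) // 10]     # life at 10% failure probability
median_life = lives[len(lives) // 2]   # ~ 1 / DAMAGE_PER_BLOCK blocks
```

Percentiles of the resulting life distribution (such as the B10 life above) are the kind of output a reliability assessment of the joint would report.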

8. Design and Analysis for Reliability of Wireless Sensor Network

OpenAIRE

Yongxian Song; Ting Chen; Juanli Ma; Yuan Feng; Xianjin Zhang

2012-01-01

Reliability is an important performance indicator of wireless sensor networks; for application fields with high reliability demands, it is particularly important to ensure the reliability of the network. At present, there are many research findings on wireless sensor network reliability at home and abroad, but they mainly improve network reliability through the network topology, reliable protocols, application-layer fault correction and so on, and reliability of network is co...


9. Probabilistic Life and Reliability Analysis of Model Gas Turbine Disk

Science.gov (United States)

Holland, Frederic A.; Melis, Matthew E.; Zaretsky, Erwin V.

2002-01-01

In 1939, W. Weibull developed what is now commonly known as the "Weibull Distribution Function" primarily to determine the cumulative strength distribution of small sample sizes of elemental fracture specimens. In 1947, G. Lundberg and A. Palmgren, using the Weibull Distribution Function developed a probabilistic lifing protocol for ball and roller bearings. In 1987, E. V. Zaretsky using the Weibull Distribution Function modified the Lundberg and Palmgren approach to life prediction. His method incorporates the results of coupon fatigue testing to compute the life of elemental stress volumes of a complex machine element to predict system life and reliability. This paper examines the Zaretsky method to determine the probabilistic life and reliability of a model gas turbine disk using experimental data from coupon specimens. The predicted results are compared to experimental disk endurance data.
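
The lifing approach above rests on the two-parameter Weibull distribution and weakest-link scaling: a series system of n identical Weibull elements is again Weibull with the same slope and a characteristic life reduced by n^(-1/beta). A short sketch with illustrative parameter values (not taken from the paper's disk data):

```python
import math

def weibull_reliability(t, eta, beta):
    """Two-parameter Weibull survival function R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def system_characteristic_life(eta, beta, n_elements):
    """Weakest-link scaling used in Lundberg-Palmgren-style lifing:
    n identical Weibull elements in series have characteristic life
    eta * n**(-1/beta) and the same Weibull slope beta."""
    return eta * n_elements ** (-1.0 / beta)

# At t = eta, any Weibull variate survives with probability exp(-1) ~ 0.368
r_at_eta = weibull_reliability(100.0, 100.0, 1.5)

# Ten identical stressed volumes in series (illustrative numbers)
eta_sys = system_characteristic_life(100.0, 1.5, 10)
```

The same identity underlies the Zaretsky method's step from elemental stress volumes to system life: multiplying elemental survival probabilities is equivalent to shrinking the characteristic life.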

10. An improved radial basis function network for structural reliability analysis

International Nuclear Information System (INIS)

Approximation methods such as the response surface method and artificial neural network (ANN) method are widely used to alleviate the computation costs in structural reliability analysis. However, most of the ANN methods proposed in the literature suffer from various drawbacks, such as poor choice of parameter settings, poor generalization and local minima. In this study, a support vector machine-based radial basis function (RBF) network method is proposed, in which the improved RBF model is used to approximate the limit state function and is then connected to a reliability method to estimate the failure probability. Since the learning algorithm of the RBF network is replaced by the support vector algorithm, the advantages of the latter, such as good generalization ability and global optimization, are propagated to the former, and thus the inherent drawbacks of the RBF network can be overcome. Numerical examples are given to demonstrate the applicability of the improved RBF network method in structural reliability analysis, as well as to illustrate the validity and effectiveness of the proposed method
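
A minimal illustration of the surrogate idea: an interpolating Gaussian RBF stands in for the (expensive) limit state before Monte Carlo sampling. The limit state, training design and all settings below are invented, and the paper's support-vector training step is not reproduced here, so this is only a sketch of the surrogate-plus-sampling workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_g(x):
    """Stand-in for an expensive limit state; failure when g < 0."""
    return 5.0 - x[..., 0] - x[..., 1]

# Small design of experiments and Gaussian RBF interpolation weights
centers = rng.normal(size=(20, 2)) * 2.0
SHAPE = 1.0  # RBF shape parameter (assumed)

def rbf_matrix(a, b):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-SHAPE * d2)

weights = np.linalg.solve(rbf_matrix(centers, centers), true_g(centers))

def surrogate_g(x):
    """Cheap RBF approximation of the limit state."""
    return rbf_matrix(x, centers) @ weights

# Cheap Monte Carlo on the surrogate; note that the surrogate decays to
# zero far from the training design, so tail accuracy is limited.
samples = rng.normal(size=(20_000, 2))
pf = float((surrogate_g(samples) < 0.0).mean())
```

The interpolant reproduces the limit state exactly at the training points; the quality of the failure-probability estimate then depends entirely on how well the design of experiments covers the region near the limit state, which is what the improved learning algorithm in the paper targets.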

11. Modeling of seismic hazards for dynamic reliability analysis

International Nuclear Information System (INIS)

This paper investigates the appropriate indices of seismic hazard curves (SHCs) for seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of the SHCs of peak ground accelerations (PGAs). Usually PGAs play a significant role in characterizing ground motions. However, PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize relationships among the indices which characterize ground motions. The relationships between the indices and the magnitude M are arranged as well. In this consideration, duration time plays an important role in relating the two distinct classes, i.e. the energy class and the power class. Fourier and energy spectra are involved in the energy class, and power and response spectra and PGAs are involved in the power class. These relationships are also investigated by using ground motion records. Through these investigations, we have shown the efficiency of employing the total energy as an index of SHCs, which can be determined in the time and frequency domains and has less variance than the other indices. In addition, we have proposed the procedure of DRA based on total energy. (author)

12. Time-dependent reliability analysis of ceramic engine components

Science.gov (United States)

Nemeth, Noel N.

1993-01-01

The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.

13. Review of the treat upgrade reactor scram system reliability analysis

International Nuclear Information System (INIS)

In order to resolve some key LMFBR safety issues, ANL personnel are modifying the TREAT reactor to handle much larger experiments. As a result of these modifications, the upgraded TREAT reactor will not always operate in a self-limited mode. During certain experiments in the upgraded TREAT reactor, it is possible that the fuel could be damaged by overheating if, once the computer systems fail, the reactor scram system (RSS) fails on demand. To help ensure that the upgraded TREAT reactor is shut down when required, ANL personnel have designed a triply redundant RSS for the facility. The RSS is designed to meet three reliability goals: (1) a loss-of-capability failure probability of 10^-9/demand (independent failures only); (2) an inadvertent shutdown probability of 10^-3/experiment; and (3) protection against any known potential common cause failures. According to ANL's reliability analysis of the RSS, this system substantially meets these goals.
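
The two quantitative goals are consistent with simple redundancy arithmetic. As a hedged sketch (the per-channel probabilities below are assumed for illustration, and the actual RSS voting logic is not given in the abstract): if loss of scram capability requires all three independent channels to fail, while any single channel can spuriously trip the reactor, then

```python
# Assumed per-channel demand failure probability (illustrative only)
p_channel = 1.0e-3
# Scram is lost only if all three independent channels fail on demand
p_loss_of_scram = p_channel ** 3            # -> 1e-9, the stated goal

# Assumed per-experiment spurious-trip probability per channel
q_channel = 3.3e-4
# Any one channel tripping shuts the reactor down (union of three events)
p_inadvertent = 1 - (1 - q_channel) ** 3    # ~ 3 * q_channel ~ 1e-3
```

Redundancy thus trades the two goals against each other: adding channels drives the loss-of-capability probability down geometrically while pushing the inadvertent-shutdown probability up roughly linearly, which is why common-cause protection (goal 3) is listed separately.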

14. Reliability analysis and prediction of mixed mode load using Markov Chain Model

International Nuclear Information System (INIS)

The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failures, in order to increase the design life and eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle, low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress, so the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model can generate data close to the field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading

15. Reliability analysis and prediction of mixed mode load using Markov Chain Model

Energy Technology Data Exchange (ETDEWEB)

Nikabdullah, N. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia and Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Singh, S. S. K.; Alebrahim, R.; Azizi, M. A. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); K, Elwaleed A. [Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Noorani, M. S. M. [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia (Malaysia)

2014-06-19

The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failures, in order to increase the design life and eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle, low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress, so the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model can generate data close to the field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.
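
The two-state model can be sketched as a discrete-time Markov chain whose states are "failure dominated by bending" and "failure dominated by torsion". The transition probabilities below are invented for illustration (the abstract gives none); bending, being the more severe mode, is made the "stickier" state:

```python
import numpy as np

# States: 0 = bending-dominated, 1 = torsion-dominated. Rows sum to 1.
P = np.array([[0.9, 0.1],    # assumed transition probabilities
              [0.4, 0.6]])

def state_distribution(p0, steps):
    """Distribution over the two states after `steps` transitions."""
    return p0 @ np.linalg.matrix_power(P, steps)

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

pi = stationary(P)                               # long-run mode mix
p10 = state_distribution(np.array([1.0, 0.0]), 10)  # after 10 load blocks
```

For this chain the stationary distribution is (0.8, 0.2), i.e. in the long run 80% of the failure probability mass sits in the bending state, consistent with the paper's observation that the bending criterion dominates.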

16. Improvement of human reliability analysis method for PRA

International Nuclear Information System (INIS)

It is required to refine the human reliability analysis (HRA) method by, for example, incorporating consideration of the operator's cognitive process into the evaluation of diagnosis errors and decision-making errors, as part of the development and improvement of methods used in probabilistic risk assessments (PRAs). JNES has developed an HRA method based on ATHENA, which is suitable for handling the structured relationship among diagnosis errors, decision-making errors and the operator cognition process. This report summarizes outcomes obtained from the improvement of the HRA method, in which enhancements were made to evaluate how degraded plant conditions affect the operator's cognitive process and to evaluate human error probabilities (HEPs) corresponding to the contents of operator tasks. In addition, this report describes the results of case studies on representative accident sequences to investigate the applicability of the HRA method developed. HEPs of the same accident sequences are also estimated using the THERP method, the most widely used HRA method, and the results obtained using the two methods are compared to depict their differences and the issues to be solved. Important conclusions obtained are as follows: (1) Improvement of the HRA method using an operator cognitive action model. Clarification of the factors to be considered in the evaluation of human errors, incorporation of degraded plant safety conditions into HRA, and investigation of HEPs affected by the contents of operator tasks were made to improve the HRA method, which can integrate an operator cognitive action model into the ATHENA method. In addition, the detailed procedures of the improved method were delineated in the form of a flowchart. (2) Case studies and comparison with results evaluated by the THERP method. Four operator actions modeled in the PRAs of representative BWR5 and 4-loop PWR plants were selected and evaluated as case studies.
These cases were also evaluated using the THERP method to compare the results with the improved method. In general, HEPs evaluated by the improved method are greater than HEPs evaluated by the THERP method. (3) Characteristics of the improved HRA method. As many reference tables applicable to various cases are prepared in the improved HRA method, reproducibility (similar results are obtained independently of the analyst) and traceability (the process leading to the final results is clear and can be shared among analysts) can be said to be achieved. (author)

17. Commingled Samples: A Neglected Source of Bias in Reliability Analysis

Science.gov (United States)

Waller, Niels G.

2008-01-01

Reliability is a property of test scores from individuals who have been sampled from a well-defined population. Reliability indices, such as coefficient alpha and related formulas for internal consistency reliability (KR-20, Hoyt's reliability), yield lower bound reliability estimates when (a) subjects have been sampled from a single population and when…

18. The reliability of an instrumented start block analysis system.

Science.gov (United States)

Tor, Elaine; Pease, David L; Ball, Kevin A

2015-02-01

The swimming start is highly influential to overall competition performance. Therefore, it is paramount to develop reliable methods to perform accurate biomechanical analysis of start performance for training and research. The Wetplate Analysis System is a custom-made force plate system developed by the Australian Institute of Sport--Aquatic Testing, Training and Research Unit (AIS ATTRU). This sophisticated system combines both force data and 2D digitization to measure a number of kinetic and kinematic parameter values in an attempt to evaluate start performance. Fourteen elite swimmers performed two maximal effort dives (performance was defined as time from start signal to 15 m) over two separate testing sessions. Intraclass correlation coefficients (ICC) were used to determine each parameter's reliability. The kinetic parameters all had ICCs greater than 0.9 except the time of peak vertical force (0.742). This may have been due to variations in movement initiation after the starting signal between trials. The kinematic and time parameters also had ICCs greater than 0.9, apart from the time of maximum depth (0.719). This parameter was lower because the swimmers varied their depth between trials. Based on the high ICC scores for all parameters, the Wetplate Analysis System is suitable for biomechanical analysis of swimming starts. PMID:25268512
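
ICCs of the kind reported here come from a two-way ANOVA decomposition. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement); the paper does not state which ICC form was used, and the swimmer data are made up for illustration:

```python
def icc_2_1(data):
    """ICC(2,1): rows = subjects, columns = repeated sessions/raters."""
    n = len(data)          # subjects
    k = len(data[0])       # sessions per subject
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)                     # between-subjects
    ms_cols = ss_cols / (k - 1)                     # between-sessions
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical times-to-15-m (s) for five swimmers over two sessions
times = [[6.10, 6.12], [6.45, 6.41], [6.80, 6.83], [7.05, 7.02], [7.30, 7.33]]
icc = icc_2_1(times)   # close to 1 for this highly consistent data
```

Parameters with more trial-to-trial variability relative to the between-swimmer spread (such as time of maximum depth in the study) drive the error mean square up and the ICC down.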

19. A Research Roadmap for Computation-Based Human Reliability Analysis

Energy Technology Data Exchange (ETDEWEB)

Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

2015-08-01

The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

20. A Bayesian Framework for Reliability Analysis of Spacecraft Deployments

Science.gov (United States)

Evans, John W.; Gallo, Luis; Kaminsky, Mark

2012-01-01

Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a two stage sequential Bayesian framework for reliability estimation of spacecraft deployment was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov Chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction for the reliability of the complex Sunshield deployment, with credibility limits, within this two stage Bayesian framework.
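The two-stage sequential updating described above can be sketched with a conjugate shortcut: with a Beta prior and a binomial likelihood, the posterior is again a Beta distribution, so this toy version needs no Markov Chain Monte Carlo sampling. The Jeffreys prior and all deployment counts below are hypothetical, not the study's heritage data:

```python
# Two-stage sequential Bayesian updating sketch for deployment reliability.
# With a non-informative Jeffreys prior Beta(0.5, 0.5) and a binomial
# likelihood, the posterior is Beta by conjugacy. All counts are invented.

def beta_update(a, b, successes, trials):
    """Posterior Beta parameters after observing binomial deployment data."""
    return a + successes, b + trials - successes

# Stage 1: heritage deployment history (hypothetical: 118 of 120 succeeded).
a1, b1 = beta_update(0.5, 0.5, 118, 120)

# Stage 2: ground-test deployments of scale-model articles update the
# stage-1 posterior, which now acts as the prior (hypothetical: 29 of 30).
a2, b2 = beta_update(a1, b1, 29, 30)

mean_reliability = a2 / (a2 + b2)  # posterior mean of deployment reliability
```

In the actual study the prior and likelihood need not stay conjugate, which is why MCMC sampling is used there; the sketch only shows the two-stage structure.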

1. Reliability and risk analysis using artificial neural networks

Energy Technology Data Exchange (ETDEWEB)

Robinson, D.G. [Sandia National Labs., Albuquerque, NM (United States)

1995-12-31

This paper discusses preliminary research at Sandia National Laboratories into the application of artificial neural networks for reliability and risk analysis. The goal of this effort is to develop a reliability based methodology that captures the complex relationship between uncertainty in material properties and manufacturing processes and the resulting uncertainty in life prediction estimates. The inputs to the neural network model are probability density functions describing system characteristics and the output is a statistical description of system performance. The most recent application of this methodology involves the comparison of various low-residue, lead-free soldering processes with the desire to minimize the associated waste streams with no reduction in product reliability. Model inputs include statistical descriptions of various material properties such as the coefficients of thermal expansion of solder and substrate. Consideration is also given to stochastic variation in the operational environment to which the electronic components might be exposed. Model output includes a probabilistic characterization of the fatigue life of the surface mounted component.

2. Fifty Years of THERP and Human Reliability Analysis

Energy Technology Data Exchange (ETDEWEB)

Ronald L. Boring

2012-06-01

In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of the NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the best known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament to its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain’s pioneering work.

3. IDHEAS – A NEW APPROACH FOR HUMAN RELIABILITY ANALYSIS

Energy Technology Data Exchange (ETDEWEB)

G.W. Parry; J.A. Forester; V.N. Dang; S.M.L. Hendrickson; M. Presley; E. Lois; J. Xing

2013-09-01

This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System) that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA) that is based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis that is documented in a crew response tree (CRT), and the development of the associated time-line to identify the critical tasks, i.e. those whose failure results in a human failure event (HFE), and an approach to quantification that is based on explanations of why the HFE might occur.

4. Comparative and reliability studies of neuromechanical leg muscle performances of volleyball athletes in different divisions.

Science.gov (United States)

Un, Chi-Pang; Lin, Kwan-Hwa; Shiang, Tzyy-Yuang; Chang, En-Chung; Su, Sheng-Chu; Wang, Hsing-Kuo

2013-02-01

This study compared neural profiles of the leg muscles of volleyball athletes playing in different divisions of Taiwan's national league to analyse the reliability and correlations between their profiles and biomechanical performances. Twenty-nine athletes including 12 and 17 from the first and second divisions of the league, respectively, were recruited. The outcome measures were compared between the divisions, including soleus H-reflex, first volitional (V) wave, normalised rate of electromyography (EMG) rise (RER) in the triceps surae muscles, and RER ratio for the tibialis anterior and soleus muscles, normalised root mean square (RMS) EMG in the triceps surae muscles, antagonist co-activation of the tibialis anterior muscle, rate of force development (RFD), and maximal plantar flexion torque and jump height. Compared to the results of the second division, the neural profiles of the first division showed greater normalised V waves, normalised RER in the lateral gastrocnemius, and normalised RMS EMG of the soleus and lateral gastrocnemius muscles with less antagonist co-activation of the tibialis anterior. First division volleyball athletes showed greater maximal torque, jump height, absolute RFD at 0-30, 0-100, and 0-200 ms, and less in the normalised RFD at 0-200 ms of plantar flexion when compared to the results of those in the second division. Neural profiles correlated to fast or maximal muscle strength or jump height. There are differences in the descending neural drive and activation strategies in leg muscles during contractions between volleyball athletes competing at different levels. These measures are reliable and correlate to biomechanical performances. PMID:22798025

5. Reliability analysis of selected systems of nuclear power unit

International Nuclear Information System (INIS)

The reliability analysis of selected facilities of the 440 MW nuclear power unit is discussed using the fault tree method. The first part of the paper deals with the primary circuit and analyses the possibility of a dangerous failure arising in the first-order accident alarm system of the WWER 440 nuclear reactor during the "outage of four and more circulating pumps" event. The second part of the paper is related to the secondary circuit. It studies the causes and probabilities of failures of the condensate pumping and flow control functions with regard to the "turbogenerator failure" event. (author)

6. Application of CBDTM in AP1000 human reliability analysis

International Nuclear Information System (INIS)

To carry out the AP1000 post-accident Human Reliability Analysis (HRA) and model the operator performance shaping factors (PSF) in a proper way, the Cause-based Decision Tree Method (CBDTM) is selected as one of the methods to calculate the probability of diagnostic errors in the AP1000 HRA. The PSFs related to the working load, signal displays, operation procedures and panel design are considered in the CBDTM model, thus the relation among the plant information, operators and procedures can be properly addressed to obtain more reasonable results of AP1000 HRA. (authors)

7. Reliability Analysis of Systems Subject to First-Passage Failure

Science.gov (United States)

Lutes, Loren D.; Sarkani, Shahram

2009-01-01

An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
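The kind of estimate the report describes, modeling uncertainty in demands and capacities and computing a small but nonzero failure probability, can be illustrated with a minimal Monte Carlo loop. Both distributions and their parameters are assumptions chosen for illustration; a true first-passage analysis would replace the single comparison with a maximum over a time history:

```python
import random

random.seed(1)

# Load-capacity interference sketch: the system fails if the demand D
# exceeds the capacity C. Both normal distributions are illustrative.
N = 200_000
failures = sum(
    1 for _ in range(N)
    if random.gauss(100.0, 10.0) > random.gauss(150.0, 15.0)  # D > C
)
p_failure = failures / N  # small but nonzero, as the report argues
```

A design would then be judged acceptable if `p_failure` falls below a target probability of failure.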

8. Integration of human reliability analysis into the high consequence process

Energy Technology Data Exchange (ETDEWEB)

Houghton, F.K.; Morzinski, J.

1998-12-01

When performing a hazards analysis (HA) for a high consequence process, human error often plays a significant role in the hazards analysis. In order to integrate human error into the hazards analysis, a human reliability analysis (HRA) is performed. Human reliability is the probability that a person will correctly perform a system-required activity in a required time period and will perform no extraneous activity that will affect the correct performance. Even though human error is a very complex subject that can only approximately be addressed in risk assessment, an attempt must be made to estimate the effect of human errors. The HRA provides data that can be incorporated in the hazard analysis event. This paper will discuss the integration of HRA into a HA for the disassembly of a high explosive component. The process was designed to use a retaining fixture to hold the high explosive in place during a rotation of the component. This tool was designed as a redundant safety feature to help prevent a drop of the explosive. This paper will use the retaining fixture to demonstrate the following HRA methodology`s phases. The first phase is to perform a task analysis. The second phase is the identification of the potential human, both cognitive and psychomotor, functions performed by the worker. During the last phase the human errors are quantified. In reality, the HRA process is an iterative process in which the stages overlap and information gathered in one stage may be used to refine a previous stage. The rationale for the decision to use or not use the retaining fixture and the role the HRA played in the decision will be discussed.

9. On the use of uncertainty importance measures in reliability and risk analysis

International Nuclear Information System (INIS)

This paper discusses the use of uncertainty importance measures in reliability and risk analysis. Such measures are used to rank the importance of components (activities) of complex systems. The measures reflect to what degree the uncertainties on the component level influence the uncertainties on the system level. An example of such a measure is the change in the variance of the reliability of the system when ignoring the uncertainties in the component reliability. The measures are traditionally based on a Bayesian perspective as knowledge-based (subjective) probabilities express the epistemic uncertainties about the reliability and risk parameters introduced. In this paper we carry out a rethinking of the rationale for such measures. What information do they provide compared to the traditional importance measures such as the improvement potential and the Birnbaum measure? To discuss these issues we distinguish between two situations: (A) the key quantities of interest are observable quantities such as the occurrence of a system failure and the number of failures and (B) the key quantities of interest are fictional parameters constructed to reflect the aleatory uncertainties. A new type of combined sets of measures are introduced based on an integration of a traditional measure and a related uncertainty importance measure. A simple reliability example is used to illustrate the analysis and findings.
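A variance-based uncertainty importance measure of the kind described (the change in the variance of system reliability when a component's epistemic uncertainty is removed) can be sketched as follows; the two-component series system and the uniform uncertainty bands are hypothetical:

```python
import random

random.seed(0)

# Hypothetical two-component series system whose component reliabilities
# carry epistemic uncertainty, expressed here as uniform distributions.
def system_reliability(p1, p2):
    return p1 * p2  # series: both components must work

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

N = 100_000
samples = [(random.uniform(0.90, 0.99), random.uniform(0.70, 0.99))
           for _ in range(N)]

var_full = variance([system_reliability(p1, p2) for p1, p2 in samples])

# Uncertainty importance of component 2: the variance reduction obtained
# by fixing its reliability at the mean of its uncertainty band (0.845).
var_fixed2 = variance([system_reliability(p1, 0.845) for p1, _ in samples])
importance_2 = var_full - var_fixed2  # large here, since band 2 is wide
```

Ranking components by this reduction indicates where reducing epistemic uncertainty (for example, by testing) pays off most at the system level.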

10. Nuclear power plant emergency core cooling system reliability analysis - reliability estimation for small LOCA

International Nuclear Information System (INIS)

System performance reliability depends not only on the system's own availability but also on the requirements placed on the system. This paper shows a way of estimating system performance reliability for an NPP Emergency Core Cooling System in the case of a small LOCA. The event scenario and the requirements for the systems are determined with an event tree. Finally, the ECCS reliability estimation is performed on the basis of the system requirements. (author)
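An event-tree estimate of this kind can be illustrated with a toy small-LOCA sketch; the branch structure and all probabilities below are invented for illustration and do not come from the paper:

```python
# Toy small-LOCA event tree (structure and numbers invented): after the
# initiating event, high-pressure injection (HPI) is demanded; if it fails,
# depressurization followed by low-pressure injection (LPI) is the backup
# path. Core damage requires HPI failure AND backup-path failure.
p_hpi_fail = 1.0e-3
p_depress_fail = 5.0e-3
p_lpi_fail = 2.0e-3

p_backup_fail = p_depress_fail + (1 - p_depress_fail) * p_lpi_fail
p_core_damage = p_hpi_fail * p_backup_fail
eccs_reliability = 1.0 - p_core_damage
```

Each event-tree branch contributes the product of its demand failure probabilities, and the ECCS reliability is one minus the summed core-damage frequency per demand.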

11. Structural reliability codes for probabilistic design : - a debate paper based on elementary reliability and decision analysis concepts

DEFF Research Database (Denmark)

Ditlevsen, Ove Dalager

1997-01-01

For the practical applications of probabilistic reliability methods it is important to make decisions about the target reliability level. Presently calibration to existing design practice seems to be the only practicable and politically reasonable solution to this decision problem. However, several difficulties of ambiguity and definition show up when attempting to make the transition from a given authorized partial safety factor code to a superior probabilistic code. For any chosen probabilistic code format there is a considerable variation of the reliability level over the set of structures defined by the partial safety factor code. Thus, there is a problem about which of these levels to choose as target level. Moreover, if two different probabilistic code formats are considered, then a constant reliability level in the one code does not go together with a constant reliability level in the other code. The last problem must be accepted as the state of the matter and it seems that it can only be solved pragmatically by standardizing a specific code format as reference format for constant reliability. By an example this paper illustrates that a presently valid partial safety factor code imposes a quite considerable variation of the reliability measure as defined by a specific probabilistic code format. Decision theoretical principles are applied to get guidance about which of these different reliability levels of existing practice to choose as target reliability level. Moreover, it is shown that the chosen probabilistic code format has not only strong influence on the formal reliability measure, but also on the formal cost of failure to be associated if a design made to the target reliability level is considered to be optimal. In fact, the formal cost of failure can be different by several orders of size for two different, but by and large equally justifiable probabilistic code formats. 
Thus, the consequence is that a code format based on decision theoretical concepts and formulated as an extension of a probabilistic code format must specify formal values to be used as costs of failure. A principle of prudence is suggested for guiding the choice of the reference probabilistic code format for constant reliability. In the author's opinion there is an urgent need for establishing a standard probabilistic reliability code. This paper presents some considerations that may be debatable, but nevertheless point at a systematic way to choose such a code. Keywords: Code calibration, Structural reliability, Decision analysis, Reliability index, Partial safety factors, Target reliability.

12. Analysis of Mobile Phone Reliability Based on Active Disassembly Using Smart Materials

OpenAIRE

Zhifeng Liu; Liuxian Zhao; Jun Zhong; Xinyu Li; Huanbo Cheng

2011-01-01

When shape memory materials are used for the active disassembly of actual electronic products, the elastic modulus of these materials varies considerably with temperature. The main difference in environmental reliability between active disassembly products and common products is therefore the impact of collision and vibration at different temperatures. Establishing a three-dimensional analysis model, the impact of collision and vibration on the mobile phone shell is compared...

13. Reliability analysis of nuclear power plant bus systems arrangement based on GO methodology

International Nuclear Information System (INIS)

In this paper the GO method is used for analyzing the reliability of the NPP bus system arrangement. Focusing on the typical one-and-a-half-breaker bus system, the detailed GO chart of a typical work/failure(maintenance) two- or three-state-component system is given, and qualitative and quantitative analyses are conducted. Compared with the FTA results, the correctness and advantages of the GO methodology are verified. (authors)

14. Rent control: a comparative analysis

Scientific Electronic Library Online (English)

Maass, S.

15. RENT CONTROL: A COMPARATIVE ANALYSIS

Directory of Open Access Journals (Sweden)

Sue-Mari Maass

2012-11-01

16. Inclusion of fatigue effects in human reliability analysis

Energy Technology Data Exchange (ETDEWEB)

Griffith, Candice D. [Vanderbilt University, Nashville, TN (United States); Mahadevan, Sankaran, E-mail: sankaran.mahadevan@vanderbilt.edu [Vanderbilt University, Nashville, TN (United States)

2011-11-15

The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. This creates difficulties in including fatigue effects in probabilistic risk assessments (PRA) of complex engineering systems that seek to include human reliability analysis (HRA). Thus the objectives of this paper are to discuss (1) the importance of the effects of fatigue on performance, (2) the difficulties associated with defining and measuring fatigue, (3) the current status of inclusion of fatigue in HRA methods, and (4) the future directions and challenges for the inclusion of fatigue, specifically sleep deprivation, in HRA. - Highlights: • We highlight the need for fatigue and sleep deprivation effects on performance to be included in human reliability analysis (HRA) methods. Current methods do not explicitly include sleep deprivation effects. • We discuss the difficulties in defining and measuring fatigue. • We review sleep deprivation research, and discuss the limitations and future needs of the current HRA methods.

17. Inclusion of fatigue effects in human reliability analysis

International Nuclear Information System (INIS)

The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. This creates difficulties in including fatigue effects in probabilistic risk assessments (PRA) of complex engineering systems that seek to include human reliability analysis (HRA). Thus the objectives of this paper are to discuss (1) the importance of the effects of fatigue on performance, (2) the difficulties associated with defining and measuring fatigue, (3) the current status of inclusion of fatigue in HRA methods, and (4) the future directions and challenges for the inclusion of fatigue, specifically sleep deprivation, in HRA. - Highlights: • We highlight the need for fatigue and sleep deprivation effects on performance to be included in human reliability analysis (HRA) methods. Current methods do not explicitly include sleep deprivation effects. • We discuss the difficulties in defining and measuring fatigue. • We review sleep deprivation research, and discuss the limitations and future needs of the current HRA methods.

18. Reliability analysis of single crystal NiAl turbine blades

Science.gov (United States)

Salem, Jonathan; Noebe, Ronald; Wheeler, Donald R.; Holland, Fred; Palko, Joseph; Duffy, Stephen; Wright, P. Kennard

1995-01-01

As part of a co-operative agreement with General Electric Aircraft Engines (GEAE), NASA LeRC is modifying and validating the Ceramic Analysis and Reliability Evaluation of Structures algorithm for use in design of components made of high strength NiAl based intermetallic materials. NiAl single crystal alloys are being actively investigated by GEAE as a replacement for Ni-based single crystal superalloys for use in high pressure turbine blades and vanes. The driving force for this research lies in the numerous property advantages offered by NiAl alloys over their superalloy counterparts. These include a reduction of density by as much as a third without significantly sacrificing strength, higher melting point, greater thermal conductivity, better oxidation resistance, and a better response to thermal barrier coatings. The current drawback to high strength NiAl single crystals is their limited ductility. Consequently, significant efforts including the work agreement with GEAE are underway to develop testing and design methodologies for these materials. The approach to validation and component analysis involves the following steps: determination of the statistical nature and source of fracture in a high strength, NiAl single crystal turbine blade material; measurement of the failure strength envelope of the material; coding of statistically based reliability models; verification of the code and model; and modeling of turbine blades and vanes for rig testing.

19. Tailoring a Human Reliability Analysis to Your Industry Needs

Science.gov (United States)

DeMott, D. L.

2016-01-01

Companies at risk of accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element to developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. 
If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (e.g., railroads or airlines) would have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.

20. Model-based human reliability analysis: prospects and requirements

International Nuclear Information System (INIS)

Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for the need for, and a basis for developing requirements for, the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; owing to its complexity and input requirements, it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and the computer implementation.

1. Current Human Reliability Analysis Methods Applied to Computerized Procedures

Energy Technology Data Exchange (ETDEWEB)

Ronald L. Boring

2012-06-01

Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

2. Comparative Study and Analysis of Variability Tools

OpenAIRE

Bhumula, Mahendra Reddy

2013-01-01

The dissertation provides a comparative analysis of a number of variability tools currently in use. It serves as a catalogue for practitioners interested in the topic. We compare a range of modelling, configuring, and management tools for product line engineering. The tools surveyed are compared against the following criteria: functional, non-functional, governance, and technical aspects. The outcome of the analysis is provided in tabular format.

3. Transient Reliability Analysis Capability Developed for CARES/Life

Science.gov (United States)

Nemeth, Noel N.

2001-01-01

4. CARES - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

Science.gov (United States)

Nemeth, N. N.

1994-01-01

The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES calculates the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings. The program uses results from a commercial structural analysis program (MSC/NASTRAN or ANSYS) to evaluate component reliability due to inherent surface and/or volume type flaws. A multiple material capability allows the finite element model reliability to be a function of many different ceramic material statistical characterizations. The reliability analysis uses element stress, temperature, area, and volume output, which are obtained from two dimensional shell and three dimensional solid isoparametric or axisymmetric finite elements. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multi-axial stress states on material strength. The shear-sensitive Batdorf model requires a user-selected flaw geometry and a mixed-mode fracture criterion. Flaws intersecting the surface and imperfections embedded in the volume can be modeled. The total strain energy release rate theory is used as a mixed mode fracture criterion for co-planar crack extension. 
Out-of-plane crack extension criteria are approximated by a simple equation with a semi-empirical constant that can model the maximum tangential stress theory, the minimum strain energy density criterion, the maximum strain energy release rate theory, or experimental results. For comparison, Griffith's maximum tensile stress theory, the principle of independent action, and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. A more limited program, CARES/PC (COSMIC number LEW-15248) runs on a personal computer and estimates ceramic material properties from three-point bend bar data. CARES/PC does not perform fast fracture reliability estimation. CARES is written in FORTRAN 77 and has been implemented on DEC VAX series computers under VMS and on IBM 370 series computers under VM/CMS. On a VAX, CARES requires 10Mb of main memory. Five MSC/NASTRAN example problems and two ANSYS example problems are provided. There are two versions of CARES supplied on the distribution tape, CARES1 and CARES2. CARES2 contains sub-elements and CARES1 does not. CARES is available on a 9-track 1600 BPI VAX FILES-11 format magnetic tape (standard media) or in VAX BACKUP format on a TK50 tape cartridge. The program requires a FORTRAN 77 compiler and about 12Mb memory. CARES was developed in 1990. DEC, VAX and VMS are trademarks of Digital Equipment Corporation. IBM 370 is a trademark of International Business Machines. MSC/NASTRAN is a trademark of MacNeal-Schwendler Corporation. ANSYS is a trademark of Swanson Analysis Systems, Inc.
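The two-parameter Weibull cumulative distribution function at the heart of the CARES fast-fracture calculation can be sketched for the simple case of uniaxial tension; the Weibull modulus and scale parameter below are illustrative values, not parameters estimated from real specimen data:

```python
from math import exp

# Two-parameter Weibull fast-fracture model under uniaxial tension:
#   Pf = 1 - exp(-V * (sigma / sigma0)**m)
# m (Weibull modulus) and sigma0 (scale, in MPa) are illustrative only.
def weibull_failure_probability(sigma, sigma0=400.0, m=10.0, volume=1.0):
    return 1.0 - exp(-volume * (sigma / sigma0) ** m)

# Weakest-link size effect: at the same stress, a larger stressed volume
# yields a higher failure probability, since more flaws are sampled.
pf_small = weibull_failure_probability(300.0, volume=1.0)
pf_large = weibull_failure_probability(300.0, volume=10.0)
```

CARES extends this idea to multi-axial stress states via the Batdorf model and integrates the risk over element areas and volumes from the finite element output.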

5. Bayesian networks inference algorithm to implement Dempster Shafer theory in reliability analysis

International Nuclear Information System (INIS)

This paper deals with the use of Bayesian networks to compute system reliability. The reliability analysis problem is described and the usual methods for quantitative reliability analysis are presented within a case study. Some drawbacks that justify the use of Bayesian networks are identified. The basic concepts of the Bayesian networks application to reliability analysis are introduced and a model to compute the reliability for the case study is presented. Dempster Shafer theory to treat epistemic uncertainty in reliability analysis is then discussed and its basic concepts that can be applied thanks to the Bayesian network inference algorithm are introduced. Finally, it is shown, with a numerical example, how Bayesian networks' inference algorithms compute complex system reliability and what the Dempster Shafer theory can provide to reliability analysis.
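
As a minimal sketch of how a Bayesian network computes system reliability by inference over a joint distribution (not the model or case study from the paper), consider a two-component series system with a deterministic node for the system state; all probabilities below are illustrative assumptions.

```python
from itertools import product

# Nodes: A, B (component states) and S (system state). For a series
# system, S is a deterministic node: S works iff A AND B work.
# Component reliabilities are assumed, illustrative values.
p_work = {"A": 0.95, "B": 0.90}

def joint(a, b, s):
    """Joint probability P(A=a, B=b, S=s) with S = A AND B."""
    p = (p_work["A"] if a else 1 - p_work["A"]) * \
        (p_work["B"] if b else 1 - p_work["B"])
    return p if s == (a and b) else 0.0

# Inference by enumeration: marginalize the joint over A and B
# to obtain the system reliability P(S = working).
reliability = sum(joint(a, b, True)
                  for a, b in product([True, False], repeat=2))
# For a series system this reduces to P(A) * P(B) = 0.855.
```

Real inference engines replace this brute-force enumeration with algorithms such as junction-tree propagation, which is what makes larger system models tractable.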

6. Productivity enhancement and reliability through AutoAnalysis

Science.gov (United States)

Garetto, Anthony; Rademacher, Thomas; Schulz, Kristian

2015-09-01

The decreasing size and increasing complexity of photomask features, driven by the push to ever smaller technology nodes, place more and more challenges on the mask house, particularly in terms of yield management and cost reduction. Especially challenging for mask shops is the inspection, repair and review cycle, which requires more time and skill from operators due to the higher number of masks required per technology node and larger nuisance defect counts. While the measurement throughput of the AIMS™ platform has been improved in order to keep pace with these trends, the analysis of aerial images has seen little advancement and remains largely a manual process. This manual analysis of aerial images is time consuming, dependent on the skill level of the operator, and contributes significantly to the overall mask manufacturing process flow. AutoAnalysis, the first application available for the FAVOR® platform, offers a solution to these problems by providing fully automated analysis of AIMS™ aerial images. Direct communication with the AIMS™ system allows automated data transfer and analysis in parallel with the measurements. User-defined report templates allow the relevant data to be output in a manner that can be tailored to various internal needs and to support the requests of your customers. Productivity is significantly improved due to the fast analysis, operator time is saved and made available for other tasks, and reliability is no longer a concern, as the most defective region is always and consistently captured. In this paper the concept and approach of AutoAnalysis are presented, as well as an update on the status of the project. The benefits arising from the use of AutoAnalysis are discussed in more detail, and a study is performed in order to demonstrate them.

7. Reliability Engineering

International Nuclear Information System (INIS)

This book is about reliability engineering. It covers the definition and importance of reliability; the development of reliability engineering; the failure rate and the failure probability density function, along with their types; CFR and the exponential distribution; IFR and the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation for the exponential, normal, and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by FTA.
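
The CFR material mentioned above rests on the exponential distribution's reliability function, R(t) = exp(-λt), with MTBF = 1/λ. A minimal sketch, with an assumed failure rate:

```python
import math

# CFR (constant failure rate) model from the exponential distribution:
# R(t) = exp(-lambda * t), with MTBF = 1 / lambda.
# The failure rate below is an assumed, illustrative value.
failure_rate = 1e-4            # failures per hour (assumed)

def reliability(t_hours):
    """Probability of surviving t_hours without failure."""
    return math.exp(-failure_rate * t_hours)

mtbf = 1.0 / failure_rate      # mean time between failures: 10,000 h
r_1000 = reliability(1000.0)   # survival probability over 1000 h (~0.905)
```

IFR models such as the Weibull distribution with shape parameter greater than 1 generalize this by letting the hazard rate grow with time instead of staying constant.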

8. Flow cytometry reliability analysis and variations in sugarcane DNA content.

Science.gov (United States)

Oliveira, A C L; Pasqual, M; Bruzi, A T; Pio, L A S; Mendonça, P M S; Soares, J D R

2015-01-01

The aim of this study was to evaluate the reliability of flow cytometry analysis and the use of this technique to differentiate species and varieties of sugarcane (Saccharum spp) according to their relative DNA content. We analyzed 16 varieties and three species belonging to this genus. To determine a reliable protocol, we evaluated three extraction buffers (LB01, Marie, and Tris·MgCl2), the presence and absence of RNase, six doses of propidium iodide (10, 15, 20, 25, and 30 µg), four periods of exposure to propidium iodide (0, 5, 10, and 20 min), and seven external reference standards (peas, beans, corn, radish, rye, soybean, and tomato) with reference to the coefficient of variation and the DNA content. For statistical analyses, we used the programs Sisvar(®) and Xlstat(®). We recommend using the Marie extraction buffer and at least 15 µg propidium iodide. The samples should not be analyzed immediately after the addition of propidium iodide. The use of RNase is optional, and tomato should be used as an external reference standard. The results show that sugarcane has a variable genome size (8.42 to 12.12 pg/2C) and the individuals analyzed could be separated into four groups according to their DNA content with relative equality in the genome sizes of the commercial varieties. PMID:26125928

9. Application of human reliability analysis methodology of second generation

International Nuclear Information System (INIS)

Human reliability analysis (HRA) is a very important part of probabilistic safety analysis. The main contribution of HRA in nuclear power plants is the identification and characterization of the factors that combine to produce an error in human tasks, both under normal operating conditions and after an abnormal event. In addition, the analysis of various accidents throughout history has found the human component to be a contributing cause. The need to understand the forms and probability of human error led, in the 1960s, to the collection of generic data that resulted in the development of the first generation of HRA methodologies. Methods were subsequently developed to include additional performance shaping factors, and the interactions between them, in their models. Thus, by the mid-1990s, what are considered the second-generation methodologies emerged. Among these is A Technique for Human Event Analysis (ATHEANA). The application of this method to a generic human failure event is interesting because its modeling includes errors of commission, the quantification of deviations from the nominal scenario considered in the accident sequence of the probabilistic safety analysis and, for this event, the evaluation of dependent actions. That is, the generic human failure event first required independent evaluation of the two related human failure events. The gathering of the new human error probabilities thus involves quantification of the nominal scenario and of the significant deviation cases considered for their potential impact on the analyzed human failure events. As in probabilistic safety analysis, the analysis of the sequences identified the more specific factors with the highest contribution to the human error probabilities. (Author)

10. Wind energy Computerized Maintenance Management System (CMMS) : data collection recommendations for reliability analysis.

Energy Technology Data Exchange (ETDEWEB)

Peters, Valerie A.; Ogilvie, Alistair; Veers, Paul S.

2009-09-01

This report addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. Written by Sandia National Laboratories, it is intended to help the reader develop a basic understanding of what data are needed from a Computerized Maintenance Management System (CMMS) and other data systems for reliability analysis. The report provides: (1) a list of the data needed to support reliability and availability analysis; and (2) specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment and a wider variety of analysis and reporting needs.

11. Towards a non-wired simulator for reliability analysis

International Nuclear Information System (INIS)

This paper outlines the objectives and preliminary results of a research programme aiming to increase the advantages of electronic simulators used for reliability studies of complex systems. Research work has resulted in the design of a device based on an electronic simulator capable of carrying out all types of simulation without the drawback of wiring, as is currently the case. Its performance levels as regards speed are comparable to those of wired simulators and this is its main advantage over studies made on a computer. In addition, the simulator is connected to a computer which greatly increases system flexibility and user-friendliness. The first results obtained illustrate what characteristics can be expected of such a system, both as regards the anticipated computation time and the extended processing capabilities (such as the study of common cause failures). (author)

12. A Monte Carlo simulation method for system reliability analysis

International Nuclear Information System (INIS)

The bases of Monte Carlo simulation are briefly described. Details of the application of Excel software to Monte Carlo simulation are shown with an analysis example. A three-component system is taken up, and the analysis is performed with consideration of repair actions. Finally, it is shown that loop structures can be solved by the Monte Carlo simulation method as realized in Excel. The simulation results are compared with the analytical calculation results and good agreement is confirmed. (author)
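
A minimal, spreadsheet-free sketch of this kind of Monte Carlo system analysis (the series-parallel structure and component reliabilities are assumed for illustration, and repair is not modeled):

```python
import random

random.seed(42)

# Assumed structure: component 1 in series with the parallel pair {2, 3}.
# Each trial draws independent Bernoulli up/down states per component.
p = [0.9, 0.8, 0.7]          # assumed component reliabilities
N = 200_000                  # number of Monte Carlo trials

def system_works(up):
    """Structure function: 1 in series with (2 parallel 3)."""
    return up[0] and (up[1] or up[2])

hits = sum(
    system_works([random.random() < pi for pi in p])
    for _ in range(N)
)
estimate = hits / N

# Analytical check for this structure: p1 * (1 - (1-p2)(1-p3)) = 0.846.
analytic = p[0] * (1 - (1 - p[1]) * (1 - p[2]))
```

The strength of the simulation approach noted in the abstract is that the same trial loop still works when the structure contains loops or repair transitions for which no simple closed form exists.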

13. Reliability analysis of the service water system of Angra 1 reactor

International Nuclear Information System (INIS)

A reliability analysis of the service water system is performed for use in evaluating the unreliability of the Component Cooling System (SRC) in large loss-of-coolant accidents in nuclear power plants. (E.G.)

14. Reliability analysis of the service water system of Angra 1 reactor

International Nuclear Information System (INIS)

A reliability analysis of the service water system is performed for use in evaluating the unreliability of the component cooling system (SRC) in large loss-of-coolant accidents in nuclear power plants. (E.G.)

15. An Application of Graph Theory in Markov Chains Reliability Analysis

Directory of Open Access Journals (Sweden)

Pavel Skalny

2014-01-01

Full Text Available The paper presents a reliability analysis carried out for an industrial company. The aim of the paper is to present the use of discrete-time Markov chains and the network-flow approach. Discrete Markov chains, a well-known method of stochastic modelling, describe the issue: they model the transitions between the states of the process, and the method is suitable for the many systems occurring in practice where distinct states can easily be identified. The industrial process is described as a graph network, in which the maximal flow corresponds to the production. The Ford-Fulkerson algorithm is used to quantify the production for each state. The combination of both methods is utilized to quantify the expected amount of manufactured products for a given time period.
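
The combination described above, a max-flow computation per process state weighted by the Markov state probabilities, can be sketched as follows. The two-state chain, its transition rates, and the capacity networks are illustrative assumptions, not the paper's industrial data; the stationary distribution of a two-state chain is solved in closed form rather than by matrix methods.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp (BFS Ford-Fulkerson); `cap` is a dict of dicts."""
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():                 # add zero-capacity reverse edges
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}                    # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                       # recover path and bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Two assumed process states: in the degraded state one line's capacity drops.
nets = {
    "full":     {"s": {"a": 5, "b": 4}, "a": {"t": 5}, "b": {"t": 4}, "t": {}},
    "degraded": {"s": {"a": 5, "b": 1}, "a": {"t": 5}, "b": {"t": 4}, "t": {}},
}
flows = {state: max_flow(net, "s", "t") for state, net in nets.items()}

# Two-state chain: switch rates p (full -> degraded) and q (degraded -> full);
# its stationary distribution is pi_full = q / (p + q) in closed form.
p, q = 0.1, 0.4
pi = {"full": q / (p + q), "degraded": p / (p + q)}
expected_production = sum(pi[s] * flows[s] for s in flows)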

16. Time-dependent reliability analysis and condition assessment of structures

Energy Technology Data Exchange (ETDEWEB)

Ellingwood, B.R. [Johns Hopkins Univ., Baltimore, MD (United States)

1997-01-01

Structures generally play a passive role in assurance of safety in nuclear plant operation, but are important if the plant is to withstand the effect of extreme environmental or abnormal events. Relative to mechanical and electrical components, structural systems and components would be difficult and costly to replace. While the performance of steel or reinforced concrete structures in service generally has been very good, their strengths may deteriorate during an extended service life as a result of changes brought on by an aggressive environment, excessive loading, or accidental loading. Quantitative tools for condition assessment of aging structures can be developed using time-dependent structural reliability analysis methods. Such methods provide a framework for addressing the uncertainties attendant to aging in the decision process.
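
A minimal sketch of the time-dependent reliability idea, aging strength confronted with recurring extreme loads, can illustrate the framework. The linear decay model, the lognormal load distribution, and every parameter value below are assumptions for illustration, not the paper's models.

```python
import random

random.seed(0)

# Assumed degradation model: strength decays linearly, s(t) = s0*(1 - k*t);
# each year one extreme load L ~ lognormal(mu, sigma) is applied, and the
# structure fails in the first year that L exceeds the remaining strength.
s0, k, years, n_trials = 10.0, 0.01, 40, 20_000
mu, sigma = 1.2, 0.3          # assumed lognormal load parameters

failures = 0
for _ in range(n_trials):
    for t in range(1, years + 1):
        if random.lognormvariate(mu, sigma) > s0 * (1 - k * t):
            failures += 1
            break

pf_40yr = failures / n_trials   # estimated 40-year failure probability
```

The point of the exercise is the trend, not the numbers: because strength decreases while the load distribution stays fixed, the annual failure probability grows with service life, which is exactly the uncertainty a condition assessment must fold into its decision process.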

17. Steps of the reliability analysis of NPP-piping

International Nuclear Information System (INIS)

The various steps of the reliability analysis of nuclear power plant piping are: definition and classification of safety-related leakages, determination of damage mechanism, definition of leak classes, subdivision of the system, definition of relevant elements with respect to the damage mechanisms acting and their population by using general as well as special operating experience and by differentiating in pipe elements and connections, determination of the plants and systems which are relevant for the evaluation of operating experience, determination of leak areas and their frequencies by referring to the leak-related locations, and determination of the frequency for different leak areas in the systems under investigation. Examples are given. 4 figs., 3 tabs

18. Time-dependent reliability analysis and condition assessment of structures

International Nuclear Information System (INIS)

Structures generally play a passive role in assurance of safety in nuclear plant operation, but are important if the plant is to withstand the effect of extreme environmental or abnormal events. Relative to mechanical and electrical components, structural systems and components would be difficult and costly to replace. While the performance of steel or reinforced concrete structures in service generally has been very good, their strengths may deteriorate during an extended service life as a result of changes brought on by an aggressive environment, excessive loading, or accidental loading. Quantitative tools for condition assessment of aging structures can be developed using time-dependent structural reliability analysis methods. Such methods provide a framework for addressing the uncertainties attendant to aging in the decision process

19. Reliability Analysis For Substation Employing B. F. Technique

OpenAIRE

MOHIT KUMAR, RAM AVTAR JASWAL

2013-01-01

In this paper, the objective is to improve the reliability and overall performance of a rice mill. As complexity increases in a system, reliability evaluation becomes more difficult. A simplified derivation of the symbolic reliability expression for a general system in compact form is therefore helpful. The techniques executed earlier to solve such reliability models are very time consuming and involve very tedious calculations. Therefore, in this study the Boolean function tec...

20. A Fault Analysis based Model for Software Reliability Estimation

OpenAIRE

Garima Chawla,; Santosh Kr Thakur,

2013-01-01

When a software system is designed, the major concern is software quality. The quality of software depends on different factors such as reliability, efficiency, cost, etc. In this paper, we have defined software reliability as the measure of software quality. Different models are available that estimate the reliability of software based on the type of faults, fault density, etc. In this paper, a study of different aspects related to software reliability is presented.

1. Reliability analysis of diesel generators of Wolsung Unit 1

International Nuclear Information System (INIS)

As a maintenance optimization project to improve the safety of Wolsung NPP (Nuclear Power Plant), the reliability of the diesel generators is estimated based on operating experience, and improvement options are suggested. A reliability measure that reflects availability is suggested for estimating the reliability of standby safety systems. It is assessed that the reliability of the diesel generators can be much improved if the suggested improvement options are implemented.

2. Suitability review of FMEA and reliability analysis for digital plant protection system and digital engineered safety features actuation system

Energy Technology Data Exchange (ETDEWEB)

Kim, I. S.; Kim, T. K.; Kim, M. C.; Kim, B. S.; Hwang, S. W.; Ryu, K. C. [Hanyang Univ., Seoul (Korea, Republic of)

2000-11-15

Of the many items that should be checked during the review stage of the licensing application for the I and C system of Ulchin units 5 and 6, this report relates to a suitability review of the reliability analysis of the Digital Plant Protection System (DPPS) and the Digital Engineered Safety Features Actuation System (DESFAS). In the reliability analysis performed by the system designer, ABB-CE, fault tree analysis was used as the main method along with Failure Modes and Effects Analysis (FMEA). However, the present regulatory technique does not allow the system reliability analysis and its results to be appropriately evaluated. Hence, this study was carried out focusing on the following four items: development of general review items by which to check the validity of a reliability analysis, and the subsequent review of the suitability of the reliability analysis for the Ulchin 5 and 6 DPPS and DESFAS; development of detailed review items by which to check the validity of an FMEA, and the subsequent review of the suitability of the FMEA for the Ulchin 5 and 6 DPPS and DESFAS; development of detailed review items by which to check the validity of a fault tree analysis, and the subsequent review of the suitability of the fault tree for the Ulchin 5 and 6 DPPS and DESFAS; and an integrated review of the safety and reliability of the Ulchin 5 and 6 DPPS and DESFAS based on the results of the various reviews above and on a reliability comparison between the digital systems and the comparable analog systems, i.e., an analog Plant Protection System (PPS) and an analog Engineered Safety Features Actuation System (ESFAS). According to the review mentioned above, the reliability analysis of the Ulchin 5 and 6 DPPS and DESFAS generally satisfies the review requirements.
However, some shortcomings were identified in our review: the assumed test periods for several pieces of equipment were not properly incorporated in the analysis, and failures of some equipment were not included in the fault tree. Based on these findings, ABB-CE is re-analyzing the system unavailabilities by modifying the fault trees. Hence, the reliability of the digital systems will have to be re-evaluated in an integrated manner following the re-analysis of the fault trees. In conclusion, the generic review method for systems reliability analysis developed in this study shows great potential for use in evaluating the reliability analysis of other safety systems.

3. The Validity and Reliability of a Procedure for Competition Analysis in Swimming Based on Individual Distance Measurements

OpenAIRE

Veiga Fernandez, Santiago; Cala Mejías, Antonio; González Frutos, Cabello; Navarro Cabello, Enrique

2010-01-01

In swimming, competition analyses have frequently been performed according to three segments of the race, equal for all competitors. However, individual distance measurements during the start and turn race segments have scarcely been assessed. The aims of the present study were: 1) to verify the validity and reliability of a 2D-DLT based system for competition analysis in swimming and, 2) to compare it with the commonly used technique. Higher values of accuracy (RMSE=0.05 m) and reliability (CV

4. Reliability Analysis of Bearing Capacity of Large-Diameter Piles under Osterberg Test

OpenAIRE

Lei Nie; Yuan Guo; Lina Xu

2013-01-01

This study gives the reliability analysis of the bearing capacity of large-diameter piles under the Osterberg test. The limit state equation of dimensionless random variables is utilized in the reliability analysis of the vertical bearing capacity of large-diameter piles based on Osterberg loading tests, and the reliability index and the resistance partial coefficient under the current specifications are calculated using the calibration method. The results show: the reliability index of large-diameter piles is ...

5. Models and data requirements for human reliability analysis

International Nuclear Information System (INIS)

It has been widely recognised for many years that the safety of nuclear power generation depends heavily on the human factors related to plant operation. This has been confirmed by the accidents at Three Mile Island and Chernobyl. Both these cases revealed how human actions can defeat engineered safeguards, and the need for special operator training to cover the possibility of unexpected plant conditions. The importance of the human factor also stands out in the analysis of abnormal events and in insights from probabilistic safety assessments (PSAs), which reveal a large proportion of cases having their origin in faulty operator performance. A consultants' meeting, organized jointly by the International Atomic Energy Agency (IAEA) and the International Institute for Applied Systems Analysis (IIASA), was held at IIASA in Laxenburg, Austria, December 7-11, 1987, with the aim of reviewing existing models used in Probabilistic Safety Assessment (PSA) for Human Reliability Analysis (HRA) and of identifying the data required. The report collects both the contributions offered by the members of the Expert Task Force and the findings of the extensive discussions that took place during the meeting. Refs, figs and tabs

6. Brain Tumor Segmentation: A Comparative Analysis

OpenAIRE

2015-01-01

Five different threshold-segmentation-based approaches have been reviewed and compared here to extract the tumor from a set of brain images. This research focuses on the analysis of image segmentation methods; a comparison of five semi-automated methods has been undertaken to evaluate their relative performance in the segmentation of the tumor. Consequently, results are compared on the basis of quantitative and qualitative analysis of the respective methods. The purpose of th...

7. A Comparative Study on Error Analysis

DEFF Research Database (Denmark)

Wu, Xiaoli; Zhang, Chun

2015-01-01

Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made...

8. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

International Nuclear Information System (INIS)

This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study

9. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

Energy Technology Data Exchange (ETDEWEB)

Bell, B.J.; Swain, A.D.

1983-05-01

This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study.

10. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

Science.gov (United States)

Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.

2010-01-01

Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on the analysis leading to the Ground Systems Preliminary Design Review milestone.

11. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

International Nuclear Information System (INIS)

In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

12. Comparing Between Maximum Likelihood and Least Square Estimators for Gompertz Software Reliability Model

OpenAIRE

Lutfiah Ismail Al turk

2014-01-01

Software reliability models (SRMs) are very important for estimating and predicting software reliability in the testing/debugging phase. The contributions of this paper are as follows. First, a historical review of the Gompertz SRM is given. Based on several software failure data, the parameters of the Gompertz software reliability model are estimated using two estimation methods, the traditional maximum likelihood and the least square. The methods of estimation are evaluated u...
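
A hedged sketch of least-squares fitting for the Gompertz SRM, whose mean value function is m(t) = a·b^(c^t): taking logs gives ln m(t) = ln a + (ln b)·c^t, which is linear in (ln a, ln b) for a fixed c, so c can be grid-searched. The grid-search-plus-linear-regression scheme and all parameter values are assumptions for illustration (the paper's own estimation procedures may differ), exercised here on noiseless synthetic data.

```python
import math

def fit_gompertz(ts, ms):
    """Least-squares fit of m(t) = a * b**(c**t) to cumulative fault
    counts: grid-search c in (0, 1), solve the inner linear regression
    ln m = ln a + ln b * c**t in closed form, keep the smallest SSE."""
    best = None
    for step in range(1, 100):
        c = step / 100.0
        xs = [c ** t for t in ts]
        ys = [math.log(m) for m in ms]
        n = len(ts)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # ln b
        inter = (sy - slope * sx) / n                       # ln a
        sse = sum((y - (inter + slope * x)) ** 2
                  for x, y in zip(xs, ys))
        if best is None or sse < best[0]:
            best = (sse, math.exp(inter), math.exp(slope), c)
    _, a, b, c = best
    return a, b, c

# Noiseless synthetic data from known, assumed parameters.
a0, b0, c0 = 100.0, 0.05, 0.8
ts = list(range(1, 16))
ms = [a0 * b0 ** (c0 ** t) for t in ts]
a, b, c = fit_gompertz(ts, ms)
```

On clean data the grid search recovers the generating parameters; on real failure data the comparison with maximum likelihood, as studied in the paper, becomes meaningful because noise and model misfit make the two estimators disagree.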

13. The reliability of mercury analysis in environmental materials

International Nuclear Information System (INIS)

Mercury occurs in nature in its native elemental as well as in different mineral forms. It has been mined for centuries and is used in many branches of industry, agriculture and medicine. Mercury is very toxic to man and reports of poisoning due to the presence of the element in fish and shellfish caught at Minamata and Niigata, Japan have led not only to local investigations but to multi-national research into the sources and the levels of mercury in the environment. The concentrations at which the element has to be determined in these studies are extremely small, usually of the order of a few parts in 10⁹ parts of environmental material. Few analytical techniques provide the required sensitivity for analysis at such low concentrations, and only two are normally used for mercury: neutron activation analysis and atomic absorption photometry. They are also the most convenient end points of various separation schemes for different organic mercury compounds. Mercury analysis at the ppb-level is beset with many problems: volatility of the metal and its compounds, impurity of reagents, interference by other elements and many other analytical difficulties may influence the results. To be able to draw valid conclusions from the analyses it is necessary to know the reliability attached to the values obtained. To assist laboratories in the evaluation of their analytical performance, the International Atomic Energy Agency through its own laboratory at Seibersdorf already organised in 1967 an intercomparison of mercury analysis in flour. Based on the results obtained at that time, a whole series of intercomparisons of mercury determinations in nine different environmental materials was undertaken in 1971. The materials investigated included corn and wheat flour, spray-dried animal blood serum, fish solubles, milk powder, saw dust, cellulose, lacquer paint and coloric material

14. Failure Analysis towards Reliable Performance of Aero-Engines

Directory of Open Access Journals (Sweden)

T. Jayakumar

1999-10-01

15. Reliability and risk analysis data base development: an historical perspective

International Nuclear Information System (INIS)

Collection of empirical data and development of databases for use in predicting the probability of future events has a long history. Dating back at least to the 17th century, safe-passage and mortality events were collected and analyzed to uncover prospective underlying classes and associated class attributes. Tabulations of these classes and attributes formed the underwriting basis for the fledgling insurance industry. Much earlier, master masons and architects used design rules of thumb to capture the experience of the ages and thereby produce structures of incredible longevity and reliability (Antona, E., Fragola, J. and Galvagni, R., Risk based decision analysis in design. Fourth SRA Europe Conference Proceedings, Rome, Italy, 18-20 October 1993). These rules served so well in producing robust designs that it was not until almost the 19th century that the analysis (Charlton, T.M., A History of Theory of Structures in the 19th Century, Cambridge University Press, Cambridge, UK, 1982) of masonry voussoir arches, begun by Galileo some two centuries earlier (Galilei, G., Discorsi e dimostrazioni matematiche intorno a due nuove scienze (Discourses and Mathematical Demonstrations Concerning Two New Sciences), Leiden, The Netherlands, 1638), was placed on a sound scientific basis. Still, with the introduction of new materials (such as wrought iron and steel) and the lack of theoretical knowledge and computational facilities, approximate methods of structural design abounded well into the second half of the 20th century. To this day structural designers account for material variations and gaps in theoretical knowledge by employing factors of safety (Benvenuto, E., An Introduction to the History of Structural Mechanics, Part II: Vaulted Structures and Elastic Systems, Springer-Verlag, NY, 1991) or codes of practice (ASME Boiler and Pressure Vessel Code, ASME, New York) originally developed in the 19th century (Antona, E., Fragola, J. and Galvagni, R., Risk based decision analysis in design. Fourth SRA Europe Conference Proceedings, Rome, Italy, 18-20 October 1993). These factors, although they continue to be heuristically based, attempt to account for uncertainties in the design environment (e.g., the load spectra) and residual materials defects (Fragola, J.R. et al., Investigation of the risk implications of space shuttle solid rocket booster chamber pressure excursions. SAIC Document No. SAIC/NY 95-01-10, New York, NY). Although the approaches may appear different, at least at first glance, the intention in both the insurance and design arenas was to establish an 'infrastructure of confidence' to enable rational decision making for future endeavours. Maturity in the design process of conventional structures such as bridges, buildings, boilers, and highways has led to a loss of recognition of the role that robustness plays in qualifying these designs against their normal failure environment. So routinely do we expect these designs to survive that we tend to think of the individual failures (which do occur on occasion) as isolated 'freak' accidents. Attempts to uncover potential underlying classes and document associated attributes are rare, and even when they are undertaken, 'human error' or 'one-of-a-kind accidents' is often cited as the major cause, which somehow seems to absolve the analyst from the responsibility of further data collection (Levy, M. and Salvadori, M., Why Buildings Fall Down, W.W. Norton and Co., New York, NY, 1992; Pecht, M., Nash, F.R. and Long, J.H., Understanding and solving the real reliability assurance problems. 1995 Proceedings of Annual RAMS Symposium, IEEE, New York, NY, 1995). The confusion has proliferated to the point where legitimate calls for scepticism regarding the scant data resources available (Evans, R.A., Bayes paradox. IEEE Trans. Reliab., R-31 (1982) 321) have given way to cries that some data sources be abandoned altogether (Cushing, M. et al., Comparison of electronics-reliability assessment approaches. IEEE Trans. Reliab., 42 (1993) 542-546; Watson,

16. Reliability analysis based on losses from failure Modelling

OpenAIRE

Dr. Amit Gupta , Renu Garg

2013-01-01

As the cost of software application failures grows and as these failures increasingly impact business performance, software reliability will become progressively more important. Employing effective software reliability engineering techniques to improve product and process reliability is in the industry's best interest as well as a major challenge. As software complexity and software quality are highly related to software reliability, the measurements of software complexity and quality attributes h...

17. The reliability analysis of cutting tools in the HSM processes

OpenAIRE

Lin, W S

2008-01-01

Purpose: This article mainly describes the reliability of cutting tools in high speed turning using a normal distribution model. Design/methodology/approach: A series of experimental tests was done to evaluate the reliability variation of the cutting tools. From the experimental results, the tool wear distribution and the tool life are determined, and the tool life distribution and the reliability function of the cutting tools are derived. Further, the reliability of cutting tools at any time for h...

18. Evaluation of ATHEANA methodology a second generation human reliability analysis

International Nuclear Information System (INIS)

Incidents and accidents at nuclear power plants (NPPs) have been and always will be considered undesired occurrences. Human error (REASON, 1990) is probably the major contributor to serious accidents such as those that occurred at Three Mile Island NPP, Unit 2 (TMI-2), in 1979, and Chernobyl, Unit 4, in 1986, and (AEOD/E95-01, 1995) at other NPPs. Reviews and analyses of those accidents and other near-misses have shown operators performing actions that are not required for the accident response and that, in fact, worsen the plant's condition. An action in which a person does something they are not supposed to do, believing it to be the right thing to do, and thereby leaves the plant in a worse state than if they had done nothing, is called an Error of Commission (EOC). These inappropriate actions are directly affected (NUREG-1624, Rev. 1, 2000) by the off-normal context (i.e., the combination of plant conditions and performance shaping factors) of the event scenario, which virtually forces the operator to fail. Considering that this kind of human intervention can be an important failure mode and a precursor to more serious events, aggravated by the fact that current probabilistic risk assessments (PRAs) do not consider this kind of error, a new Human Reliability Analysis (HRA) methodology (NUREG-1624, Rev. 1, 2000) was developed, called 'A Technique for Human Event Analysis' (ATHEANA). ATHEANA is a multidisciplinary second-generation HRA method that provides an HRA quantification process and a PRA modeling interface that can accommodate and represent human performance in real nuclear power plant accidents. This paper presents this new methodology in order to identify its weak and strong points, to evaluate its advantages, disadvantages, and benefits to safety, and, finally, to analyze its application to the Angra NPP. (author)

19. Psychometric Inferences from a Meta-Analysis of Reliability and Internal Consistency Coefficients

Science.gov (United States)

Botella, Juan; Suero, Manuel; Gambara, Hilda

2010-01-01

A meta-analysis of the reliability of the scores from a specific test, also called reliability generalization, allows the quantitative synthesis of its properties from a set of studies. It is usually assumed that part of the variation in the reliability coefficients is due to some unknown and implicit mechanism that restricts and biases the…

20. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

International Nuclear Information System (INIS)

A high-performance pulse trigger can enhance the performance and stability of the PPS. Because it is necessary to evaluate the reliability of the LTGS pulse trigger, we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

1. Human reliability analysis of the Tehran research reactor using the SPAR-H method

OpenAIRE

Barati Ramin; Setayeshi Saeed

2012-01-01

The purpose of this paper is to cover human reliability analysis of the Tehran research reactor using an appropriate method for the representation of human failure probabilities. In the present work, the technique for human error rate prediction and standardized plant analysis risk-human reliability methods have been utilized to quantify different categories of human errors, applied extensively to nuclear power plants. Human reliability analysis is, indeed, an integral and significant p...

2. Comparative analysis of radionuclide inventory in sediment 1995

International Nuclear Information System (INIS)

In order to test the reliability of methods used in environmental monitoring for radioactive substances, the Bundesanstalt fuer Gewaesserkunde in 1995 again carried out a comparative analysis, ''Radionuclides in sediment'', with correspondingly labelled or conditioned samples. The primary aim of this project, independently of the method used in each instance and the measuring conditions observed, was to establish the extent to which the measuring results of the individual participants deviate from specified supposed values or likeliest contents, and also to evaluate these deviations by means of illustrative quality parameters. In this respect the aim of this comparative analysis differs from that of a so-called inter-laboratory experiment, where the primary objective is to obtain characteristic data for an analytical method (orig./SR)

3. Reliability Analysis and Modeling of ZigBee Networks

Science.gov (United States)

Lin, Cheng-Min

The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services that promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important, because these services stop if the system or network is unreliable. The ZigBee standard defines three kinds of networks: star, tree, and mesh. This paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology, and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star and tree networks, the reliability problem can be solved with a series-system model and the reliability block diagram (RBD) technique. Because mesh networks are more complex than the other two, a division technique is applied: a mesh network is partitioned into several non-reducible series systems and edge-parallel systems, so its reliability is easily obtained as a series-parallel system through the proposed scheme. The numerical results demonstrate that reliability increases for mesh networks as the number of edges in the parallel systems increases, while reliability drops quickly as the numbers of edges and nodes increase, for all three networks. Greater resource usage and more complex object relationships also lower network reliability.
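The series-parallel evaluation described above reduces to two one-line formulas: a series system works only if every block works, and a parallel system fails only if every block fails. A minimal sketch (the per-layer and per-edge reliabilities are invented for illustration, not taken from the paper):

```python
def series(rs):
    # A series system works only if every element works.
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(rs):
    # A parallel system fails only if every element fails.
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical per-layer reliabilities (PHY, MAC, NWK, APL) for a star network:
layers = [0.999, 0.995, 0.99, 0.98]
print(series(layers))

# A mesh path set modeled as three parallel routes, each a series of two links:
route = series([0.97, 0.97])
print(parallel([route, route, route]))
```

Adding routes to the parallel set raises reliability, while lengthening each series chain lowers it, which matches the trend the abstract reports.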

4. Stochastic Petri nets for the reliability analysis of communication network applications with alternate-routing

International Nuclear Information System (INIS)

In this paper, we present a comparative reliability analysis of an application on a corporate B-ISDN network under various alternate-routing protocols. For simple cases, the reliability problem can be cast into fault-tree models and solved rapidly by known methods. For more complex scenarios, state-space (Markov) models are required. However, generating large state-space models can become very labor-intensive and error-prone. We advocate the use of stochastic reward nets (a variant of stochastic Petri nets) for the concise specification, automated generation, and solution of alternate-routing protocols in networks. This paper is written in a tutorial style so as to make it accessible to a large audience
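The state-space approach the record contrasts with fault trees amounts to solving the balance equations of a Markov chain. A toy sketch for a primary/backup route pair, with invented failure and repair rates (this is a generic availability model, not the paper's stochastic reward net):

```python
import numpy as np

# Steady-state availability of a primary/backup route pair as a small
# continuous-time Markov chain. Rates are illustrative assumptions.
lam, mu = 0.01, 0.5          # per-hour failure and repair rates
# States: 0 = both routes up, 1 = one route down, 2 = both down (service lost)
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,          mu,   -mu],
])
# Solve pi @ Q = 0 together with sum(pi) = 1 as a least-squares system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0] + pi[1]   # service survives while at least one route is up
print(pi, availability)
```

For realistic protocol models the chain is generated automatically from a higher-level net description, which is exactly the labor-saving step the paper advocates.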

5. RADYBAN: A tool for reliability analysis of dynamic fault trees through conversion into dynamic Bayesian networks

International Nuclear Information System (INIS)

In this paper, we present RADYBAN (Reliability Analysis with DYnamic BAyesian Networks), a software tool that analyzes a dynamic fault tree by converting it into a dynamic Bayesian network. The tool implements a modular algorithm for automatically translating a dynamic fault tree into the corresponding dynamic Bayesian network and exploits classical inference algorithms for dynamic Bayesian networks to compute reliability measures. After describing the basic features of the tool, we show how it operates on a real-world example and compare the unreliability results it generates with those returned by other methodologies, in order to verify the correctness and consistency of the results obtained

6. Method of reliability allocation based on fault tree analysis and fuzzy math in nuclear power plants

International Nuclear Information System (INIS)

Reliability allocation is a difficult multi-objective optimization problem. It can be applied not only to determine the reliability characteristics of reactor systems, subsystems, and main components, but also to improve the design, operation, and maintenance of nuclear plants. Fuzzy mathematics, one of the most powerful tools for fuzzy optimization, and fault tree analysis, one of the most effective methods of reliability analysis, are applied in this paper to the reliability allocation model to address, respectively, the fuzzy character of some factors and the choice of subsystems. We thus develop a failure rate allocation model on the basis of fault tree analysis and fuzzy mathematics. For the reliability constraint factors, we choose the six most important ones according to practical needs. Selecting subsystems via the top-level fault tree analysis avoids allocating reliability to equipment and components that do not require it. During the allocation process, some factors can be calculated or measured quantitatively, while others can only be assessed qualitatively by expert rating. We therefore adopt fuzzy decision making and dualistic contrast, with the help of fault tree analysis, to realize the reliability allocation. Finally, the example of an emergency diesel generator's reliability allocation is used to illustrate the model and to show that it is simple and applicable. (authors)
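The core allocation step, apportioning a system failure-rate goal among subsystems according to weights, can be sketched without the fuzzy machinery. This is plain proportional (ARINC-style) allocation, not the paper's fuzzy method, and the subsystem names and weights are hypothetical:

```python
# ARINC-style failure-rate allocation: apportion a system failure-rate goal
# among subsystems in proportion to weights (e.g., expert importance scores).
def allocate(lambda_goal, weights):
    total = sum(weights.values())
    return {name: lambda_goal * w / total for name, w in weights.items()}

# Hypothetical weights for an emergency diesel generator's subsystems:
weights = {"engine": 5.0, "generator": 3.0, "control": 2.0}
alloc = allocate(1e-4, weights)   # goal: 1e-4 failures per hour for the system
print(alloc)
```

In the paper's scheme the weights themselves would come from fuzzy evaluation of the six constraint factors rather than being fixed numbers.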

7. Probabilistic Analysis of Aircraft Gas Turbine Disk Life and Reliability

Science.gov (United States)

Melis, Matthew E.; Zaretsky, Erwin V.; August, Richard

1999-01-01

Two series of low cycle fatigue (LCF) test data for two groups of aircraft gas turbine engine compressor disks with different geometries were reanalyzed and compared using Weibull statistics. Both groups of disks were manufactured from titanium (Ti-6Al-4V) alloy. Probable Cause, a probabilistic computer code developed at the NASA Glenn Research Center, was used to predict disk life and reliability. A material-life factor A was determined for titanium (Ti-6Al-4V) alloy based upon the disk fatigue data and successfully applied to predict the life of the disks as a function of speed. A comparison was made with the currently used life prediction method based upon crack growth rate. Applying an endurance limit in the computer code did not significantly affect the predicted lives under engine operating conditions. Predicted failure locations correlate with those experimentally observed in the LCF tests. A reasonable correlation was obtained between the disk lives predicted by the Probable Cause code and by a modified crack growth method. Both methods slightly overpredict life for one disk group and significantly underpredict it for the other.
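Weibull analysis of the kind applied here can be illustrated with a two-parameter fit by median-rank regression, a common hand method for small fatigue-life samples. The life data below are invented for illustration, not the disk test data:

```python
import math

# Two-parameter Weibull fit by median-rank regression (Benard's approximation).
lives = sorted([1.8e5, 2.6e5, 3.1e5, 3.9e5, 4.4e5, 5.7e5])   # cycles to failure
n = len(lives)
xs, ys = [], []
for i, t in enumerate(lives, start=1):
    F = (i - 0.3) / (n + 0.4)                 # Benard median rank
    xs.append(math.log(t))
    ys.append(math.log(-math.log(1.0 - F)))   # linearizes the Weibull CDF

# Least-squares line y = beta*x + c: the slope is the Weibull shape parameter.
xbar = sum(xs) / n
ybar = sum(ys) / n
beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
       sum((x - xbar) ** 2 for x in xs)
eta = math.exp(xbar - ybar / beta)            # characteristic life (63.2% point)
print(beta, eta)
```

A shape parameter above 1 indicates wear-out behavior; comparing beta and eta across the two disk groups is what the Weibull reanalysis amounts to.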

8. Application of Metric-based Software Reliability Analysis to Example Software

International Nuclear Information System (INIS)

The software reliability of the TELLERFAST ATM software is analyzed by using two metric-based software reliability analysis methods: a state transition diagram-based method and a test coverage-based method. The procedures for the software reliability analysis using the two methods and the analysis results are provided in this report. It is found that the two methods are complementary, and further research on combining them so that software reliability analysis benefits from this complementary effect is recommended

9. Operator reliability analysis during NPP small break LOCA

International Nuclear Information System (INIS)

To assess the human factors characteristics of an NPP main control room (MCR) design, the MCR operator reliability during a small break LOCA is analyzed, and some approaches for improving MCR operator reliability are proposed based on the analysis results

10. A comparative analysis of aircraft noise performances

OpenAIRE

Nicola Gualandi; Luca Mantecchini

2009-01-01

This paper presents a comparative analysis of aircraft acoustical performance based on the definition of a noise performance indicator called ENSA (equivalent number of standard aircraft). ENSA methodology is based on the choice of a standard aircraft, then ENSA’s values are obtained by comparing the generic aircraft’s performances with the standard aircraft’s performances. The performance evaluation is performed by analysing for each aircraft the equivalent number of standard aircrafts movem...

11. Comparative analysis of Carnaval II Library

International Nuclear Information System (INIS)

The Carnaval II cross-section library from the French fast reactor calculation system is evaluated in two ways: (1) a comparative analysis of the fast reactor calculation system at IEN (Instituto de Engenharia Nuclear) is carried out using a 'benchmark' model; (2) a comparative analysis with respect to the French system itself is also carried out, using calculations performed with two versions of the French library, SETR-II and CARNAVAL IV, the former predating and the latter postdating the Carnaval II version used by IEN. (Author)

12. Wind turbine reliability : a database and analysis approach.

Energy Technology Data Exchange (ETDEWEB)

Linsday, James (ARES Corporation); Briand, Daniel; Hill, Roger Ray; Stinebaugh, Jennifer A.; Benjamin, Allan S. (ARES Corporation)

2008-02-01

The US wind industry has experienced remarkable growth since the turn of the century. At the same time, the physical size and electrical generation capabilities of wind turbines have also grown remarkably. As the market continues to expand, and as wind generation continues to gain a significant share of the generation portfolio, the reliability of wind turbine technology becomes increasingly important. This report addresses how operations and maintenance costs are related to unreliability, that is, the failures experienced by systems and components. Reliability tools are demonstrated, the data needed to understand and catalog failure events are described, and practical wind turbine reliability models are illustrated, including preliminary results. This report also presents a continuing process for addressing industry requirements, needs, and expectations related to Reliability, Availability, Maintainability, and Safety. A simply stated goal of this process is to better understand and improve the operable reliability of wind turbine installations.

13. Comparative analysis of methods of hardness assessment

OpenAIRE

A. Czarski

2009-01-01

Purpose: The aim of this paper is to show how statistical methods can be utilized for process management. Design/methodology/approach: The research methodology is based on theoretical analysis and empirical research. A practical solution is presented to compare measurement methods of hardness and to estimate capability indices of the measurement system. Findings: Measurement system analysis (MSA), particularly the theory of statistical tests, brings correct results for the analysed case. Resea...

14. An efficient phased mission reliability analysis for autonomous vehicles

International Nuclear Information System (INIS)

Autonomous systems are becoming more commonly used, especially in hazardous situations. Such systems are expected to make their own decisions about future actions when some capabilities degrade due to failures of their subsystems. These decisions are made without human input, so they need to be well informed within a short time, while the situation is analysed and the future consequences of the failure are estimated. Future planning of the mission should take account of the likelihood of mission failure. The reliability analysis for autonomous systems can be performed using the methodologies developed for phased mission analysis, where the causes of failure for each phase in the mission are expressed by fault trees. Unmanned autonomous vehicles (UAVs) are of particular interest in the aeronautical industry, where it is a long-term ambition to operate them routinely in civil airspace. Safety is the main requirement for UAV operation, and the calculation of the failure probability of each phase and of the overall mission is the topic of this paper. When components or subsystems fail or environmental conditions change throughout the mission, these changes can affect the future mission. The newly proposed methodology takes into account the available diagnostics data and is used to predict future capabilities of the UAV in real time. Since the methodology is based on the efficient binary decision diagram (BDD) method, the advice it provides is available quickly enough to support decision making. When failures occur, appropriate actions are required in order to preserve the safety of the autonomous vehicle. The overall decision-making strategy for autonomous vehicles is explained in this paper. Some limitations of the methodology are discussed and further improvements are presented based on experimental results.
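The phase-by-phase computation can be sketched in miniature: if each phase needs a set of components in series and components fail exponentially while in use, mission reliability is the product of phase reliabilities. The rates, phases, and durations below are illustrative assumptions, not UAV data, and real phased-mission analysis must also handle cross-phase dependence that this simple product ignores:

```python
import math

# Phased-mission sketch: each phase needs a set of components in series;
# a component fails exponentially (rate per hour) only while it is in use.
rates = {"nav": 1e-4, "comms": 2e-4, "landing_gear": 5e-4}
phases = [                                      # (duration in hours, components required)
    (0.2, {"nav", "comms", "landing_gear"}),    # take-off
    (3.0, {"nav", "comms"}),                    # cruise
    (0.3, {"nav", "comms", "landing_gear"}),    # landing
]

mission_reliability = 1.0
for duration, needed in phases:
    phase_r = 1.0
    for comp in needed:
        phase_r *= math.exp(-rates[comp] * duration)   # series logic per phase
    mission_reliability *= phase_r
print(mission_reliability)
```

The BDD machinery in the paper exists precisely to evaluate this kind of product efficiently when phase logic is a general fault tree rather than a simple series.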


16. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

Science.gov (United States)

Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

1994-01-01

This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

17. Reliability engineering

International Nuclear Information System (INIS)

This book covers the importance and concept of reliability, the product life cycle and reliability, the historical background of reliability, and the fields in which reliability is applied. It then deals with basic probability distributions, reliability criteria, system reliability functions, component importance, failure models, failure analysis, analysis of repairable systems, optimal maintenance management, life data analysis, and accelerated life testing. Each chapter includes an introduction and an explanation of the relevant theory.

18. Reliability Analysis of Distribution Automation on Different Feeders

Directory of Open Access Journals (Sweden)

V. Krishna Murthy

2011-12-01

Full Text Available Automating a distribution system is an effective means of providing a more reliable and economical system in a fast-growing technological world. This paper delves into automating a system using two-stage restoration (partial automation) and puts forward a feeder automation system, based on a substation automation platform, that can be applied to electrical distribution systems for high economic and technical efficiency. The improvement in reliability when feeder automation is applied to distribution is evaluated. The paper studies three different feeders and identifies the most reliable among them.

19. Reliability Block Diagram (RBD) Analysis of NASA Dryden Flight Research Center (DFRC) Flight Termination System and Power Supply

Science.gov (United States)

Morehouse, Dennis V.

2006-01-01

In order to perform public risk analyses for vehicles containing Flight Termination Systems (FTS), it is necessary for the analyst to know the reliability of each of the components of the FTS. These systems are typically divided into two segments: a transmitter system and associated equipment, typically in a ground station or on a support aircraft, and a receiver system and associated equipment on the target vehicle. This analysis attempts to analyze the reliability of the NASA DFRC flight termination system ground transmitter segment for use in the larger risk analysis, and to compare the results against two established Department of Defense availability standards for such equipment.

20. Automated migration analysis based on cell texture: method & reliability

Directory of Open Access Journals (Sweden)

Chittenden Thomas W

2005-03-01

Full Text Available Abstract Background In this paper, we present and validate a way to automatically measure the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison with manual placement of the leading edge shows complete equivalence of automated vs. manual leading edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.

1. Reliability analysis based on losses from failure Modelling

Directory of Open Access Journals (Sweden)

Dr. Amit Gupta , Renu Garg

2013-06-01

Full Text Available As the cost of software application failures grows and as these failures increasingly impact business performance, software reliability will become progressively more important. Employing effective software reliability engineering techniques to improve product and process reliability is in the industry's best interest as well as a major challenge. As software complexity and software quality are highly related to software reliability, measurements of software complexity and quality attributes have been explored for early prediction of software reliability. Static as well as dynamic program complexity measurements have been collected, such as lines of code, number of operators, relative program complexity, functional complexity, operational complexity, and so on. The complexity metrics can be further included in software reliability models for early reliability prediction, for example, to predict the initial software fault density and failure rate.

2. Comparison of Methods for Dependency Determination between Human Failure Events within Human Reliability Analysis

OpenAIRE

Marko Čepin

2008-01-01

The human reliability analysis (HRA) is a highly subjective evaluation of human performance and an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features which may decrease the subjectivity of human reliability analysi...

3. Evaluating Written Patient Information for Eczema in German: Comparing the Reliability of Two Instruments, DISCERN and EQIP

Science.gov (United States)

McCool, Megan E.; Wahl, Josepha; Schlecht, Inga; Apfelbacher, Christian

2015-01-01

Patients actively seek information about how to cope with their health problems, but the quality of the information available varies. A number of instruments have been developed to assess the quality of patient information, primarily though in English. Little is known about the reliability of these instruments when applied to patient information in German. The objective of our study was to investigate and compare the reliability of two validated instruments, DISCERN and EQIP, in order to determine which of these instruments is better suited for a further study pertaining to the quality of information available to German patients with eczema. Two independent raters evaluated a random sample of 20 informational brochures in German. All the brochures addressed eczema as a disorder and/or therapy options and care. Intra-rater and inter-rater reliability were assessed by calculating intra-class correlation coefficients, agreement was tested with weighted kappas, and the correlation of the raters’ scores for each instrument was measured with Pearson’s correlation coefficient. DISCERN demonstrated substantial intra- and inter-rater reliability. It also showed slightly better agreement than EQIP. There was a strong correlation of the raters’ scores for both instruments. The findings of this study support the reliability of both DISCERN and EQIP. However, based on the results of the inter-rater reliability, agreement and correlation analyses, we consider DISCERN to be the more precise tool for our project on patient information concerning the treatment and care of eczema. PMID:26440612
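The agreement statistics used in such studies can be computed directly. A minimal sketch of quadratic-weighted Cohen's kappa for two raters on a 1-5 scale; the ratings are invented for illustration, not the study's DISCERN or EQIP data:

```python
from collections import Counter

# Quadratic-weighted Cohen's kappa: disagreements are penalized by the squared
# distance between categories, normalized by the maximum possible distance.
def weighted_kappa(r1, r2, categories):
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]           # observed joint proportions
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    p1, p2 = Counter(r1), Counter(r2)             # marginal counts per rater
    num = den = 0.0
    for a in categories:
        for b in categories:
            w = (idx[a] - idx[b]) ** 2 / (k - 1) ** 2   # disagreement weight
            num += w * obs[idx[a]][idx[b]]
            den += w * (p1[a] / n) * (p2[b] / n)  # expected under independence
    return 1.0 - num / den

rater1 = [1, 2, 3, 4, 5, 3, 2, 4]
rater2 = [1, 2, 3, 5, 5, 2, 2, 4]
print(weighted_kappa(rater1, rater2, [1, 2, 3, 4, 5]))
```

A value of 1 means perfect agreement and 0 means chance-level agreement, which is the scale on which instruments like DISCERN and EQIP are compared.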

4. CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

Science.gov (United States)

Szatmary, S. A.

1994-01-01

The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. 
Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided when the maximum likelihood technique is used. CARES/PC is written and compiled with the Microsoft FORTRAN v5.0 compiler using the VAX FORTRAN extensions and dynamic array allocation supported by this compiler for the IBM/MS-DOS or OS/2 operating systems. The dynamic array allocation routines allow the user to match the number of fracture sets and test specimens to the memory available. Machine requirements include IBM PC compatibles with optional math coprocessor. Program output is designed to fit 80-column format printers. Executables for both DOS and OS/2 are provided. CARES/PC is distributed on one 5.25 inch 360K MS-DOS format diskette in compressed format. The expansion tool PKUNZIP.EXE is supplied on the diskette. CARES/PC was developed in 1990. IBM PC and OS/2 are trademarks of International Business Machines. MS-DOS and MS OS/2 are trademarks of Microsoft Corporation. VAX is a trademark of Digital Equipment Corporation.

5. Reliability analysis of mechanical components containing random flaws

OpenAIRE

Iacopino, G.

2006-01-01

The goal of structural reliability is to assure that a structure adequately performs its intended function when operating under specified environmental conditions. The major source of unreliability is the variability that characterizes engineering structures subjected to inherent randomness in material properties, loading and geometrical parameters. A sensible approach to structural reliability must be able to evaluate and control the effects of this variability, quantifying th...

6. Reliability Analysis in Parallel and Distributed Systems with Network Contention

OpenAIRE

Hussein EL Ghor; Rafic Hage Chehade; Tamim Fliti

2012-01-01

This paper tackles the reliability problem of task allocation in heterogeneous distributed systems in the presence of network contention. A large number of scheduling heuristics have been presented in the literature, but most of them target maximizing system reliability without taking network contention delay into consideration. In this paper, we deal with a more realistic model for heterogeneous networks of workstations by taking network contention as an important factor in our study. Althoug...

7. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

Directory of Open Access Journals (Sweden)

Bruce Weaver

2014-09-01

Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. Unfortunately, MVA has no /MATRIX subcommand and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
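The contrast between deletion-based methods and matrix input can be seen even outside SPSS. The sketch below is plain Python with hypothetical item scores (None marks a missing value); it computes a Pearson correlation under pairwise deletion, the kind of entry a correlation-matrix input would be built from. Listwise deletion would instead drop any row with a missing value in any variable.

```python
import math

def pearson_pairwise(x, y):
    """Pearson r using only rows where both x and y are observed
    (pairwise deletion). None marks a missing value."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    sx = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs))
    sy = math.sqrt(sum((b - my) ** 2 for _, b in pairs))
    return cov / (sx * sy)

# Hypothetical item scores; the third respondent skipped item2
item1 = [2.0, 4.0, 5.0, 7.0, 8.0]
item2 = [1.0, 2.0, None, 4.0, 5.0]
r = pearson_pairwise(item1, item2)
```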

8. The Barthel Index: comparing inter-rater reliability between nurses and doctors in an older adult rehabilitation unit.

LENUS (Irish Health Repository)

Hartigan, Irene

2011-02-01

To ensure accuracy in recording the Barthel Index (BI) in older people, it is essential to determine who is best placed to administer the index. The aim of this study was to compare doctors' and nurses' reliability in scoring the BI.

9. Comparing Between Maximum Likelihood and Least Square Estimators for Gompertz Software Reliability Model

Directory of Open Access Journals (Sweden)

Lutfiah Ismail Al turk

2014-07-01

Full Text Available Software reliability models (SRMs) are very important for estimating and predicting software reliability in the testing/debugging phase. The contributions of this paper are as follows. First, a historical review of the Gompertz SRM is given. Based on several software failure data sets, the parameters of the Gompertz software reliability model are estimated using two estimation methods: traditional maximum likelihood and least squares. The methods of estimation are evaluated using the MSE and R-squared criteria. The results show that least squares estimation is an attractive method in terms of predictive performance and can be used when the maximum likelihood method fails to give good prediction results.
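The MSE and R-squared criteria used to compare the estimators are easy to state concretely. The sketch below evaluates a Gompertz mean value function, in the common parameterization m(t) = a·b^(c^t) with 0 < b, c < 1, against hypothetical weekly failure counts; the parameter values are illustrative, not values fitted in the paper.

```python
import math

def gompertz(t, a, b, c):
    """Gompertz mean value function: expected cumulative failures by time t."""
    return a * b ** (c ** t)

def mse_r2(observed, predicted):
    """Mean squared error and coefficient of determination R^2."""
    n = len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    mean_o = sum(observed) / n
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return ss_res / n, 1.0 - ss_res / ss_tot

# Hypothetical cumulative failure counts per test week
weeks = list(range(1, 11))
failures = [4, 9, 17, 26, 34, 40, 45, 48, 50, 51]

# Illustrative parameter values (a = eventual total number of failures)
pred = [gompertz(t, 54.0, 0.05, 0.7) for t in weeks]
mse, r2 = mse_r2(failures, pred)
```

Either estimator (MLE or least squares) would be scored by exactly this pair of criteria on held-out or in-sample data.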

10. The European COPHES/DEMOCOPHES project : towards transnational comparability and reliability of human biomonitoring results

DEFF Research Database (Denmark)

Schindler, Birgit Karin; Esteban, Marta

2014-01-01

COPHES/DEMOCOPHES has its origins in the European Environment and Health Action Plan of 2004 to "develop a coherent approach on human biomonitoring (HBM) in Europe". Within this twin project the target was to collect specimens from 120 mother-child pairs in each of the 17 participating European countries. These specimens were investigated for six biomarkers (mercury in hair; creatinine, cotinine, cadmium, phthalate metabolites and bisphenol A in urine). The results for mercury in hair are described in a separate paper. Each participating member state was requested to contract laboratories, for capacity building reasons ideally within its borders, to carry out the chemical analyses. To ensure comparability of analytical data, a Quality Assurance Unit (QAU) was established which provided the participating laboratories with standard operating procedures (SOPs) and with control material. This material was specially prepared from native, non-spiked, pooled urine samples and was tested for homogeneity and stability. Four external quality assessment exercises were carried out. Highly esteemed laboratories from all over the world served as reference laboratories. Web conferences after each external quality assessment exercise functioned as a new and effective tool to improve analytical performance, to build capacity and to educate less experienced laboratories. Of the 38 laboratories participating in the quality assurance exercises, 14 qualified for cadmium, 14 for creatinine, 9 for cotinine, 7 for phthalate metabolites and 5 for bisphenol A in urine. In the last of the four external quality assessment exercises, the laboratories that qualified for DEMOCOPHES performed the determinations in urine with relative standard deviations (low/high concentration) of 18.0/2.1% for cotinine, 14.8/5.1% for cadmium, and 4.7/3.4% for creatinine.
Relative standard deviations for the newly emerging biomarkers were higher, with values between 13.5 and 20.5% for bisphenol A and between 18.9 and 45.3% for the phthalate metabolites. Plausibility control of the HBM results of all participating countries disclosed analytical shortcomings in the determination of Cd when using certain ICP/MS methods. Results were corrected by reanalysis. The COPHES/DEMOCOPHES project for the first time succeeded in performing a harmonized pan-European HBM project. All data produced can be regarded as highly reliable by the international state of the art, since highly renowned laboratories functioned as reference laboratories. The procedure described here, which has proven successful, can be used as a blueprint for future transnational, multicentre HBM projects.

12. Mapping Green Spaces in Bishkek—How Reliable can Spatial Analysis Be?

Directory of Open Access Journals (Sweden)

Peter Hofmann

2011-05-01

Full Text Available Within urban areas, green spaces play a critically important role in the quality of life. They have a remarkable impact on the local microclimate and the regional climate of the city. Quantifying the ‘greenness’ of urban areas allows comparing urban areas at several levels, as well as monitoring the evolution of green spaces in urban areas, thus serving as a tool for urban and developmental planning. Different categories of vegetation have different impacts on recreation potential and microclimate, as well as on the individual perception of green spaces. However, when quantifying the ‘greenness’ of urban areas, the reliability of the underlying information is important in order to qualify analysis results. The reliability of geo-information derived from remote sensing data is usually assessed by ground truth validation or by comparison with other reference data. When applying methods of object based image analysis (OBIA) and fuzzy classification, the degrees of fuzzy membership per object in general describe to what degree an object fits the (prototypical) class descriptions. Thus, analyzing the fuzzy membership degrees can contribute to the estimation of reliability and stability of classification results, even when no reference data are available. This paper presents an object based method using fuzzy class assignments to outline and classify three different classes of vegetation from GeoEye imagery. The classification result, its reliability and stability are evaluated using the reference-free parameters Best Classification Result and Classification Stability as introduced by Benz et al. in 2004 and implemented in the software package eCognition (www.ecognition.com). To demonstrate the application potentials of the results, a scenario for quantifying urban ‘greenness’ is presented.
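The Classification Stability measure referred to above is, for each image object, the difference between the highest and second-highest fuzzy membership degrees: a large gap means a stable class assignment. A minimal sketch with hypothetical membership values:

```python
def best_and_stability(memberships):
    """memberships: dict mapping class name -> fuzzy membership degree
    for one image object. Returns (best class, best membership,
    stability = best membership - second-best membership)."""
    ranked = sorted(memberships.items(), key=lambda kv: kv[1], reverse=True)
    (best_cls, best_mu), (_, second_mu) = ranked[0], ranked[1]
    return best_cls, best_mu, best_mu - second_mu

# Hypothetical membership degrees for one image object
obj = {"trees": 0.82, "shrubs": 0.35, "grass": 0.10}
cls, mu, stability = best_and_stability(obj)
```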

13. Transmission Line Fault Clearing System Reliability Assessment: Application of Life Data Analysis with Weibull Distribution and Reliability Block Diagram

Directory of Open Access Journals (Sweden)

Mohd Iqbal Ridwan

2013-05-01

Full Text Available High voltage transmission lines are essential assets to electric utility companies as these lines transmit electricity generated by power stations to various regions throughout the country. Being exposed to the surrounding environment, transmission lines are susceptible to atmospheric conditions such as lightning strikes and to flora and fauna encroachments. These conditions are called faults. Faults on transmission lines may cause disruption of the electricity supply, which will affect the overall power system and can lead to a wide-scale blackout. Therefore, a fault clearing system is deployed to minimize the impact of faults on the power system by disconnecting and isolating the affected transmission lines specifically. Among the main devices in a fault clearing system are the protective relays, which serve as the "brain" providing the decision-making element for correct protection and fault clearing operations. Without protective relays, the fault clearing system is rendered useless. Hence, it is imperative for power utilities, such as Tenaga Nasional Berhad (TNB), an electric utility company in Malaysia, to assess the reliability of the protective relays. In this study, a statistical method called Life Data Analysis using the Weibull Distribution is applied to assess the reliability of the protective relays. Furthermore, the fault clearing system is modeled using a Reliability Block Diagram to simulate the availability of the system and derive reliability indices which will assist TNB in managing the fault clearing system.
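The two steps described above, Weibull life data for components and a reliability block diagram for the system, can be combined in a few lines. The sketch below is illustrative only: the shape and characteristic-life values, the relay/breaker layout and the 10-year horizon are hypothetical, not TNB data.

```python
import math

def weibull_rel(t, beta, eta):
    """Component reliability at time t for a Weibull life distribution
    with shape beta and characteristic life eta (same time units as t)."""
    return math.exp(-((t / eta) ** beta))

def series(rels):
    """All blocks must work: R = product of block reliabilities."""
    out = 1.0
    for r in rels:
        out *= r
    return out

def parallel(rels):
    """Redundant blocks: system fails only if all blocks fail."""
    out = 1.0
    for r in rels:
        out *= (1.0 - r)
    return 1.0 - out

# Hypothetical fault clearing chain: a main relay in parallel with an
# identical backup relay, in series with the circuit breaker, at t = 10 years.
t = 10.0
r_relay = weibull_rel(t, 1.8, 25.0)
r_breaker = weibull_rel(t, 2.5, 40.0)
r_system = series([parallel([r_relay, r_relay]), r_breaker])
```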

14. Analysis of human reliability in the APS of fire. Application of NUREG-1921

International Nuclear Information System (INIS)

An analysis of human reliability in a probabilistic safety analysis (APS) of fire aims to identify, describe, analyze and quantify, in a traceable manner, the human actions that can affect the mitigation of an initiating event produced by a fire. (Author)

15. Markovian reliability analysis under uncertainty with an application on the shutdown system of the Clinch River Breeder Reactor

Energy Technology Data Exchange (ETDEWEB)

Papazoglou, I A; Gyftopoulos, E P

1978-09-01

A methodology for the assessment of the uncertainties about the reliability of nuclear reactor systems described by Markov models is developed, and the uncertainties about the probability of loss of coolable core geometry (LCG) of the Clinch River Breeder Reactor (CRBR) due to shutdown system failures are assessed. Uncertainties are expressed by treating the failure rates, the repair rates and all other input variables of the reliability analysis as random variables, distributed according to known probability density functions (pdfs). The pdf of the reliability is then calculated by the moment matching technique. Two methods have been employed for the determination of the moments of the reliability: Monte Carlo simulation and Taylor-series expansion. These methods are adapted to Markovian problems and compared for accuracy and efficiency.
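The moment-propagation idea can be illustrated on the simplest possible Markov reliability model, a single non-repairable component with R(t) = exp(-λt) and an uncertain failure rate. The rate distribution and mission time below are hypothetical (the CRBR shutdown-system model is far larger); the sketch compares a Monte Carlo estimate of the mean reliability with a second-order Taylor-series approximation, the two methods named in the abstract.

```python
import math
import random

random.seed(42)

t = 100.0                       # mission time (hours)
mu_lam, sigma_lam = 1e-3, 2e-4  # uncertain failure rate (per hour)

# Monte Carlo: sample the failure rate, propagate through R(t) = exp(-lam*t)
samples = [math.exp(-max(random.gauss(mu_lam, sigma_lam), 0.0) * t)
           for _ in range(100_000)]
mc_mean = sum(samples) / len(samples)

# Second-order Taylor expansion of E[R] about the mean failure rate:
# E[exp(-lam*t)] ~ exp(-mu*t) * (1 + t^2 * sigma^2 / 2)
taylor_mean = math.exp(-mu_lam * t) * (1.0 + (t ** 2) * (sigma_lam ** 2) / 2.0)
```

Agreement between the two estimates is what makes the cheaper Taylor expansion attractive; the full comparison in the paper also covers higher moments.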

16. Space Shuttle Main Engine (SSME) Reliability and Analysis Evolution

Science.gov (United States)

Stephens, Walter E.; Rogers, James H.; Biggs, Robert E.

2010-01-01

The Space Shuttle Main Engine (SSME) is a large thrust class, reusable, staged combustion cycle rocket engine employing liquid hydrogen and liquid oxygen propellants. A cluster of three SSMEs is used on every space shuttle mission to propel the space shuttle orbiter vehicle into low earth orbit. Development of the SSME began in the early 1970s and the first flight of the space shuttle occurred in 1981. Today, the SSME has accrued over one million seconds of ground test and flight operational time, launching 129 space shuttle missions. Given that the SSME is used to launch a manned vehicle, its reliability must be commensurate with the task. At the same time, the SSME is a high performance, high power density engine, characteristics that traditionally do not lend themselves to high reliability. Furthermore, throughout its history, the SSME operational envelope has been explored and expanded, leading to several major test failures. Hence, assessing the reliability of the SSME throughout its history has been a challenging undertaking. This paper provides a review and discussion of SSME reliability assessment techniques and results over its history. Basic reliability drivers such as engine design, test program, major failures, redesigns and upgrades will also be discussed.

17. Comparative analysis of twelve Dothideomycete plant pathogens

Energy Technology Data Exchange (ETDEWEB)

Ohm, Robin; Aerts, Andrea; Salamov, Asaf; Goodwin, Stephen B.; Grigoriev, Igor

2011-03-11

The Dothideomycetes are one of the largest and most diverse groups of fungi. Many are plant pathogens and pose a serious threat to agricultural crops grown for biofuel, food or feed. Most Dothideomycetes have only a single host, and related Dothideomycete species can have very diverse host plants. Twelve Dothideomycete genomes have currently been sequenced by the Joint Genome Institute and other sequencing centers. They can be accessed via Mycocosm, which has tools for comparative analysis.

18. Comparative Analysis on Constitutional Supervision Modes

OpenAIRE

Wang, Wenjing; Wang, Xiaorui

2012-01-01

Constitution is the fundamental law of a nation and also sets the general regulations for administering state affairs and ensuring national security. This is why constitutional supervision is so important for a country. However, there are still many problems in the supervision mechanism regarding its operability, materiality, and rationality. This paper tries to give proper suggestions on perfecting Chinese constitutional supervision through comparative analysis and other co...

19. A Comparative Analysis of Influenza Vaccination Programs

OpenAIRE

Bansal, Shweta; Pourbohloul, Babak; Meyers, Lauren Ancel

2006-01-01

The threat of avian influenza and the 2004-2005 influenza vaccine supply shortage in the United States have sparked a debate about optimal vaccination strategies to reduce the burden of morbidity and mortality caused by the influenza virus. We present a comparative analysis of two classes of suggested vaccination strategies: mortality-based strategies that target high-risk populations and morbidity-based strategies that target high-prevalence populations. Applying the methods of contact...

20. Comparative Analysis: A Feasible Software Engineering Method

OpenAIRE

Maneva, Nelly

2007-01-01

The reasonable choice is a critical success factor for decision-making in the field of software engineering (SE). A case-driven comparative analysis has been introduced and a procedure for its systematic application has been suggested. The paper describes how the proposed method can be built into a general framework for SE activities. Some examples of experimental versions of the framework are briefly presented.

1. A survey on reliability and safety analysis techniques of robot systems in nuclear power plants

International Nuclear Information System (INIS)

Reliability and safety analysis techniques were surveyed for the purpose of overall quality improvement of the reactor inspection system under development in our current project. The contents of this report are: 1. Survey of reliability and safety analysis techniques - the reviewed techniques are generally accepted in many industries, including the nuclear industry, and we selected a few that are suitable for our robot system: fault tree analysis, failure mode and effect analysis, reliability block diagram, Markov model, the combinational method, and the simulation method. 2. Survey of the characteristics of robot systems which distinguish them from other systems and which are important to the analysis. 3. Survey of the nuclear environmental factors which affect the reliability and safety analysis of robot systems. 4. Collection of case studies of robot reliability and safety analysis performed in foreign countries. The results of this survey will be applied to improving the reliability and safety of our robot system and will also be used for the formal qualification and certification of our reactor inspection system

2. A survey on reliability and safety analysis techniques of robot systems in nuclear power plants

Energy Technology Data Exchange (ETDEWEB)

Eom, H.S.; Kim, J.H.; Lee, J.C.; Choi, Y.R.; Moon, S.S

2000-12-01

Reliability and safety analysis techniques were surveyed for the purpose of overall quality improvement of the reactor inspection system under development in our current project. The contents of this report are: 1. Survey of reliability and safety analysis techniques - the reviewed techniques are generally accepted in many industries, including the nuclear industry, and we selected a few that are suitable for our robot system: fault tree analysis, failure mode and effect analysis, reliability block diagram, Markov model, the combinational method, and the simulation method. 2. Survey of the characteristics of robot systems which distinguish them from other systems and which are important to the analysis. 3. Survey of the nuclear environmental factors which affect the reliability and safety analysis of robot systems. 4. Collection of case studies of robot reliability and safety analysis performed in foreign countries. The results of this survey will be applied to improving the reliability and safety of our robot system and will also be used for the formal qualification and certification of our reactor inspection system.

3. Application of reliability analysis method to fusion component testing

International Nuclear Information System (INIS)

The term reliability here implies that a component satisfies a set of performance criteria while under specified conditions of use over a specified period of time. For fusion nuclear technology, the reliability goal to be pursued is the development of a mean time between failures (MTBF) for a component which is longer than its lifetime goal. While the component lifetime is mainly determined by the fluence limitation (i.e., damage level) which leads to performance degradation or failure, the MTBF represents an arithmetic average life of all units in a population. One method of assessing the reliability goal involves determining the component availability needed to meet the goal plant availability, defining a test-analyze-fix development program to improve component reliability, and quantifying both the test times and the number of test articles that would be required to ensure that a specified target MTBF is met. Statistically, constant failure rates and exponential life distributions are assumed for the analyses, and blanket component development is used as an example. However, as data are collected, the probability distribution of the parameter of interest can be updated in a Bayesian fashion. The nuclear component testing program will be structured such that the reliability requirements for DEMO can be achieved. The program shall not exclude the practice of good design (such as reducing the complexity of the system to the minimum essential for the required operation), the execution of high quality manufacturing and inspection processes, and the application of quality assurance and control to component development. In fact, the assurance of a high quality testing/development program is essential so that no questions remain about reliability
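The relationship between a target MTBF, a confidence level and the required test time can be made concrete with the classical zero-failure demonstration test for an exponential life distribution, a simplification of the test program described above. The MTBF goal and number of test articles below are hypothetical.

```python
import math

def zero_failure_test_time(mtbf_goal, confidence):
    """Total accumulated test time (summed across all test articles)
    needed to demonstrate mtbf_goal at the given confidence level,
    assuming an exponential life distribution and zero failures
    observed during the test: T = -MTBF * ln(1 - confidence)."""
    alpha = 1.0 - confidence
    return -mtbf_goal * math.log(alpha)

# Hypothetical blanket-module goal: MTBF of 5000 h demonstrated at 90% confidence
total_time = zero_failure_test_time(5000.0, 0.90)
articles = 4                     # hypothetical number of test articles in parallel
time_per_article = total_time / articles
```

If failures do occur, the required time grows (a chi-square quantile replaces the simple logarithm), which is precisely why test times and article counts must be budgeted together.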

4. Reliability analysis of the automatic control and power supply of reactor equipment

International Nuclear Information System (INIS)

Based on reliability analysis, the shortcomings of nuclear facilities are discovered. Fault trees constructed for the automatic control technology and for the power supply serve as input data for the ORCHARD 2 computer code. In order to characterize the reliability of the system, availability, failure rates and time intervals between failures are calculated. The results of the reliability analysis of the feedwater system of the Paks Nuclear Power Plant showed that the system consists of elements of similar reliability. (V.N.) 8 figs.; 3 tabs
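The availability figures such an analysis produces follow directly from component failure and repair behavior. A minimal sketch with hypothetical MTBF/MTTR values (not the Paks feedwater data), combining components in series:

```python
def availability(mtbf, mttr):
    """Steady-state availability of a repairable component:
    A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

# Hypothetical feedwater train: pump, valve and controller in series,
# given as (MTBF hours, MTTR hours)
components = [(2000.0, 24.0), (8000.0, 8.0), (5000.0, 12.0)]

# A series system is available only when every component is available
a_system = 1.0
for mtbf, mttr in components:
    a_system *= availability(mtbf, mttr)
```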

5. Reliability Analysis of Bearing Capacity of Large-Diameter Piles under Osterberg Test

Directory of Open Access Journals (Sweden)

Lei Nie

2013-05-01

Full Text Available This study presents a reliability analysis of the bearing capacity of large-diameter piles under the Osterberg test. The limit state equation of dimensionless random variables is utilized in the reliability analysis of the vertical bearing capacity of large-diameter piles based on Osterberg loading tests, and the reliability index and the resistance partial coefficient under the current specifications are calculated using the calibration method. The results show that the reliability index of large-diameter piles is correlated with the load effect ratio and is smaller than that of ordinary piles, and that a resistance partial coefficient of 1.53 is appropriate in the design of large-diameter piles.
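The reliability index referred to above has a simple closed form in the first-order case with independent normal resistance and load effect. The sketch below uses hypothetical pile statistics, not the Osterberg test data from the paper:

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index for the linear limit state g = R - S
    with independent normal resistance R and load effect S:
    beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    """Standard normal tail: Pf = Phi(-beta), via math.erf."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Hypothetical pile: resistance 15 MN (10% cov), load effect 10 MN (10% cov)
beta = reliability_index(15.0, 1.5, 10.0, 1.0)
pf = failure_probability(beta)
```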

6. Mathematical modeling and reliability analysis of a 3D Li-ion battery

Directory of Open Access Journals (Sweden)

RICHARD HONG PENG LIANG

2014-02-01

Full Text Available The three-dimensional (3D) Li-ion battery presents an effective solution to issues affecting its two-dimensional counterparts, as it is able to attain high energy capacities for the same areal footprint without sacrificing power density. A 3D battery has key structural features extending in and fully utilizing 3D space, allowing it to achieve greater reliability and longevity. This study applies an electrochemical-thermal coupled model to a checkerboard array of alternating positive and negative electrodes in a 3D architecture with either square or circular electrodes. The mathematical model comprises the transient conservation of charge, species, and energy together with electroneutrality, constitutive relations and relevant initial and boundary conditions. A reliability analysis carried out to simulate malfunctioning of either a positive or negative electrode reveals that although there are deviations in electrochemical and thermal behavior for electrodes adjacent to the malfunctioning electrode as compared to that in a fully-functioning array, there is little effect on electrodes further away, demonstrating the redundancy that a 3D electrode array provides. The results demonstrate that 3D batteries can reliably and safely deliver power even if a component malfunctions, a strong advantage over conventional 2D batteries.

7. Reliability modeling and analysis of smart power systems

CERN Document Server

Karki, Rajesh; Verma, Ajit Kumar

2014-01-01

The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important drive in mitigating these problems and requires considerable research acti

8. Analysis of Software Reliability Data using Exponential Power Model

OpenAIRE

Ashwini Kumar Srivastava; Vijay Kumar

2011-01-01

In this paper, the Exponential Power (EP) model is proposed for analyzing software reliability data, and the present work is an attempt to establish the EP model as a software reliability model. The approximate MLE using the Artificial Neural Network (ANN) method and the Markov chain Monte Carlo (MCMC) method are used to estimate the parameters of the EP model. A procedure is developed to estimate the parameters of the EP model using the MCMC simulation method in OpenBUGS by incorporating a module in...

9. Reliability analysis of External Tank Attachment Ring (ETA)

Science.gov (United States)

Putcha, Chandra S.

1992-01-01

The present study is restricted to External Tank Attachment Rings (ETA), but the concepts discussed are general in nature and can be applied to any structural component. The objective of this research work is to use some of the existing probabilistic methods to calculate the reliability of ETA Rings at various critical sections for the limit state of stress. This is done both in terms of the traditional probability of failure and reliability levels as well as the well known safety indices (beta) which have become a commonly accepted measure of safety.

10. Embedded mechatronic systems 1 analysis of failures, predictive reliability

CERN Document Server

El Hami, Abdelkhalak

2015-01-01

In operation, embedded mechatronic systems are stressed by loads of different origins: climatic (temperature, humidity), vibrational, electrical and electromagnetic. The failure mechanisms these stresses induce in components should be identified and modeled for better control. AUDACE is a collaborative project of the Mov'eo cluster that addresses issues specific to the reliability of embedded mechatronic systems. AUDACE means analyzing the causes of failure of components of onboard mechatronic systems. The goal of the project is to optimize the design of mechatronic devices for reliability. The projec

11. Three suggestions on the definition of terms for the safety and reliability analysis of digital systems

International Nuclear Information System (INIS)

As digital instrumentation and control systems are progressively introduced into nuclear power plants, a growing number of related technical issues are coming to light that need to be resolved. As a result, an understanding of relevant terms and basic concepts becomes increasingly important. Under the framework of the OECD/NEA WGRISK DIGREL Task Group, the authors were involved in reviewing definitions of terms forming the supporting vocabulary for addressing issues related to the safety and reliability analysis of digital instrumentation and control (SRA of DI and C). These definitions were extracted from various standards regulating the disciplines that form the technical and scientific basis of SRA of DI and C. The authors discovered that different definitions are provided by different standards within a common discipline and are used differently across various disciplines. This paper raises the concern that a common understanding of terms and basic concepts has not yet been established to address the very specific technical issues facing SRA of DI and C. Based on the lessons learned from the review of the definitions of interest and the analysis of dependency relationships existing between these definitions, this paper establishes a set of recommendations for the development of a consistent terminology for SRA of DI and C. - Highlights: • We reviewed definitions of terms used in reliability analysis of digital systems. • Different definitions are provided by different standards within a common discipline. • Acyclic and cyclic structures of dependency in defining terms are compared. • Three recommendations for the development of a consistent terminology are provided

12. Reliability and Sensitivity Analysis of Cast Iron Water Pipes for Agricultural Food Irrigation

Directory of Open Access Journals (Sweden)

Yanling Ni

2014-07-01

Full Text Available This study aims to investigate the reliability and sensitivity of cast iron water pipes for agricultural food irrigation. The Monte Carlo simulation method is used for fracture assessment and reliability analysis of cast iron pipes for agricultural food irrigation. Fracture toughness is considered as a limit state function for corrosion-affected cast iron pipes. The influence of the failure mode on the probability of pipe failure is then discussed. Sensitivity analysis is also carried out to show the effect of changing basic parameters on the reliability and lifetime of the pipe. The analysis results show that the applied methodology can consider different random variables for estimating the lifetime of the pipe and can also provide scientific guidance for rehabilitation and maintenance plans for agricultural food irrigation. In addition, the results of the failure and reliability analysis in this study can be useful for designing more reliable new pipeline systems for agricultural food irrigation.
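The Monte Carlo procedure for a fracture-toughness limit state can be sketched as follows. The simplified stress-intensity expression and all distribution parameters below are hypothetical, chosen only to illustrate the method, not taken from the paper's pipe data:

```python
import math
import random

random.seed(7)

def applied_sif(pressure, radius, thickness, crack_depth):
    """Simplified stress intensity factor (MPa*sqrt(m)) for a surface
    crack in a pressurized pipe: K = sigma_hoop * sqrt(pi * a),
    with hoop stress sigma = p * r / t (thin-wall approximation)."""
    sigma_hoop = pressure * radius / thickness
    return sigma_hoop * math.sqrt(math.pi * crack_depth)

n, failures = 100_000, 0
for _ in range(n):
    k_ic = random.gauss(8.0, 1.0)          # fracture toughness, MPa*sqrt(m)
    a = abs(random.gauss(0.005, 0.002))    # corrosion pit depth, m
    k = applied_sif(pressure=1.2, radius=0.3, thickness=0.012, crack_depth=a)
    if k >= k_ic:                          # limit state g = K_IC - K < 0
        failures += 1
pf = failures / n                          # estimated failure probability
```

Sensitivity analysis then repeats this loop while perturbing one input distribution at a time and observing the change in pf.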

13. Using minimal spanning trees to compare the reliability of network topologies

Science.gov (United States)

Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.

1990-01-01

Graph theoretic methods are applied to compute the reliability of several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. Ring networks and braided networks are compared, covering both the case where only the links fail and the case where both links and nodes fail. Two different failure modes for the links are considered: in one, the link no longer carries messages; in the other, the link delivers incorrect messages. Link-redundancy and path-redundancy are described and compared as methods for achieving reliability. All the computations are carried out by means of a fault tree program.
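A brute-force counterpart to the fault tree computation is Monte Carlo sampling of link failures followed by a connectivity check. The sketch below, with an assumed link failure probability, compares a six-node ring with a braided ring that adds chord links; topology sizes and probabilities are illustrative, not from the paper:

```python
import random

def connected(nodes, links):
    """Depth-first connectivity check over undirected links."""
    adj = {n: [] for n in nodes}
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return len(seen) == len(nodes)

def network_reliability(nodes, links, p_link_fail, trials=3000, seed=0):
    """Monte Carlo estimate of P(network stays connected) when each
    link independently fails with probability p_link_fail."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        alive = [l for l in links if rng.random() > p_link_fail]
        ok += connected(nodes, alive)
    return ok / trials

# Ring: every node has exactly two links; braid adds chord links.
ring = [(i, (i + 1) % 6) for i in range(6)]
braid = ring + [(i, (i + 2) % 6) for i in range(6)]
```

With link failure probability 0.05, the braided topology survives more trials than the plain ring, illustrating path-redundancy.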

14. Reliability Analysis in Parallel and Distributed Systems with Network Contention

Directory of Open Access Journals (Sweden)

Hussein EL Ghor

2012-11-01

Full Text Available This paper tackles the reliability problem of task allocation in heterogeneous distributed systems in the presence of network contention. A large number of scheduling heuristics have been presented in the literature, but most of them target maximizing system reliability without taking network contention delay into consideration. In this paper, we deal with a more realistic model for heterogeneous networks of workstations by taking network contention as an important factor in our study. Although network contention is often not considered in task scheduling, it has a great effect on the execution time of a parallel program. In our work, we rely on the hybrid algorithm investigated in [8] but with a new system model that allows us to capture network contention. We first develop a mathematical model for reliability based on the unreliability cost function caused by the execution of tasks on the system processors and by the inter-processor communication links, where network contention caused by the inter-processor communication time in the link is considered the main constraint. We then propose an evaluation function that approximates the total completion time of a given assignment by taking into account communication delays caused by network contention. To demonstrate the benefits of our model, we evaluate it by means of simulation and show the significantly improved accuracy and reliability of the produced schedules.

15. Reliability analysis of common hazardous waste treatment processes

International Nuclear Information System (INIS)

Five hazardous waste treatment processes are analyzed probabilistically using Monte Carlo simulation to elucidate the relationships between process safety factors and reliability levels. The treatment processes evaluated are packed tower aeration, reverse osmosis, activated sludge, upflow anaerobic sludge blanket, and activated carbon adsorption

16. Architecture-Based Reliability Analysis of Web Services

Science.gov (United States)

Rahmani, Cobra Mariam

2012-01-01

In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

17. Reliability analysis of common hazardous waste treatment processes

Energy Technology Data Exchange (ETDEWEB)

Waters, R.D. [Vanderbilt Univ., Nashville, TN (United States)

1993-05-01

Five hazardous waste treatment processes are analyzed probabilistically using Monte Carlo simulation to elucidate the relationships between process safety factors and reliability levels. The treatment processes evaluated are packed tower aeration, reverse osmosis, activated sludge, upflow anaerobic sludge blanket, and activated carbon adsorption.

18. Review of the human reliability analysis performed for Empire State Electric Energy Research Corporation

International Nuclear Information System (INIS)

The Empire State Electric Energy Research Corporation (ESEERCO) commissioned Westinghouse to conduct a human reliability analysis to identify and quantify human error probabilities associated with operator actions for four specific events which may occur in light water reactors: loss of coolant accident, steam generator tube rupture, steam/feed line break, and stuck open pressurizer spray valve. Human Error Probabilities (HEPs) derived from Swain's Technique for Human Error Rate Prediction (THERP) were compared to data obtained from simulator exercises. A correlation was found between the HEPs derived from Swain and the results of the simulator data. The results of this study provide a unique insight into human factors analysis. The HEPs obtained from such probabilistic studies can be used to prioritize scenarios for operator training situations, and thus improve the correlation between simulator exercises and real control room experiences

19. Root cause analysis in support of reliability enhancement of engineering components

International Nuclear Information System (INIS)

Reliability based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimation of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is a must to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failure, with particular attention to the identification of faults in component design, operation, surveillance, maintenance, training, procedures and policies which must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study complements the root cause analysis approach in event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)

20. Analysis of structure reliability on beam using fuzzy finite element method

OpenAIRE

A.K.Ariffin; Tan, S. C.; C. T. Ng; A. Y. N. Yusmye

2013-01-01

The main requirement in designing a structure is to ensure the structure is reliable enough to withstand any loading. However, in the real world, for structural analysis, the presence of uncertainties in the input variable has reduced the accuracy of the calculated structural reliability. The purpose of this study is to determine the structural reliability with the consideration of uncertainties involved. The developed simulation method is the fuzzy set theory incorporating with the finite el...

1. Asymptotic Sampling for Reliability Analysis of Adhesive Bonded Stepped Lap Composite Joints

DEFF Research Database (Denmark)

Kimiaeifar, Amin; Lund, Erik; Thomsen, Ole Thybo; Sørensen, John Dalsgaard

2013-01-01

Reliability analysis coupled with finite element analysis (FEA) of composite structures is computationally very demanding and requires a large number of simulations to achieve an accurate prediction of the probability of failure with a small standard error. In this paper Asymptotic Sampling, which is a promising and time efficient tool to calculate the probability of failure, is utilized, and a probabilistic model for the reliability analysis of adhesive bonded stepped lap composite joints, repr...

2. Reliability analysis of the emergency power supply system for the 10 MW high temperature gas-cooled reactor

International Nuclear Information System (INIS)

The features of the emergency power supply system of the 10 MW High Temperature Gas-cooled Reactor (HTR-10) are briefly introduced, and the reliability of the emergency power supply system of the HTR-10 is calculated using the fault tree analysis method. The calculated results are compared with those obtained using the standard nuclear power plant design method. All the results show that the emergency power supply system of the HTR-10 is not only highly reliable but also simple and economical
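The fault tree calculation behind such an assessment can be sketched with the standard gate formulas for independent basic events. The two-train emergency power layout and every failure probability below are invented for illustration and are not taken from the HTR-10 design:

```python
def or_gate(*probs):
    """P(at least one event occurs) for independent basic events."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*probs):
    """P(all events occur) for independent basic events."""
    r = 1.0
    for p in probs:
        r *= p
    return r

# Hypothetical two-train supply: power is lost only if the diesel
# train AND the battery train both fail; each train fails on a
# start failure OR a run failure (all probabilities assumed).
diesel_fail = or_gate(1e-2, 5e-3)
battery_fail = or_gate(2e-3, 1e-3)
top_event = and_gate(diesel_fail, battery_fail)   # total loss of supply
```

The AND gate across redundant trains is what drives the top event probability far below either train's individual failure probability.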

3. Analysis of the Component-Based Reliability in Computer Networks

Scientific Electronic Library Online (English)

Saulius, Minkevičius.

Full Text Available Performance in terms of reliability of computer networks motivates this paper. Limit theorems on the extreme queue length and extreme virtual waiting time in open queueing networks in heavy traffic are derived and applied to a reliability model for computer networks where we relate the time of failure of a computer network to the system parameters.

4. International Nuclear Information System (INIS)

This paper deals with the reliability assessment of passive systems that have been developed in recent years by suppliers, industries, utilities, and research organizations, aimed at plant safety improvement and substantial simplification in its implementation. The present study concerns the passive decay heat removal systems that use, for the most part, a condenser immersed in a cooling pool. The focus of the paper is a reliability study of the isolation condenser system foreseen for advanced boiling water reactors (BWRs) for the removal of the excess sensible and core decay heat from the BWR by natural circulation. Furthermore, an approach aimed at the thermal-hydraulic performance assessment (i.e., the natural circulation failure evaluation) from the probability point of view is given. The study is not plant-specific but pertains to the conceptual design of the foregoing system

5. Reliability Analysis and Standardization of Spacecraft Command Generation Processes

Science.gov (United States)

Meshkat, Leila; Grenander, Sven; Evensen, Ken

2011-01-01

• In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes. • The literature review conducted during the standardization process revealed that very few atomic level human activities are associated with even a broad set of missions. • Applicable human reliability metrics for performing these atomic level tasks are available. • The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated. • The PRA models are executed using data from human reliability data banks. • The Periodic Table is related to the PRA models via Fault Links.

6. Comparative Analysis of Virtual Education Applications

OpenAIRE

Kurt, Mehmet

2006-01-01

The research was conducted in order to make a comparative analysis of virtual education applications. The research was conducted in the survey model. The study group consists of a total of 300 institutions providing virtual education in the fall, spring and summer semesters of 2004: 246 in the USA, 10 in Australia, 3 in South Africa, 10 in India, 21 in the UK, 6 in Japan, and 4 in Turkey. The information was collected by an online questionnaire sent to the target group by e-mail. The questionnaire has been developed ...

7. Comparative Analysis of Hand Gesture Recognition Techniques

Directory of Open Access Journals (Sweden)

Arpana K. Patel

2015-03-01

Full Text Available During the past few years, human hand gestures for interaction with computing devices have continued to be an active area of research. In this paper, a survey of hand gesture recognition is provided. Hand gesture recognition comprises three stages: pre-processing, feature extraction or matching, and classification or recognition. Each stage involves different methods and techniques. This paper gives a brief description of the different methods used for hand gesture recognition in existing systems, with a comparative analysis of all methods including their benefits and drawbacks.

8. Stochastic Analysis on RAID Reliability for Solid-State Drives

OpenAIRE

Li, Yongkun; Lee, Patrick P. C.; Lui, John C. S.

2013-01-01

Solid-state drives (SSDs) have been widely deployed in desktops and data centers. However, SSDs suffer from bit errors, and the bit error rate is time dependent since it increases as an SSD wears down. Traditional storage systems mainly use parity-based RAID to provide reliability guarantees by striping redundancy across multiple devices, but the effectiveness of RAID in SSDs remains debatable as parity updates aggravate the wearing and bit error rates of SSDs. In particular...
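The parity-based RAID argument can be made concrete with the textbook RAID-5 survival formula (the array survives zero or one device failures) together with a toy wear-dependent failure probability. Neither expression is the authors' SSD error model; both are illustrative sketches under an independence assumption:

```python
def raid5_survival(n_drives, p_fail):
    """RAID-5 tolerates one device failure: survival probability is
    P(0 failures) + P(exactly 1 failure), assuming independent,
    identically distributed device failures."""
    q = 1.0 - p_fail
    return q ** n_drives + n_drives * p_fail * q ** (n_drives - 1)

def wearing_fail_prob(p0, growth, cycles):
    """Toy time-dependent failure probability that grows geometrically
    with wear cycles, capped at 1.0 (illustrative only)."""
    return min(1.0, p0 * (1.0 + growth) ** cycles)
```

Feeding the wear-dependent probability into the survival formula shows the effect the abstract describes: as devices wear in lockstep, the protection offered by a single parity drive erodes over time.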

9. Launch and Assembly Reliability Analysis for Human Space Exploration Missions

Science.gov (United States)

Cates, Grant; Gelito, Justin; Stromgren, Chel; Cirillo, William; Goodliff, Kandyce

2012-01-01

NASA's future human space exploration strategy includes single and multi-launch missions to various destinations including cis-lunar space, near Earth objects such as asteroids, and ultimately Mars. Each campaign is being defined by Design Reference Missions (DRMs). Many of these missions are complex, requiring multiple launches and assembly of vehicles in orbit. Certain missions also have constrained departure windows to the destination. These factors raise concerns regarding the reliability of launching and assembling all required elements in time to support planned departure. This paper describes an integrated methodology for analyzing launch and assembly reliability in any single DRM or set of DRMs starting with flight hardware manufacturing and ending with final departure to the destination. A discrete event simulation is built for each DRM that includes the pertinent risk factors including, but not limited to: manufacturing completion; ground transportation; ground processing; launch countdown; ascent; rendezvous and docking, assembly, and orbital operations leading up to trans-destination-injection. Each reliability factor can be selectively activated or deactivated so that the most critical risk factors can be identified. This enables NASA to prioritize mitigation actions so as to improve mission success.
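A discrete event simulation of this kind can be sketched as a Monte Carlo loop over a launch campaign. The launch success probability, departure window, and exponential inter-launch gaps below are placeholder assumptions, not values from any Design Reference Mission:

```python
import random

def campaign_success_prob(n_launches, p_launch_ok, window_days,
                          mean_gap_days, trials=10000, seed=0):
    """Monte Carlo sketch: a multi-launch campaign succeeds when every
    launch succeeds and the final element arrives inside the departure
    window. Gaps between launches are exponential (assumed)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        t, ok = 0.0, True
        for _ in range(n_launches):
            if rng.random() > p_launch_ok:
                ok = False          # launch failure ends the campaign
                break
            t += rng.expovariate(1.0 / mean_gap_days)
        if ok and t <= window_days:
            wins += 1
    return wins / trials
```

Adding launches or shrinking the window lowers the campaign success probability, which is the trade the paper's methodology quantifies with far richer risk factors.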

10. Reliability Analysis Of Fire System On The Industry Facility By Use Fameca Method

International Nuclear Information System (INIS)

FAMECA is one of the analysis methods used to determine system reliability in industrial facilities. The analysis follows a procedure consisting of the identification of component functions and the determination of failure modes, severity levels, and the effects of their failure. The reliability value is determined by a combination of three factors: severity level, component failure value, and component criticality. A reliability analysis has been performed for a fire system in industry using the FAMECA method. The critical components identified include the pump, air release valve, check valve, manual test valve, isolation valve, control system, etc.

11. Convergence among Data Sources, Response Bias, and Reliability and Validity of a Structured Job Analysis Questionnaire.

Science.gov (United States)

Smith, Jack E.; Hakel, Milton D.

1979-01-01

Examined are questions pertinent to the use of the Position Analysis Questionnaire: Who can use the PAQ reliably and validly? Must one rely on trained job analysts? Can people having no direct contact with the job use the PAQ reliably and validly? Do response biases influence PAQ responses? (Author/KC)

12. Reliability of three-dimensional gait analysis in cervical spondylotic myelopathy.

LENUS (Irish Health Repository)

McDermott, Ailish

2010-10-01

Gait impairment is one of the primary symptoms of cervical spondylotic myelopathy (CSM). Detailed assessment is possible using three-dimensional gait analysis (3DGA), however the reliability of 3DGA for this population has not been established. The aim of this study was to evaluate the test-retest reliability of temporal-spatial, kinematic and kinetic parameters in a CSM population.

13. Development of an analysis rule of diagnosis error for standard method of human reliability analysis

Energy Technology Data Exchange (ETDEWEB)

Jeong, W. D.; Kang, D. I.; Jeong, K. S. [KAERI, Taejon (Korea, Republic of)

2003-10-01

This paper presents the status of development of the Korean standard method for Human Reliability Analysis (HRA) and proposes a standard procedure and rules for the evaluation of diagnosis error probability. The quality of the KSNP HRA was evaluated using the requirements of the ASME PRA standard guideline, and the design requirements for the standard HRA method were defined. The analysis procedure and rules developed so far to analyze diagnosis error probability are suggested as part of the standard method. A comprehensive application study was also performed to evaluate the suitability of the proposed rules.

14. Method of predicting operating reliability indexes and method of reliability analysis of new power stage of WWER type nuclear power plant

International Nuclear Information System (INIS)

IAEA PRIS data were used for predicting the operating reliability of the WWER-1000 unit, as were data on the operation of the V-1 nuclear power plant in Jaslovske Bohunice drawn from regular monthly reports. The analysis of the data was oriented mainly to determining reliability indexes for partial technological assemblies, and made it possible to determine operating and reliability indexes for an average nuclear power plant with 440, 500 and 1000 MW units. The results of the analysis may be used for determining the reliability of the technological equipment of the WWER-1000 unit using the Bayes formula, derived assuming the applicability of total probability. (J.B.)

15. Comparative transcriptome analysis of four prymnesiophyte algae.

Science.gov (United States)

Koid, Amy E; Liu, Zhenfeng; Terrado, Ramon; Jones, Adriane C; Caron, David A; Heidelberg, Karla B

2014-01-01

Genomic studies of bacteria, archaea and viruses have provided insights into the microbial world by unveiling potential functional capabilities and molecular pathways. However, the rate of discovery has been slower among microbial eukaryotes, whose genomes are larger and more complex. Transcriptomic approaches provide a cost-effective alternative for examining genetic potential and physiological responses of microbial eukaryotes to environmental stimuli. In this study, we generated and compared the transcriptomes of four globally-distributed, bloom-forming prymnesiophyte algae: Prymnesium parvum, Chrysochromulina brevifilum, Chrysochromulina ericina and Phaeocystis antarctica. Our results revealed that the four transcriptomes possess a set of core genes that are similar in number and shared across all four organisms. The functional classifications of these core genes using the euKaryotic Orthologous Genes (KOG) database were also similar among the four study organisms. More broadly, when the frequencies of different cellular and physiological functions were compared with other protists, the species clustered by both phylogeny and nutritional modes. Thus, these clustering patterns provide insight into genomic factors relating to both evolutionary relationships as well as trophic ecology. This paper provides a novel comparative analysis of the transcriptomes of ecologically important and closely related prymnesiophyte protists and advances an emerging field of study that uses transcriptomics to reveal ecology and function in protists. PMID:24926657

16. Reliability analysis study of digital reactor protection system in nuclear power plant

International Nuclear Information System (INIS)

Digital I and C systems are generally believed to improve a plant's safety and reliability. The reliability analysis of digital I and C systems has become a research hotspot. The traditional fault tree method is one means of quantifying digital I and C system reliability. A typical digital protection system for an advanced reactor has been developed in this paper, for which a reliability evaluation is necessary for design demonstration. The construction of the typical digital protection system is introduced in the paper, and the application of FMEA and fault tree analysis to the reliability evaluation of the digital protection system is described. Reliability data and bypass logic modeling are two points given special attention in the paper. Because factors involving time sequence and feedback do not exist in an obvious way in the reactor protection system, the dynamic features of the digital system are not discussed. (authors)

17. An approach of HALT and Failure analysis for Product Reliability Improvement(An Application to controller for fan module)

International Nuclear Information System (INIS)

HALT (Highly Accelerated Life Test) is a new technology for reliability assurance. The merit of HALT is its short test time (about 3 to 7 days). This paper presents an application of HALT and FA (failure analysis) to improve the reliability of a fan module. Before HALT, some environmental test results were good, but we could not assure the reliability level of the test sample. Therefore, we chose HALT to compare the test sample with the same product from another leading company. After HALT, we found some defects (solder cracks, a cut capacitor lead, varistor burning, etc.) and applied FA techniques to improve the reliability of the fan module. After HALT and FA, we suggested some methods to improve the reliability of the module, and the manufacturer applied design changes and part replacement to the new fan module. A final HALT of the new fan module confirmed the reliability growth

18. Failure Analysis Methods for Reliability Improvement of Electronic Sensors

Directory of Open Access Journals (Sweden)

Swajeeth Pilot. Panchangam

2012-08-01

Full Text Available This paper has documented the common failure modes of electronic sensors. The effects of failure modes are studied in detail and these are classified based on their criticality and probability of occurrence. Methods for taking corrective actions for eliminating the occurrence of various failure modes are also proposed. The paper also addresses the FRACAS method and its effectiveness for reliability studies of sensors based on the real failure modes observed in practice. It is understood that the designer has an important role in elimination of the failure modes at the design stage itself. This is expected to result in reliability growth of sensor systems used in many critical systems such as space applications, nuclear power plants, chemical industries, etc.

19. Reliability analysis of digital instrumentation and control systems

International Nuclear Information System (INIS)

The NUREG-CR/6942 technical report proposed a Markov state transition model for the main feedwater valve (MFV) controller system as part of a Probabilistic Risk Assessment (PRA) of Digital Feedwater Control System (DFWCS). The proposed model extends the Markov model to allow the use of non-exponential distribution in the time to next output of the controller system responsible for maintaining the water level. This case study demonstrates the general application of semi-Markov process model for digital instrumentation and control systems. System failure probability and mission reliability measures are determined. (author)
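As a sketch of the state-transition idea (not the actual NUREG/CR-6942 model), a small discrete-time Markov chain with an absorbing failed state can be evaluated by repeated application of the transition matrix. The three states and all transition probabilities below are assumed for illustration only:

```python
def step(dist, P):
    """One discrete step of a Markov chain: new_dist = dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def failure_prob(P, steps, start=0, failed=-1):
    """Probability of occupying the absorbing failed state after
    `steps` transitions, starting from state `start`."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(steps):
        dist = step(dist, P)
    return dist[failed]

# Hypothetical 3-state controller model: OK -> degraded -> failed,
# with 'failed' absorbing. Transition probabilities are assumed.
P = [
    [0.990, 0.009, 0.001],  # OK
    [0.000, 0.950, 0.050],  # degraded
    [0.000, 0.000, 1.000],  # failed (absorbing)
]
```

A semi-Markov extension, as in the report, replaces the constant per-step probabilities with holding-time distributions that need not be exponential/geometric.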

20. A study in the reliability analysis method for nuclear power plant structures (I)

International Nuclear Information System (INIS)

Nuclear power plant structures may be exposed to aggressive environmental effects that may cause their strength and stiffness to decrease over their service life. Although the physics of these damage mechanisms are reasonably well understood and quantitative evaluation of their effects on time-dependent structural behavior is possible in some instances, such evaluations are generally very difficult and remain novel. The assessment of existing steel containments in nuclear power plants for continued service must provide quantitative evidence that they are able to withstand future extreme loads during a service period with an acceptable level of reliability. Rational methodologies to perform the reliability assessment can be developed from mechanistic models of structural deterioration, using time-dependent structural reliability analysis to take loading and strength uncertainties into account. The final goal of this study is to develop an analysis method for the reliability of containment structures. The cause and mechanism of corrosion are first clarified and a reliability assessment method has been established. By introducing the equivalent normal distribution, a procedure of reliability analysis which can determine the failure probabilities has been established. The influence of design variables on reliability and the relation between reliability and service life will be studied in the second year of this research

1. Comparative studies of uranium analysis methods using a spectrophotometer and a voltammeter

International Nuclear Information System (INIS)

Comparative studies of uranium analysis methods using a spectrophotometer and a voltammeter have been carried out. The objective of the experiment is to examine the reliability of the analysis methods and the instrument performance by evaluating the parameters linearity, accuracy, precision, and detection limit. Uranyl nitrate hexahydrate is used as the standard, and the sample is a solvent mixture of tributyl phosphate and kerosene containing uranium (from the phosphoric acid purification unit, Petrokimia Gresik). Uranium (U) is stripped from the sample using 0.5 N HNO3 and then analyzed with both instruments. Analysis of the standard shows that both methods give good linearity, with a correlation coefficient > 0.999. Spectrophotometry gives an accuracy of 99.34-101.05% with a relative standard deviation (RSD) of 1.03% and a detection limit (DL) of 0.05 ppm. Voltammetry gives an accuracy of 95.63-101.49% with an RSD of 3.91% and a detection limit (DL) of 0.509 ppm. The analysis of sludge samples gave significantly different results: spectrophotometry gives a U concentration of 4.445 ppm with an RSD of 6.74%, while voltammetry gives a U concentration of 7.693 ppm with an RSD of 19.53%. (author)
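The accuracy and precision figures quoted above (percent recovery against a standard, and relative standard deviation) follow from two short formulas; the helpers below are generic, and any numbers used with them are illustrative, not the paper's data:

```python
from statistics import mean, stdev

def recovery_percent(measured, true_value):
    """Accuracy expressed as percent recovery against a known standard."""
    return 100.0 * measured / true_value

def rsd_percent(values):
    """Relative standard deviation (precision) in percent,
    using the sample standard deviation."""
    return 100.0 * stdev(values) / mean(values)
```

For example, replicate readings of 9, 10, and 11 ppm against a 10 ppm standard give a mean recovery of 100% with an RSD of 10%.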

2. Time-dependent reliability analysis of flood defences

International Nuclear Information System (INIS)

This paper describes the underlying theory and a practical process for establishing time-dependent reliability models for components in a realistic and complex flood defence system. Though time-dependent reliability models have been applied frequently in, for example, the offshore, structural safety and nuclear industries, application in the safety-critical field of flood defence has to date been limited. The modelling methodology involves identifying relevant variables and processes, characterisation of those processes in appropriate mathematical terms, numerical implementation, parameter estimation and prediction. A combination of stochastic, hierarchical and parametric processes is employed. The approach is demonstrated for selected deterioration mechanisms in the context of a flood defence system. The paper demonstrates that this structured methodology enables the definition of credible statistical models for the time-dependence of flood defences in data-scarce situations. In the application of those models, one of the main findings is that the time variability in the deterioration process tends to be governed by the time-dependence of one or a small number of critical attributes. It is demonstrated how the need for further data collection depends upon the relevance of the time-dependence in the performance of the flood defence system.
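One way to sketch such a time-dependent reliability model is a Monte Carlo loop in which resistance degrades deterministically while the annual maximum load is random. The linear degradation law and normal load model below are illustrative assumptions, far simpler than the hierarchical processes the paper uses:

```python
import random

def life_failure_prob(r0, degrade_rate, load_mean, load_sd,
                      years, trials=8000, seed=0):
    """Monte Carlo sketch of time-dependent reliability: resistance
    degrades linearly each year; failure occurs the first year the
    annual maximum load exceeds the current resistance. Parameters
    are illustrative, not calibrated values."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        for year in range(1, years + 1):
            resistance = r0 - degrade_rate * year
            if rng.gauss(load_mean, load_sd) > resistance:
                fails += 1
                break
    return fails / trials
```

Because resistance shrinks each year, lifetime failure probability grows faster than the single-year value, which is exactly why time-dependence matters for assets with long service periods.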

3. On Bayesian reliability analysis with informative priors and censoring

International Nuclear Information System (INIS)

In the statistical literature many methods have been presented to deal with censored observations, both within the Bayesian and non-Bayesian frameworks, and such methods have been successfully applied to, e.g., reliability problems. Also, in reliability theory it is often emphasized that, owing to a shortage of statistical data and limited possibilities for experiments, one often needs to rely heavily on the judgements of engineers, or other experts, for which Bayesian methods are attractive. It is therefore important that such judgements can be elicited easily to provide informative prior distributions that reflect the knowledge of the engineers well. In this paper we focus on this aspect, especially on the situation that the judgements of the consulted engineers are based on experiences in environments where censoring has also been present previously. We suggest the use of the attractive interpretation of hyperparameters of conjugate prior distributions when these are available for assumed parametric models for lifetimes, and we show how one may go beyond the standard conjugate priors, using similar interpretations of hyperparameters, to enable easier elicitation when censoring has been present in the past. This may even lead to more flexibility for modelling prior knowledge than when using standard conjugate priors, whereas the disadvantage of more complicated calculations that may be needed to determine posterior distributions plays a minor role due to the advanced mathematical and statistical software that is widely available these days
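The "attractive interpretation of hyperparameters" mentioned above can be illustrated with the textbook gamma-exponential conjugate pair: the prior acts like alpha0 pseudo-failures observed over beta0 units of pseudo-exposure, and right-censored units contribute exposure time but no failure count. A minimal sketch under those standard assumptions (this is the generic conjugate update, not the paper's extended construction):

```python
def gamma_posterior(alpha0, beta0, failure_times, censor_times):
    """Conjugate update for an exponential lifetime model with rate
    lambda ~ Gamma(alpha0, beta0). Failures add to both the count and
    the exposure; right-censored units add exposure only."""
    alpha = alpha0 + len(failure_times)
    beta = beta0 + sum(failure_times) + sum(censor_times)
    return alpha, beta

def posterior_mean_rate(alpha, beta):
    """Posterior mean of the failure rate under the gamma posterior."""
    return alpha / beta
```

Note how ignoring the censored exposure would overstate the failure rate, which is the practical reason censoring must enter the elicitation.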

4. Reliability of three-dimensional gait analysis in adults with acquired incomplete spinal cord injury

OpenAIRE

Wedege, Pia

2013-01-01

Background: Incomplete spinal cord injury (SCI) results in varying degrees of gait impairments. Three-dimensional (3D) gait analysis has been recommended as part of a standardised gait assessment for individuals with incomplete SCI. However, reliability of 3D gait analysis has not been established for this population. The aim of the present study was to investigate intra- and inter-session reliability of gait kinematics in a group of individuals with incomplete SCI. We also sought to estimate...

5. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

CERN Document Server

Nikulin, M; Mesbah, M; Limnios, N

2004-01-01

Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

6. Comparative Genome Analysis of Basidiomycete Fungi

Energy Technology Data Exchange (ETDEWEB)

Riley, Robert; Salamov, Asaf; Morin, Emmanuelle; Nagy, Laszlo; Manning, Gerard; Baker, Scott; Brown, Daren; Henrissat, Bernard; Levasseur, Anthony; Hibbett, David; Martin, Francis; Grigoriev, Igor

2012-03-19

Fungi of the phylum Basidiomycota (basidiomycetes) make up some 37 percent of the described fungi, and are important in forestry, agriculture, medicine, and bioenergy. This diverse phylum includes the mushrooms, wood rots, symbionts, and plant and animal pathogens. To better understand the diversity of phenotypes in basidiomycetes, we performed a comparative analysis of 35 basidiomycete fungi spanning the diversity of the phylum. Phylogenetic patterns of lignocellulose-degrading genes suggest a continuum rather than a sharp dichotomy between the white rot and brown rot modes of wood decay. Patterns of secondary metabolic enzymes give additional insight into the broad array of phenotypes found in the basidiomycetes. We suggest that the profile of an organism in lignocellulose-targeting genes can be used to predict its nutritional mode, and we predict Dacryopinax sp. to be a brown rot and Botryobasidium botryosum and Jaapia argillacea to be white rots.

7. Compare containment subcompartment analysis code evaluation

International Nuclear Information System (INIS)

Nuclear power plant subcompartment analyses are required to determine the containment pressure distribution that might result from a loss-of-coolant accident. The pressure distribution is used to calculate structural and mechanical design loads. The COMPARE code is used widely to perform subcompartment analysis. However, several simplifying assumptions are utilized to facilitate solution of the complex transient, two-phase, multidimensional flow problem. In particular, it is assumed that the flow is homogeneous, in thermodynamic equilibrium, and one-dimensional. In this study, these assumptions are evaluated by performing simplified transport and relaxation analyses. This results in definition of (a) geometric features and early-time periods that produce significant deviations from reality and (b) specific areas that require further study

8. Construction QA/QC systems: comparative analysis

International Nuclear Information System (INIS)

An analysis which compares the quality assurance/quality control (QA/QC) systems adopted in the highway, nuclear power plant, and U.S. Navy construction areas with the traditional quality control approach used in building construction is presented. Full participation and support by the owner as well as the contractor and AE firm are required if a QA/QC system is to succeed. Process quality control, acceptance testing and quality assurance responsibilities must be clearly defined in the contract documents. The owner must audit these responsibilities. A contractor quality control plan, indicating the tasks which will be performed and the fact that QA/QC personnel are independent of project time/cost pressures, should be submitted for approval. The architect must develop realistic specifications which consider the natural variability of material. Acceptance criteria based on the random sampling technique should be used. 27 refs

9. Comparative proteomic analysis of compartmentalised Ras signalling.

Science.gov (United States)

2015-01-01

Ras proteins are membrane bound signalling hubs that operate from both the cell surface and endomembrane compartments. However, the extent to which intracellular pools of Ras can contribute to cell signalling is debated. To address this, we have performed a global screen of compartmentalised Ras signalling. We find that whilst ER/Golgi- and endosomal-Ras only generate weak outputs, Ras localised to the mitochondria or Golgi significantly and distinctly influence both the abundance and phosphorylation of a wide range of proteins analysed. Our data reveal that ~80% of phosphosites exhibiting large (≥1.5-fold) changes compared to control can be modulated by organellar Ras signalling. The majority of compartmentalised Ras-specific responses are predicted to influence gene expression, RNA splicing and cell proliferation. Our analysis reinforces the concept that compartmentalisation influences Ras signalling and provides detailed insight into the widespread modulation of responses downstream of endomembranous Ras signalling. PMID:26620772

10. Reliability analysis of hierarchical computer-based systems subject to common-cause failures

International Nuclear Information System (INIS)

The results from reliability modeling and analysis are key contributors to design and tuning activities for computer-based systems. Each architecture style, however, poses different challenges for which analytical approaches must be developed or modified. The challenge we address in this paper is the reliability analysis of hierarchical computer-based systems (HS) with common-cause failures (CCF). The dependencies among components introduced by CCF complicate the reliability analysis of HS, especially when components affected by a common cause exist on different hierarchical levels. We propose an efficient decomposition and aggregation (EDA) approach for incorporating CCF into the reliability evaluation of HS. Our approach is to decompose an original HS reliability analysis problem with CCF into a number of reduced reliability problems freed from the CCF concerns. The approach is represented in a dynamic fault tree by a proposed CCF gate modeled after the functional dependency gate. We present the basics of the EDA approach by working through a hypothetical analysis of an HS subject to CCF and show how it can be extended to an analysis of a hierarchical phased-mission system subject to different CCF depending on mission phases
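To illustrate why common-cause failures complicate redundancy analysis, the following sketch uses a simple beta-factor CCF model for a 1-out-of-2 redundant pair. The failure rates, beta fraction, and mission time are illustrative assumptions, not values from the paper, and the beta-factor model is a standard stand-in for the paper's EDA/fault-tree treatment.

```python
import math

# Hypothetical parameters: per-hour component failure rate, CCF fraction
# (beta-factor model), and mission time in hours.
lam, beta, t = 1e-4, 0.1, 1000.0
lam_ind = (1 - beta) * lam   # independent portion of each component's rate
lam_ccf = beta * lam         # shared common-cause portion

# 1-out-of-2 redundancy fails if both components fail independently,
# or if the common cause strikes.
p_ind = (1 - math.exp(-lam_ind * t)) ** 2
p_sys_fail = p_ind + (1 - p_ind) * (1 - math.exp(-lam_ccf * t))

# Ignoring CCF treats the components as fully independent and is
# optimistic about system failure probability.
p_naive = (1 - math.exp(-lam * t)) ** 2
```

Even with a modest beta of 0.1, the CCF-aware failure probability is noticeably larger than the naive independent estimate, which is exactly the dependency effect the EDA approach must account for across hierarchy levels.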

11. Update of the human reliability analysis for a nuclear power plant

International Nuclear Information System (INIS)

Human reliability analysis is a systematic framework which includes the process of evaluating human performance and its associated impacts on the structures, systems and components of a complex facility. Updating the human reliability analysis in the Probabilistic Safety Assessment of a nuclear power plant requires the development of an overall method for the analysis. The update is needed because the original human reliability analysis was performed years ago; since then, the methods have been improved, the requirements for performing the analyses have changed, and additional good practice has been gained. The method for updating the human reliability analysis is developed with consideration of the current requirements and good practice. Selected features of existing methods and selected specific features are introduced into the method. The evaluation is performed and the preliminary results of the human reliability analysis are introduced into the probabilistic safety assessment model. The preliminary results of evaluating the probabilistic safety assessment model identify the key risk contributors and the areas for possible improvement. (author)

12. Comparative analysis of selected fuel cell vehicles

Energy Technology Data Exchange (ETDEWEB)

NONE

1993-05-07

Vehicles powered by fuel cells operate more efficiently, more quietly, and more cleanly than internal combustion engines (ICEs). Furthermore, methanol-fueled fuel cell vehicles (FCVs) can utilize major elements of the existing fueling infrastructure of present-day liquid-fueled ICE vehicles (ICEVs). DOE has maintained an active program to stimulate the development and demonstration of fuel cell technologies in conjunction with rechargeable batteries in road vehicles. The purpose of this study is to identify and assess the availability of data on FCVs, and to develop a vehicle subsystem structure that can be used to compare FCVs and ICEVs from a number of perspectives: environmental impacts, energy utilization, materials usage, and life cycle costs. This report focuses on methanol-fueled FCVs and on ICEVs fueled by gasoline, methanol, and diesel fuel that are likely to be demonstrable by the year 2000. The comparative analysis presented covers four vehicles: two passenger vehicles and two urban transit buses. The passenger vehicles include an ICEV using either gasoline or methanol and an FCV using methanol. The FCV uses a Proton Exchange Membrane (PEM) fuel cell, an on-board methanol reformer, mid-term batteries, and an AC motor. The transit bus ICEV was evaluated for both diesel and methanol fuels. The transit bus FCV runs on methanol and uses a Phosphoric Acid Fuel Cell (PAFC), near-term batteries, a DC motor, and an on-board methanol reformer. 75 refs.

13. A survey on the human reliability analysis methods for the design of Korean next generation reactor

Energy Technology Data Exchange (ETDEWEB)

Lee, Yong Hee; Lee, J. W.; Park, J. C.; Kwack, H. Y.; Lee, K. Y.; Park, J. K.; Kim, I. S.; Jung, K. W

2000-03-01

Enhanced features through applying recent domestic technologies may characterize the safety and efficiency of the KNGR (Korea Next Generation Reactor). The human-engineered interface and control room environment are expected to be beneficial to the human aspects of the KNGR design. However, since the current methods for human reliability analysis have not been updated since THERP/SHARP, it is hard to assess the potential for human error due to both the positive and negative effects of the design changes in the KNGR. This is a state-of-the-art report on the human reliability analysis methods that are potentially available for application to the KNGR design. We surveyed all technical aspects of existing HRA methods and compared them in order to obtain the requirements for the assessment of human error potential within the KNGR design. We categorized more than 10 methods into the first and the second generation, following the suggestion of Dr. Hollnagel. THERP was revisited in detail. ATHEANA, proposed by the US NRC for advanced designs, and CREAM, proposed by Dr. Hollnagel, were reviewed and compared. We conclude that the key requirements include enhancement of the early steps for human error identification and of the quantification steps, with consideration of error shaping factors extending beyond PSFs (performance shaping factors). Utilizing the steps and approaches of ATHEANA and CREAM will be beneficial to attaining an appropriate HRA method for the KNGR. However, the steps and data from THERP will still be maintained for continuity with previous PSA activities in the KNGR design.

14. A survey on the human reliability analysis methods for the design of Korean next generation reactor

International Nuclear Information System (INIS)

Enhanced features through applying recent domestic technologies may characterize the safety and efficiency of the KNGR (Korea Next Generation Reactor). The human-engineered interface and control room environment are expected to be beneficial to the human aspects of the KNGR design. However, since the current methods for human reliability analysis have not been updated since THERP/SHARP, it is hard to assess the potential for human error due to both the positive and negative effects of the design changes in the KNGR. This is a state-of-the-art report on the human reliability analysis methods that are potentially available for application to the KNGR design. We surveyed all technical aspects of existing HRA methods and compared them in order to obtain the requirements for the assessment of human error potential within the KNGR design. We categorized more than 10 methods into the first and the second generation, following the suggestion of Dr. Hollnagel. THERP was revisited in detail. ATHEANA, proposed by the US NRC for advanced designs, and CREAM, proposed by Dr. Hollnagel, were reviewed and compared. We conclude that the key requirements include enhancement of the early steps for human error identification and of the quantification steps, with consideration of error shaping factors extending beyond PSFs (performance shaping factors). Utilizing the steps and approaches of ATHEANA and CREAM will be beneficial to attaining an appropriate HRA method for the KNGR. However, the steps and data from THERP will still be maintained for continuity with previous PSA activities in the KNGR design

15. Reliability analysis of discrete event dynamic systems with Petri nets

International Nuclear Information System (INIS)

This paper deals with dynamic reliability of embedded systems. It presents a method for deriving feared scenarios (which might lead the system to a critical situation) in Petri nets. A classical way to obtain scenarios in Petri nets is to generate the reachability graph. However, for complex systems, it leads to the state space explosion. To avoid this problem, in our approach, Petri net reachability is translated into provability of linear logic sequents. Linear logic bases are introduced and used to formally define scenarios and minimality of scenarios. These definitions allow the method to produce only pertinent scenarios. The steps of the method are described and illustrated through a landing-gear system example.

16. Emergency diesel generator reliability analysis high flux isotope reactor

International Nuclear Information System (INIS)

A program to apply some of the techniques of reliability engineering to the High Flux Isotope Reactor (HFIR) was started on August 8, 1992. Part of the program was to track the conditional probabilities of the emergency diesel generators responding to a valid demand. This was done to determine whether the performance of the emergency diesel generators (which are more than 25 years old) has deteriorated. The conditional probabilities of the diesel generators were computed and trended for the period from May 1990 to December 1992. The calculations indicate that the performance of the emergency diesel generators has not deteriorated in recent years, i.e., their conditional probabilities have been fairly stable over the last few years. This information will be one factor that may be considered in the decision to replace the emergency diesel generators
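One common way to estimate and trend a conditional start probability from demand data like this is a Beta-binomial update; the sketch below shows that idea with made-up demand counts (the HFIR report's actual data are not reproduced here).

```python
# Hypothetical demand data: successful starts out of valid demands in a
# trending window (illustrative counts, not HFIR figures).
successes, demands = 58, 60

# Bayesian update with a uniform Beta(1, 1) prior: posterior is
# Beta(1 + successes, 1 + failures).
a = 1 + successes
b = 1 + (demands - successes)

# Posterior mean of the conditional probability of starting on demand.
p_start = a / (a + b)
```

Recomputing `p_start` over successive windows and plotting it against time is one simple way to see whether performance is "fairly stable", as the abstract concludes for the HFIR diesels.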

17. Semantic Web for Reliable Citation Analysis in Scholarly Publishing

OpenAIRE

Tous, Ruben; Guerrero, Manel; Delgado, Jaime

2012-01-01

Analysis of the impact of scholarly artifacts is constrained by current unreliable practices in cross-referencing, citation discovering, and citation indexing and analysis, which have not kept pace with the technological advances that are occurring in several areas like knowledge management and security. Because citation analysis has become the primary component in scholarly impact factor calculation, and considering the relevance of this metric within both the scholarly publishing value chai...

18. Markov Chains and reliability analysis for reinforced concrete structure service life

Scientific Electronic Library Online (English)

Edna, Possan; Jairo José de Oliveira, Andrade.

2014-06-01

Full Text Available From field studies and the literature, it was found that the degradation of concrete over time can be modelled probabilistically using homogeneous Markov Chains. To confirm this finding, this study presents an application of Markov Chains associated with the reliability analysis of experimental results on the degradation of concrete by chlorides. Experimental results were obtained for chloride penetration in non-accelerated tests on concretes with a variable water/binder ratio (0.40, 0.50 and 0.60), produced with Pozzolanic Portland cement and exposed for six months to the action of NaCl. Using a simulation process, the failure and safety probabilities were calculated by reliability analysis, and using Markov Chains the service life (the corrosion initiation period) was estimated. Compared to the concrete structure itself, the average error of the service life predicted using Markov Chains was approximately 14%. The results show a promising methodology that, combined with the determination of concrete cover thickness, can be applied according to the required service life.
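The homogeneous-Markov-Chain idea can be sketched as repeated multiplication of a state distribution by a fixed transition matrix, with corrosion initiation as an absorbing failure state. The states and yearly transition probabilities below are illustrative assumptions, not the study's fitted values.

```python
# States: 0 = sound concrete, 1 = chloride ingress, 2 = corrosion initiated.
# Hypothetical yearly transition matrix (rows sum to 1); state 2 is absorbing.
P = [
    [0.90, 0.10, 0.00],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
]

def step(dist, P):
    """One Markov transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]            # structure starts sound
for year in range(50):            # propagate over a 50-year horizon
    dist = step(dist, P)

failure_prob = dist[2]            # P(corrosion initiated by year 50)
reliability = 1.0 - failure_prob  # safety probability at year 50
```

Reading off the year at which `failure_prob` crosses an acceptable threshold gives a service-life estimate in the spirit of the study's corrosion-initiation period.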

19. Analysis of structure reliability on beam using fuzzy finite element method

Directory of Open Access Journals (Sweden)

A. K. Ariffin

2013-03-01

Full Text Available The main requirement in designing a structure is to ensure that the structure is reliable enough to withstand any loading. In the real world, however, the presence of uncertainties in the input variables reduces the accuracy of the calculated structural reliability. The purpose of this study is to determine the structural reliability with consideration of the uncertainties involved. The developed simulation method combines fuzzy set theory with the finite element method, followed by a margin of safety based on the yield strength, to assess structural reliability. This method is then used to analyze a given beam structure under loading, made of Aluminium 2024-T4. In this study, the section modulus s and the loading w are used as fuzzy parameters. In conclusion, the combination of fuzzy set theory with the finite element method plays an important role in determining structural reliability in the real world.
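The fuzzy-parameter idea can be sketched with alpha-cut interval arithmetic: an interval of possible bending moments and section moduli is propagated through the bending-stress formula and checked against the yield strength. The numbers below (moment interval, section modulus interval, and a typical 2024-T4 yield strength of roughly 325 MPa) are illustrative assumptions, and this interval sketch stands in for the paper's full fuzzy finite element analysis.

```python
YIELD_2024_T4 = 325e6  # Pa; approximate yield strength of Aluminium 2024-T4

def stress_interval(moment_interval, s_interval):
    """Bending stress sigma = M / s propagated over intervals (one alpha-cut)."""
    m_lo, m_hi = moment_interval
    s_lo, s_hi = s_interval
    # Minimum stress: smallest moment over largest modulus; maximum: the reverse.
    return (m_lo / s_hi, m_hi / s_lo)

# Alpha-cut of the fuzzy inputs: bending moment [N*m], section modulus [m^3]
sigma_lo, sigma_hi = stress_interval((9e3, 11e3), (4.5e-5, 5.5e-5))

margin_lo = YIELD_2024_T4 - sigma_hi   # worst-case margin of safety
reliable = margin_lo > 0               # reliable if even worst case has margin
```

Repeating this over several alpha-cuts builds up a fuzzy margin of safety, which is the reliability measure the study bases on yield strength.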

20. Using the HRA Calculator in Human Reliability Analysis done with Methodology Described in NUREG-1921

International Nuclear Information System (INIS)

The HRA Calculator is a tool designed to help prepare and document the human reliability analysis in a fire probabilistic safety assessment (PSA). It collects the tasks required to develop and quantify human error probabilities in accordance with the methodology described in NUREG-1921. The HRA Calculator is a database that includes the tasks indicated in NUREG-1921 for human reliability analysis. For the quantitative analysis task, the HRA Calculator includes several quantification methods for human actions performed before and after an accident. One of the main advantages of the HRA Calculator is the systematization and standardization of fire human reliability analysis. It is also a tool that allows more objective criteria to be used to define and quantify human actions, so that the models capture, to the extent possible, the reality of the plant as it is operated. (Author)

1. Analysis methods for structure reliability of piping components

International Nuclear Information System (INIS)

In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

2. Reactor scram experience for shutdown system reliability analysis

International Nuclear Information System (INIS)

Scram experience in a number of operating light water reactors has been reviewed. The date and reactor power of each scram was compiled from monthly operating reports and personal communications with the operating plant personnel. The average scram frequency from "significant" power (defined as P_trip/P_max greater than approximately 20 percent) was determined as a function of operating life. This relationship was then used to estimate the total number of reactor trips from above approximately 20 percent of full power expected to occur during the life of a nuclear power plant. The shape of the scram frequency vs. operating life curve resembles a typical reliability bathtub curve (failure rate vs. time), but without a rising "wearout" phase, owing to the lack of operating data near the end of plant design life. In this case the failures are represented by "bugs" in the plant system design, construction, and operation which lead to scram. The number of scrams appears to level out at an average of around three per year; the standard deviations from the mean value indicate an uncertainty of about 50 percent. The total number of scrams from significant power that could be expected in a plant designed for a 40-year life would be about 130 if no wearout phase develops near the end of life
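The lifetime estimate above amounts to integrating a bathtub-like scram rate over the design life. The rate function below (an early burn-in excess decaying onto a steady ~3 scrams/year) is an illustrative assumption chosen only to reproduce the shape the abstract describes, not the report's fitted curve.

```python
import math

def scram_rate(t_years):
    """Scrams per year: burn-in excess decaying toward a constant base rate."""
    return 3.0 + 7.0 * math.exp(-t_years)   # ~10/yr at startup, levels at 3/yr

# Integrate the rate over a 40-year design life (trapezoidal rule, monthly steps).
steps = 480
dt = 40.0 / steps
total = sum(0.5 * (scram_rate(k * dt) + scram_rate((k + 1) * dt)) * dt
            for k in range(steps))
```

With these assumed parameters the integral comes out near 127 scrams, consistent in magnitude with the roughly 130 lifetime scrams quoted in the abstract.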

3. The first Superphenix fuel load reliability analysis and validation

International Nuclear Information System (INIS)

The excellent behavior of PHENIX driver fuel and the burnup values currently reached suggest that the first SUPERPHENIX fuel load will meet the design lifetime. However, to ensure the reliability of the entire load, all the parameters affecting fuel behavior in reactor must be analyzed. For that purpose, we have taken into account all the results of the examination and verifications during the fabrication process of the first load subassemblies. These data concern geometrical parameters or oxide composition as well as the cladding tube and plug weld soundness tests. The objective is to determine the actual dispersion of all the parameters to ensure the absence of failure due to fabrication defects with very high statistical confidence limits. The influence of all the parameters has been investigated for the situations which can occur during power-up, steady-state operation and transients. The fabrication quality allows us to demonstrate that in all cases good behavior criteria for fuel and structure will be maintained. This demonstration is based on calculation code results as well as on validation by specific experiments

4. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

Science.gov (United States)

Putten, Jim Vander; Nolen, Amanda L.

2010-01-01

This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

5. Reliability analysis of an LCL tuned track segmented bi-directional inductive power transfer system

DEFF Research Database (Denmark)

Asif Iqbal, S. M.; Madawala, U. K.

2013-01-01

The Bi-directional Inductive Power Transfer (BDIPT) technique is suitable for renewable-energy-based applications such as electric vehicles (EVs) and the implementation of vehicle-to-grid (V2G) systems. Recently, researchers have made more efforts to improve both the efficiency and the reliability of renewable energy systems to further enhance their economic sustainability. This paper presents a comparative reliability study between a typical BDIPT system and an individually controlled segmented BDIPT system. Steady-state thermal simulation results are provided at different output power levels for a 1.5 kW BDIPT system in a MATLAB/Simulink environment. Reliability parameters such as failure rate and mean time between failures (MTBF) are compared between the two systems. A nonlinear programming (NP) model is developed for optimizing the charging schedule of a stationary EV. A case study of optimum EV charging over a 24-hour period is provided, indicating minimum cost and higher reliability.
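The failure-rate and MTBF comparison can be sketched with a series reliability model, where component failure rates add and MTBF is the reciprocal of the system rate. The per-switch rates below are hypothetical stand-ins for the thermally derived values in the paper; the only point carried over is that the segmented design's cooler devices have lower rates.

```python
def mtbf_hours(failure_rates_per_hour):
    """Series system: component failure rates add; MTBF = 1 / lambda_system."""
    lam_system = sum(failure_rates_per_hour)
    return 1.0 / lam_system

# Hypothetical per-device failure rates [failures/hour] for 8 power switches.
typical   = [2.0e-6] * 8   # single full-power converter, hotter junctions
segmented = [1.2e-6] * 8   # segmented track, derated/cooler switches

mtbf_typical   = mtbf_hours(typical)
mtbf_segmented = mtbf_hours(segmented)
# Lower junction temperatures in the segmented design yield a longer MTBF.
```

Under these assumptions the typical system's MTBF is 1/(16e-6) = 62,500 hours, while the segmented system's is about 104,000 hours, illustrating the direction of the comparison reported in the paper.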

6. Comparative analysis of haplotype association mapping algorithms

Directory of Open Access Journals (Sweden)

Pletcher Mathew T

2006-02-01

Full Text Available Abstract Background Finding the genetic causes of quantitative traits is a complex and difficult task. Classical methods for mapping quantitative trait loci (QTL) in mice use an F2 cross between two strains with substantially different phenotypes and an interval mapping method to compute confidence intervals at each position in the genome. This process requires significant resources for breeding and genotyping, and the data generated are usually only applicable to one phenotype of interest. Recently, we reported the application of a haplotype association mapping method which utilizes dense genotyping data across a diverse panel of inbred mouse strains and a marker association algorithm that is independent of any specific phenotype. As the availability of genotyping data grows in size and density, analysis of these haplotype association mapping methods should be of increasing value to the statistical genetics community. Results We describe a detailed comparative analysis of variations on our marker association method. In particular, we describe the use of inferred haplotypes from adjacent SNPs, parametric and nonparametric statistics, and control of multiple testing error. These results show that nonparametric methods are slightly better in the test cases we study, although the choice of test statistic may often be dependent on the specific phenotype and haplotype structure being studied. The use of multi-SNP windows to infer local haplotype structure is critical to the use of a diverse panel of inbred strains for QTL mapping. Finally, because the marginal effect of any single gene in a complex disease is often relatively small, these methods require the use of sensitive methods for controlling family-wise error. We also report our initial application of this method to phenotypes cataloged in the Mouse Phenome Database. Conclusion The use of inbred strains of mice for QTL mapping has many advantages over traditional methods. 
However, there are also limitations in comparison to the traditional linkage analysis from F2 and RI lines. Application of these methods requires careful consideration of algorithmic choices based on both theoretical and practical factors. Our findings suggest general guidelines, though a complete evaluation of these methods can only be performed as more genetic data in complex diseases becomes available.

7. Comparative analysis of safety related site characteristics

Energy Technology Data Exchange (ETDEWEB)

2010-12-15

This document presents a comparative analysis of site characteristics related to long-term safety for the two candidate sites for a final repository for spent nuclear fuel in Forsmark (municipality of Oesthammar) and in Laxemar (municipality of Oskarshamn) from the point of view of site selection. The analyses are based on the updated site descriptions of Forsmark /SKB 2008a/ and Laxemar /SKB 2009a/, together with associated updated repository layouts and designs /SKB 2008b and SKB 2009b/. The basis for the comparison is thus two equally and thoroughly assessed sites. However, the analyses presented here are focussed on differences between the sites rather than evaluating them in absolute terms. The document serves as a basis for the site selection, from the perspective of long-term safety, in SKB's application for a final repository. A full evaluation of safety is made for a repository at the selected site in the safety assessment SR-Site /SKB 2011/, referred to as SR-Site main report in the following

8. Comparative analysis of safety related site characteristics

International Nuclear Information System (INIS)

This document presents a comparative analysis of site characteristics related to long-term safety for the two candidate sites for a final repository for spent nuclear fuel in Forsmark (municipality of Oesthammar) and in Laxemar (municipality of Oskarshamn) from the point of view of site selection. The analyses are based on the updated site descriptions of Forsmark /SKB 2008a/ and Laxemar /SKB 2009a/, together with associated updated repository layouts and designs /SKB 2008b and SKB 2009b/. The basis for the comparison is thus two equally and thoroughly assessed sites. However, the analyses presented here are focussed on differences between the sites rather than evaluating them in absolute terms. The document serves as a basis for the site selection, from the perspective of long-term safety, in SKB's application for a final repository. A full evaluation of safety is made for a repository at the selected site in the safety assessment SR-Site /SKB 2011/, referred to as SR-Site main report in the following

9. COMPARATIVE ANALYSIS ON COMPETITIVENESS: ROMANIA VS. BULGARIA

Directory of Open Access Journals (Sweden)

Bogdan-Daniel, FLOROIU

2014-11-01

Full Text Available The research developed in the present work is mainly aimed at a comparative analysis of competitiveness in Romania and Bulgaria. We want to raise a warning: the situation is all the more worrying because, according to the latest official reports on global and regional competitiveness, Romania has been overtaken by Bulgaria, which will have a negative impact on Romania's development in the medium and long term unless urgent action is taken to redress the situation. Although, upon entry into the EU, competitiveness became a national priority for both Romania and Bulgaria, transposed into national development programs and EU competitiveness operational programs, the figures show that Romania had a very low absorption of European funds during 2007-2013, two times less than Bulgaria absorbed. The target set by the EU Strategy is that, by 2020, investment in research, development and innovation, made by both government and the private sector, should represent 2% of GDP for Romania and 1.5% of GDP for Bulgaria. In these circumstances, Romania must develop macroeconomic policies to stimulate economic competitiveness, develop all regions, and attract foreign direct investment and external grants. Bulgaria faces two major challenges: to accelerate the growth rate and to make it sustainable. Bulgaria has to ensure conditions for innovation and the realization of human capital, including the development of the regions.

10. Independent Fiscal Institutions: A Comparative Analysis

Directory of Open Access Journals (Sweden)

Patrizia MAGARÒ

2013-06-01

Full Text Available The sovereign debt crisis and the new legal framework of European economic governance have forced most EU countries to adopt stricter fiscal rules. In order to support budget decisions and fiscal policy choices on a strictly technical level, Independent Fiscal Institutions have often been set up in Europe. The present study aims to develop a comparative analysis of Independent Fiscal Institutions in order to better understand the role given to these public bodies in different countries. The effectiveness of so-called “fiscal watchdogs” depends on their independence and the reputation they are able to build. They are or become strong if their creation follows the path of the country’s constitutional traditions and is compatible with the specific political context. Taking particular account of some (mostly European) experiences, the paper discusses the connections between introducing an Independent Fiscal Institution and reinforcing public policy evaluation activities, especially in Parliaments. The development of a culture of evaluation could in fact better ensure the accountability of Government, allowing legislative Assemblies to perform more efficient oversight of all public policies, among which fiscal policy is considered the most important.

11. Network and Internetwork a compared Multiwavelength Analysis

CERN Document Server

Cauzzi, G; Falciani, R

2000-01-01

We analyze the temporal behavior of Network Bright Points (NBPs), present in the solar atmosphere, using a set of data acquired during coordinated observations between ground-based observatories (mainly at the NSO/Sacramento Peak) and the Michelson Doppler Imager onboard SOHO. We find that, at any time during the observational sequence, all the NBPs visible in the NaD2 images are co-spatial within 1 arcsec with locations of enhanced magnetic field. In analogy with the Ca II K line, the NaD2 line center emission can be used as a proxy for magnetic structures. We also compare the oscillation properties of NBPs and internetwork areas. At photospheric levels no differences between the two structures are found in power spectra, but analysis of phase and coherence spectra suggests the presence of downward propagating waves in the internetwork. At chromospheric levels some differences are evident in the power spectrum between NBPs and internetwork. The power spectrum of NBPs at the Halpha core wavelength sho...

12. A Comparative Analysis on Mining Frequent Itemsets

Directory of Open Access Journals (Sweden)

D.Kerana Hanirex

2012-12-01

Full Text Available Research on mining frequent itemsets is one of the emerging tasks in data mining. The purchasing of one product when another product is purchased represents an association rule. Association rules are useful for analyzing customer behavior and play an important part in shopping basket data analysis and clustering. The FP-Growth and Apriori algorithms are the basic algorithms for mining association rules. This paper presents an efficient algorithm for mining frequent itemsets using a Two Dimensional Transactions Reduction (TDTR) approach, which reduces the original database transactions (D) to the reduced database transactions (D1) based on the min_sup count. Then, for each item, it finds the number of transactions in which the item is present and hence finds the largest frequent itemset using the two-dimensional approach. Using the largest itemset property, it finds the subsets of frequent itemsets. Thus the TDTR approach reduces the number of scans of the database and hence improves efficiency and accuracy by finding the number of association rules and reducing the time to find the rules. The proposed approach is compared for efficiency with the traditional Apriori and FP-Growth algorithms.
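The baseline that schemes like TDTR improve on is support counting against a min_sup threshold. The sketch below is a naive support-counting enumeration on toy market-basket data, not the paper's TDTR implementation or a full Apriori with candidate pruning; it only illustrates what "frequent itemset" and "min_sup" mean.

```python
from itertools import combinations

# Toy market-basket database (illustrative transactions).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]
min_sup = 2  # minimum number of supporting transactions

def frequent_itemsets(transactions, min_sup):
    """Enumerate itemsets by size, keeping those with support >= min_sup."""
    items = sorted(set().union(*transactions))
    frequent = {}
    k = 1
    while True:
        found = False
        for cand in combinations(items, k):
            support = sum(set(cand) <= t for t in transactions)
            if support >= min_sup:
                frequent[cand] = support
                found = True
        if not found:          # no frequent k-itemsets => none larger either
            break
        k += 1
    return frequent

freq = frequent_itemsets(transactions, min_sup)
```

On this data every single item and every pair is frequent, but the 3-itemset {bread, butter, milk} appears in only one transaction and is pruned; counting how much of this enumeration can be skipped is where transaction-reduction approaches gain their efficiency.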

13. Report on the analysis of field data relating to the reliability of solar hot water systems.

Energy Technology Data Exchange (ETDEWEB)

Menicucci, David F. (Building Specialists, Inc., Albuquerque, NM)

2011-07-01

Utilities are overseeing the installation of thousands of solar hot water (SHW) systems. Utility planners have begun to ask for quantitative measures of the expected lifetimes of these systems so that they can properly forecast their loads. This report, which augments a 2009 reliability analysis effort by Sandia National Laboratories (SNL), addresses this need. Additional reliability data have been collected, added to the existing database, and analyzed, and the results are presented. Additionally, formal reliability theory is described, including the bathtub curve, which is the most common model for characterizing the lifetime reliability of systems and for predicting failures in the field. Reliability theory is used to assess the SNL reliability database. This assessment shows that the database is heavily weighted with data describing the reliability of SHW systems early in their lives, during the warranty period, but contains few measured data describing the ends of SHW systems' lives. End-of-life data are the most critical for defining the reliability of SHW systems well enough to answer the questions that the utilities pose. Several ideas are presented for collecting the required data, including photometric analysis of aerial photographs of installed collectors, statistical and neural network analysis of energy bills from solar homes, and the development of simple algorithms to allow conventional SHW controllers to announce system failures and record the details of the event, similar to how aircraft black box recorders perform. Some information is also presented about public expectations for the longevity of an SHW system, information that is useful in developing reliability goals.
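The bathtub curve mentioned above is commonly modeled piecewise with Weibull hazard functions, whose shape parameter selects the phase: a shape below 1 gives the decreasing infant-mortality rate, 1 gives the constant useful-life rate, and above 1 gives the increasing wear-out rate. A minimal sketch (the parameter values are illustrative, not from the SNL database):

```python
def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) of a Weibull(beta, eta) lifetime model."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# The three bathtub phases, with assumed shape parameters:
for label, beta in [("infant mortality", 0.5), ("useful life", 1.0), ("wear-out", 3.0)]:
    rates = [weibull_hazard(t, beta, eta=10.0) for t in (1.0, 5.0, 9.0)]
    print(label, [round(r, 4) for r in rates])
```

With few end-of-life (wear-out) data points, the right-hand side of this curve is exactly the part the report says cannot yet be fitted for SHW systems.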

14. An advanced human reliability analysis methodology: analysis of cognitive errors focused on

International Nuclear Information System (INIS)

The conventional Human Reliability Analysis (HRA) methods such as THERP/ASEP, HCR and SLIM have been criticized for their deficiency in analysing cognitive errors that occur during the operator's decision-making process. In order to overcome this limitation of the conventional methods, an advanced HRA method, the so-called second-generation HRA method, comprising both qualitative analysis and quantitative assessment of cognitive errors, has been developed based on the state-of-the-art theory of cognitive systems engineering and error psychology. The method was developed on the basis of a human decision-making model and the relation between cognitive functions and performance influencing factors. The application of the proposed method to two emergency operation tasks is presented

15. An Implementation of Operational Experience Analysis for Addressing Human Reliability Analysis Issues in Nuclear Power Plants

International Nuclear Information System (INIS)

Human reliability analysis (HRA) is an integral part of probabilistic risk assessments (PRAs). Although various approaches and methods have been proposed since the first HRA was performed almost four decades ago, the technology associated with HRA is still not fully developed. The limitations of the existing HRA approaches become particularly apparent when the role of the human is examined in the context of nuclear power plants (NPPs). HRA approaches from the cognitive perspective try to take into consideration the operator, the system, and their interactions; cognitive models can help in analyzing the human mental processes that can lead to error. This study documents the implementation of operating experience analysis in the nuclear domain and describes future improvements of HRA approaches. The review provides a summary of the HRA literature in order to survey the field of HRA approaches, so that researchers may gain knowledge of the capabilities of the tools and an understanding of their strengths and weaknesses in various types of nuclear reactors

16. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

Science.gov (United States)

Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

2015-03-01

Reliability allocation of computerized numerical controlled (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponentially transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
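The paper's exact cubic transformed function is not given in the abstract. As a hedged illustration of the general idea only, the sketch below allocates a system failure rate in proportion to a cubic transform of FMEA severity-occurrence scores; the transform coefficient, the scores, and the system failure rate are all assumptions invented for the example.

```python
def cubic_transform(x, a=0.02):
    """Illustrative cubic transform of an FMEA score (assumed form, not the paper's)."""
    return a * x ** 3

def allocate_failure_rates(scores, system_lambda):
    """Split the system failure rate in proportion to transformed FMEA scores."""
    weights = [cubic_transform(s) for s in scores]
    total = sum(weights)
    return [system_lambda * w / total for w in weights]

# Hypothetical severity x occurrence products for four subsystems:
scores = [8, 5, 3, 2]
print(allocate_failure_rates(scores, system_lambda=1e-4))
```

The cubic growth makes high-criticality subsystems absorb a disproportionately large share of the allowed failure rate, which is the qualitative effect the transformed-FMEA approach relies on.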

17. Methodological Approach for Performing Human Reliability and Error Analysis in Railway Transportation System

Directory of Open Access Journals (Sweden)

Fabio De Felice

2011-10-01

Full Text Available Today, billions of dollars are being spent annually worldwide to develop, manufacture, and operate transportation systems such as trains, ships, aircraft, and motor vehicles. Around 70 to 90 percent of transportation crashes are, directly or indirectly, the result of human error. In fact, with the development of technology, system reliability has increased dramatically during the past decades, while human reliability has remained unchanged over the same period. Accordingly, human error is now considered the most significant source of accidents or incidents in safety-critical systems. The aim of the paper is to propose a methodological approach to improve the reliability of transportation systems, and in particular of railway transportation systems. The methodology presented is based on Failure Modes, Effects and Criticality Analysis (FMECA) and Human Reliability Analysis (HRA).
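FMECA commonly ranks failure modes by a risk priority number (RPN), the product of severity, occurrence, and detection ratings. A minimal sketch, with hypothetical railway failure modes and ratings invented for the example (not taken from the paper):

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMECA risk priority number; each factor is rated on a 1-10 scale."""
    return severity * occurrence * detection

# Hypothetical failure modes of a railway signalling subsystem (S, O, D):
modes = {
    "signal lamp burnout": (7, 4, 2),
    "track circuit false clear": (10, 2, 5),
    "point machine jam": (8, 3, 3),
}
ranked = sorted(modes, key=lambda m: risk_priority_number(*modes[m]), reverse=True)
print(ranked)
```

Ranking by RPN gives the prioritized list of modes on which human-reliability and design effort is then concentrated.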

18. An application of the fault tree analysis for the power system reliability estimation

International Nuclear Information System (INIS)

The power system is a complex system with its main function to produce, transfer and provide consumers with electrical energy. Combinations of failures of components in the system can result in a failure of power delivery to certain load points and in some cases in a full blackout of power system. The power system reliability directly affects safe and reliable operation of nuclear power plants because the loss of offsite power is a significant contributor to the core damage frequency in probabilistic safety assessments of nuclear power plants. The method, which is based on the integration of the fault tree analysis with the analysis of the power flows in the power system, was developed and implemented for power system reliability assessment. The main contributors to the power system reliability are identified, both quantitatively and qualitatively. (author)
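Fault-tree quantification for independent basic events reduces to simple gate formulas: probabilities multiply through an AND gate, and combine through the complement rule at an OR gate. A minimal sketch with assumed probabilities (the tree shape and numbers are illustrative, not from the study):

```python
def p_and(*probs):
    """AND gate: all independent inputs must fail simultaneously."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(*probs):
    """OR gate: at least one independent input fails."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical loss-of-offsite-power tree: both transmission lines lost,
# OR a common substation fault.
p_top = p_or(p_and(0.01, 0.02), 0.001)
print(p_top)
```

Integrating such gate evaluations with power-flow results, as the paper does, amounts to letting the flow analysis decide which combinations of component failures actually interrupt delivery to a load point.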

19. Comparative Analysis of Virtual Education Applications

Directory of Open Access Journals (Sweden)

Mehmet KURT

2006-10-01

Full Text Available The research was conducted in order to make a comparative analysis of virtual education applications, using a survey model. The study group consists of a total of 300 institutes providing virtual education in the fall, spring and summer semesters of 2004: 246 in the USA, 10 in Australia, 3 in South Africa, 10 in India, 21 in the UK, 6 in Japan, and 4 in Turkey. The information was collected through an online questionnaire sent to the target group by e-mail. The questionnaire was developed in two categories: personal information, and institutes and their virtual education applications. The English web design of the online questionnaire and its database was prepared with Microsoft ASP code, the scripting language of the Microsoft FrontPage editor, and was tested on a personal web site. The questionnaire was piloted in institutions providing virtual education in Australia. The English text of the questionnaire and the web site design were sent to educational technology and virtual education specialists in the countries of the study group. Based on the feedback received, spelling mistakes were corrected and concept and language validity were established. Administration of the questionnaire took 40 weeks, during March-November 2004. Only 135 institutes replied. Two of the questionnaires were discarded because they included mistaken coding of the names of the institutions and countries. The 133 valid questionnaires cover approximately 44% of the study group. Questionnaires saved in the online database were transferred to Microsoft Excel and then to SPSS via an external database connection. In line with the research objectives, the data collected were analyzed by computer using the SPSS statistics package. In the data analysis, frequency (f), percentage (%), and arithmetic mean were used.
In comparisons of country, institute, year, and other variables, the chi-square test, independent t-test, and one-way analysis of variance (F test) were used, along with the Kruskal-Wallis H test and the Mann-Whitney U test. Although virtual education applications differ in choices and practices across countries, education levels and types, the data analysis shows that the study group consists of people at graduate and undergraduate level, personal users with education expectations, between the ages of 18-45 and working full time. The institutions mostly offer programs providing undergraduate and graduate education in the social sciences, granting accredited documents, certificates and titles. Most of the instructors have received planned training, mostly work as full-time instructors, and receive technical support. Financial resources are obtained from student fees and are mostly used for personnel costs. The applications involve central administration and organization and interface with universities; for physical facilities they use information processing centers and virtual classrooms, and for infrastructure and support services they use information processing services. In the teaching process they use both synchronous and asynchronous presentation technologies; to support course content they use e-mail, web, CD, and course book technologies to provide the basic learning environment function; they prefer different environments to cover face-to-face education needs; they take self-learning and collaboration as a basis and take project and term paper evaluations seriously; they mostly prefer multiple-choice tests and usually administer virtual course exams over the internet.
Regarding the characteristics of their institutions' applications, the study group mostly agreed on connectivity and on being dependent on connection opportunities. A significant difference between their institutions' characteristics and the model for developing computer labs, when they had started to provide virtual lessons and presentation technologies u

20. Signal Quality Outage Analysis for Ultra-Reliable Communications in Cellular Networks

DEFF Research Database (Denmark)

Gerardino, Guillermo Andrés Pocovi; Alvarez, Beatriz Soret; Lauridsen, Mads; Pedersen, Klaus I.; Mogensen, Preben Elgaard

2015-01-01

Ultra-reliable communications over wireless will open the possibility for a wide range of novel use cases and applications. In cellular networks, achieving reliable communication is challenging due to many factors, particularly the fading of the desired signal and the interference. In this regard, we investigate the potential of several techniques to combat these main threats. The analysis shows that traditional microscopic multiple-input multiple-output schemes with 2x2 or 4x4 antenna configura...

1. Reliability analysis of the Angra-1 safety electric bus bar, considering the new Diesel generators configuration

International Nuclear Information System (INIS)

Aiming at improving the reliability of the electrical system, the Angra-1 electric power system has been modified. The two original Diesel generators were replaced by two new ones, and the original units were reconfigured as standby generators. The purpose of this work is to quantify the electric system reliability improvement due to these modifications by using Markovian analysis. It was found that the new configuration of the emergency Diesel system significantly improves the power supply to the safety buses. (author). 8 refs., 3 figs., 5 tabs
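For a single repairable unit, the Markovian analysis has a closed form: with failure rate λ and repair rate μ, the point availability is A(t) = μ/(λ+μ) + [λ/(λ+μ)]·e^(-(λ+μ)t). A sketch with assumed rates (not the Angra-1 data):

```python
import math

def availability(t, lam, mu):
    """Point availability of a two-state Markov unit (failure rate lam, repair rate mu)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# Hypothetical diesel-generator rates per hour:
lam, mu = 1e-3, 0.1
print(availability(0.0, lam, mu))   # starts fully available
print(availability(1e6, lam, mu))   # decays to the steady-state value mu/(lam+mu)
```

Standby redundancy, as in the modified Angra-1 configuration, adds states to the chain, but the analysis principle (solving for state probabilities and summing over the "power available" states) is the same.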

2. Reliability analysis in the Office of Safety, Environmental, and Mission Assurance (OSEMA)

Science.gov (United States)

Kauffmann, Paul J.

1994-01-01

The technical personnel in the SEMA office are working to provide the highest degree of value-added activities to their support of the NASA Langley Research Center mission. Management perceives that reliability analysis tools and an understanding of a comprehensive systems approach to reliability will be a foundation of this change process. Since the office is involved in a broad range of activities supporting space mission projects and operating activities (such as wind tunnels and facilities), it was not clear what reliability tools the office should be familiar with and how these tools could serve as a flexible knowledge base for organizational growth. Interviews and discussions with the office personnel (both technicians and engineers) revealed that job responsibilities ranged from incoming inspection to component or system analysis to safety and risk. It was apparent that a broad base in applied probability and reliability along with tools for practical application was required by the office. A series of ten class sessions with a duration of two hours each was organized and scheduled. Hand-out materials were developed and practical examples based on the type of work performed by the office personnel were included. Topics covered were: Reliability Systems - a broad system oriented approach to reliability; Probability Distributions - discrete and continuous distributions; Sampling and Confidence Intervals - random sampling and sampling plans; Data Analysis and Estimation - Model selection and parameter estimates; and Reliability Tools - block diagrams, fault trees, event trees, FMEA. In the future, this information will be used to review and assess existing equipment and processes from a reliability system perspective. An analysis of incoming materials sampling plans was also completed. This study looked at the issues associated with Mil Std 105 and changes for a zero defect acceptance sampling plan.
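The zero-defect acceptance sampling analysis mentioned above has a simple operating characteristic: under a c = 0 plan, a lot is accepted only if a random sample of n items contains no defects, so the acceptance probability at lot fraction defective p is (1-p)^n. A sketch (the numbers are illustrative, not from the study):

```python
def accept_probability(n, p):
    """P(accept) for a zero-defect (c = 0) plan: no defects found in n sampled items."""
    return (1.0 - p) ** n

def min_sample_size(p, max_accept):
    """Smallest n such that a lot with fraction defective p is accepted
    with probability at most max_accept (the consumer's risk)."""
    n = 1
    while accept_probability(n, p) > max_accept:
        n += 1
    return n

print(accept_probability(13, 0.05))   # ~0.51: half of 5%-defective lots slip through
print(min_sample_size(0.05, 0.10))    # n needed for a 10% consumer's risk at p = 5%
```

Comparing such curves against the tabulated Mil Std 105 plans is one way to frame the trade-off the office examined when moving to a zero-defect scheme.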

3. Probabilistic risk assessment course documentation. Volume 5. System reliability and analysis techniques Session D - quantification

International Nuclear Information System (INIS)

This course in System Reliability and Analysis Techniques focuses on the probabilistic quantification of accident sequences and the link between accident sequences and consequences. Other sessions in this series focus on the quantification of system reliability and the development of event trees and fault trees. This course takes the viewpoint that event tree sequences or combinations of system failures and success are available and that Boolean equations for system fault trees have been developed and are available. 93 figs., 11 tabs
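Once minimal cut sets are available from the Boolean fault-tree equations, the top-event probability is commonly bracketed between the largest single cut-set probability and the rare-event sum. A sketch that treats the cut sets as independent (real cut sets share basic events, so this is only illustrative):

```python
def top_event_bounds(cut_set_probs):
    """Min-cut lower bound, independence-based estimate, and rare-event upper bound."""
    upper = sum(cut_set_probs)            # rare-event approximation
    lower = max(cut_set_probs)            # one cut set alone suffices
    survive = 1.0
    for p in cut_set_probs:
        survive *= (1.0 - p)              # valid only if cut sets were independent
    return lower, 1.0 - survive, upper    # lower <= estimate <= upper

# Probabilities of three hypothetical minimal cut sets:
print(top_event_bounds([1e-3, 5e-4, 2e-4]))
```

For the small probabilities typical of accident sequences, the bounds are tight, which is why the rare-event sum is the workhorse of sequence quantification.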

4. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

International Nuclear Information System (INIS)

The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)
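The paradigm binary component can also be integrated numerically. The sketch below uses a plain explicit-Euler step for the Chapman-Kolmogorov equations of an assumed three-state model, up / down (repairable) / irreparably failed; the rates are invented for the example and the pure-Python integrator is only a stand-in for a production ODE solver:

```python
def ctmc_step(p, Q, dt):
    """One explicit-Euler step of dp/dt = p Q (row-vector convention, Q a rate matrix)."""
    n = len(p)
    return [p[i] + dt * sum(p[j] * Q[j][i] for j in range(n)) for i in range(n)]

# States: 0 = up, 1 = down (repairable), 2 = failed irreparably (absorbing).
lam_r, mu, lam_f = 0.1, 1.0, 0.01          # hypothetical rates per hour
Q = [[-(lam_r + lam_f), lam_r, lam_f],
     [mu,               -mu,   0.0],
     [0.0,              0.0,   0.0]]

p = [1.0, 0.0, 0.0]
dt, t_end = 0.001, 10.0
for _ in range(int(t_end / dt)):
    p = ctmc_step(p, Q, dt)
print(p)   # probability mass slowly leaks into the absorbing failed state
```

Sensitivities of such state probabilities to the rates λ and μ are exactly what the ASAP computes, replacing many repeated forward integrations with a single adjoint solve.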

5. Comparative analysis of planetary laser ranging concepts

Science.gov (United States)

Dirkx, D.; Bauer, S.; Noomen, R.; Vermeersen, B. L. A.; Visser, P. N.

2014-12-01

Laser ranging is an emerging technology for tracking interplanetary missions, offering improved range accuracy and precision (mm-cm) compared to existing DSN tracking. The ground segment uses existing Satellite Laser Ranging (SLR) technology, whereas the space segment is modified with an active system. In a one-way system, such as that currently being used on the LRO spacecraft (Zuber et al., 2010), only an active detector is required on the spacecraft. For a two-way system, such as that tested by using the laser altimeter system on the MESSENGER spacecraft en route to Mercury (Smith et al., 2006), a laser transmitter system is additionally placed on the space segment, which will asynchronously fire laser pulses towards the ground stations. Although the one-way system requires less hardware, clock errors on both the space and ground segments will accumulate over time, polluting the range measurements. For a two-way system, the range measurements are only sensitive to clock errors integrated over the two-way light time. We investigate the performance of both one- and two-way laser range systems by simulating their operation. We generate realizations of clock error time histories from Allan variance profiles, and use them to create range measurement error profiles. We subsequently perform the orbit determination process on these data to quantify the systems' performance. For our simulations, we use two test cases: a lunar orbiter similar to LRO and a Phobos lander similar to the Phobos Laser Ranging concept (Turyshev et al., 2010). For the lunar orbiter, we include an empirical model for unmodelled non-gravitational accelerations in our truth model to include errors in the dynamics.
We include the estimation of clock parameters over a number of arc lengths for our simulations of the one-way range system and use a variety of state arc durations for the lunar orbiter simulations.We perform Monte Carlo simulations and generate true error distributions for both missions for various combinations of clock and state arc length. Thereby, we quantify the relative capabilities of the one- and two-way laser range systems. In addition, we study the optimal data analysis strategies for these missions, which we apply for LRO orbit determination. Finally, we compare the performance of the laser ranging systems with typical DSN tracking.
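The accumulation of one-way clock error can be illustrated with the simplest clock model, a random walk in phase driven by white frequency noise; this is only a stand-in for the Allan-variance-based realizations the paper describes, and the noise level is an assumed value:

```python
import random

def clock_phase_walk(n_steps, dt, sigma_rate, seed=1):
    """Random-walk clock phase error: integrated white frequency noise (assumed model)."""
    rng = random.Random(seed)
    phase, out = 0.0, []
    for _ in range(n_steps):
        phase += rng.gauss(0.0, sigma_rate) * dt
        out.append(phase)
    return out

# A one-way range measurement sees the full accumulated phase error; a two-way
# measurement only sees the drift over one round-trip light time (a few seconds),
# i.e. the difference between nearby samples, which is far smaller.
phases = clock_phase_walk(n_steps=86400, dt=1.0, sigma_rate=1e-12)
print(abs(phases[-1]), abs(phases[10] - phases[8]))
```

The contrast between the two printed magnitudes is the core of the one-way versus two-way trade-off: the random-walk standard deviation grows with the square root of elapsed time, so estimating clock parameters per arc (as the paper does) is what keeps one-way ranging usable.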

6. Problems Related to Use of Some Terms in System Reliability Analysis

Directory of Open Access Journals (Sweden)

2004-01-01

Full Text Available The paper deals with problems of using the dependability terms defined in the current standard STN IEC 50 (191): International electrotechnical dictionary, chap. 191: Dependability and quality of service (1993), in the dependability analysis of technical systems. The goal of the paper is to find a relation between the terms introduced in the mentioned standard and used in the dependability analysis of technical systems, and the rules and practices used in system analysis within systems theory. Description of the part of the system life cycle related to reliability is used as a starting point. This part of the system life cycle is described by a state diagram, and the reliability-relevant terms are assigned.

7. Chang'E-1 satellite lunar orbital X-ray imaging analyzer responsible for the reliability prediction and analysis

International Nuclear Information System (INIS)

This paper introduces the reliability prediction and analysis for the Chang'E-1 Satellite Lunar Orbital X-ray Imaging Analyzer, including the reliability diagram, and elaborates on the X-ray Imaging Analyzer's key items and single-point failure modes. (authors)

8. Analysis of data reliability and stability in HR-SDN communication module

Science.gov (United States)

Choi, Dong-Hee; Shin, Jin-Chul; Park, Hong-Seong

2007-12-01

Profibus is an open industrial communication system for a wide range of applications in manufacturing and process automation. In Profibus, the FDL service is used where hard real-time behavior is needed; such systems require data reliability, stability, and real-time performance. Profibus fieldbus networks are used in many industrial fields because they support real-time industrial communication, so we analyze data reliability and stability in a Profibus network. In this paper, for a station communicating over FDL through the HR-SDN communication module at a given communication period (e.g., 10 ms), we analyze the sources of transmission delay and the effect of the data transfer error ratio on the system. From these analytical results, we assess whether the HR-SDN communication modules can guarantee transmission reliability and data stability. We further analyze the transmission delay required to satisfy data reliability and stability in a specific system with real-time requirements, and the system reconfiguration time and data delay time under data/token packet loss; packet errors occur in the Profibus physical layer. Based on this analysis, we propose a method for enhancing reliability in systems that require reliability and stability, and we confirm the proposed method.

9. Comparative analysis of pharmacophore screening tools.

Science.gov (United States)

Sanders, Marijn P A; Barbosa, Arménio J M; Zarzycka, Barbara; Nicolaes, Gerry A F; Klomp, Jan P G; de Vlieg, Jacob; Del Rio, Alberto

2012-06-25

The pharmacophore concept is of central importance in computer-aided drug design (CADD), mainly because of its successful application in medicinal chemistry and, in particular, high-throughput virtual screening (HTVS). The simplicity of the pharmacophore definition enables the complexity of molecular interactions between ligand and receptor to be reduced to a handful of features. With many pharmacophore screening software tools available, it is of the utmost interest to explore the behavior of these tools when applied to different biological systems. In this work, we present a comparative analysis of eight pharmacophore screening algorithms (Catalyst, Unity, LigandScout, Phase, Pharao, MOE, Pharmer, and POT) for their use in typical HTVS campaigns against four different biological targets by using default settings. The results herein presented show how the performance of each pharmacophore screening tool might be specifically related to factors such as the characteristics of the binding pocket, the use of specific pharmacophore features, and the use of these techniques in specific steps/contexts of the drug discovery pipeline. Algorithms with rmsd-based scoring functions are able to predict more compound poses correctly than overlay-based scoring functions. However, the ratio of correctly to incorrectly predicted compound poses is better for overlay-based scoring functions, which also ensure better performance in compound library enrichment. While the ensemble of these observations can be used to choose the most appropriate class of algorithm for specific virtual screening projects, we remarked that pharmacophore algorithms are often equally good, and in this respect, we also analyzed how pharmacophore algorithms can be combined together in order to increase the success of hit compound identification.
This study provides a valuable benchmark set for further developments in the field of pharmacophore search algorithms, e.g., by using pose predictions and compound library enrichment criteria. PMID:22646988
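The rmsd criterion used to score predicted poses is straightforward to state: the root-mean-square deviation over matched atom positions. A minimal sketch without superposition (screening tools typically align the structures first, which is omitted here; the coordinates are invented):

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equally ordered 3D coordinate sets."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

pose = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shifted = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]   # whole pose shifted 1 unit along z
print(rmsd(pose, shifted))
```

An overlay-based score, by contrast, rewards the volume/feature overlap of the two poses rather than per-atom distances, which is why the two families rank borderline poses differently.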

10. AUDITOR ROTATION - A CRITICAL AND COMPARATIVE ANALYSIS

Directory of Open Access Journals (Sweden)

Mocanu Mihaela

2011-12-01

Full Text Available The present paper starts out from the challenge regarding auditor tenure launched in 2010 by the European Commission's Green Paper Audit Policy: Lessons from the Crisis. According to this document, the European Commission speaks both in favor of the mandatory rotation of the audit firm and in favor of the mandatory rotation of audit partners. Rotation is considered a solution to mitigate threats to independence generated by familiarity, intimidation and self-interest in the context of a long-term auditor-client relationship. At the international level, there are several studies on auditor rotation, both empirical (e.g. Lu and Sivaramakrishnan, 2009, Li, 2010, Kaplan and Mauldin, 2008, Jackson et al., 2008) and normative in nature (e.g. Marten et al., 2007, Muller, 2006 and Gelter, 2004). The objective of the present paper is to perform a critical and comparative analysis of the regulations on internal and external rotation in force at the international level, in the European Union and in the United States of America. Moreover, arguments both in favor of and against mandatory rotation are brought into discussion. With regard to the research design, the paper has a normative approach. The main findings are, first, that all regulatory authorities require internal rotation at least in the case of public interest entities, while external rotation is not in the regulators' focus. In general, the strictest and most detailed requirements are those issued by the Securities and Exchange Commission of the United States of America. Second, an argument in favor of mandatory rotation is that the auditor becomes less resilient in cases of divergence of opinions between him and company management, less stimulated to follow his own interest, and more scrupulous in conducting the audit. However, mandatory rotation may also have negative consequences, so the debate on the opportunity of this regulatory measure remains open-ended.

11. Analysis of complete logical structures in system reliability assessment

International Nuclear Information System (INIS)

The application field of fault-tree techniques has been explored in order to assess whether AND-OR structures cover all possible actual binary systems. This resulted in the identification of various situations requiring the complete AND-OR-NOT structures for their analysis. We do not use the term non-coherent for such cases, since whether or not a structure function is monotonic is not a characteristic of the system, but of the particular top event being examined. The report presents different examples of complete fault-trees, which can be examined according to different degrees of approximation. In fact, the exact analysis for the determination of the smallest irredundant bases is very time consuming and actually necessary only in some particular cases (multi-state systems, incidental situations). Therefore, together with the exact procedure, the report shows two different methods of logical analysis that permit the reduction of complete fault-trees to AND-OR structures. Moreover, it discusses the problems concerning the evaluation of the probability distribution of the time to first top event occurrence, once the hypothesis of structure function monotonicity is removed
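The need for NOT gates can be seen on a toy structure function: if the top event is "A failed while B has not failed, or C failed", then an additional failure (of B) can remove the top event, so the function is non-monotonic in B. A small truth-table enumeration (the event definitions are invented for the illustration):

```python
from itertools import product

def top(a, b, c):
    """Example top event needing a NOT: A fails while B has NOT failed, or C fails."""
    return (a and not b) or c

# Enumerate the structure function over all component states (False = working,
# True = failed). Non-monotonicity shows as a True -> False flip when B also fails.
table = {bits: top(*bits) for bits in product([False, True], repeat=3)}
print(table[(True, False, False)], table[(True, True, False)])
```

A monotone (coherent) structure function could never lose the top event by adding a failure, which is exactly the property the AND-OR-only formalism assumes.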

12. A comparative study of computed radiographic cephalometry and conventional cephalometry in reliability of head film measurements

International Nuclear Information System (INIS)

The purpose of this study was to compare the variability of head film measurements (landmark identification) between Fuji computed radiographic (FCR) cephalometry and conventional cephalometry. Twenty-eight Korean adults were selected, and a lateral cephalometric FCR film and a conventional cephalometric film were taken of each subject. Four investigators identified 24 cephalometric landmarks on the lateral cephalometric FCR film and on the conventional cephalometric film, and the identifications were statistically analysed. The results were as follows: 1. For both the FCR film and the conventional film, the coefficient of variation (C.V.) of the 24 landmarks was computed horizontally and vertically. 2. In the comparison of landmark variability between the FCR film and the conventional film, the horizontal coefficient of variation showed significant differences in four of the twenty-four landmarks, whereas the vertical coefficient of variation showed significant differences in sixteen of the twenty-four landmarks. The FCR film showed significantly less variability than the conventional film in 17 of the 20 (4+16) landmarks that showed a significant difference.

13. Adjoint sensitivity analysis procedure of Markov chains with applications on reliability of IFMIF accelerator-system facilities

International Nuclear Information System (INIS)

This work presents the implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for Continuous Time, Discrete Space Markov chains (CTMC), as an alternative to other computationally expensive methods. In order to develop this procedure into an end product for reliability studies, the reliability of the physical systems is analyzed using a coupled Fault-Tree/Markov-chain technique: the physical system is abstracted using the Fault-Tree as the high-level interface, which is then automatically converted into a Markov chain. The resulting differential equations based on the Markov chain model are solved in order to evaluate system reliability. Further sensitivity analyses, applying ASAP to the CTMC equations, are performed to study the influence of uncertainties in input data on the reliability measures and to establish confidence in the final reliability results. The methods for generating the Markov chain and the ASAP for the Markov chain equations have been implemented in the new computer code system QUEFT/MARKOMAGS/MCADJSEN for reliability and sensitivity analysis of physical systems. This code system has been validated using simple problems for which analytical solutions can be obtained. Typical sensitivity results show that the numerical solution using ASAP is robust, stable and accurate. The method and the code system developed during this work can be used further as an efficient and flexible tool to evaluate the sensitivities of reliability measures for any physical system analyzed using a Markov chain. Reliability and sensitivity analyses using these methods have been performed during this work for the IFMIF Accelerator System Facilities. The reliability studies using Markov chains have concentrated on the availability of the main subsystems of this complex physical system for a typical mission time.
Sensitivity studies for two typical responses have been performed using ASAP. The ASAP results have been compared with those obtained using classical methods, showing good agreement, with ASAP having the advantage in computational time. (orig.)
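The core of such a Markov-chain reliability model is a system of linear ODEs for the state probabilities. A minimal sketch (not the QUEFT/MARKOMAGS/MCADJSEN code; the rates are hypothetical) integrates the equations for a single repairable component with an up and a down state:

```python
def availability(lam, mu, t, dt=0.001):
    """Integrate the CTMC equations dP_up/dt = -lam*P_up + mu*P_down,
    dP_down/dt = lam*P_up - mu*P_down by forward Euler and return
    P(up) at time t.  lam: failure rate, mu: repair rate (per hour)."""
    p_up, p_down = 1.0, 0.0          # start in the 'up' state
    for _ in range(int(t / dt)):
        d_up = -lam * p_up + mu * p_down
        d_down = lam * p_up - mu * p_down
        p_up += d_up * dt
        p_down += d_down * dt
    return p_up

# Long-run availability should approach mu / (lam + mu)
a = availability(lam=0.01, mu=0.1, t=500.0, dt=0.01)
print(round(a, 4))   # ≈ 0.9091
```

A sensitivity such as dA/dλ could then be obtained either by finite differences of this solver (the "classical" route the paper compares against) or, more cheaply for many parameters, by solving one adjoint system.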

14. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

Science.gov (United States)

2014-06-01

In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements that contribute to the stability of the tunnel structure are identified in order to address various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces an approach by which decision-makers can determine the overall reliability of a tunnel support system before selecting the final scheme of the lining system. Engineering reliability, a branch of statistics and probability, is applied to the field, and much effort has been made to use it in tunneling while investigating the reliability of the lining support system for the tunnel structure. Reliability analysis for evaluating tunnel support performance is therefore the main idea of this research. Decomposition approaches are used to produce the system block diagram and to determine the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Assuming a linear correlation between safety factors and reliability parameters, the values of isolated reliabilities are determined for the different structural components of the tunnel support system.
To determine individual safety factors, finite element modeling is employed for the different structural subsystems, and the results of numerical analyses are obtained for different design scenarios. Finally, reliability index values are obtained for the entire support structure in each design scenario. The results demonstrate that the proposed reliability evaluation method for tunnel support systems is effective not only for investigating the reliability of individual elements in the structure, but also for building an overall estimate of the reliability performance of the entire tunnel structure.
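Once a system block diagram is in hand, combining component reliabilities reduces to series and parallel compositions. A minimal sketch (the element names and reliability values are hypothetical, not taken from the case studies):

```python
from math import prod

def r_series(rels):
    """Series blocks: every element must survive."""
    return prod(rels)

def r_parallel(rels):
    """Parallel (redundant) blocks: system fails only if all fail."""
    return 1.0 - prod(1.0 - r for r in rels)

# Hypothetical lining: shotcrete and rock bolts in series,
# backed by a redundant pair of steel sets.
r_lining = r_series([0.98, 0.95, r_parallel([0.90, 0.90])])
print(round(r_lining, 4))   # 0.9217
```

Nested calls of these two functions can evaluate any series-parallel block diagram produced by the decomposition step.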

15. Determination of Strength for Reliability Analysis of Multilayer Ceramic Capacitors

International Nuclear Information System (INIS)

A Nanoindenter™ equipped with a Vickers indenter was used to measure the fracture toughness of multilayer capacitors (MLCs) and BaTiO3 blanks. The strength of blanks of 6.3 x 4.7 x 1.1 mm3 was measured by three-point flexure using a 4 mm support span. The size of the strength-limiting pores in the flexure tests was compared to pore sizes measured on polished MLC cross sections, and much larger pores were found in the three-point flexure specimens. Strength distributions for the MLCs were generated using the measured fracture toughness values, assuming the measured pores or second-phase inclusions were strength limiting.

16. Modelling of nuclear power plant control and instrumentation elements for automatic disturbance and reliability analysis

International Nuclear Information System (INIS)

The present Final Report summarizes the results of R/D work done within IAEA-VEIKI (Institute for Electrical Power Research, Budapest, Hungary) Research Contract No. 3210 during the 3-year period 01.08.1982 - 31.08.1985. Chapter 1 lists the main research objectives of the project. The main results obtained are summarized in Chapters 2 and 3. Outcomes from the development of failure modelling methodologies and their application to C/I components of WWER-440 units are as follows (Chapter 2): improvement of available ''failure mode and effect analysis'' methods and mini-fault tree structures usable for automatic disturbance (DAS) and reliability (RAS) analysis; general classification and determination of functional failure modes of WWER-440 NPP C/I components; set-up of logic models for motor-operated control valves and the rod control/drive mechanism. Results of the development of methods and their application to reliability modelling of NPP components and systems cover (Chapter 3): development of an algorithm (computer code COMPREL) for component-related failure and reliability parameter calculation; reliability analysis of the PAKS II NPP diesel system; definition of functional requirements for a reliability data bank (RDB) in WWER-440 units; determination of RDB input/output data structure and data manipulation services. Methods used are a priori failure mode and effect analysis, a combined fault tree/event tree modelling technique, structural computer programming, and application of probability theory to the nuclear field.

17. Analysis of the WWER-440 pressure control system and the core thermal reliability

International Nuclear Information System (INIS)

Pressure deviations significantly affect the reactor core thermal reliability, and it is of practical interest to analyze the reasons for these deviations. The paper presents a fault tree analysis of the pressurizer system during normal operation and a thermal reliability analysis of the reactor core. The TENAZ code is used for the thermal reliability analysis, assuming maximal deviations of 1% in the core inlet temperature, 4% in the local thermal power density, 20% in the reactor power output, and 5% in the effect of corrosion products on the CHF ratio. Calculation results illustrate the dependence of the amplifying factors on the heat flux when various CHF correlations are used, and the fault probability vs. the maximal heat flux at various deviations in the core inlet pressure.

18. Fault tree based reliability analysis for digital reactor power control system of nuclear power plant

International Nuclear Information System (INIS)

The fault tree method is used for reliability analysis of the reactor power control system, including uncertainty analysis and sensitivity analysis. The 'loss of regulation accident' and 'loss of effective control' top events are defined, the corresponding fault trees are constructed, and the contributions of hardware failure and software failure to system safety are calculated. The analysis shows that common mode failures of software, actuator and sensor, together with the operator's response, have a significant influence on system reliability for the 'loss of regulation accident'; software common mode failure and the operators' response contribute significantly to system reliability for the 'loss of effective control' accident. (authors)
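Quantifying a top event from its minimal cut sets can be sketched as follows; the cut sets and basic-event probabilities below are hypothetical, chosen only to show how a common-mode (shared) event enters a cut set. The exact result is computed by brute-force enumeration, which is feasible only for small trees, alongside the usual rare-event approximation:

```python
from itertools import product

def top_event_prob(cut_sets, p):
    """Exact top-event probability for independent basic events,
    by enumerating all basic-event states (state 1 = failed)."""
    events = sorted({e for cs in cut_sets for e in cs})
    total = 0.0
    for states in product([0, 1], repeat=len(events)):
        s = dict(zip(events, states))
        if any(all(s[e] for e in cs) for cs in cut_sets):
            w = 1.0
            for e in events:
                w *= p[e] if s[e] else 1.0 - p[e]
            total += w
    return total

# Hypothetical cut sets: {sensor, software-CCF} and {actuator}
cuts = [{"sensor", "sw_ccf"}, {"actuator"}]
probs = {"sensor": 1e-3, "sw_ccf": 1e-4, "actuator": 1e-5}
exact = top_event_prob(cuts, probs)
rare = 1e-3 * 1e-4 + 1e-5       # rare-event approximation
print(exact, rare)
```

The rare-event sum slightly over-counts states where several cut sets occur together, so it upper-bounds the exact value.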

19. Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE

Science.gov (United States)

Wu, N. Eva

2001-01-01

This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of these software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems must be extended beyond the originally intended class of electronic digital processors. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper also remarks on issues related to the improvement of coverage and the optimization of redundancy level.

20. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

CERN Document Server

Zio, Enrico

2013-01-01

Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
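The basic Monte Carlo recipe the book builds on can be shown in a few lines: sample component states, evaluate the system's structure function, and average. This sketch (a 2-out-of-3 system with a hypothetical component reliability of 0.9) is one of the simple academic examples the text alludes to, not code from the book:

```python
import random

def mc_reliability(n, seed=42):
    """Estimate the mission reliability of a 2-out-of-3 system whose
    components each survive with probability 0.9, from n trials."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        survivors = sum(rng.random() < 0.9 for _ in range(3))
        ok += survivors >= 2          # structure function: 2-out-of-3
    return ok / n

est = mc_reliability(200_000)
exact = 3 * 0.9**2 * 0.1 + 0.9**3    # analytic value: 0.972
print(round(est, 3), exact)
```

The standard error of the estimate shrinks as 1/sqrt(n), which is why variance-reduction techniques become important for highly reliable systems.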

1. Reliability and life-cycle analysis of deteriorating systems

CERN Document Server

Sánchez-Silva, Mauricio

2016-01-01

This book compiles and critically discusses modern engineering system degradation models and their impact on engineering decisions. In particular, the authors focus on modeling the uncertain nature of degradation considering both conceptual discussions and formal mathematical formulations. It also describes the basics concepts and the various modeling aspects of life-cycle analysis (LCA).  It highlights the role of degradation in LCA and defines optimum design and operation parameters. Given the relationship between operational decisions and the performance of the system’s condition over time, maintenance models are also discussed. The concepts and models presented have applications in a large variety of engineering fields such as Civil, Environmental, Industrial, Electrical and Mechanical engineering. However, special emphasis is given to problems related to large infrastructure systems. The book is intended to be used both as a reference resource for researchers and practitioners and as an academic text ...

2. Using Information from Operating Experience to Inform Human Reliability Analysis

Energy Technology Data Exchange (ETDEWEB)

Bruce P. Hallbert; David I. Gertman; Julie Marble; Erasmia Lois; Nathan Siu

2004-06-01

This paper reports on efforts being sponsored by the U.S. NRC and performed by INEEL to develop a technical basis and perform work to extract information from sources for use in HRA. The objectives of this work are to: 1) develop a method for conducting risk-informed event analysis of human performance information that stems from operating experience at nuclear power plants and for compiling and documenting the results in a structured manner; 2) provide information from these analyses for use in risk-informed and performance-based regulatory activities; 3) create methods for information extraction and a repository for this information that, likewise, support HRA methods and their applications.

3. Stochastic Response and Reliability Analysis of Hysteretic Structures

DEFF Research Database (Denmark)

Mørk, Kim Jørgensen

1989-01-01

During the last 30 years, response analysis of structures under random excitation has been studied in detail. These studies are motivated by the fact that most of nature's excitations, such as earthquakes, wind and wave loads, exhibit randomly fluctuating characters. For safety reasons this randomness must be considered by the designers of structures like tall buildings, off-shore structures, ships etc. The response of a structure is generally uncertain due to the uncertainty of the geometrical and physical parameters determining the system, the uncertainty of the excitation, and the imperfections of the adopted mathematical model from which the structural response is determined. In general, emphasis is placed on applications of the various methods introduced rather than on questions concerning the existence and uniqueness of solutions.

4. IRRAS, Integrated Reliability and Risk Analysis System for PC

International Nuclear Information System (INIS)

1 - Description of program or function: IRRAS 4.16 is a program developed to perform the functions necessary to create and analyze a complete Probabilistic Risk Assessment (PRA). The program includes functions that allow the user to create event trees and fault trees, to define accident sequences and basic event failure data, to solve system and accident sequence fault trees, to quantify cut sets, and to perform uncertainty analysis on the results. Also included are features that allow the analyst to generate reports and displays documenting the results of an analysis. Since this software is a very detailed technical tool, the user should be familiar with PRA concepts and the methods used to perform these analyses. 2 - Method of solution: IRRAS 4.16 is written entirely in MODULA-2 and uses an integrated commercial graphics package to interactively construct and edit fault trees. The fault tree solving methods used are industry-recognized top-down algorithms. For quantification, the program uses standard methods to propagate the failure information through the generated cut sets. 3 - Restrictions on the complexity of the problem: Given the complexity of a fault tree and the variety of ways one can be defined, it is difficult to set limits on the problems this software can solve. It is, however, capable of solving substantial fault trees thanks to its efficient methods. At this time, the software can efficiently solve problems as large as other software currently used on mainframe computers. Does not include source code.

5. Lessons Learned on Benchmarking from the International Human Reliability Analysis Empirical Study

Energy Technology Data Exchange (ETDEWEB)

Ronald L. Boring; John A. Forester; Andreas Bye; Vinh N. Dang; Erasmia Lois

2010-06-01

The International Human Reliability Analysis (HRA) Empirical Study is a comparative benchmark of the prediction of HRA methods to the performance of nuclear power plant crews in a control room simulator. There are a number of unique aspects to the present study that distinguish it from previous HRA benchmarks, most notably the emphasis on a method-to-data comparison instead of a method-to-method comparison. This paper reviews seven lessons learned about HRA benchmarking from conducting the study: (1) the dual purposes of the study afforded by joining another HRA study; (2) the importance of comparing not only quantitative but also qualitative aspects of HRA; (3) consideration of both negative and positive drivers on crew performance; (4) a relatively large sample size of crews; (5) the use of multiple methods and scenarios to provide a well-rounded view of HRA performance; (6) the importance of clearly defined human failure events; and (7) the use of a common comparison language to “translate” the results of different HRA methods. These seven lessons learned highlight how the present study can serve as a useful template for future benchmarking studies.

6. The Reliability and Maintainability Analysis of Pneumatic System of Rotary Drilling Machines

Science.gov (United States)

2013-10-01

In any blasthole drilling, the bottom of the blasthole must be kept clean by evacuating drill cuttings, flushing them out as soon as they appear, to ensure efficient drilling. If this is not done well, a large quantity of energy is consumed in regrinding, with consequent wear on the drill bit and decreased penetration, apart from the risk of jamming. Research on the reliability and probability of safe operation of the pneumatic system of drilling machines is therefore of prime importance for ensuring safe drilling operations. In this paper, the reliability of this system is modeled and analyzed. Drilling machines in the Sarcheshmeh Copper Mine in Iran were selected for data collection and analysis. After reliability modeling of the pneumatic system, a maintenance schedule is presented based on different reliability levels. There were four rotary drilling machines in this mine (named A, B, C and D). Results showed that after about 7 h of drilling for machines A and B, and after 103 and 44 h of drilling for machines C and D respectively, the reliability of the pneumatic system dropped to 80%. Machines C and D therefore have more reliable pneumatic systems than machines A and B, and checking and servicing of the pneumatic system before these times is essential. Maintainability analysis also showed that most failures of the pneumatic systems of machines A, B, C and D are repaired within about 28, 34, 6 and 9 h, respectively.
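Reliability-based maintenance intervals of this kind follow directly from the fitted lifetime model. A minimal sketch under an exponential assumption (the MTBF values below are hypothetical, chosen only so the numbers echo the study's scale, and are not the paper's fitted parameters):

```python
import math

def time_to_reliability(mtbf_hours, r_target):
    """Hours of operation after which R(t) = exp(-t / MTBF) drops
    to r_target, assuming an exponential time-to-failure model."""
    return -mtbf_hours * math.log(r_target)

# Hypothetical MTBFs for two machines; service when R falls to 0.80
for name, mtbf in [("A", 31), ("C", 460)]:
    t = time_to_reliability(mtbf, 0.80)
    print(name, round(t, 1))
```

The same inversion works for a Weibull model, with t = η · (-ln R)^(1/β); the paper's actual fits may use such a two-parameter form.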

7. Reliability Modeling and Structure Importance Analysis of Electric Power Station Distribution Control System

Directory of Open Access Journals (Sweden)

Jinlei Qin

2014-01-01

Full Text Available Aiming at the complexity of the reliability model of a Distributed Control System (DCS), a novel reliability modeling method is put forward in this study. The method can be employed in many situations, especially for the DCS of electric power stations. Traditional reliability principles and the relevant definitions of coherent systems are introduced as foundations. Based on a structural and functional analysis of a classical DCS, the overall structure of the DCS, normally seen as a repairable system in the real industrial production process, can be modeled as a non-repairable system under some proposed approximating conditions. As an example, a whole DCS and its subsystems are analyzed to calculate their reliability indices. One type of structure importance of the DCS is also computed based on the model. The values provide significant guidance for DCS maintenance schemes.
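One common choice for "structure importance" in a coherent system is the Birnbaum structural measure: the fraction of states of the other components in which component i is critical. A minimal sketch (the paper does not specify which measure it uses; the three-component structure below, a controller in series with a redundant I/O pair, is hypothetical):

```python
from itertools import product

def structure_importance(phi, n, i):
    """Birnbaum structural importance of component i for the coherent
    structure function phi over n binary components (1 = working)."""
    critical = 0
    for states in product([0, 1], repeat=n):
        if states[i] == 1:
            up = phi(states)
            down = phi(states[:i] + (0,) + states[i + 1:])
            critical += up - down        # 1 when i is critical here
    return critical / 2 ** (n - 1)

# Hypothetical DCS slice: controller (0) in series with a
# redundant pair of I/O modules (1, 2).
phi = lambda s: s[0] * (1 - (1 - s[1]) * (1 - s[2]))
print([structure_importance(phi, 3, i) for i in range(3)])
```

As expected, the non-redundant controller carries the highest importance, which is exactly the kind of ranking used to prioritize maintenance.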

8. Signal Quality Outage Analysis for Ultra-Reliable Communications in Cellular Networks

DEFF Research Database (Denmark)

Gerardino, Guillermo Andrés Pocovi; Alvarez, Beatriz Soret

2015-01-01

Ultra-reliable communications over wireless will open the possibility for a wide range of novel use cases and applications. In cellular networks, achieving reliable communication is challenging due to many factors, particularly the fading of the desired signal and the interference. In this regard, we investigate the potential of several techniques to combat these main threats. The analysis shows that traditional microscopic multiple-input multiple-output schemes with 2x2 or 4x4 antenna configurations are not enough to fulfil stringent reliability requirements. It is revealed how such antenna schemes must be complemented with macroscopic diversity as well as interference management techniques in order to ensure the necessary SINR outage performance. Based on the obtained performance results, it is discussed which of the feasible options fulfilling the ultra-reliable criteria are most promising in a practical setting, as well as pointers to supplementary techniques that should be included in future studies.
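The benefit of macroscopic diversity against fading can be illustrated with a simplified SNR-outage model (interference is ignored here, unlike in the paper's full SINR analysis; the target and mean SNR values are hypothetical). With selection combining over independent Rayleigh-faded branches, an outage requires every branch to fall below the target:

```python
import math

def outage_prob(snr_target_db, mean_snr_db, branches):
    """SNR outage for selection combining over independent Rayleigh
    branches: per-branch SNR is exponentially distributed with the
    given mean, and all branches must drop below the target."""
    g = 10 ** (snr_target_db / 10) / 10 ** (mean_snr_db / 10)
    single = 1 - math.exp(-g)            # per-branch outage
    return single ** branches

for n in (1, 2, 4):
    print(n, f"{outage_prob(-3, 10, n):.2e}")
```

The exponent on the per-branch outage is why adding diversity orders buys orders of magnitude in reliability, which is the qualitative point behind complementing 2x2/4x4 MIMO with macroscopic diversity.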

9. RELIABILITY ANALYSIS OF THERMAL POWER GENERATING UNITS BASED ON WORKING HOURS

Directory of Open Access Journals (Sweden)

C.P.S. Hungund

2014-06-01

Full Text Available For any system, reliability estimation plays a vital role, and hence the analysis is directed toward calculating the reliabilities of the units. To do this, we need to analyze failure rates, inter-failure time intervals, or durations of failure-free operation of the units. This data is usually termed Working Hours (successive failures). Working Hours refers to the gap between one failure and the next, which is directly linked with the reliability of the system. Hence in this paper, we first fit and test the suitability of the data using the Exponential and Weibull distributions through the chi-square Goodness of Fit (GoF) test. In the later part, reliabilities of the different units are calculated using the same distributions. Conclusions are drawn based on the results obtained.
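The exponential half of that procedure is easy to sketch: fit the failure rate by maximum likelihood, then form a chi-square statistic over equal-probability bins under the fitted model. The working-hours sample below is hypothetical, not the paper's data (a Weibull fit would add a shape parameter and an iterative MLE step):

```python
import math

def exp_gof(data, bins=4):
    """MLE fit of an exponential to working-hours data, then a
    chi-square goodness-of-fit statistic on equal-probability bins."""
    n = len(data)
    lam = n / sum(data)                    # MLE failure rate
    # Bin edges giving equal expected counts under the fitted model
    edges = [-math.log(1 - k / bins) / lam for k in range(1, bins)]
    observed = [0] * bins
    for x in data:
        observed[sum(x > e for e in edges)] += 1
    expected = n / bins
    chi2 = sum((o - expected) ** 2 / expected for o in observed)
    return lam, chi2

hours = [12, 35, 8, 60, 25, 41, 17, 90, 5, 33, 52, 22]
lam, chi2 = exp_gof(hours)
print(round(1 / lam, 1), round(chi2, 2))   # MTBF estimate, chi-square
```

The statistic would then be compared against a chi-square critical value with bins minus estimated-parameters minus one degrees of freedom, and unit reliabilities follow from R(t) = exp(-lam*t).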

10. Reliability Calculations

DEFF Research Database (Denmark)

Petersen, Kurt Erling

1986-01-01

Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the...

11. Human Capital Development: Comparative Analysis of BRICs

Science.gov (United States)

Ardichvili, Alexandre; Zavyalova, Elena; Minina, Vera

2012-01-01

Purpose: The goal of this article is to conduct macro-level analysis of human capital (HC) development strategies, pursued by four countries commonly referred to as BRICs (Brazil, Russia, India, and China). Design/methodology/approach: This analysis is based on comparisons of macro indices of human capital and innovativeness of the economy and a…

12. INTER-RATER RELIABILITY FOR MOVEMENT PATTERN ANALYSIS (MPA): MEASURING PATTERNING OF BEHAVIORS VERSUS DISCRETE BEHAVIOR COUNTS AS INDICATORS OF DECISION-MAKING STYLE

Directory of Open Access Journals (Sweden)

Richard Rende

2014-06-01

Full Text Available The unique yield of collecting observational data on human movement has received increasing attention in a number of domains, including the study of decision-making style. As such, interest has grown in the nuances of core methodological issues, including the best ways of assessing inter-rater reliability. In this paper we focus on one key topic – the distinction between establishing reliability for the patterning of behaviors as opposed to the computation of raw counts – and suggest that reliability for each be compared empirically rather than determined a priori. We illustrate by assessing inter-rater reliability for key outcome measures derived from Movement Pattern Analysis (MPA), an observational methodology that records body movements as indicators of decision-making style with demonstrated predictive validity. While reliability ranged from moderate to good for raw counts of behaviors reflecting each of two Overall Factors generated within MPA (Assertion and Perspective), inter-rater reliability for patterning (proportional indicators of each factor) was significantly higher and excellent (ICC = .89). Furthermore, patterning, as compared to raw counts, provided better prediction of observable decision-making process assessed in the laboratory. These analyses support the utility of using an empirical approach to inform the consideration of measuring discrete behavioral counts versus patterning of behaviors when determining inter-rater reliability of observable behavior. They also speak to the substantial reliability that may be achieved via application of theoretically grounded observational systems such as MPA that reveal thinking and action motivations via visible movement patterns.
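An intraclass correlation like the one reported can be computed from a subjects-by-raters matrix of scores. A minimal sketch of the one-way random-effects form, ICC(1) (the paper does not state which ICC variant it used, and the ratings below are hypothetical):

```python
def icc_oneway(ratings):
    """ICC(1): one-way random-effects intraclass correlation for a
    subjects x raters matrix (one equal-length list per subject)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2 for r, m in zip(ratings, row_means)
              for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two raters scoring five subjects, near-perfect agreement
scores = [[4, 4], [2, 3], [5, 5], [1, 1], [3, 3]]
print(round(icc_oneway(scores), 2))   # 0.96
```

Perfect agreement drives the within-subject mean square to zero and the ICC to 1.0; disagreement inflates it and pulls the ICC down.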

13. Development of a Computerized Number Sense Scale for 3-rd Graders: Reliability and Validity Analysis

Directory of Open Access Journals (Sweden)

Der-Ching Yang

2008-07-01

Full Text Available This study aimed to develop a computerized number sense scale (CNST) to assess the performance of students who had already completed the 3rd-grade mathematics curriculum. In total, 808 students from representative elementary schools, including cities, country and rural areas of Taiwan, participated in this study. The results of statistical analyses and content analysis indicated that this computerized number sense scale demonstrates good reliability and validity. Cronbach's α coefficient of the scale was .8526 and its construct reliability was .805. In addition, the 5-factor number sense model was empirically and theoretically supported via confirmatory factor analysis and literature review.
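Cronbach's α, the internal-consistency coefficient reported above, is k/(k-1) times one minus the ratio of summed item variances to the variance of the total score. A minimal sketch on hypothetical item scores (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per scale item, aligned by respondent."""
    k = len(items)
    total = [sum(vals) for vals in zip(*items)]      # per-respondent sum
    item_var = sum(pvariance(i) for i in items)
    return k / (k - 1) * (1 - item_var / pvariance(total))

# Three hypothetical items answered by five students
items = [[3, 4, 2, 5, 4],
         [2, 4, 3, 5, 3],
         [3, 5, 2, 4, 4]]
print(round(cronbach_alpha(items), 3))
```

When items co-vary strongly the total-score variance dominates the summed item variances and α approaches 1, which is the sense in which .8526 indicates good reliability.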

14. A Survey of Design Model for Quality Analysis: From a Performance and Reliability Perspective

OpenAIRE

M. A. Isa; Mohd Z. M. Zaki; Dayang N.A. Jawawi

2013-01-01

The use of a model for the analysis of the software quality attributes during the design phase has been gaining more attention in recent years. These models, which are peripheral in system design, are the center of quality analysis. The system design is the central focus in representing the structure and behavior of the system. Therefore, the goal of the software architecture performance and reliability analysis is to discover possible quality problems that may violate the quality requirement...

15. Comparative Analysis of Competitive Strategy Implementation

OpenAIRE

Maina A. S. Waweru

2011-01-01

This paper presents research findings on Competitive Strategy Implementation which compared the levels of strategy implementation achieved by different generic strategy groups, comprising firms inclined towards low cost leadership, differentiation or dual strategic advantage.  The study sought to determine the preferences for use of implementation armaments and compared how such armaments related to the level of implementation achieved.   Respondents comprised 71 top executives from 59 compan...

16. Comparative analysis of Indonesian and Korean governance

OpenAIRE

Hwang, Yunwon

2011-01-01

This paper overviews governance issues in Indonesia and Korea from a comparative perspective. To do so, the WGI (World Governance Index) developed by the World Bank is employed for a more objective and consistent comparison between the two countries. WGI consists of six dimensions of voice and accountability, political stability and absence of violence, government effectiveness, regulatory quality, rule of law, and control of corruption. The two countries are analyzed and compared by ea...

17. Comparative analysis of enterprise risk management models

Directory of Open Access Journals (Sweden)

Nikolaev Igor V.

2012-08-01

Full Text Available The article is devoted to the analysis and comparison of modern enterprise risk management models used in domestic and world practice. Some theses for building such a model are proposed.

18. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Integrated Reliability and Risk Analysis System (IRRAS) reference manual. Volume 2

International Nuclear Information System (INIS)

The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. The program provides functions that range from graphical fault tree construction to cut set generation and quantification to report generation. Version 1.0 of the IRRAS program was released in February 1987. Since then, many user comments and enhancements have been incorporated into the program, providing a much more powerful and user-friendly system. This version has been designated IRRAS 5.0 and is the subject of this Reference Manual. Version 5.0 of IRRAS provides the same capabilities as earlier versions, adds the ability to perform location transformations and seismic analysis, and provides enhancements to the user interface as well as improved algorithm performance. Additionally, version 5.0 contains new alphanumeric fault tree and event tree features used for event tree rules, recovery rules, and end state partitioning.

19. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Integrated Reliability and Risk Analysis System (IRRAS) tutorial manual. Volume 3

International Nuclear Information System (INIS)

The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume is the tutorial manual for the Integrated Reliability and Risk Analysis System (IRRAS) Version 5.0, a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. A series of lessons is provided that guides the user through basic steps common to most analyses performed with IRRAS. The tutorial is divided into two major sections: basic and additional features. The basic section contains lessons that lead the student through development of a very simple problem in IRRAS, highlighting the program's most basic features. The additional features section contains lessons that expand on basic analysis features of IRRAS 5.0

20. Reliability of an Automated High-Resolution Manometry Analysis Program across Expert Users, Novice Users, and Speech-Language Pathologists

Science.gov (United States)

Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M.

2014-01-01

Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…

1. Merozoite surface protein-3α is a reliable marker for population genetic analysis of Plasmodium vivax

Directory of Open Access Journals (Sweden)

Zakeri Sedigheh

2006-07-01

Full Text Available Abstract Background Knowledge of the population structure of parasite isolates has contributed greatly to understanding the dynamics of disease transmission, to designing and evaluating malaria vaccines, and to drug applications. The msp-1 and msp-3α genes have been used as genetic markers in population studies of Plasmodium vivax isolates. In this study, msp-3α was compared with and assessed against the msp-1 marker in order to determine whether msp-3α is a reliable genetic marker for P. vivax population studies. Methods This comparative study was designed and carried out as the first assessment of diversity in the Pvmsp-3α gene by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) in 50 northern and 94 southern P. vivax isolates from Iran, which had previously been analysed for the msp-1 gene. Results Three allele sizes, Type A (1.8 kb), Type B (1.5 kb) and Type C (1.2 kb), were detected among both northern and southern isolates based on PCR results. Type C (70%) and Type A (68.7%) were the predominant fragments among northern and southern parasites, respectively. 99 distinct Pvmsp-3α fragments defined by size were detected in the 94 southern samples by PCR analysis. However, no mixed-genotype infections were detected among northern isolates. Based on the restriction patterns from digestion with Hha I and Alu I, 12 and 49 distinct allelic variants were detected among the 50 northern and 94 southern isolates, respectively. In contrast, based on the msp-1 gene, 30 distinct variants were identified among all 146 sequenced Iranian P. vivax isolates. Conclusion The results suggest that PCR-RFLP of the msp-3α gene is an adequate, applicable and easily used technique for molecular epidemiology studies of P. vivax isolates, without the need for further sequencing analysis.

2. An Efficient Approach for the Reliability Analysis of Phased-Mission Systems with Dependent Failures

Science.gov (United States)

Xing, Liudong; Meshkat, Leila; Donahue, Susan K.

2006-01-01

We consider the reliability analysis of phased-mission systems with common-cause failures in this paper. Phased-mission systems (PMS) are systems supporting missions characterized by multiple, consecutive, and nonoverlapping phases of operation. System components may be subject to different stresses as well as different reliability requirements throughout the course of the mission. As a result, component behavior and relationships may need to be modeled differently from phase to phase when performing a system-level reliability analysis. This consideration poses unique challenges to existing analysis methods. The challenges increase when common-cause failures (CCF) are incorporated in the model. CCF are multiple dependent component failures within a system that are a direct result of a shared root cause, such as sabotage, flood, earthquake, power outage, or human errors. It has been shown by many reliability studies that CCF tend to increase a system's joint failure probabilities and thus contribute significantly to the overall unreliability of systems subject to CCF. We propose a separable phase-modular approach to the reliability analysis of phased-mission systems with dependent common-cause failures as one way to meet the above challenges in an efficient and elegant manner. Our methodology is twofold: first, we separate the effects of CCF from the PMS analysis using the total probability theorem and the common-cause event space developed based on the elementary common-causes; next, we apply an efficient phase-modular approach to analyze the reliability of the PMS. The phase-modular approach employs both combinatorial binary decision diagram and Markov-chain solution methods as appropriate. We provide an example of a reliability analysis of a PMS with both static and dynamic phases as well as CCF as an illustration of our proposed approach. The example is based on information extracted from a Mars orbiter project.
The reliability model for this orbiter considers the various phases of Launch, Cruise, Mars Orbit Insertion, and Orbit. Some of the CCF for the orbiter in this mission include environmental effects, such as micrometeoroids, human operator errors, and software errors.
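
The total-probability recombination step described above can be sketched as follows; the common-cause event partition and the conditional unreliabilities are invented placeholders, not figures from the Mars orbiter study:

```python
# Sketch of the separable CCF approach: condition mission unreliability on a
# partition of common-cause events, then recombine via the total probability
# theorem. All probabilities below are illustrative.

# Partition of the common-cause event space (must sum to 1).
cc_events = {
    "no_ccf": 0.97,
    "micrometeoroid": 0.01,
    "operator_error": 0.02,
}

# Conditional mission unreliability given each common-cause condition, as
# would come from a per-condition phased-mission analysis (BDD or Markov).
unrel_given = {
    "no_ccf": 1e-4,
    "micrometeoroid": 0.5,
    "operator_error": 0.1,
}

def mission_unreliability(priors, conditionals):
    """Total probability theorem: U = sum_e P(e) * P(mission fails | e)."""
    assert abs(sum(priors.values()) - 1.0) < 1e-12, "events must partition"
    return sum(priors[e] * conditionals[e] for e in priors)

u = mission_unreliability(cc_events, unrel_given)
print(f"mission unreliability: {u:.6f}")
```

The per-condition unreliabilities would each come from a CCF-free phased-mission analysis, which is what makes the approach separable.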

3. Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report

Science.gov (United States)

Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn, Dick

2009-01-01

The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decisionmakers with the necessary information to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques.
The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts as well as performing major probabilistic assessments used to support flight rationale and help establish program requirements. During 2008, the Analysis Group performed more than 70 assessments. Although all these assessments were important, some were instrumental in the decisionmaking processes for the Shuttle and Constellation Programs. Two of the more significant tasks were the Space Transportation System (STS)-122 Low Level Cutoff PRA for the SSP and the Orion Pad Abort One (PA-1) PRA for the CxP. These two activities, along with the numerous other tasks the Analysis Group performed in 2008, are summarized in this report. This report also highlights several ongoing and upcoming efforts to provide crucial statistical and probabilistic assessments, such as the Extravehicular Activity (EVA) PRA for the Hubble Space Telescope service mission and the first fully integrated PRAs for the CxP's Lunar Sortie and ISS missions.

4. The Yale-Brown Obsessive Compulsive Scale: A Reliability Generalization Meta-Analysis.

Science.gov (United States)

López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Maria; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa

2015-10-01

The Yale-Brown Obsessive Compulsive Scale (Y-BOCS) is the most frequently applied test to assess obsessive compulsive symptoms. We conducted a reliability generalization meta-analysis on the Y-BOCS to estimate the average reliability, examine the variability among the reliability estimates, search for moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the Y-BOCS. We included studies where the Y-BOCS was applied to a sample of adults and a reliability estimate was reported. Out of the 11,490 references located, 144 studies met the selection criteria. For the total scale, the mean reliability was 0.866 for coefficients alpha, 0.848 for test-retest correlations, and 0.922 for intraclass correlations. The moderator analyses led to a predictive model where the standard deviation of the total test and the target population (clinical vs. nonclinical) explained 38.6% of the total variability among coefficients alpha. Finally, clinical implications of the results are discussed. PMID:25268017
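
As a rough illustration of pooling reliability coefficients across studies, the sketch below averages hypothetical alpha values using the Hakstian-Whalen cube-root transformation; the alphas and sample sizes are invented, and the meta-analysis itself used a more elaborate weighting and moderator model:

```python
# Minimal sketch of averaging coefficient alpha across studies with the
# Hakstian-Whalen normalizing transformation T = (1 - alpha)^(1/3).
# The alphas and sample sizes below are made up, not Y-BOCS data.
alphas = [0.86, 0.90, 0.82, 0.88]
ns = [120, 250, 80, 150]          # study sample sizes used as weights

t = [(1 - a) ** (1 / 3) for a in alphas]
t_bar = sum(w * ti for w, ti in zip(ns, t)) / sum(ns)
alpha_bar = 1 - t_bar ** 3        # back-transform to the alpha metric

print(f"pooled mean alpha: {alpha_bar:.3f}")
```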

5. Markov chain modelling of reliability analysis and prediction under mixed mode loading

Science.gov (United States)

Singh, Salvinder; Abdullah, Shahrum; Nik Mohamed, Nik Abdullah; Mohd Noorani, Mohd Salmi

2015-03-01

The reliability assessment of an automobile crankshaft provides an important understanding in dealing with the design life of the component in order to eliminate or reduce the likelihood of failure and safety risks. Failures of crankshafts are considered catastrophic, leading to severe failure of the engine block and its other connecting subcomponents. The reliability of an automotive crankshaft under mixed mode loading is studied using the Markov Chain Model. The Markov Chain is modelled using a two-state condition to represent the bending and torsion loads that occur on the crankshaft. The automotive crankshaft represents a good case study of a component under mixed mode loading due to the rotating bending and torsion stresses. An estimate of the Weibull shape parameter is used to obtain the probability density function, cumulative distribution function, hazard and reliability rate functions, the bathtub curve and the mean time to failure. It is shown how the various properties of the shape parameter are used to model the failure characteristics through the bathtub curve. Likewise, an understanding of the patterns exhibited by the hazard rate of the component can be used to improve the design and increase the life cycle based on the reliability and dependability of the component. The proposed reliability assessment provides an accurate, efficient, fast and cost-effective reliability analysis in contrast to costly and lengthy experimental techniques.
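
The Weibull quantities named in the abstract (density, reliability, hazard rate, and mean time to failure) can be sketched as follows; the shape and scale values are illustrative choices, not estimates from the crankshaft study:

```python
# Hedged sketch of the Weibull functions behind a bathtub-curve analysis.
import math

beta, eta = 1.8, 2.0e5   # shape > 1 -> wear-out region of the bathtub curve

def reliability(t):      # R(t) = exp(-(t/eta)^beta)
    return math.exp(-((t / eta) ** beta))

def pdf(t):              # f(t) = (beta/eta) * (t/eta)^(beta-1) * R(t)
    return (beta / eta) * (t / eta) ** (beta - 1) * reliability(t)

def hazard(t):           # h(t) = f(t) / R(t); increasing when beta > 1
    return pdf(t) / reliability(t)

# Mean time to failure: MTTF = eta * Gamma(1 + 1/beta)
mttf = eta * math.gamma(1 + 1 / beta)
print(f"R(1e5) = {reliability(1e5):.4f}, MTTF = {mttf:.0f} cycles")
```

A shape parameter below 1 would instead model the infant-mortality region, and beta = 1 the constant-hazard floor of the bathtub curve.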

6. Guidelines for reliability analysis of digital systems in PSA context. Phase 1 status report

Energy Technology Data Exchange (ETDEWEB)

Authen, S.; Larsson, J. (Risk Pilot AB, Stockholm (Sweden)); Bjoerkman, K.; Holmberg, J.-E. (VTT, Helsingfors (Finland))

2010-12-15

Digital protection and control systems are appearing as upgrades in older nuclear power plants (NPPs) and are commonplace in new NPPs. To assess the risk of NPP operation and to determine the risk impact of digital system upgrades on NPPs, quantitative reliability models are needed for digital systems. Due to the many unique attributes of these systems, challenges exist in systems analysis, modeling and in data collection. Currently there is no consensus on reliability analysis approaches. Traditional methods have clear limitations, but more dynamic approaches are still at the trial stage and can be difficult to apply in full-scale probabilistic safety assessments (PSA). Few PSAs worldwide include reliability models of digital I&C systems. A comparison of Nordic experiences and a literature review of the main international references have been performed in this pre-study project. The study shows a wide range of approaches, and also indicates that no state of the art currently exists. The study shows areas where the different PSAs agree and gives the basis for development of a common taxonomy for reliability analysis of digital systems. It is still an open matter whether software reliability needs to be explicitly modelled in the PSA. The most important issue concerning software reliability is a proper description of the impact that software-based systems have on the dependence between the safety functions and the structure of accident sequences. In general, the conventional fault tree approach seems to be sufficient for modelling functions of the reactor protection system type. The following focus areas have been identified for further activities: 1. A common taxonomy of hardware and software failure modes of digital components for common use. 2. Guidelines regarding the level of detail in system analysis and the screening of components, failure modes and dependencies. 3. An approach for modelling of CCF between components (including software). (Author)
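
As a minimal illustration of the conventional fault tree approach the report finds sufficient for reactor-protection-type functions, the sketch below evaluates a toy two-channel system with a software common-cause failure; the structure and probabilities are invented:

```python
# Toy fault-tree evaluation: AND/OR gates over independent basic events.
# All failure probabilities are illustrative.
def AND(*ps):   # top fails only if all inputs fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def OR(*ps):    # top fails if at least one input fails (independence assumed)
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

hw_channel = 1e-4   # hardware failure of one redundant I&C channel
sw_ccf = 1e-5       # software common-cause failure defeating both channels
top = OR(AND(hw_channel, hw_channel), sw_ccf)
print(f"P(top) = {top:.3e}")
```

The software CCF term dominates here, which is exactly why the report singles out CCF modelling between components as a focus area.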

7. Guidelines for reliability analysis of digital systems in PSA context. Phase 1 status report

International Nuclear Information System (INIS)

Digital protection and control systems are appearing as upgrades in older nuclear power plants (NPPs) and are commonplace in new NPPs. To assess the risk of NPP operation and to determine the risk impact of digital system upgrades on NPPs, quantitative reliability models are needed for digital systems. Due to the many unique attributes of these systems, challenges exist in systems analysis, modeling and in data collection. Currently there is no consensus on reliability analysis approaches. Traditional methods have clear limitations, but more dynamic approaches are still at the trial stage and can be difficult to apply in full-scale probabilistic safety assessments (PSA). Few PSAs worldwide include reliability models of digital I&C systems. A comparison of Nordic experiences and a literature review of the main international references have been performed in this pre-study project. The study shows a wide range of approaches, and also indicates that no state of the art currently exists. The study shows areas where the different PSAs agree and gives the basis for development of a common taxonomy for reliability analysis of digital systems. It is still an open matter whether software reliability needs to be explicitly modelled in the PSA. The most important issue concerning software reliability is a proper description of the impact that software-based systems have on the dependence between the safety functions and the structure of accident sequences. In general, the conventional fault tree approach seems to be sufficient for modelling functions of the reactor protection system type. The following focus areas have been identified for further activities: 1. A common taxonomy of hardware and software failure modes of digital components for common use. 2. Guidelines regarding the level of detail in system analysis and the screening of components, failure modes and dependencies. 3. An approach for modelling of CCF between components (including software). (Author)

8. Wellness Model of Supervision: A Comparative Analysis

Science.gov (United States)

Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

2012-01-01

This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

Directory of Open Access Journals (Sweden)

Noureldin Mohamed Abdelaal

2014-12-01

10. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

International Nuclear Information System (INIS)

This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population that actually failed, and found a statistically significant factor-of-two bias on the average
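
The factor-of-bias comparison can be illustrated with the counts reported above (4071 tasks, 45 failures); the mean ASEP HEP used below is a hypothetical placeholder, since the abstract reports only the resulting factor-of-two finding:

```python
# The abstract reports 4071 critical tasks with 45 failures; the mean ASEP
# HEP below is a made-up placeholder illustrating the bias comparison.
n_tasks, n_failed = 4071, 45
observed_rate = n_failed / n_tasks            # observed failure fraction

mean_asep_hep = 0.022                         # hypothetical ASEP average
bias_factor = mean_asep_hep / observed_rate   # > 1 means ASEP is conservative

print(f"observed failure rate: {observed_rate:.4f}")
print(f"bias factor (ASEP / observed): {bias_factor:.1f}")
```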

11. Comparative Distributions of Hazard Modeling Analysis

Directory of Open Access Journals (Sweden)

Rana Abdul Wajid

2006-07-01

Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution, which approaches different distributions.

12. A probabilistic capacity spectrum strategy for the reliability analysis of bridge pile shafts considering soil structure interaction

Scientific Electronic Library Online (English)

Dookie, Kim; Sandeep, Chaudhary; Charito Fe, Nocete; Feng, Wang; Do Hyung, Lee.

2011-09-01

Full Text Available This paper presents a probabilistic capacity spectrum strategy for the reliability analysis of a bridge pile shaft, accounting for uncertainties in design factors in the analysis and the soil-structure interaction (SSI). The Monte Carlo simulation method (MCS) is adopted to determine the probabilities of failure by comparing the responses with defined limit states. The analysis considers the soil-structure interaction together with the probabilistic application of the capacity spectrum method for different types of limit states. A cast-in-drilled-hole (CIDH) extended reinforced concrete pile shaft of a bridge is analysed using the proposed strategy. The results of the analysis show that the SSI can lead to an increase or decrease of the structure's probability of failure depending on the definition of the limit states.
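
A minimal Monte Carlo sketch of the strategy, assuming a simple normal capacity/demand limit state rather than the actual pile-shaft model:

```python
# Monte Carlo estimate of a probability of failure: sample random design
# variables, evaluate a limit state g = capacity - demand, count g < 0.
# The normal capacity/demand model below is illustrative only.
import random

random.seed(0)
N = 200_000

def limit_state(capacity, demand):
    return capacity - demand        # failure when g < 0

failures = 0
for _ in range(N):
    capacity = random.gauss(10.0, 1.5)   # e.g. spectral capacity
    demand = random.gauss(6.0, 1.2)      # e.g. spectral demand
    if limit_state(capacity, demand) < 0:
        failures += 1

pf = failures / N
print(f"estimated probability of failure: {pf:.4f}")
```

In a full analysis the limit-state evaluation would wrap the capacity spectrum computation with SSI, but the sampling-and-counting skeleton is the same.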

13. Software Reliability Growth Model with Logistic- Exponential Testing-Effort Function and Analysis of Software Release Policy

OpenAIRE

2011-01-01

Software reliability is one of the important factors of software quality. Before software is delivered to market, it is thoroughly checked and errors are removed. Every software company wants to develop software that is error free. Software reliability growth models help the software industry to develop software that is error free and reliable. In this paper, an analysis is done based on incorporating the logistic-exponential testing-effort into an NHPP software reliability grow...

14. Human reliability analysis of the Tehran research reactor using the SPAR-H method

Directory of Open Access Journals (Sweden)

Barati Ramin

2012-01-01

Full Text Available The purpose of this paper is to cover human reliability analysis of the Tehran research reactor using an appropriate method for the representation of human failure probabilities. In the present work, the technique for human error rate prediction and the standardized plant analysis risk-human reliability (SPAR-H) method, both applied extensively to nuclear power plants, have been utilized to quantify different categories of human errors. Human reliability analysis is, indeed, an integral and significant part of probabilistic safety analysis studies; without it, probabilistic safety analysis would not be a systematic and complete representation of actual plant risks. In addition, possible human errors in research reactors constitute a significant part of the associated risk of such installations, and including them in a probabilistic safety analysis for such facilities is a complicated issue. SPAR-H can be used to address these concerns; it is a well-documented and systematic human reliability analysis system with tables of human performance choices prepared in consultation with experts in the domain. In this method, performance shaping factors are selected via tables, human action dependencies are accounted for, and the method is well designed for the intended use. In this study, in consultation with reactor operators, human errors were identified and adequate performance shaping factors were assigned to produce proper human failure probabilities. Our importance analysis has revealed that the human actions related to the possibility of an external object falling on the reactor core are the most significant human errors for the Tehran research reactor, to be considered in reactor emergency operating procedures and operator training programs aimed at improving reactor safety.
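
The SPAR-H quantification step can be sketched as below, assuming the standard scheme of a nominal HEP scaled by the product of performance shaping factor (PSF) multipliers; the PSF values are illustrative, not those assigned for the Tehran research reactor:

```python
# Hedged sketch of SPAR-H quantification: nominal HEP (commonly 0.01 for
# diagnosis, 0.001 for action tasks) times the composite PSF multiplier,
# with the SPAR-H adjustment when three or more PSFs are negative.
def spar_h_hep(nhep, psf_multipliers):
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    hep = nhep * composite
    # Adjustment keeps the result a valid probability when >= 3 PSFs
    # are negative (multiplier > 1).
    if sum(1 for m in psf_multipliers if m > 1) >= 3:
        hep = nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(hep, 1.0)

# Example: diagnosis task with high stress (x2) and poor ergonomics (x10),
# all other PSFs nominal (x1). Values are illustrative.
hep = spar_h_hep(0.01, [2, 10, 1, 1, 1, 1, 1, 1])
print(f"HEP = {hep:.3f}")
```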

15. Resonance frequency analysis-reliability in third generation instruments: Osstell mentor®

OpenAIRE

Herrero-Climent, Mariano; Albertini, Matteo; Rios-Santos, Jose V.; Lázaro-Calvo, Pedro; Fernández-Palacín, Ana; Bullon, Pedro

2012-01-01

Objective: Few studies assess repeatability and reproducibility in resonance frequency analysis (RFA) registers, a measure of dental implant stability. This study is aimed at assessing the reliability (repeatability and reproducibility) of the Osstell Mentor® system, using the intraclass correlation coefficient (ICC) as the statistical method. Study Design: ISQ measurements of RFA ...

16. Reliability analysis of earth dams : case of El Houareb dam, Kairouan, Tunisia

Energy Technology Data Exchange (ETDEWEB)

Kheder, K. [Inst. Superieur des Etudes Technologiques de Nabeul (Tunisia); M'Rabet, Z. [Geo-Risk Consulting, New York, NY (United States); Ouni, R. [Inst. National Agronomique de Tunis (Tunisia); Chalbaoui, M. [Inst. Superieur des Etudes Technologiques de Gafsa (Tunisia); Chetouane, B. [Ecole des Mines d'Ales (France)

2005-07-01

The El Houareb dam in central Tunisia was constructed to contain Merguellil wadi floods. This paper addressed the issue of applying probabilistic approaches to geotechnical engineering problems. Probabilistic techniques offer the only systematic way of treating and reducing uncertainty within the design process. A reliability analysis was performed to evaluate the probability of failure of the dam. The analysis was evaluated using a conventional 3-dimensional limit equilibrium analysis. A simplified reliability analysis associated with a particular failure mechanism was considered. The spatial variability of soil properties was represented by a random field. The study also addressed the correlation of mechanical and physical properties that exists between two points in space within a dam. It was noted that risk assessment of embankment dams must address the consequences of a dam's slope failure as well as the probability of failure. It was also noted that reliability analysis plays a major role in considering the uncertainties influencing the design of earth structures and allows the geotechnical engineer to quantify the effect of various failure-preventive measures on these structures. Reliability assessment methods are being used more frequently to develop rigorous risk-management programs that consider all sources of uncertainty. 13 refs., 3 tabs., 3 figs.

17. Human reliability analysis. Performance shaping factors of stress agents and internal factors

International Nuclear Information System (INIS)

This document is part of a study regarding Human Reliability Analysis (HRA) for nuclear power plants, evaluating the effects of human error on the availability of engineered safety features and systems. The purpose of the document is to introduce the concepts of stress and the relationship between performance and stress level. The more important characteristics of human behavior in terms of HRA are discussed

18. A Proposed New "What if Reliability" Analysis for Assessing the Statistical Significance of Bivariate Relationships

Science.gov (United States)

Onwuegbuzie, Anthony J.; Roberts, J. Kyle; Daniel, Larry G.

2005-01-01

In this article, the authors (a) illustrate how displaying disattenuated correlation coefficients alongside their unadjusted counterparts will allow researchers to assess the impact of unreliability on bivariate relationships and (b) demonstrate how a proposed new "what if reliability" analysis can complement null hypothesis significance tests of…

19. Development of web-based reliability data analysis algorithm model and its application

Energy Technology Data Exchange (ETDEWEB)

Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Jae, Moosung [Department of Nuclear Engineering, Hanyang University, 17 Haengdang, Sungdong, Seoul (Korea, Republic of)]

2010-02-15

For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.
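
The Fussell-Vesely importance measure used for validation can be sketched under the rare-event approximation; the cut sets and basic-event probabilities below are invented:

```python
# Fussell-Vesely importance under the rare-event approximation: FV_i is the
# fraction of the top-event probability carried by minimal cut sets that
# contain component i. All numbers are illustrative.
q = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4}
min_cut_sets = [{"pump_a", "pump_b"}, {"valve"}]

def cut_set_prob(cs):
    p = 1.0
    for comp in cs:
        p *= q[comp]
    return p

# Rare-event approximation: top probability as the sum of cut-set probabilities.
q_top = sum(cut_set_prob(cs) for cs in min_cut_sets)

def fussell_vesely(comp):
    contrib = sum(cut_set_prob(cs) for cs in min_cut_sets if comp in cs)
    return contrib / q_top

print(f"FV(valve)  = {fussell_vesely('valve'):.3f}")
print(f"FV(pump_a) = {fussell_vesely('pump_a'):.3f}")
```

The single-component cut set dominates here, so the valve carries nearly all the importance despite its lower individual failure probability.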

20. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

Science.gov (United States)

Fagundo, Arturo

1994-01-01

Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
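
A minimal Markov reliability evaluation of the kind such hierarchical techniques decompose, for a hypothetical two-component parallel subsystem:

```python
# Markov reliability model of a two-component parallel system: states count
# failed components, state 2 (both failed) is absorbing. Forward Euler
# integration of the state equations; the failure rate is illustrative.
lam = 1e-3            # failures per hour, per component
dt, t_end = 0.1, 1000.0

p = [1.0, 0.0, 0.0]   # P(0 failed), P(1 failed), P(2 failed = system down)
t = 0.0
while t < t_end:
    dp0 = -2 * lam * p[0]
    dp1 = 2 * lam * p[0] - lam * p[1]
    dp2 = lam * p[1]
    p = [p[0] + dt * dp0, p[1] + dt * dp1, p[2] + dt * dp2]
    t += dt

reliability = p[0] + p[1]   # system up while fewer than 2 components failed
print(f"R(1000 h) = {reliability:.4f}")
```

In a hierarchical scheme this subsystem result (here both aggregate up-states) would be passed upward and combined with the rest of the system model, which is where the numerical-error considerations discussed in the thesis enter.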

1. Development of web-based reliability data analysis algorithm model and its application

International Nuclear Information System (INIS)

For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

2. Performance of nuclear power plants and analysis of some factors affecting their operational reliability and economy

International Nuclear Information System (INIS)

In Czechoslovakia, there are eight WWER 440 type reactors in operation at present. Since their introduction into operation, nuclear power plants in Czechoslovakia have exhibited high reliability. In the paper, overall reliability parameters are presented, together with an analysis of the causes negatively affecting their annual utilization. The existence of a computerized information system for the acquisition, recording and evaluation of reliability-significant operational data, with feedback to designers and manufacturers of nuclear power plant equipment and components, is a basic requirement for the systematic assurance of the needed level of nuclear power plant reliability. The information system is simultaneously also used for a realistic evaluation of the aging of equipment and systems. Analysis of the state of equipment is important mainly in the final stage of the NPP, during consideration of further extension of its service life. The environmental effects of the Czechoslovak NPPs are very low (favourable). A comparison of the annual dose equivalents of the Czechoslovak NPPs' operational personnel with those of foreign NPPs shows that the values recorded in Czechoslovakia are among the lowest. In conclusion, some ways of assuring the operational safety and reliability of the Czechoslovak nuclear power plants, including the role of the State regulatory body, are briefly discussed. (author). 3 tabs

3. Comparative analysis of plant lycopene cyclases.

Science.gov (United States)

Koc, Ibrahim; Filiz, Ertugrul; Tombuloglu, Huseyin

2015-10-01

Carotenoids are essential isoprenoid pigments produced by plants, algae, fungi and bacteria. Lycopene cyclases (LYCs) commonly cyclize carotenoids at one or both ends of the backbone, an important branching step in carotenogenesis. Plants have two types of LYC (β-LCY and ε-LCY). In this study, plant LYCs were analyzed. Based on domain analysis, all LYCs accommodate the lycopene cyclase domain (Pf05834). Furthermore, motif analysis indicated that motifs were conserved among the plants. On the basis of phylogenetic analysis, β-LCYs and ε-LCYs were classified into β and ε groups. Monocot and dicot plants separated from each other in the phylogenetic tree. Subsequently, the LYCs of Oryza sativa Japonica Group and Zea mays as monocot plants and of Vitis vinifera and Solanum lycopersicum as dicot plants were analyzed. According to nucleotide diversity analysis of the β-LCY and ε-LCY genes, nucleotide diversities were found to be β: 0.30 and ε: 0.25, respectively. The result highlighted that β-LCY genes showed higher nucleotide diversity than ε-LCY genes. LYC-interacting genes and their co-expression partners were also predicted using the String server. The obtained data suggested the importance of LYCs in carotenoid metabolism. 3D modeling revealed that the depicted structures were similar in O. sativa, Z. mays, S. lycopersicum, and V. vinifera β-LCYs and ε-LCYs. Likewise, the predicted binding sites were highly similar between O. sativa, Z. mays, S. lycopersicum, and V. vinifera LCYs. Most importantly, the analysis elucidated the V/IXGXGXXGXXXA motif for both types of LYC (β-LCY and ε-LCY). This motif is related to the Rossmann fold domain and probably provides a flat platform for binding of FAD in O. sativa, Z. mays, S. lycopersicum, and V. vinifera β-LCYs and ε-LCYs with conserved structure. In addition to the lycopene cyclase domain, the V/IXGXGXXGXXXA motif can be used for exploring LYC proteins and to annotate the function of unknown proteins containing the lycopene cyclase domain.
Overall results indicated that a high degree of conserved signature were observed in plant LYCs. PMID:26092704

4. Industrialization Lessons from BRICS: A Comparative Analysis

OpenAIRE

Naudé, Wim A.; Szirmai, Adam; Lavopa, Alejandro

2013-01-01

To date there have been few systematic and comparative empirical analyses of the nature of economic development in Brazil, Russia, India, China and South Africa (BRICS). We contribute to addressing this gap by exploring the patterns of structural change between 1980 and 2010, focusing on the manufacturing sector. We show that three of the BRICS are experiencing de-industrialization (Brazil, Russia and South Africa). China is the only country where an expanding manufacturing sector accounts for...

5. Comparative Transcriptome Analysis of Four Prymnesiophyte Algae

OpenAIRE

Koid, Amy E.; Liu, Zhenfeng; Terrado, Ramon; Jones, Adriane C.; Caron, David A.; Heidelberg, Karla B.

2014-01-01

Genomic studies of bacteria, archaea and viruses have provided insights into the microbial world by unveiling potential functional capabilities and molecular pathways. However, the rate of discovery has been slower among microbial eukaryotes, whose genomes are larger and more complex. Transcriptomic approaches provide a cost-effective alternative for examining genetic potential and physiological responses of microbial eukaryotes to environmental stimuli. In this study, we generated and compar...

6. Comparing Electoral Systems: A Geometric Analysis

OpenAIRE

Riviere, Anouk

2003-01-01

This paper constructs a game-theoretic model of elections in alternative electoral systems with three or four candidates. Each electoral system specifies how the platforms of the candidates and their scores give rise to an outcome. When geometrical analysis shows that two outcomes can compete against each other for victory, a pivot probability is associated to that pair. Each voter is rational and picks the candidate that maximizes her expected utility, which results from the balancing of her...

7. Comparative analysis of plant oil based fuels

Energy Technology Data Exchange (ETDEWEB)

Ziejewski, M.; Goettler, H.J.; Haines, H.; Huong, C.

1995-12-31

This paper presents the evaluation results from the analysis of different blends of fuels using the 13-mode standard SAE testing method. Six high oleic safflower oil blends, six ester blends, six high oleic sunflower oil blends, and six sunflower oil blends were used in this portion of the investigation. Additionally, the results from the repeated 13-mode tests for all the 25/75% mixtures with a complete diesel fuel test before and after each alternative fuel are presented.

8. Reliable and Efficient Procedure for Steady-State Analysis of Nonautonomous and Autonomous Systems

Directory of Open Access Journals (Sweden)

J. Dobes

2012-04-01

Full Text Available The majority of contemporary design tools still do not contain steady-state algorithms, especially for autonomous systems. This is mainly caused by the insufficient accuracy of the algorithms for numerical integration, but also by the unreliability of the steady-state algorithms themselves. Therefore, in the paper, a very stable and efficient procedure for the numerical integration of nonlinear differential-algebraic systems is defined first. Afterwards, two improved methods are defined for finding the steady state, which use this integration algorithm in their iteration loops. The first is based on the idea of extrapolation, and the second utilizes nonstandard time-domain sensitivity analysis. The two steady-state algorithms are compared by analyses of a rectifier and a C-class amplifier, and the extrapolation algorithm is primarily selected as the more reliable alternative. Finally, the method based on extrapolation, naturally cooperating with the algorithm for solving the differential-algebraic systems, is thoroughly tested on various electronic circuits: Van der Pol and Colpitts oscillators, a fragment of a large bipolar logic circuit, feedback and distributed microwave oscillators, and a power amplifier. The results confirm that the extrapolation method is faster than classical plain numerical integration, especially for larger circuits with complicated transients.
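
The extrapolation idea behind the first steady-state method can be sketched in miniature: for a stable circuit, successive end-of-period states converge geometrically, so Aitken's Δ² acceleration can jump ahead to the limit instead of integrating many more periods. The scalar contraction below is a toy stand-in for one period of numerical integration, not the paper's algorithm.

```python
def aitken_steady_state(seq):
    """Aitken delta-squared extrapolation of a convergent sequence."""
    x0, x1, x2 = seq[-3], seq[-2], seq[-1]
    denom = x2 - 2.0 * x1 + x0
    if denom == 0:          # already converged
        return x2
    return x2 - (x2 - x1) ** 2 / denom

# Toy "one period of integration": a linear contraction toward b/(1-a) = 5.0
a, b = 0.8, 1.0
xs = [0.0]
for _ in range(6):
    xs.append(a * xs[-1] + b)

est = aitken_steady_state(xs)   # exact limit for a linear recurrence
```

For a linear recurrence the extrapolation is exact; real circuits need a few such accelerated restarts inside the iteration loop.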

9. A virtual experimental technique for data collection for a Bayesian network approach to human reliability analysis

International Nuclear Information System (INIS)

Bayesian network (BN) is a powerful tool for human reliability analysis (HRA) as it can characterize the dependency among different human performance shaping factors (PSFs) and associated actions. It can also quantify the importance of the different PSFs that may cause a human error. Data required to fully quantify a BN for HRA in offshore emergency situations are not readily available; for many situations, there is little or no appropriate data. This presents significant challenges in assigning the prior and conditional probabilities that are required by the BN approach. To handle the data scarcity problem, this paper presents a data collection methodology using a virtual environment for a simplified BN model of offshore emergency evacuation. A two-level, three-factor experiment is used to collect human performance data under different mustering conditions. The collected data are integrated in the BN model and the results are compared with a previous study. The work demonstrates that the BN model can assess the human failure likelihood effectively. In addition, the BN model provides the opportunity to incorporate new evidence and handle complex interactions among PSFs and associated actions
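
The prior/conditional structure the abstract describes can be illustrated with a minimal two-node network: one performance shaping factor and one error node. All the probabilities below are invented for illustration, not taken from the study.

```python
# Illustrative prior over a single shaping factor (PSF): stress level
p_stress = {'high': 0.3, 'low': 0.7}
# Illustrative conditional probability table: P(error | stress)
p_error_given = {'high': 0.20, 'low': 0.02}

# Marginal probability of human error (sum over PSF states)
p_error = sum(p_stress[s] * p_error_given[s] for s in p_stress)

# Posterior of the shaping factor given an observed error (Bayes' rule),
# i.e. how new evidence updates the network
post_high = p_stress['high'] * p_error_given['high'] / p_error
```

Full HRA models condition on several interacting PSFs, but each node update follows this same sum-and-normalize pattern.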

10. Reliability analysis of stainless steel piping using a single stress corrosion cracking damage parameter

International Nuclear Information System (INIS)

This article presents the results of an investigation that combines standard methods of fracture mechanics, empirical correlations of stress-corrosion cracking, and probabilistic methods to provide an assessment of Intergranular Stress Corrosion Cracking (IGSCC) of stainless steel piping. This is done by simulating the cracking of stainless steel piping under IGSCC conditions using the general methodology recommended in the modified computer program Piping Reliability Analysis Including Seismic Events, and by characterizing IGSCC using a single damage parameter. Good correlation between the pipe end-life probability of leak and the damage values were found. These correlations were later used to generalize this probabilistic fracture model. Also, the probability of detection curves and the benefits of in-service inspection in order to reduce the probability of leak for nuclear piping systems subjected to IGSCC were discussed for several pipe sizes. It was found that greater benefits could be gained from inspections for the large pipe as compared to the small pipe sizes. Also, the results indicate that the use of a better inspection procedure can be more effective than a tenfold increase in the number of inspections of inferior quality. -- Highlights: • We simulate the pipe probability of failure under different level of SCC damages. • The residual stresses are adjusted to calibrate the model. • Good correlations between 40-year cumulative leak probabilities and D? are found. • These correlations were used to generalize this probabilistic fracture model. • We assess the effect of inspection procedures and scenarios on leak probabilities

11. System reliability assessment via sensitivity analysis in the Markov chain scheme

International Nuclear Information System (INIS)

Methods for reliability sensitivity analysis in the Markov chain scheme are presented, together with a new formulation which makes use of Generalized Perturbation Theory (GPT) methods. As well known, sensitivity methods are fundamental in system risk analysis, since they allow to identify important components, so to assist the analyst in finding weaknesses in design and operation and in suggesting optimal modifications for system upgrade. The relationship between the GPT sensitivity expression and the Birnbaum importance is also given

12. STATISTICAL ANALYSIS OF IMPLANT ANGLES EFFECTS ON ASYMMETRICAL NMOSFETs CHARACTERISTICS AND RELIABILITY

OpenAIRE

Dars, P.; Ternisien D'Ouville, T.; Mingam, H.; Merckel, G.

1988-01-01

Statistical analysis of the asymmetry in LDD NMOSFET electrical characteristics shows the influence of implantation angles on the non-overlap variation observed on devices realized on a 100 mm wafer and within the wafers of a batch. The study of the consequences of this dispersion on aging behaviour illustrates the importance of this parameter for reliability and the necessity to take it into account for accurate analysis of stress results.

13. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

Energy Technology Data Exchange (ETDEWEB)

Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

2006-10-01

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

14. Reliability and lifetime analysis of the control and protection system at the Bilibino NPP

International Nuclear Information System (INIS)

Nuclear Power Installations in Russia that have already been working for more than 20 years are periodically investigated (every two years) for a deep analysis of the physical and technical processes and metals of an installation, testing of cables and other electrical equipment, proper preventive maintenance, and reliability and risk analysis of safety-related systems. The importance of specific failure data together with adequate and faithful mathematical models is obvious; the data are the basis of the analyses and can be useful for preventive maintenance and other purposes. This paper briefly describes the situation around reliability data collection for Russian NPPs, but its main subject is to describe a procedure of reliability analysis that was applied to the Control and Protection System (CPS) at one of the old stations (Bilibino NPP, in the far east of Russia), and to demonstrate the procedure with real estimations made on operational specific failure data. The Bilibino NPP consists of four units with channel-type, graphite-moderated reactors operating since 1973. This specific type of reactor is used only at this NPP and nowhere else in Russia. The reliability analysis involves estimating such characteristics of components as the probability of occurrence of failure and probability density functions, and such characteristics of systems as the probability of failure on demand, availability, and mean and gamma-percentile residual lifetime. The analysis also includes constructing reliability diagrams and confidence intervals, choosing laws of distributions, and some other issues. Some practical techniques helping to apply the theoretical models are shown. All calculated results, including source failure data, are shown for some of the CPS's components and subsystems

15. Comparative Analysis of GOCI Ocean Color Products

Directory of Open Access Journals (Sweden)

Ruhul Amin

2015-10-01

Full Text Available The Geostationary Ocean Color Imager (GOCI) is the first geostationary ocean color sensor in orbit that provides bio-optical properties from coastal and open waters around the Korean Peninsula at unprecedented temporal resolution. In this study, we compare the normalized water-leaving radiance (nLw) products generated by the Naval Research Laboratory Automated Processing System (APS) with those produced by the stand-alone software package, the GOCI Data Processing System (GDPS), developed by the Korean Ocean Research & Development Institute (KORDI). Both results are then compared to the nLw measured by the above-water radiometer at the Ieodo site. This above-water radiometer is part of the Aerosol Robotic NETwork (AeroNET). The results indicate that the APS- and GDPS-processed nLw correlate well within the same image slot, where the coefficient of determination (r2) is higher than 0.84 for all the bands from 412 nm to 745 nm. The agreement between APS and the AeroNET data is higher when compared to the GDPS results. The Root-Mean-Squared-Error (RMSE) between AeroNET and APS data ranges from 0.24 [mW/(cm2 sr µm)] at 555 nm to 0.52 [mW/(cm2 sr µm)] at 412 nm, while the RMSE between AeroNET and GDPS data ranges from 0.47 [mW/(cm2 sr µm)] at 443 nm to 0.69 [mW/(cm2 sr µm)] at 490 nm.

16. Comparative analysis of some search engines

OpenAIRE

Taiwo O. Edosomwan; Joseph Edosomwan

2010-01-01

We compared the information retrieval performances of some popular search engines (namely, Google, Yahoo, AlltheWeb, Gigablast, Zworks and AltaVista and Bing/MSN) in response to a list of ten queries, varying in complexity. These queries were run on each search engine and the precision and response time of the retrieved results were recorded. The first ten documents on each retrieval output were evaluated as being ‘relevant’ or ‘non-relevant’ for evaluation of the search engine’s precision. T...

17. Design and analysis of reliable and fault-tolerant computer systems

CERN Document Server

Abd-El-Barr, Mostafa

2006-01-01

Covering both the theoretical and practical aspects of fault-tolerant mobile systems, and fault tolerance and analysis, this book tackles the current issues of reliability-based optimization of computer networks, fault-tolerant mobile systems, and fault tolerance and reliability of high speed and hierarchical networks.The book is divided into six parts to facilitate coverage of the material by course instructors and computer systems professionals. The sequence of chapters in each part ensures the gradual coverage of issues from the basics to the most recent developments. A useful set of refere

18. An analysis to determine industry's preferred option for an initial generic reliability database for CANDU

International Nuclear Information System (INIS)

The paper presents a multi-attribute analysis approach that was used to choose the industry's preferred option in developing a generic reliability database for CANDU. The Risk and Reliability Working Group (sponsored by CANDU Owners Group - Nuclear Safety Committee - COG NSC) was faced with the decision to assess in depth the following four options: a) Use External Data Base Only; b) Full CANDU generic database (All Utilities); c) CANDU Specific plus External Generic Databases; d) Ontario Power Generation (OPG)/ Bruce Power (BP) Experience Only. Nine decision criteria were utilized to rank the proposed options (alternatives). The Analytic Hierarchy Process (AHP) was used in carrying out the ranking process. (author)
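
The AHP ranking step works from a pairwise-comparison matrix whose principal eigenvector gives the option weights; the geometric-mean-of-rows approximation below recovers them exactly for a consistent matrix. The matrix here is constructed for illustration and has nothing to do with the actual COG NSC assessment.

```python
import math

# Illustrative priority vector; the pairwise matrix a_ij = w_i / w_j
# built from it is perfectly consistent by construction.
w_true = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in w_true] for wi in w_true]

# Geometric-mean approximation of the AHP priority (eigen)vector
gm = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(gm) for g in gm]   # normalized option weights
```

Real elicited matrices are rarely consistent, which is why AHP also reports a consistency ratio before the ranking is trusted.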

19. A comparative analysis of capacity adequacy policies

International Nuclear Information System (INIS)

In this paper a stochastic dynamic optimization model is used to analyze the effect of different generation adequacy policies in restructured power systems. The expansion decisions of profit-maximizing investors are simulated under a number of different market designs: Energy Only with and without a price cap, Capacity Payment, Capacity Obligation, Capacity Subscription, and Demand Elasticity. The results show that the overall social welfare is reduced compared to a centralized social welfare optimization for all policies except Capacity Subscription and Demand Elasticity. In particular, an energy only market with a low price cap leads to a significant increase in involuntary load shedding. Capacity payments and obligations give additional investment incentives and more generating capacity, but also result in a considerable transfer of wealth from consumers to producers due to the capacity payments. Increased demand elasticity increases social welfare, but also results in a transfer from producers to consumers, compared to the theoretical social welfare optimum. In contrast, the capacity subscription policy increases the social welfare, and both producers and consumers benefit. This is possible because capacity subscription explicitly utilizes differences in consumers' preferences for uninterrupted supply. This advantage must be weighed against the cost of implementation, which is not included in the model.

20. Comparative analysis of life insurance market

Directory of Open Access Journals (Sweden)

Malynych, Anna Mykolayivna

2011-05-01

Full Text Available The article deals with a comprehensive analysis of statistical insight into the development of the world and regional life insurance markets on the basis of macroeconomic indicators. The author locates the domestic life insurance market on the global scale, analyzes its development, and suggests a method to calculate a marketing life insurance index. The method was also tested on a database of 77 countries all over the world. The author also defines a national rating on the basis of the marketing life insurance index.

1. Passive system reliability analysis using Response Conditioning Method with an application to failure frequency estimation of Decay Heat Removal of PFBR

International Nuclear Information System (INIS)

Highlights: • This paper presents a computationally efficient and consistent methodology to evaluate reliability of passive nuclear safety systems. • The methodologies are experimented with passive Decay Heat Removal System of Prototype Fast Breeder Reactor (PFBR). • Inclusion of passive system reliability in Probabilistic Safety Assessment is demonstrated. • Failure frequencies are evaluated with respect to core damage as well as operational safety. - Abstract: Innovative nuclear reactor designs include passive means to achieve high reliability in accomplishing safety functions. Functional reliability analyses of passive systems include Monte Carlo sampling of system uncertainties, followed by propagation through mechanistic system models. For complex passive safety systems of high reliability, Monte Carlo simulations using mechanistic codes are computationally expensive and often become prohibitive. Passive system reliability analysis using recently proposed Response Conditioning Method, which incorporates the insights obtained from approximate solutions like response surfaces in simulations to obtain computationally efficient and consistent probability estimates, is presented in this paper. The method is applied to evaluate the reliability of passive Decay Heat Removal (DHR) system of Indian Prototype Fast Breeder Reactor (PFBR). The accuracy as well as efficiency of the method is compared with direct Monte Carlo simulation. The variability of the reliability values is estimated using bootstrap technique. The system abilities, to prevent critical structural damage as well as to ensure operational safety, are quantitatively ascertained. The system functional failure probabilities are integrated with hardware failure probabilities and the inclusion of passive system unreliability in Probabilistic Safety Assessment is demonstrated.
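
The core idea of conditioning Monte Carlo samples on a cheap approximate response can be sketched as follows: a surrogate (response-surface) model classifies most samples, and the expensive mechanistic model is run only for samples near the failure threshold. The toy limit state, surrogate, and band width below are all invented; the actual Response Conditioning Method and DHR model are far more elaborate.

```python
import math
import random

random.seed(0)

def full_model(x):
    """Stand-in for an expensive mechanistic response."""
    return x + 0.1 * math.sin(5 * x)

def surrogate(x):
    """Cheap response-surface approximation of the same response."""
    return x

N, limit, band = 100_000, 2.0, 0.3
failures, full_calls = 0, 0
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    g_hat = surrogate(x)
    if abs(g_hat - limit) < band:   # ambiguous: surrogate can't be trusted
        full_calls += 1             # ...so pay for the full model here only
        g = full_model(x)
    else:
        g = g_hat
    failures += g > limit

pf = failures / N                   # failure probability estimate
```

The estimate matches plain Monte Carlo while the expensive model is called for only a few percent of the samples, which is the efficiency argument the abstract makes.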

2. The design and use of reliability data base with analysis tool

International Nuclear Information System (INIS)

With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user a greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is standardized, and the statistical methods used in analyzing this data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for the ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possibly dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risk and clustering of failure-repair events. These ideas have been implemented in an analysis tool for analyzing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs

Scientific Electronic Library Online (English)

ENRIQUE RAÚL, VILLA-DIHARCE; PEDRO ENRIQUE, MONJARDIN.

2011-06-15

Full Text Available In industry, reliability studies are required to assure the quality level of manufactured products. Sometimes these products have several failure modes that must be considered in the reliability analysis, which becomes more complicated when the dependence pattern of the failure modes is included. This article presents the reliability analysis of components with two dependent failure modes, expressing the dependence pattern of the two failure modes through a copula function. This representation is appropriate because the failure-time distributions are different, Lognormal and Weibull. An example is presented that illustrates the reliability analysis of components with two failure modes. Abstract in english In industry, reliability studies are done to assure the quality level of products manufactured. Sometimes these products have several failure modes that must be considered in reliability analysis. This analysis is complicated when the dependence pattern is included. In this article a reliability analysis of components that have two dependent failure modes is proposed. The dependence pattern is expressed through a copula function; this representation is appropriate because the marginal distributions of failure times are from different families, Lognormal and Weibull. We provide an example which illustrates the reliability analysis of components with two failure modes.

4. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

Science.gov (United States)

Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

2003-01-01

This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
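
The two-parameter Weibull model underlying CARES/LIFE assigns a component survival probability that decays with stress. A minimal single-stress version, ignoring size effects, multiaxiality and subcritical crack growth, and using made-up material parameters rather than CARES/LIFE data, looks like this:

```python
import math

def weibull_reliability(stress, scale, modulus):
    """Two-parameter Weibull survival probability at a given stress.

    scale  : characteristic strength (stress at which ~63.2% fail)
    modulus: Weibull modulus m (higher = less strength scatter)
    """
    return math.exp(-((stress / scale) ** modulus))

# Illustrative ceramic-like parameters (not real material data)
R = weibull_reliability(stress=300.0, scale=400.0, modulus=10.0)
```

The full code integrates expressions of this form over element stresses and volumes from the finite element solution, but each evaluation reduces to this survival function.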

5. A model for reliability analysis and calculation applied in an example from chemical industry

Directory of Open Access Journals (Sweden)

Pejović Branko B.

2010-01-01

Full Text Available The subject of the paper is reliability design in the polymerization processes that occur in reactors of the chemical industry. The designed model is used to determine the characteristics and indicators of reliability, which enabled the determination of the basic factors that result in poor development of a process. This would reduce the anticipated losses through the ability to control them, as well as enable the improvement of the quality of production, which is the major goal of the paper. The reliability analysis and calculation use the deductive method based on designing a scheme for fault tree analysis of a system, built on inductive conclusions. It involves the use of standard logical symbols and the rules of Boolean algebra and mathematical logic. The paper finally gives the results of the work in the form of a quantitative and qualitative reliability analysis of the observed process, which served to obtain complete information on the probability of the top event in the process, as well as to support objective decision making and alternative solutions.
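
For independent basic events, the Boolean-algebra step of a fault tree evaluation reduces to AND gates multiplying probabilities and OR gates combining complements. A minimal sketch with invented event probabilities (not the paper's polymerization data):

```python
def p_or(*ps):
    """P(at least one of several independent events occurs)."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """P(all independent events occur)."""
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

# Hypothetical tree: top = (pump fails OR valve fails) AND backup fails
top = p_and(p_or(1e-3, 5e-4), 2e-2)
```

Real trees are evaluated via minimal cut sets, but every gate bottoms out in these two combinations.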

6. The Constant Comparative Method of Qualitative Analysis

Directory of Open Access Journals (Sweden)

Barney G. Glaser, Ph.D.

2008-11-01

Full Text Available Currently, the general approaches to the analysis of qualitative data are these: 1. If the analyst wishes to convert qualitative data into crudely quantifiable form so that he can provisionally test a hypothesis, he codes the data first and then analyzes it. He makes an effort to code “all relevant data [that] can be brought to bear on a point,” and then systematically assembles, assesses and analyzes these data in a fashion that will “constitute proof for a given proposition.” 2. If the analyst wishes only to generate theoretical ideas (new categories and their properties, hypotheses and interrelated hypotheses), he cannot be confined to the practice of coding first and then analyzing the data since, in generating theory, he is constantly redesigning and reintegrating his theoretical notions as he reviews his material. Explicit coding could serve his purpose, but the coding itself often seems an unnecessary, burdensome task. As a result, the analyst merely inspects his data for new properties of his theoretical categories, and writes memos on these properties. We wish to suggest a third approach

7. Comparative chromatin analysis of Trypanosoma congolense

Scientific Electronic Library Online (English)

Wolfram, Schlimme; Markus, Burri; Bruno, Betschart; Hermann, Hecker.

1994-06-01

Full Text Available The chromatin of Trypanosoma congolense was analyzed by electron microscopy. The chromatin is organized as nucleosome filaments but does not form a 30 nm fiber. There are five groups of histones, including a histone H1-like protein, which has a molecular weight within the range of the core histones and is extremely hydrophilic. Weak histone-histone interaction, a typical feature of trypanosome chromatin, was found. These results are similar to those for T. cruzi and T. b. brucei, but differ significantly from those for higher eukaryotes. The results confirm the model of trypanosome chromatin and support the theory of their early separation from the other eukaryotes during evolution. T. congolense is an excellent model for chromatin research on trypanosomes, because it is easy to cultivate and its chromatin has a relatively high stability compared to that of other trypanosomes.

8. Resilience and electricity systems: A comparative analysis

International Nuclear Information System (INIS)

Electricity systems have generally evolved based on the natural resources available locally. Few metrics exist to compare the security of electricity supply of different countries despite the increasing likelihood of potential shocks to the power system like energy price increases and carbon price regulation. This paper seeks to calculate a robust measure of national power system resilience by analysing each step in the process of transformation from raw energy to consumed electricity. Countries with sizeable deposits of mineral resources are used for comparison because of the need for electricity-intensive metals processing. We find that shifts in electricity-intensive industry can be predicted based on countries' power system resilience. - Highlights: • We establish a resilience index measure for major electricity systems. • We examine a range of OECD and developing nations electricity systems and their ability to cope with shocks. • Robustness measures are established to show resilience of electricity systems.

9. Comparative analysis of some search engines

Directory of Open Access Journals (Sweden)

Taiwo O. Edosomwan

2010-10-01

Full Text Available We compared the information retrieval performances of some popular search engines (namely Google, Yahoo, AlltheWeb, Gigablast, Zworks, AltaVista and Bing/MSN) in response to a list of ten queries, varying in complexity. These queries were run on each search engine and the precision and response time of the retrieved results were recorded. The first ten documents on each retrieval output were evaluated as being ‘relevant’ or ‘non-relevant’ for evaluation of the search engine’s precision. To evaluate response time, normalised recall ratios were calculated at various cut-off points for each query and search engine. This study shows that Google appears to be the best search engine in terms of both average precision (70%) and average response time (2 s). Gigablast and AlltheWeb performed the worst overall in this study.
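
The precision measure used in the study, the fraction of the first ten retrieved documents judged relevant, is straightforward to compute. The relevance judgements below are invented for illustration:

```python
def precision_at_k(relevance, k=10):
    """Fraction of the first k retrieved documents judged relevant."""
    top = relevance[:k]
    return sum(top) / len(top)

# 1 = relevant, 0 = non-relevant (made-up judgements for one query)
judged = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]
p10 = precision_at_k(judged)
```

Averaging this value over the ten test queries gives the per-engine figure (e.g., the 70% reported for Google).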

10. Nonlinear analysis of RED - a comparative study

International Nuclear Information System (INIS)

Random Early Detection (RED) is an active queue management (AQM) mechanism for routers on the Internet. In this paper, performance of RED and Adaptive RED are compared from the viewpoint of nonlinear dynamics. In particular, we reveal the relationship between the performance of the network and its nonlinear dynamical behavior. We measure the maximal Lyapunov exponent and Hurst parameter of the average queue length of RED and Adaptive RED, as well as the throughput and packet loss rate of the aggregate traffic on the bottleneck link. Our simulation scenarios include FTP flows and Web flows, one-way and two-way traffic. In most situations, Adaptive RED has smaller maximal Lyapunov exponents, lower Hurst parameters, higher throughput and lower packet loss rate than that of RED. This confirms that Adaptive RED has better performance than RED
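
The maximal Lyapunov exponent the authors measure can be illustrated on a one-dimensional map, where it is the orbit average of ln|f′(x)|. The logistic map at r = 4 (theoretical exponent ln 2 ≈ 0.693) stands in here for the queue-length dynamics, which in practice require attractor-reconstruction techniques from the time series instead.

```python
import math

# Maximal Lyapunov exponent of the logistic map x -> r*x*(1-x):
# the orbit average of ln|f'(x)| with f'(x) = r*(1 - 2x).
r, x = 4.0, 0.3
acc, n = 0.0, 100_000
for _ in range(n):
    acc += math.log(abs(r * (1.0 - 2.0 * x)))
    x = r * x * (1.0 - x)
lyap = acc / n
```

A positive value signals chaos, the same criterion the paper applies to RED's average queue length.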

11. Comparative analysis of some search engines

Scientific Electronic Library Online (English)

J, Edosomwan; TO, Edosomwan.

2010-12-01

Full Text Available We compared the information retrieval performances of some popular search engines (namely, Google, Yahoo, AlltheWeb, Gigablast, Zworks and AltaVista and Bing/MSN) in response to a list of ten queries, varying in complexity. These queries were run on each search engine and the precision and response [...] time of the retrieved results were recorded. The first ten documents on each retrieval output were evaluated as being 'relevant' or 'non-relevant' for evaluation of the search engine's precision. To evaluate response time, normalised recall ratios were calculated at various cut-off points for each query and search engine. This study shows that Google appears to be the best search engine in terms of both average precision (70%) and average response time (2 s). Gigablast and AlltheWeb performed the worst overall in this study.

12. The distal radius and ulna classification in assessing skeletal maturity: a simplified scheme and reliability analysis.

Science.gov (United States)

Cheung, Jason Pui Yin; Samartzis, Dino; Cheung, Prudence Wing Hang; Leung, Ka Hei; Cheung, Kenneth Man Chee; Luk, Keith Dip-Kei

2015-11-01

The aim of this study is to simplify the distal radius and ulna classification for practical use and to test its reliability. This was a prospective single-center study of patients with juvenile and adolescent idiopathic scoliosis. Left-hand radiographs were retrieved for measurements by three examiners. Reliability analysis was carried out by intraclass correlation. 34 females and 16 males were recruited, with a mean age of 12.7 years (SD ±1.7). The grades ranged from R4 to R11 and from U1 to U9. There was strong to near-perfect intraclass correlation. This study concludes that the simplified distal radius and ulna classification is a reliable tool for assessment of skeletal maturity. PMID:26196369
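
As a sketch of the reliability statistic used here, a one-way random-effects intraclass correlation (ICC(1,1)) can be computed directly from a subjects-by-raters table. The formula is the standard one-way ANOVA estimate; the example ratings are hypothetical, not the study's data:

```python
def icc_oneway(ratings):
    """ICC(1,1): one-way random-effects intraclass correlation.
    `ratings` is a list of subjects, each a list of k scores from k raters."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    # between-subjects and within-subjects mean squares
    ms_between = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, subj_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical grades from three examiners for four subjects
ratings = [[6, 6, 7], [8, 8, 8], [4, 5, 4], [10, 10, 9]]
print(round(icc_oneway(ratings), 3))   # 0.954 — strong agreement
```

Values approaching 1.0 correspond to the "strong to near-perfect" agreement the abstract reports.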

13. Reliability Omnipotent Analysis For First Stage Separator On The Separation Process Of Gas, Oil And Water

International Nuclear Information System (INIS)

Industrial reliability can be evaluated from two aspects, risk and economics, from which an optimum value can be determined. The risks in the oil refinery process are fire and explosion, so an assessment of this system must be done. One system of the oil refinery process is the first stage separator, which is used to separate gas, oil and water. The reliability evaluation for the first stage separator system has been done with the FMECA and HAZOP methods. From the analysis results, the probabilities of fire and explosion are 1.1x10-23/hour and 1.2x10-11/hour, respectively. The reliability value of the system is high because each undesired event is anticipated with a safety system or safety component.

14. Problems and chances for probabilistic fracture mechanics in the analysis of steel pressure boundary reliability

International Nuclear Information System (INIS)

It is shown that the difficulty for probabilistic fracture mechanics (PFM) is the general problem of the high reliability of a small population. There is no way around the problem as yet. Therefore what PFM can contribute to the reliability of steel pressure boundaries is demonstrated with the example of a typical reactor pressure vessel and critically discussed. Although no method is distinguishable that could give exact failure probabilities, PFM has several additional chances. Upper limits for failure probability may be obtained together with trends for design and operating conditions. Further, PFM can identify the most sensitive parameters, improved control of which would increase reliability. Thus PFM should play a vital role in the analysis of steel pressure boundaries despite all shortcomings. (author). 19 refs, 7 figs, 1 tab
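
The "upper limits for failure probability" mentioned above can be illustrated with the standard zero-failure confidence bound, a generic statistical device (not the paper's PFM calculation): after n failure-free trials, the (1 - α) upper bound on the per-trial failure probability is 1 - α^(1/n).

```python
def upper_bound_zero_failures(n_demands, confidence=0.95):
    """Upper (1 - alpha) confidence bound on a failure probability
    after n failure-free demands: p_up = 1 - alpha**(1/n)."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_demands)

# For large n this approaches the 'rule of three': p_up ~ 3/n at 95% confidence
print(round(upper_bound_zero_failures(1000), 5))   # 0.00299
```

This is exactly the "high reliability of a small population" difficulty: with few vessel-years of experience, even a spotless record only supports a modest upper bound.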

15. Reliability analysis of safety grade decay heat removal system of Indian prototype fast breeder reactor

International Nuclear Information System (INIS)

The 500 MW Indian pool type Prototype Fast Breeder Reactor (PFBR) is provided with two independent and diverse Decay Heat Removal (DHR) systems, viz., the Operating Grade Decay Heat Removal System (OGDHRS) and the Safety Grade Decay Heat Removal System (SGDHRS). OGDHRS utilizes the secondary sodium loops and the Steam-Water System with special decay heat removal condensers for the DHR function. The unreliability of this system is of the order of 0.1-0.01. The safety requirements of the present generation of fast reactors are very high, and specifically for the DHR function the failure frequency should be ≤1E-7/ry. Therefore, a passive SGDHR system using four completely independent thermo-siphon loops in natural convection mode is provided to ensure adequate core cooling for all Design Basis Events. The very high reliability requirement for the DHR function is achieved mainly with the help of SGDHRS. This paper presents the reliability analysis of the SGDHR system. The analysis is performed by the Fault Tree method using the 'CRAFT' software developed at the Indira Gandhi Centre for Atomic Research. This software has special features for compact representation and CCF analysis of the high redundancy safety systems encountered in nuclear reactors. Common Cause Failures (CCF) are evaluated by the β factor method. The reliability target for SGDHRS, derived from the DHR reliability requirement and the number of demands per year (7/y) on SGDHRS, is that the failure frequency should be ≤1.4E-8/de. Since the analysis found that the unreliability of SGDHRS with identical loops is 5.2E-6/de and is dominated by the leak rates of components like the AHX, DHX and sodium dump and isolation valves, options with diversity measures in important components were studied. The failure probability of SGDHRS for a design consisting of 2 types of diverse loops (diverse AHX, DHX and sodium dump and isolation valves) is 2.1E-8/de, which practically meets the reliability requirement.
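
The β factor method splits each train's failure probability into an independent part, (1 - β)q, and a common-cause part, βq, that fails all trains together. A minimal sketch for a 1-out-of-n redundant system follows; the numbers are illustrative, not PFBR values:

```python
def one_out_of_n_unavailability(q, beta, n):
    """Beta-factor sketch for a 1-out-of-n redundant system: the system
    fails if all n trains fail independently, or a single common-cause
    failure takes out every train at once."""
    independent = ((1.0 - beta) * q) ** n
    common_cause = beta * q
    return independent + common_cause

# Hypothetical per-demand loop failure probability q and beta factor
q, beta = 1e-3, 0.05
print(f"{one_out_of_n_unavailability(q, beta, 4):.2e}")   # 5.00e-05
```

Note that the common-cause term βq dominates the result, which is why adding identical loops stops helping and why the study turns to diverse loop designs to break the common-cause coupling.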

16. Comparative Modal Analysis of Sieve Hardware Designs

Science.gov (United States)

Thompson, Nathaniel

2012-01-01

The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh-Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.
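
Classical plate theory gives closed-form natural frequencies for a simply supported rectangular plate, the kind of baseline a Rayleigh-Ritz or finite element model is checked against. The sketch below uses that textbook formula; the plate dimensions and stiffness values are hypothetical, not the CHIMRA hardware's:

```python
import math

def plate_omega(m, n, a, b, D, rho_h):
    """Natural frequency (rad/s) of mode (m, n) of a simply supported
    rectangular plate under classical plate theory:
    omega = pi^2 * ((m/a)^2 + (n/b)^2) * sqrt(D / (rho * h))."""
    return math.pi ** 2 * ((m / a) ** 2 + (n / b) ** 2) * math.sqrt(D / rho_h)

# Hypothetical square plate: side 0.1 m, bending stiffness D = 10.0 N*m,
# mass per unit area rho*h = 2.7 kg/m^2
w11 = plate_omega(1, 1, 0.1, 0.1, 10.0, 2.7)
w12 = plate_omega(1, 2, 0.1, 0.1, 10.0, 2.7)
# For a square plate the (1,2) mode sits at 2.5x the fundamental frequency
print(round(w12 / w11, 2))   # 2.5
```

Comparing such frequency ratios between two designs is one quick way to judge whether their mode spacing, and hence their steady-state sieving response, will be similar.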

17. Comparative Analysis Of Cloud Computing Security Issues

Directory of Open Access Journals (Sweden)

AKRAM MUJAHID

2014-01-01

Full Text Available Almost all organizations are seriously thinking of adopting cloud computing services, seeing their benefits in terms of cost, accessibility, availability, flexibility and a highly automated process of updating. Cloud computing enhances current capabilities dynamically without further investment. Cloud computing is a band of resources, applications and services. In cloud computing, customers access IT-related services in terms of infrastructure, platform and software without knowledge of the underlying technologies. With the adoption of cloud computing, organizations have strong concerns about the security of their data. Organizations are hesitant to take initiatives in the deployment of their businesses due to the data security problem. This paper gives an overview of cloud computing and an analysis of security issues in cloud computing.

18. Comparative Analysis of Cystatin Superfamily in Platyhelminths

Science.gov (United States)

Guo, Aijiang

2015-01-01

The cystatin superfamily is comprised of cysteine proteinase inhibitors and encompasses at least 3 subfamilies: stefins, cystatins and kininogens. In this study, the platyhelminth cystatin superfamily was identified and grouped into stefin and cystatin subfamilies. The conserved domain of stefins (G, QxVxG) was observed in all members of platyhelminth stefins. The three characteristics of cystatins, the cystatin-like domain (G, QxVxG, PW), a signal peptide, and one or two conserved disulfide bonds, were observed in platyhelminths, with the exception of cestodes, which lacked the conserved disulfide bond. However, it is noteworthy that cestode cystatins had two tandem repeated domains, although the second tandem repeated domain did not contain a cystatin-like domain, which has not been previously reported. Tertiary structure analysis of Taenia solium cystatin, one of the cestode cystatins, demonstrated that the N-terminus of T. solium cystatin formed a five-turn α-helix, a five-stranded β-pleated sheet and a hydrophobic edge, similar to the structure of chicken cystatin. Although no conserved disulfide bond was found in T. solium cystatin, the models of T. solium cystatin and chicken cystatin corresponded at the site of the first disulfide bridge of the chicken cystatin. However, the two models were not similar regarding the location of the second disulfide bridge of chicken cystatin. These results showed that T. solium cystatin and chicken cystatin had similarities and differences, suggesting that the biochemistry of T. solium cystatin could be similar to chicken cystatin in its inhibitory function and that it may have further functional roles. The same results were obtained for other cestode cystatins. Phylogenetic analysis showed that cestode cystatins constituted an independent clade and implied that cestode cystatins should be considered to have formed a new clade during evolution. PMID:25853513

19. Comparative analysis of harmonized forest area estimates for European countries

DEFF Research Database (Denmark)

Seebach, Lucia Maria; Strobl, P.; Miguel-Ayanz, J. San; Gallego, J.; Bastrup-Birk, Annemarie

2011-01-01

Harmonized forest area information provides an important basis for environmental modelling and policy-making at both national and international levels. Traditionally, this information has been provided by national forest inventory statistics but is now increasingly complemented with remote sensing tools. Reliability and harmonization of both sources are important aspects to ensure comparability and to enable the development of international forest scenarios. Initiatives with the purpose of harmo...

20. Development and application for quantification of cognitive reliability and error analysis method

International Nuclear Information System (INIS)

The cognitive reliability and error analysis method (CREAM) is a representative method of the so-called second generation human reliability analysis (HRA) methods and can be used in both retrospective and prospective analysis. Not only is the general CREAM described, but also, using the context effect factor and the common performance condition (CPC) factor, a formula is obtained for calculating the cognitive error probability in the probabilistic safety assessment (PSA)/HRA of a nuclear power plant, providing a simplified CREAM quantification process. An example is given for the steam generator tube rupture (SGTR) event of Qinshan I Nuclear Power Plant. The results give more validated information for future application of PSA/HRA. (authors)
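
In the basic CREAM quantification, a nominal cognitive failure probability is adjusted by one weighting factor per CPC. A minimal sketch of that multiplication follows; the nominal value and weights are illustrative placeholders, not taken from the published CREAM tables:

```python
def cream_cfp(cfp0, cpc_weights):
    """Basic-method CREAM adjustment: multiply the nominal cognitive
    failure probability CFP0 by one weighting factor per CPC, capping
    the result at 1.0 (a probability cannot exceed 1)."""
    cfp = cfp0
    for w in cpc_weights:
        cfp *= w
    return min(cfp, 1.0)

# Hypothetical nominal CFP and CPC weighting factors (illustrative values):
# weights > 1 degrade performance, < 1 improve it, 1.0 is neutral
print(round(cream_cfp(1e-2, [2.0, 1.0, 0.8, 5.0]), 3))   # 0.08
```

A simplified quantification of the kind the paper describes amounts to choosing the nominal CFP for the failure type and reading the weights off the assessed CPC levels.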