Sample records for reliability analysis results

  1. Investigation for Ensuring the Reliability of the MELCOR Analysis Results

    Sung, Joonyoung; Maeng, Yunhwan; Lee, Jaeyoung [Handong Global Univ., Pohang (Korea, Republic of)]


    Some problems concerning the reliability of MELCOR results were raised in the 2nd technical report of the NSRC project. This work was proposed to investigate the reliability of MELCOR results in order to confirm the physical phenomena that occur in a Spent Fuel Pool (SFP) severe accident. To verify whether the MELCOR results are dependable, experimental data from phase 1 of the Sandia Fuel Project were used as a reference for comparison. In an SFP severe accident, especially in the boil-off, partial loss-of-coolant, and complete loss-of-coolant cases, the heat source and the flow rate are the main points for analyzing the MELCOR results. The heat source comprises decay heat and oxidation heat. The heat source is a main factor to be confirmed because, if heat accumulates in the spent fuel rods, the cladding temperature can rise continuously until oxidation heat is generated, leading to a zirconium fire. The flow rate must also be verified because it governs the thermal balance through heat transfer inside the fuel assembly. Most results showed that the MELCOR outputs differed significantly with minute changes of the main parameters under identical conditions. It is therefore necessary to choose oxidation coefficients that delineate the real phenomena as closely as possible.


    Giovanni Francesco Spatola


    The use of image analysis methods has allowed us to obtain more reliable and reproducible immunohistochemistry (IHC) results. Wider use of such approaches and simplification of software allowing a colorimetric study have meant that these methods are available to everyone, and have made it possible to standardize the technique with a reliable scoring system. Moreover, the recent introduction of multispectral image acquisition systems has further refined these techniques, minimizing artefacts and easing the evaluation of the data by the observer.

  3. Power electronics reliability analysis.

    Smith, Mark A.; Atcitty, Stanley


    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
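    The fault-tree approach described above, deriving system reliability from component reliability, can be sketched with simple AND/OR gate arithmetic for independent failures. The device structure and failure probabilities below are illustrative, not taken from the report's fictitious example.

```python
# Minimal fault-tree evaluation sketch: AND gates require all inputs to
# fail; OR gates fail if any input fails (independent failures assumed).

def and_gate(*fail_probs):
    # Gate fails only if every input fails.
    p = 1.0
    for f in fail_probs:
        p *= f
    return p

def or_gate(*fail_probs):
    # Gate fails if at least one input fails.
    p = 1.0
    for f in fail_probs:
        p *= (1.0 - f)
    return 1.0 - p

# Hypothetical power-electronics device: two redundant switching legs
# (AND gate, both must fail) in series with a controller and a cooling
# fan (OR gate, either one failing brings the system down).
legs = and_gate(0.02, 0.02)                 # redundant pair
system_fail = or_gate(legs, 0.001, 0.005)   # pair, controller, fan
print(f"System failure probability: {system_fail:.6f}")
```

A field-data approach would instead estimate each leaf probability directly from maintenance records before evaluating the same tree.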

  4. Ceramic material life prediction: A program to translate ANSYS results to CARES/LIFE reliability analysis

    Vonhermann, Pieter; Pintz, Adam


    This manual describes the use of the ANSCARES program to prepare a neutral file of FEM stress results, taken from ANSYS Release 5.0, in the format needed by the CARES/LIFE ceramics reliability program. It is intended for use by experienced users of ANSYS and CARES. Knowledge of compiling and linking FORTRAN programs is also required. Maximum use is made of existing routines (from other CARES interface programs and ANSYS routines) to extract the finite element results and prepare the neutral file for input to the reliability analysis. FORTRAN and machine language routines, as described, are used to read the ANSYS results file. Sub-element stresses are computed and written to a neutral file using FORTRAN subroutines which are nearly identical to those used in the NASCARES (MSC/NASTRAN to CARES) interface.

  5. Experimental results of fingerprint comparison validity and reliability: A review and critical analysis.

    Haber, Ralph Norman; Haber, Lyn


    Our purpose in this article is to determine whether the results of the published experiments on the accuracy and reliability of fingerprint comparison can be generalized to fingerprint laboratory casework, and/or to document the error rate of the Analysis-Comparison-Evaluation (ACE) method. We review the existing 13 published experiments on fingerprint comparison accuracy and reliability. These studies comprise the entire corpus of experimental research published on the accuracy of fingerprint comparisons since criminal courts first admitted forensic fingerprint evidence about 120 years ago. We start with the two studies by Ulery, Hicklin, Buscaglia and Roberts (2011, 2012), because they are recent, large, designed specifically to provide estimates of the accuracy and reliability of fingerprint comparisons, and to respond to the criticisms cited in the National Academy of Sciences Report (2009). Following the two Ulery et al. studies, we review and evaluate the other eleven experiments, considering problems that are unique to each. We then evaluate the 13 experiments for the problems common to all or most of them, especially with respect to the generalizability of their results to laboratory casework. Overall, we conclude that the experimental designs employed deviated from casework procedures in critical ways that preclude generalization of the results to casework. The experiments asked examiner-subjects to carry out their comparisons using responses different from those employed in casework; the experiments presented the comparisons in formats that differed from casework; the experiments enlisted highly trained examiners as experimental subjects rather than subjects drawn randomly from among all fingerprint examiners; the experiments did not use fingerprint test items known to be comparable in type and especially in difficulty to those encountered in casework; and the experiments did not require examiners to use the ACE method, nor was that method defined

  6. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L. [Pacific Northwest Lab., Richland, WA (United States)]


    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor of two bias on the average.
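    The bias test described above amounts to comparing the average predicted HEP with the observed failure fraction. The sketch below uses the observed counts from the report (45 failures out of 4071 tasks); the sample HEP values are hypothetical, not the ASEP estimates themselves.

```python
# Comparing an average predicted human error probability (HEP) with an
# observed failure fraction; a bias factor near 2 means the predictions
# are conservative by roughly a factor of two on average.
observed_rate = 45 / 4071                       # counts from the report
sample_heps = [0.003, 0.02, 0.05, 0.008, 0.03]  # hypothetical HEP estimates
avg_hep = sum(sample_heps) / len(sample_heps)
bias_factor = avg_hep / observed_rate
print(f"Observed failure rate: {observed_rate:.4f}")
print(f"Average predicted HEP: {avg_hep:.4f}")
print(f"Bias factor:           {bias_factor:.2f}")
```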

  7. Health search engine with e-document analysis for reliable search results.

    Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine


    After a review of the existing practical solutions available to the citizen for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions and references contained in the health literature, in order to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and reformulation of the query, WRAPIN offers the user privileged access to navigate through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine) and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as increased database coverage, explanation of the original functionalities, and audience adaptability. Thanks to the evaluation outcomes, WRAPIN is now in exploitation on the HON web site, free of charge. Intended for the citizen, it is a good alternative to general-purpose search engines when the user looks for trustworthy health and medical information or wants to check automatically the doubtful content of a Web page.

  8. Demands placed on waste package performance testing and modeling by some general results on reliability analysis

    Chesnut, D.A.


    Waste packages for a US nuclear waste repository are required to provide reasonable assurance of maintaining substantially complete containment of radionuclides for 300 to 1000 years after closure. The waiting time to failure for complex failure processes affecting engineered or manufactured systems is often found to be an exponentially-distributed random variable. Assuming that this simple distribution can be used to describe the behavior of a hypothetical single barrier waste package, calculations presented in this paper show that the mean time to failure (the only parameter needed to completely specify an exponential distribution) would have to be more than 10{sup 7} years in order to provide reasonable assurance of meeting this requirement. With two independent barriers, each would need to have a mean time to failure of only 10{sup 5} years to provide the same reliability. Other examples illustrate how multiple barriers can provide a strategy for not only achieving but demonstrating regulatory compliance.
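    The barrier numbers above can be checked directly under the exponential assumption: reliability is R(t) = exp(-t/MTTF), and with two independent barriers containment is lost only if both fail. A minimal sketch at the 1000-year requirement horizon:

```python
import math

# Exponential failure model: R(t) = exp(-t / MTTF).
t = 1000.0  # years, upper end of the containment requirement

# Single barrier, MTTF = 1e7 years
p_single = 1.0 - math.exp(-t / 1e7)

# Two independent barriers, each MTTF = 1e5 years: both must fail
p_each = 1.0 - math.exp(-t / 1e5)
p_double = p_each ** 2

print(f"Single barrier (MTTF 1e7 y): P(fail by 1000 y) = {p_single:.2e}")
print(f"Two barriers  (MTTF 1e5 y): P(fail by 1000 y) = {p_double:.2e}")
```

Both configurations give a failure probability near 1e-4 at 1000 years, which is why redundancy relaxes the per-barrier MTTF by two orders of magnitude.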

  9. Multidisciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  10. System Reliability Analysis: Foundations.


    performance formulas for systems subject to pre- ventive maintenance are given. V * ~, , 9 D -2 SYSTEM RELIABILITY ANALYSIS: FOUNDATIONS Richard E...reliability in this case is V P{s can communicate with the terminal t = h(p) Sp2(((((p p)p) p)p)gp) + p(l -p)(((pL p)p)(p 2 JLp)) + p(l -p)((p(p p...For undirected networks, the basic reference is A. Satyanarayana and Kevin Wood (1982). For directed networks, the basic reference is Avinash

  11. ATLAS reliability analysis

    Bartsch, R.R.


    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
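    A Weibull model with a shape parameter above 1 captures the increasing per-shot failure probability mentioned above. The scale and shape values below are illustrative placeholders, not fitted to the ATLAS life-test data.

```python
import math

# Weibull cumulative failure probability as a function of shot count n:
# F(n) = 1 - exp(-(n/scale)^shape); shape > 1 models wear-out, i.e. the
# probability of failure increases with accumulated shots.
def weibull_fail_prob(n, scale=2000.0, shape=1.5):
    return 1.0 - math.exp(-((n / scale) ** shape))

# Track system reliability against a per-shot target such as > 95%.
for n in (100, 500, 1000):
    print(f"after {n:4d} shots: reliability = {1.0 - weibull_fail_prob(n):.4f}")
```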

  12. Reliability analysis in intelligent machines

    Mcinroy, John E.; Saridis, George N.


    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed-torque control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  13. Rapid, reliable geodetic data analysis for hazard response: Results from the Advanced Rapid Imaging and Analysis (ARIA) project

    Owen, S. E.; Simons, M.; Hua, H.; Yun, S.; Cruz, J.; Webb, F.; Rosen, P. A.; Fielding, E. J.; Moore, A. W.; Polet, J.; Liu, Z.; Agram, P. S.; Lundgren, P.


    ARIA is a joint JPL/Caltech coordinated project to automate InSAR and GPS imaging capabilities for scientific understanding, hazard response, and societal benefit. Geodetic imaging's unique ability to capture surface deformation at high spatial and temporal resolution allows us to resolve the fault geometry and distribution of slip associated with earthquakes in detail. In certain cases, it can be complementary to seismic data, providing constraints on location, geometry, or magnitude that are difficult to determine with seismic data alone. In addition, remote sensing with SAR provides change detection and damage assessment capabilities for earthquakes, floods and other disasters, and can image even at night or through clouds. We have built an end-to-end prototype geodetic imaging data system that forms the foundation for a hazard response and science analysis capability that integrates InSAR, high-rate GPS, seismology, and modeling to deliver monitoring, science, and situational awareness products. This prototype incorporates state-of-the-art InSAR and GPS analysis algorithms from technologists and scientists. The products have been designed, and a feasibility study conducted, in collaboration with USGS scientists in the earthquake and volcano science programs. We will present results that show the capabilities of this data system in terms of latency, data processing capacity, quality of automated products, and feasibility of use for analysis of large SAR and GPS data sets and for earthquake response activities.

  14. Reliability Analysis of Sensor Networks

    JIN Yan; YANG Xiao-zong; WANG Ling


    To integrate the capacities of sensing, communication, computing, and actuation, one of the compelling technological advances of recent years has been the appearance of the distributed wireless sensor network (DSN) for information-gathering tasks. Because of limited resources, multi-hop routing between the sensor nodes and the sink node is necessary in order to save energy. In addition, unpredictable environmental factors make the sensor nodes unreliable. In this paper, the reliability of routing designed for sensor networks and some dependability issues of DSNs, such as the MTTF (mean time to failure) and the probability of connectivity between the sensor nodes and the sink node, are analyzed. An exact result cannot be obtained for an arbitrary network topology, as this is a #P-hard problem, so a reliability analysis of restricted clustering-based topologies is given. The method proposed in this paper shows a constructive idea about how to place energy-constrained sensor nodes in the network efficiently from the perspective of reliability.
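    A tractable special case of the connectivity analysis above is a single fixed multi-hop route: it works only if every node on it works, so it behaves as a series system, and with exponential node lifetimes the failure rates add. The node reliabilities and MTTF values below are illustrative.

```python
# Series-system view of one multi-hop route from a sensor to the sink.

def path_reliability(node_rels):
    # The route is up only if every node on it is up (independence assumed).
    r = 1.0
    for rel in node_rels:
        r *= rel
    return r

def series_mttf(node_mttfs):
    # Exponential lifetimes: failure rates add along the path,
    # so MTTF_path = 1 / sum(1 / MTTF_i).
    return 1.0 / sum(1.0 / m for m in node_mttfs)

hops = [0.99, 0.98, 0.97]  # per-node reliabilities along the route
print(f"Route reliability: {path_reliability(hops):.4f}")
print(f"Route MTTF: {series_mttf([1000, 800, 1200]):.1f} h")
```

The general problem over all routes in an arbitrary topology is the #P-hard case the abstract refers to; this sketch covers only one fixed path.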

  15. Reliability Analysis of Wind Turbines

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard


    In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS) the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example the design is determined according...

  16. Production Facility System Reliability Analysis Report

    Dale, Crystal Buchanan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]


    This document describes the reliability, maintainability, and availability (RMA) modeling of the Los Alamos National Laboratory (LANL) design for the Closed Loop Helium Cooling System (CLHCS) planned for the NorthStar accelerator-based 99Mo production facility. The current analysis incorporates a conceptual helium recovery system, beam diagnostics, and prototype control system into the reliability analysis. The results from the 1000 hr blower test are addressed.

  17. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…
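    The Cronbach's alpha procedure named above can be computed from an item-score matrix with nothing beyond sample variances. The respondent scores below are hypothetical, standing in for one Habitude domain.

```python
# Cronbach's alpha: internal-consistency reliability of a set of items.
# rows = respondents, columns = items in one scale/domain.

def variance(xs):
    # Sample variance (n - 1 denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])  # number of items
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [
    [4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5],
]
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency.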


  19. Hybrid reliability model for fatigue reliability analysis of steel bridges

    曹珊珊; 雷俊卿


    A hybrid reliability model is presented to solve fatigue reliability problems of steel bridges. The cumulative damage model is one of the models used in fatigue reliability analysis. The parameter characteristics of the model can be described as probabilistic and interval. The two-stage hybrid reliability model is given with a theoretical foundation and a solving algorithm for the hybrid reliability problems. The theoretical foundation is established through the consistency relationships between the interval reliability model and the probability reliability model with normally distributed variables. The solving process combines the definition of the interval reliability index with the probabilistic algorithm. With consideration of the parameter characteristics of the S−N curve, the cumulative damage model with hybrid variables is given based on the standards of different countries. Lastly, a case of the steel structure of the Neville Island Bridge is analyzed to verify the applicability of the hybrid reliability model in fatigue reliability analysis based on the AASHTO specifications.
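    The cumulative damage model underlying the S−N approach above is commonly Miner's rule: damage D = sum(n_i / N_i) over the stress spectrum, with failure predicted at D >= 1. The S−N constants and load spectrum below are illustrative, not taken from any bridge code.

```python
# Miner's rule cumulative fatigue damage with a simple S-N curve.

def cycles_to_failure(stress_range, C=1e12, m=3.0):
    # S-N curve of the form N * S^m = C (a typical code form);
    # C and m here are placeholders, not code values.
    return C / stress_range ** m

def miner_damage(spectrum):
    # spectrum: list of (stress_range_MPa, applied_cycles) pairs.
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

spectrum = [(80.0, 2e6), (60.0, 5e6), (40.0, 1e7)]
D = miner_damage(spectrum)
print(f"Cumulative damage D = {D:.3f}  ({'failed' if D >= 1 else 'safe'})")
```

In the hybrid model of the paper, C, m, or the cycle counts would be treated as interval or random variables rather than the point values used here.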

  20. Sensitivity Analysis of Component Reliability



    In a system, every component has its unique position within the system and its unique failure characteristics. When a component's reliability changes, its effect on system reliability is not equal to that of other components. Component reliability sensitivity is a measure of the effect on system reliability when a component's reliability is changed. In this paper, the definition and relative matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. All of this helps us to analyse or improve system reliability.
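    The sensitivity measure described above corresponds to the partial derivative of system reliability with respect to each component's reliability (the Birnbaum importance). A sketch for an illustrative series-parallel system, estimating the derivatives by central finite differences:

```python
# Component reliability sensitivity dR_sys/dR_i for a small system:
# component 1 in series with a parallel pair (components 2 and 3).
# Structure and reliability values are illustrative.

def system_reliability(r1, r2, r3):
    return r1 * (1.0 - (1.0 - r2) * (1.0 - r3))

def sensitivity(i, r, eps=1e-6):
    # Central finite-difference estimate of dR_sys/dR_i.
    hi = list(r); lo = list(r)
    hi[i] += eps; lo[i] -= eps
    return (system_reliability(*hi) - system_reliability(*lo)) / (2 * eps)

r = [0.9, 0.8, 0.7]
for i in range(3):
    print(f"component {i + 1}: sensitivity = {sensitivity(i, r):.4f}")
```

The series component dominates (0.94 versus 0.27 and 0.18 here), which is the usual conclusion: improving a non-redundant component moves system reliability most.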

  1. Reliability Analysis of High Rockfill Dam Stability

    Ping Yi


    A program, 3DSTAB, combining slope stability analysis and reliability analysis is developed and validated. In this program, the limit equilibrium method is utilized to calculate safety factors of critical slip surfaces. The first-order reliability method is used to compute reliability indexes corresponding to critical probabilistic surfaces. When derivatives of the performance function are calculated by the finite difference method, the previous iteration's critical slip surface is saved and reused. This sequential approximation strategy notably improves efficiency. Using this program, stability reliability analyses of concrete faced rockfill dams and earth core rockfill dams with different heights and different slope ratios are performed. The results show that both safety factors and reliability indexes decrease as the dam's slope increases at a constant height and as the dam's height increases at a constant slope. They decrease dramatically as the dam height increases from 100 m to 200 m, while they decrease slowly once the dam height exceeds 250 m, which deserves attention. Additionally, both safety factors and reliability indexes of the upstream slope of earth core rockfill dams are higher than those of the downstream slope. Thus, downstream slope stability is the key failure mode for earth core rockfill dams.
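    The reliability index used above can be illustrated in its simplest first-order form: for a linear limit state g = R − S with independent normal resistance R and load S, beta = (muR − muS) / sqrt(sigR² + sigS²) and the failure probability is Phi(−beta). The parameter values below are illustrative, not dam-specific.

```python
import math

# First-order reliability index for a linear limit state g = R - S
# with independent normal R (resistance) and S (load effect).

def reliability_index(mu_r, sig_r, mu_s, sig_s):
    return (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2)

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta = reliability_index(mu_r=1.8, sig_r=0.2, mu_s=1.0, sig_s=0.15)
print(f"beta = {beta:.3f}, P_f = {phi(-beta):.2e}")
```

In 3DSTAB the performance function is nonlinear in the soil parameters, so the index is found iteratively with finite-difference gradients rather than in closed form as here.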

  2. Culture Representation in Human Reliability Analysis

    David Gertman; Julie Marble; Steven Novack


    Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.

  3. Exon Array Analysis using re-defined probe sets results in reliable identification of alternatively spliced genes in non-small cell lung cancer

    Gröne Jörn


    Background: Treatment of non-small cell lung cancer with novel targeted therapies is a major unmet clinical need. Alternative splicing is a mechanism which generates diverse protein products and is of functional relevance in cancer. Results: In this study, a genome-wide analysis of the alteration of splicing patterns between lung cancer and normal lung tissue was performed. We generated an exon array data set derived from matched pairs of lung cancer and normal lung tissue including both the adenocarcinoma and the squamous cell carcinoma subtypes. An enhanced workflow was developed to reliably detect differential splicing in an exon array data set. In total, 330 genes were found to be differentially spliced in non-small cell lung cancer compared to normal lung tissue. Microarray findings were validated with independent laboratory methods for CLSTN1, FN1, KIAA1217, MYO18A, NCOR2, NUMB, SLK, SYNE2, TPM1 (in total, 10 events) and ADD3, which was analysed in depth. We achieved a high validation rate of 69%. Evidence was found that the activity of FOX2, the splicing factor shown to cause cancer-specific splicing patterns in breast and ovarian cancer, is not altered at the transcript level in several cancer types including lung cancer. Conclusions: This study demonstrates how alternatively spliced genes can reliably be identified in a cancer data set. Our findings underline that key processes of cancer progression in NSCLC are affected by alternative splicing, which can be exploited in the search for novel targeted therapies.

  4. Creep-rupture reliability analysis

    Peralta-Duran, A.; Wirsching, P. H.


    A probabilistic approach to the correlation and extrapolation of creep-rupture data is presented. Time temperature parameters (TTP) are used to correlate the data, and an analytical expression for the master curve is developed. The expression provides a simple model for the statistical distribution of strength and fits neatly into a probabilistic design format. The analysis focuses on the Larson-Miller and on the Manson-Haferd parameters, but it can be applied to any of the TTP's. A method is developed for evaluating material dependent constants for TTP's. It is shown that optimized constants can provide a significant improvement in the correlation of the data, thereby reducing modelling error. Attempts were made to quantify the performance of the proposed method in predicting long term behavior. Uncertainty in predicting long term behavior from short term tests was derived for several sets of data. Examples are presented which illustrate the theory and demonstrate the application of state of the art reliability methods to the design of components under creep.
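    The Larson-Miller time-temperature parameter mentioned above has the standard form LMP = T(C + log10 t_r), with T the absolute temperature, t_r the rupture time in hours, and C a material constant (commonly near 20). A worked sketch of how an accelerated test maps to a service-life prediction; the temperatures and times are illustrative:

```python
import math

# Larson-Miller parameter: LMP = T * (C + log10(t_r)),
# T in kelvin, t_r in hours, C a material constant (placeholder value).

def lmp(T_kelvin, t_hours, C=20.0):
    return T_kelvin * (C + math.log10(t_hours))

def rupture_time(T_kelvin, lmp_value, C=20.0):
    # Invert the parameter to predict life at a service temperature.
    return 10.0 ** (lmp_value / T_kelvin - C)

# Accelerated test: rupture after 100 h at 900 K.
p = lmp(900.0, 100.0)
# Predicted life at a 750 K service temperature on the same master curve:
print(f"LMP = {p:.0f}, predicted life at 750 K = {rupture_time(750.0, p):.0f} h")
```

The paper's point is that treating C (and the master curve) statistically, rather than as fixed constants, turns this extrapolation into a reliability statement with quantified uncertainty.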

  5. A Novel Two-Terminal Reliability Analysis for MANET

    Xibin Zhao; Zhiyang You; Hai Wan


    Mobile ad hoc network (MANET) is a dynamic wireless communication network. Because of the dynamic and infrastructureless characteristics, MANET is vulnerable in reliability. This paper presents a novel reliability analysis for MANET. The node mobility effect and the node reliability based on a real MANET platform are modeled and analyzed. An effective Monte Carlo method for reliability analysis is proposed. A detailed evaluation is performed in terms of the experiment results.


  7. On Bayesian System Reliability Analysis

    Soerensen Ringi, M.


    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs.
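    The "state of knowledge" view above is conveniently captured by conjugate updating: a Beta prior over a component's failure probability is revised as failure data arrive, so the reliability estimate tracks the available information. The prior parameters and data below are illustrative.

```python
# Bayesian reliability updating with a Beta-Binomial conjugate pair.

def beta_update(alpha, beta, failures, trials):
    # Beta(alpha, beta) prior + Binomial(failures | trials) data
    # -> Beta(alpha + failures, beta + trials - failures) posterior.
    return alpha + failures, beta + (trials - failures)

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

a, b = 1.0, 99.0  # prior belief: mean failure probability 0.01
print(f"prior mean failure prob: {beta_mean(a, b):.4f}")

a, b = beta_update(a, b, failures=3, trials=50)  # new evidence arrives
print(f"posterior mean:          {beta_mean(a, b):.4f}")
```

The thesis's model goes further by coupling components through a shared environment; this sketch shows only the single-component learning step.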

  8. Reliability analysis of wastewater treatment plants.

    Oliveira, Sílvia C; Von Sperling, Marcos


    This article presents a reliability analysis of 166 full-scale wastewater treatment plants operating in Brazil. Six different processes have been investigated, comprising septic tank+anaerobic filter, facultative pond, anaerobic pond+facultative pond, activated sludge, upflow anaerobic sludge blanket (UASB) reactors alone and UASB reactors followed by post-treatment. A methodology developed by Niku et al. [1979. Performance of activated sludge process and reliability-based design. J. Water Pollut. Control Assoc., 51(12), 2841-2857] is used for determining the coefficients of reliability (COR), in terms of the compliance of effluent biochemical oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), total nitrogen (TN), total phosphorus (TP) and fecal or thermotolerant coliforms (FC) with discharge standards. The design concentrations necessary to meet the prevailing discharge standards and the expected compliance percentages have been calculated from the COR obtained. The results showed that few plants, under the observed operating conditions, would be able to present reliable performances considering the compliance with the analyzed standards. The article also discusses the importance of understanding the lognormal behavior of the data in setting up discharge standards, in interpreting monitoring results and compliance with the legislation.
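    The lognormal behaviour the article emphasizes can be made concrete with standard lognormal identities: given a mean effluent concentration and its coefficient of variation, the expected percentage compliance with a discharge standard is Phi((ln(standard) − mu) / sigma), where mu and sigma describe the log of the concentration. The numbers below are illustrative, not from the surveyed plants, and this is a simplified stand-in for Niku's COR tabulations.

```python
import math

# Expected compliance of a lognormally distributed effluent quality
# parameter with a fixed discharge standard.

def lognormal_compliance(mean_conc, cv, standard):
    # Convert arithmetic mean and CV to log-space parameters.
    sigma2 = math.log(1.0 + cv**2)
    mu = math.log(mean_conc) - 0.5 * sigma2
    z = (math.log(standard) - mu) / math.sqrt(sigma2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Phi(z)

# Effluent BOD: mean 50 mg/L, CV 0.6, discharge standard 60 mg/L.
print(f"Expected compliance: {lognormal_compliance(50.0, 0.6, 60.0):.1%}")
```

Read in reverse, the same relation gives the design (mean) concentration a plant must achieve so that the standard is met with a chosen reliability, which is how the CORs are used in the article.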

  9. Combination of structural reliability and interval analysis

    Zhiping Qiu; Di Yang; Isaac Elishakoff


    In engineering applications, probabilistic reliability theory appears to be presently the most important method; however, in many cases precise probabilistic reliability theory cannot be considered an adequate and credible model of the real state of affairs. In this paper, we develop a hybrid of probabilistic and non-probabilistic reliability theory, which describes the structural uncertain parameters as interval variables when statistical data are insufficient. By using interval analysis, a new method for calculating the interval of the structural reliability as well as the reliability index is introduced, and the traditional probabilistic theory is incorporated with the interval analysis. Moreover, the new method preserves the useful part of the traditional probabilistic reliability theory but removes the restriction of its strict requirement on data acquisition. An example is presented to demonstrate the feasibility and validity of the proposed theory.
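
    For the common linear limit state g = R - S with independent normal resistance R and load S, the reliability index is beta = (mu_R - mu_S)/sqrt(sigma_R^2 + sigma_S^2); when the means are only known to lie in intervals, the interval of beta follows by monotonicity. A minimal sketch under these assumptions (hypothetical numbers, not the paper's own formulation):

```python
from math import sqrt

def beta_interval(mu_r, mu_s, sigma_r, sigma_s):
    """Interval of the reliability index for g = R - S when the means
    are interval variables (mu_r, mu_s are (low, high) pairs) and the
    standard deviations are crisp.

    beta is increasing in mu_R and decreasing in mu_S, so the extreme
    values are attained at the interval endpoints.
    """
    denom = sqrt(sigma_r ** 2 + sigma_s ** 2)
    beta_low = (mu_r[0] - mu_s[1]) / denom
    beta_high = (mu_r[1] - mu_s[0]) / denom
    return beta_low, beta_high

lo, hi = beta_interval(mu_r=(95.0, 105.0), mu_s=(48.0, 52.0),
                       sigma_r=10.0, sigma_s=5.0)
```

    The width of the resulting beta interval shows directly how much the scarcity of statistical data degrades the precision of the reliability estimate.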

  10. Public Perceptions of Reliability in Examination Results in England

    He, Qingping; Boyle, Andrew; Opposs, Dennis


    Building on findings from existing qualitative research into public perceptions of reliability in examination results in England, a questionnaire was developed and administered to samples of teachers, students and employers to study their awareness of and opinions about various aspects of reliability quantitatively. Main findings from the study…

  11. Integrated Methodology for Software Reliability Analysis

    Marian Pompiliu CRISTESCU


    Full Text Available The most widely used techniques for ensuring the safety and reliability of systems are applied together as a whole, and in most cases the software components are overlooked or too little analyzed. The present paper describes the application of fault tree analysis to software systems, defined as Software Fault Tree Analysis (SFTA); the fault trees are evaluated using binary decision diagrams, all of these being integrated and used with the help of a Java reliability library.
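
    As an illustration of the kind of evaluation SFTA performs, the failure probability of a small fault tree with independent basic events can be computed recursively. This is a toy sketch of the general technique, not the paper's Java library or its BDD machinery:

```python
def ft_prob(node, probs):
    """Failure probability of a fault tree node.

    A node is either a basic-event name, or a tuple ('AND'|'OR', children).
    Basic events are assumed independent.
    """
    if isinstance(node, str):
        return probs[node]
    gate, children = node
    p = [ft_prob(c, probs) for c in children]
    if gate == 'AND':          # all children must fail
        out = 1.0
        for x in p:
            out *= x
        return out
    # OR: at least one child fails -> complement of "none fail"
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

# TOP fails if A fails, or if both B and C fail (hypothetical tree).
tree = ('OR', ['A', ('AND', ['B', 'C'])])
p_top = ft_prob(tree, {'A': 0.01, 'B': 0.1, 'C': 0.1})
```

    Binary decision diagrams perform the same computation while sharing subtrees, which is what makes them efficient on large trees with repeated events.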

  12. Reliability of photographic posture analysis of adolescents.

    Hazar, Zeynep; Karabicak, Gul Oznur; Tiftikci, Ugur


    [Purpose] Postural problems of adolescents need to be evaluated accurately because they may lead to greater problems in the musculoskeletal system as they develop. Although photographic posture analysis has been frequently used, simpler and more accessible methods are still needed. The purpose of this study was to investigate the inter- and intra-rater reliability of photographic posture analysis using MB-ruler software. [Subjects and Methods] Subjects were 30 adolescents (15 girls and 15 boys, mean age: 16.4±0.4 years, mean height 166.3±6.7 cm, mean weight 63.8±15.1 kg), and photographs of their habitual standing posture were taken in the sagittal plane. For the evaluation of postural angles, reflective markers were placed on anatomical landmarks. For angular measurements, MB-ruler (Markus Bader - MB Software Solutions, triangular screen ruler) was used. Photographic evaluations were performed by two observers and repeated after a week. Test-retest and inter-rater reliability were calculated using intra-class correlation coefficients (ICC). [Results] Inter-rater (ICC>0.972) and test-retest (ICC>0.774) reliability were found to be in the range of acceptable to excellent. [Conclusion] Reference angles for postural evaluation were found to be reliable and repeatable. The present method is easy and non-invasive, and it may be utilized by researchers in search of an alternative method for photographic postural assessments.
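
    The intra-class correlation used in such agreement studies can be computed from a two-way ANOVA decomposition. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement; one common choice, not necessarily the exact variant used in the study), with hypothetical angle data:

```python
def icc_2_1(scores):
    """ICC(2,1) for a table scores[subject][rater]."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters measuring the same postural angle on four subjects,
# rater 2 reading consistently 0.5 degrees higher (hypothetical data):
icc = icc_2_1([[1.0, 1.5], [2.0, 2.5], [3.0, 3.5], [4.0, 4.5]])
```

    Because ICC(2,1) measures absolute agreement, a systematic offset between raters lowers the coefficient even when the raters rank subjects identically.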

  13. Mechanical reliability analysis of tubes intended for hydrocarbons

    Nahal, Mourad; Khelif, Rabia [Badji Mokhtar University, Annaba (Algeria)


    Reliability analysis constitutes an essential phase in any study concerning reliability. Many industrialists evaluate and improve the reliability of their products during the development cycle - from design to startup (design, manufacture, and exploitation) - to develop their knowledge of the cost/reliability ratio and to control sources of failure. In this study, we present results for hardness, tensile, and hydrostatic tests carried out on steel tubes for transporting hydrocarbons, followed by statistical analysis. The results obtained allow us to conduct a reliability study based on the strength-stress approach. Thus, the reliability index is calculated and the importance of the variables related to the tube is presented. A reliability-based assessment of residual stress effects is applied to underground pipelines under a roadway, with and without active corrosion. Residual stress has been found to greatly increase the probability of failure, especially in the early stages of pipe lifetime.

  14. Reliability Sensitivity Analysis for Location Scale Family

    洪东跑; 张海瑞


    Many products operate under various complex environmental conditions. To describe the dynamic influence of environmental factors on their reliability, a method of reliability sensitivity analysis is proposed. In this method, the location parameter is assumed to be a function of the relevant environment variables, while the scale parameter is assumed to be an unknown positive constant. The location parameter function is then constructed using the radial basis function method. Using the varied-environment test data, the log-likelihood function is transformed to a generalized linear expression by describing the indicator as a Poisson variable. With the generalized linear model, the maximum likelihood estimates of the model coefficients are obtained. With the reliability model, the reliability sensitivity is obtained. An instance analysis shows that the method is feasible for analyzing the dynamic variation of reliability with environmental factors and is straightforward for engineering application.

  15. Analysis on testing and operational reliability of software

    ZHAO Jing; LIU Hong-wei; CUI Gang; WANG Hui-qiang


    Software reliability was estimated based on NHPP software reliability growth models. Testing reliability and operational reliability may be essentially different. On the basis of analyzing the similarities and differences of the testing phase and the operational phase, and using the concepts of operational reliability and testing reliability, different forms of the comparison between the operational failure ratio and the predicted testing failure ratio were conducted, and the mathematical discussion and analysis were performed in detail. Finally, optimal software release was studied using software failure data. The results show that two kinds of conclusions can be derived by applying this method: one is to continue testing to meet the required reliability level of users, and the other is that testing stops when the required operational reliability is met, so that the testing cost can be reduced.
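
    To make the NHPP setting concrete: under the common Goel-Okumoto growth model, the expected cumulative failures are m(t) = a(1 - e^(-bt)), and the reliability over a mission of length x after test time T is R(x|T) = exp(-(m(T+x) - m(T))). A sketch with hypothetical parameter values (this particular model choice is an illustration, not necessarily the one used in the paper):

```python
from math import exp

def m(t, a, b):
    """Goel-Okumoto mean value function: expected failures by time t."""
    return a * (1.0 - exp(-b * t))

def reliability(x, t, a, b):
    """Probability of no failure in (t, t+x] under the NHPP."""
    return exp(-(m(t + x, a, b) - m(t, a, b)))

# Hypothetical fitted parameters: a = 100 total faults, b = 0.02 per hour.
r = reliability(x=10.0, t=50.0, a=100.0, b=0.02)
```

    Evaluating the same expression at increasing test times T shows the reliability growth that motivates the optimal-release analysis: the longer the testing, the fewer residual faults and the higher R(x|T).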

  16. Space Mission Human Reliability Analysis (HRA) Project

    National Aeronautics and Space Administration — The purpose of this project is to extend current ground-based Human Reliability Analysis (HRA) techniques to a long-duration, space-based tool to more effectively...

  17. Telomere Q-PNA-FISH--reliable results from stochastic signals.

    Andrea Cukusic Kalajzic

    Full Text Available Structural and functional analysis of telomeres is very important for understanding basic biological functions such as genome stability, cell growth control, senescence and aging. Recently, serious concerns have been raised regarding the reliability of current telomere measurement methods such as Southern blot and quantitative polymerase chain reaction. Since telomere length is associated with age related pathologies, including cardiovascular disease and cancer, both at the individual and population level, accurate interpretation of measured results is a necessity. The telomere Q-PNA-FISH technique has been widely used in these studies as well as in commercial analysis for the general population. A hallmark of telomere Q-PNA-FISH is the wide variation among telomere signals which has a major impact on obtained results. In the present study we introduce a specific mathematical and statistical analysis of sister telomere signals during cell culture senescence which enabled us to identify high regularity in their variations. This phenomenon explains the reproducibility of results observed in numerous telomere studies when the Q-PNA-FISH technique is used. In addition, we discuss the molecular mechanisms which probably underlie the observed telomere behavior.


    Bowerman, P. N.


    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for
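
    The k-out-of-n group success probabilities that RELAV folds together can be computed, for unequal component probabilities, with a simple dynamic program over the number of working components. This is a sketch of the general idea, not RELAV's actual implementation or the Barlow & Heidtmann algorithm verbatim:

```python
def k_out_of_n(k, probs):
    """P(at least k of the n components work), components independent
    with individual success probabilities `probs`."""
    # dist[j] = P(exactly j of the components processed so far work)
    dist = [1.0]
    for p in probs:
        nxt = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            nxt[j] += q * (1.0 - p)   # this component fails
            nxt[j + 1] += q * p       # this component works
        dist = nxt
    return sum(dist[k:])

# 2-out-of-3 group with equal probabilities reduces to the binomial case:
r_equal = k_out_of_n(2, [0.9, 0.9, 0.9])   # 3 * 0.9^2 * 0.1 + 0.9^3
r_mixed = k_out_of_n(2, [0.9, 0.8, 0.95])  # unequal probabilities
```

    Folding then replaces each group by a single pseudo-component whose success probability is the group result, and repeats toward the system level.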

  19. Structural reliability analysis and reliability-based design optimization: Recent advances

    Qiu, ZhiPing; Huang, Ren; Wang, XiaoJun; Qi, WuChao


    We review recent research activities on structural reliability analysis, reliability-based design optimization (RBDO) and applications in complex engineering structural design. Several novel uncertainty propagation methods and reliability models, which are the basis of the reliability assessment, are given. In addition, recent developments on reliability evaluation and sensitivity analysis are highlighted as well as implementation strategies for RBDO.

  20. Results from the LHC Beam Dump Reliability Run

    Uythoven, J; Carlier, E; Castronuovo, F; Ducimetière, L; Gallet, E; Goddard, B; Magnin, N; Verhagen, H


    The LHC Beam Dumping System is one of the vital elements of the LHC Machine Protection System and has to operate reliably every time a beam dump request is made. Detailed dependability calculations have been made, resulting in expected rates for the different system failure modes. A 'reliability run' of the whole system, installed in its final configuration in the LHC, has been made to discover infant mortality problems and to compare the occurrence of the measured failure modes with their calculations.

  1. Multi-Disciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  2. Reliability Analysis of DOOF for Weibull Distribution

    陈文华; 崔杰; 樊小燕; 卢献彪; 相平


    A hierarchical Bayesian method for estimating the failure probability under DOOF, taking the quasi-Beta distribution as the prior distribution, is proposed in this paper. The weighted least squares estimation method was used to obtain the formula for computing reliability distribution parameters and estimating the reliability characteristic values under DOOF. Taking one type of aerospace electrical connector as an example, the correctness of the above method was verified through statistical analysis of electrical connector accelerated life test data.

  3. Statistical analysis on reliability and serviceability of caterpillar tractor

    WANG Jinwu; LIU Jiafu; XU Zhongxiang


    For a further understanding of the reliability and serviceability of tractors, and to furnish scientific and technical theories for their promotion and application, experiments and statistical analysis on the reliability (reliability and MTBF) and serviceability (serviceability and MTTR) of the Dongfanghong-1002 and Dongfanghong-802 were conducted. The results showed that the mean times between failures of these two tractors were 182.62 h and 160.2 h, respectively, and that the weakest assembly of both was the engine.


    Gaguk Margono


    Full Text Available The purpose of this paper is to compare the unidimensional reliability and multidimensional reliability of an instrument measuring students' satisfaction as internal customers. Multidimensional reliability measurement is rarely used in research. Multidimensional reliability is estimated using Confirmatory Factor Analysis (CFA) on the Structural Equation Model (SEM). Measurements and calculations are described in this article using the students' satisfaction instrument. A survey method was used in this study with simple random sampling, and the instrument was tried out on 173 students. It is concluded that measuring students' satisfaction as internal customers with a multidimensional reliability coefficient gives higher accuracy than a unidimensional reliability coefficient. Future research is expected to use other multidimensional reliability formulas, including those based on SEM.
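
    One common way to form a composite reliability coefficient from CFA results is McDonald's omega, built from the standardized factor loadings. The sketch below is an illustrative formula choice with hypothetical loadings, not necessarily the exact coefficient the author computes:

```python
def mcdonald_omega(loadings):
    """Composite reliability from standardized CFA loadings, assuming
    uncorrelated errors; error variance of item i is 1 - loading_i**2."""
    s = sum(loadings)
    theta = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + theta)

# Four items loading 0.7 on a single satisfaction factor (hypothetical):
omega = mcdonald_omega([0.7, 0.7, 0.7, 0.7])
```

    Unlike coefficient alpha, this formulation uses the estimated loadings directly, so it extends naturally to multidimensional models by summing loadings within each factor.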

  5. Software reliability experiments data analysis and investigation

    Walker, J. Leslie; Caglayan, Alper K.


    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.

  6. Reliability analysis of flood defence systems

    Steenbergen, H.M.G.M.; Lassing, B.L.; Vrouwenvelder, A.C.W.M.; Waarts, P.H.


    In recent years an advanced program for the reliability analysis of flood defence systems has been under development. This paper describes the global data requirements for the application and the setup of the models. The analysis generates the probability of system failure and the contribution of ea

  7. Reliability sensitivity-based correlation coefficient calculation in structural reliability analysis

    Yang, Zhou; Zhang, Yimin; Zhang, Xufang; Huang, Xianzhen


    The correlation coefficients of the random variables of mechanical structures are generally chosen from experience or even ignored, which cannot actually reflect the effects of parameter uncertainties on reliability. To discuss the selection of correlation coefficients from the reliability-based sensitivity point of view, the theoretical principle of the problem is established based on the results of the reliability sensitivity, and a criterion of correlation among random variables is given. The values of the correlation coefficients are obtained according to the proposed principle, and the reliability sensitivity problem is discussed. Numerical studies have shown the following results: (1) If the sensitivity value of the correlation coefficient ρ is less than about 0.00001, the correlation can be ignored, which simplifies the procedure without introducing additional error. (2) If the difference between ρ_s, the coefficient most sensitive to the reliability, and ρ_R, the coefficient giving the smallest reliability, is less than 0.001, ρ_s is suggested for modeling the dependency of the random variables. This ensures the robustness of the system without loss of the safety requirement. (3) In the case of |E_abs| > 0.001 and |E_rel| > 0.001, ρ_R should be employed to quantify the correlation among random variables in order to ensure the accuracy of the reliability analysis. Application of the proposed approach can provide a practical routine for mechanical design and manufacture to study the reliability and reliability-based sensitivity of basic design variables in mechanical reliability analysis and design.

  8. Online cognition: factors facilitating reliable online neuropsychological test results.

    Feenstra, Heleen E M; Vermeulen, Ivar E; Murre, Jaap M J; Schagen, Sanne B


    Online neuropsychological test batteries could allow for large-scale cognitive data collection in clinical studies. However, the few online neuropsychological test batteries that are currently available often still require supervision or lack proper psychometric evaluation. In this paper, we have outlined prerequisites for proper development and use of online neuropsychological tests, with the focus on reliable measurement of cognitive function in an unmonitored setting. First, we identified several technical, contextual, and psychological factors that should be taken into account in order to facilitate reliable test results of online tests in the unmonitored setting. Second, we outlined a methodology of quality assurance needed in order to obtain reliable cognitive data in the long run. Based on factors that distinguish the online unmonitored test setting from the traditional face-to-face setting, we provide a set of basic requirements and suggestions for optimal development and use of unmonitored online neuropsychological tests, including suggestions on acquiring reliability, validity, and norm scores. When properly addressing factors that could hamper reliable test results during development and use, online neuropsychological tests could aid large-scale data collection for clinical studies in the future. Investment in both proper development of online neuropsychological test platforms and the performance of accompanying psychometric studies is currently required.



    concrete slabs, bending, shear, deflection, reliability, design codes. Ang, A. H-S and Tang, W. H. Probability Concepts in...

  10. Reliability Analysis of a Steel Frame

    M. Sýkora


    Full Text Available A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee on Structural Safety (JCSS) are used for basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time-invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time-variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-section IPE 330 designed according to Eurocodes seems to be adequate. It appears that the time-invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time-variant analysis.

  11. Evaluating some Reliability Analysis Methodologies in Seismic Design

    A. E. Ghoulbzouri


    Full Text Available Problem statement: Accounting for uncertainties that are present in geometric and material data of reinforced concrete buildings is performed in this study within the context of performance-based seismic engineering design. Approach: Reliability of the expected performance state is assessed by using various methodologies based on finite element nonlinear static pushover analysis and a specialized reliability software package. The reliability approaches considered included full coupling with an external finite element code and response-surface-based methods in conjunction with either the first-order reliability method or the importance sampling method. Various types of probability distribution functions that model parameter uncertainties were introduced. Results: The probability of failure according to the reliability analysis method used and to the selected distribution of probabilities was obtained. Convergence analysis of the importance sampling method was performed, and the required duration of analysis as a function of the reliability method used was evaluated. Conclusion/Recommendations: It was found that reliability results are sensitive to the reliability analysis method used and to the selected distribution of probabilities. Durations of analysis for coupling methods were found to be higher than those associated with response-surface-based methods; one should, however, include the time needed to derive the latter. For the reinforced concrete building considered in this study, significant variations were found between all the considered reliability methodologies. The fully coupled importance sampling method is recommended, but the first-order reliability method applied on a response surface model can be used with good accuracy. Finally, the distributions of probabilities should be carefully identified, since specifying only the mean and the standard deviation was found to be insufficient.

  12. Event/Time/Availability/Reliability-Analysis Program

    Viterna, L. A.; Hoffman, D. J.; Carr, Thomas


    ETARA is interactive, menu-driven program that performs simulations for analysis of reliability, availability, and maintainability. Written to evaluate performance of electrical power system of Space Station Freedom, but methodology and software applied to any system represented by block diagram. Program written in IBM APL.

  13. Seismic reliability analysis of large electric power systems

    何军; 李杰


    Based on the De Morgan laws and Boolean simplification, a recursive decomposition method is introduced in this paper to identify the main exclusive safe paths and failed paths of a network. The reliability or the reliability bound of a network can be conveniently expressed as the summation of the joint probabilities of these paths. Under the multivariate normal distribution assumption, a conditioned reliability index method is developed to evaluate the joint probabilities of the various exclusive safe paths and failed paths, and, finally, the seismic reliability or the reliability bound of an electric power system is obtained. Examples given in the paper show that the method is very simple and provides accurate results in seismic reliability analysis.

  14. Reliability analysis of DOOF for Weibull distribution

    陈文华; 崔杰; 樊晓燕; 卢献彪; 相平


    A hierarchical Bayesian method for estimating the failure probability p_i under DOOF, taking the quasi-Beta distribution B(p_(i-1), 1, 1, b) as the prior distribution, is proposed in this paper. The weighted least squares estimation method was used to obtain the formula for computing reliability distribution parameters and estimating the reliability characteristic values under DOOF. Taking one type of aerospace electrical connector as an example, the correctness of the above method was verified through statistical analysis of electrical connector accelerated life test data.

  15. Reliability and safety analysis of redundant vehicle management computer system

    Shi Jian; Meng Yixuan; Wang Shaoping; Bian Mengmeng; Yan Dungong


    Redundant techniques are widely adopted in vehicle management computers (VMC) to ensure that the VMC has high reliability and safety. At the same time, this gives the VMC special characteristics, e.g., failure correlation, event simultaneity, and failure self-recovery. Accordingly, the reliability and safety analysis of a redundant VMC system (RVMCS) becomes more difficult. Aimed at the difficulties in RVMCS reliability modeling, this paper adopts generalized stochastic Petri nets to establish the reliability and safety models of the RVMCS, and then analyzes RVMCS operating states and potential threats to the flight control system. It is verified by simulation that the reliability of the VMC is not the product of hardware reliability and software reliability, and that the interactions between hardware and software faults can obviously reduce the real reliability of the VMC. Furthermore, failure-undetected states and false-alarm states inevitably exist in the RVMCS due to the limited fault-monitoring coverage and the false-alarm probability of the fault monitoring devices (FMD). An RVMCS operating in some failure-undetected states will produce fatal threats to the safety of the flight control system, and an RVMCS operating in some false-alarm states will obviously reduce the utility of the RVMCS. The results presented in this paper can guide reliable VMC and efficient FMD designs, and the methods adopted can also be used to analyze the reliability of other intelligent systems.


    LI Hong-shuang; LÜ Zhen-zhou; YUE Zhu-feng


    Support vector machine (SVM) was introduced to analyze the reliability of the implicit performance function, which is difficult to implement by the classical methods such as the first order reliability method (FORM) and the Monte Carlo simulation (MCS). As a classification method where the underlying structural risk minimization inference rule is employed, SVM possesses excellent learning capacity with a small amount of information and good capability of generalization over the complete data. Hence, two approaches, i.e., SVM-based FORM and SVM-based MCS, were presented for the structural reliability analysis of the implicit limit state function. Compared to the conventional response surface method (RSM) and the artificial neural network (ANN), which are widely used to replace the implicit state function for alleviating the computation cost, the more important advantages of SVM are that it can approximate the implicit function with higher precision and better generalization with a small amount of information and avoid the "curse of dimensionality". The SVM-based reliability approaches can approximate the actual performance function over the complete sampling data with a decreased number of implicit performance function analyses (usually finite element analyses), and the computational precision can satisfy the engineering requirement, as demonstrated by illustrations.
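
    For reference, the two classical baselines the surrogate replaces are easy to state for a linear limit state in standard normal variables: FORM gives Pf = Phi(-beta) exactly, and MCS estimates the same quantity by sampling. A stdlib-only sketch with a hypothetical limit state g(x) = 3 - x1 - x2 (an illustration of the baselines, not the SVM approach itself):

```python
import random
from statistics import NormalDist

def g(x1, x2):
    """Hypothetical linear limit state; failure when g < 0."""
    return 3.0 - x1 - x2

# FORM: for a linear g in standard normals, beta = 3 / sqrt(1^2 + 1^2),
# and the failure probability is exactly Phi(-beta).
beta = 3.0 / (2.0 ** 0.5)
pf_form = NormalDist().cdf(-beta)

# Crude Monte Carlo simulation of the same probability.
rng = random.Random(42)
n = 200_000
failures = sum(1 for _ in range(n)
               if g(rng.gauss(0, 1), rng.gauss(0, 1)) < 0.0)
pf_mcs = failures / n
```

    For an implicit g, each of those 200,000 evaluations would be a finite element run; that cost is exactly what training an SVM surrogate on a small sample set is meant to avoid.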

  17. Human reliability analysis of control room operators

    Santos, Isaac J.A.L.; Carvalho, Paulo Victor R.; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)


    Human reliability is the probability that a person correctly performs some system-required action in a required time period and performs no extraneous action that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. Significant progress has been made in the HRA field during the last years, mainly in the nuclear area. Several first-generation HRA methods were developed, such as THERP (Technique for Human Error Rate Prediction). Now an array of so-called second-generation methods is emerging as alternatives, for instance ATHEANA (A Technique for Human Event Analysis). The ergonomics approach uses ergonomic work analysis as its tool. It focuses on the study of the operator's activities in physical and mental form, considering at the same time the observed characteristics of the operator and the elements of the work environment as they are presented to and perceived by the operators. The aim of this paper is to propose a methodology to analyze the human reliability of operators of industrial plant control rooms, using a framework that includes the approaches used by ATHEANA and THERP and ergonomic work analysis. (author)

  18. Reliability Analysis of Elasto-Plastic Structures


    . Failure of this type of system is defined either as formation of a mechanism or by failure of a prescribed number of elements. In the first case failure is independent of the order in which the elements fail, but this is not so by the second definition. The reliability analysis consists of two parts...... are described and the two definitions of failure can be used by the first formulation, but only the failure definition based on formation of a mechanism by the second formulation. The second part of the reliability analysis is an estimate of the failure probability for the structure on the basis...... are obtained if the failure mechanisms are used. Lower bounds can be calculated on the basis of series systems where the elements are the non-failed elements in a non-failed structure (see Augusti & Baratta [3])....

  19. Bridging Resilience Engineering and Human Reliability Analysis

    Ronald L. Boring


    There has been strong interest in the new and emerging field called resilience engineering. This field has been quick to align itself with many existing safety disciplines, but it has also distanced itself from the field of human reliability analysis. To date, the discussion has been somewhat one-sided, with much discussion about the new insights afforded by resilience engineering. This paper presents an attempt to address resilience engineering from the perspective of human reliability analysis (HRA). It is argued that HRA shares much in common with resilience engineering and that, in fact, it can help strengthen nascent ideas in resilience engineering. This paper seeks to clarify and ultimately refute the arguments that have served to divide HRA and resilience engineering.

  20. Seismic reliability analysis of urban water distribution network

    Li Jie; Wei Shulin; Liu Wei


    An approach to analyzing the seismic reliability of water distribution networks by combining a hydraulic analysis with a first-order reliability method (FORM) is proposed in this paper. The hydraulic analysis method for normal conditions is modified to accommodate the special conditions necessary to perform a seismic hydraulic analysis. In order to calculate the leakage area and leaking flow of the pipelines in the hydraulic analysis method, a new leakage model established from the seismic response analysis of buried pipelines is presented. To validate the proposed approach, a network with 17 nodes and 24 pipelines is investigated in detail. The approach is also applied to an actual project consisting of 463 nodes and 767 pipelines. The results show that the proposed approach achieves satisfactory results in analyzing the seismic reliability of large-scale water distribution networks.
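
FORM searches standard-normal space for the most probable failure point. The abstract does not give the paper's limit-state functions, so the sketch below applies the standard HL-RF iteration to a hypothetical linear capacity-minus-demand margin; all names and numbers are illustrative, not taken from the study.

```python
import numpy as np

def form_beta(g, grad_g, mu, sigma, tol=1e-8, max_iter=100):
    """First-order reliability method via the HL-RF iteration.

    g and grad_g act on physical variables x; independent normal
    variables are mapped to standard-normal space by x = mu + sigma*u.
    Returns the reliability index beta = ||u*|| at the design point.
    """
    u = np.zeros_like(mu)
    for _ in range(max_iter):
        x = mu + sigma * u
        val = g(x)
        grad = grad_g(x) * sigma            # chain rule: dg/du = dg/dx * sigma
        norm2 = grad @ grad
        u_new = ((grad @ u - val) / norm2) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u)

# Hypothetical margin: nodal supply capacity minus seismic leakage demand,
# g(x) = x0 - x1, both variables normal and independent.
mu = np.array([10.0, 6.0])
sigma = np.array([1.5, 2.0])
g = lambda x: x[0] - x[1]
grad_g = lambda x: np.array([1.0, -1.0])
beta = form_beta(g, grad_g, mu, sigma)      # -> 4 / 2.5 = 1.6
```

For this linear margin the iteration converges to beta = 4/2.5 = 1.6, so the failure probability is Phi(-1.6); in the paper's setting the leakage model and hydraulic analysis would supply g and its gradient instead.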

  1. Representative Sampling for reliable data analysis

    Petersen, Lars; Esbensen, Kim Harry


    regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...... analysis (“data” do not exist in isolation of their provenance). The Total Sampling Error (TSE) is by far the dominating contribution to all analytical endeavours, often 100+ times larger than the Total Analytical Error (TAE).We present a summarizing set of only seven Sampling Unit Operations (SUOs...

  2. The quantitative failure of human reliability analysis

    Bennett, C.T.


    This philosophical treatise argues the merits of human reliability analysis (HRA) in the context of the nuclear power industry. In fact, the author contends that historic and current HRA has failed to inform policy makers who must make decisions based on the risk that humans contribute to systems performance. He argues for an HRA based on Bayesian (fact-based) inferential statistics, which advocates a systems analysis process that employs cogent heuristics when using opinion, and tempers itself with rational debate over the weight given to subjective and empirical probabilities.

  3. A Passive System Reliability Analysis for a Station Blackout

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David; Sofu, Tanju; Grelle, Austin


    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  4. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    Berzonskis, Arvydas; Sørensen, John Dalsgaard


    in the volume of the casted ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed....... of operation and maintenance. The manufacturing of casted drivetrain components, like the main shaft of the wind turbine, commonly result in many smaller defects through the volume of the component with sizes that depend on the manufacturing method. This paper considers the effect of the initial defect present...

  5. Some Results on the Overall Reliability of Undirected Graphs.


    A. Satyanarayana and Mark K. Chang, Operations Research Center, University of California (report ORC-81-2, prepared for the Office of Naval Research; the remainder of the scanned documentation page is illegible).

  6. Recent advances in computational structural reliability analysis methods

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.


    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  7. Assessing the Reliability of Geoelectric Imaging Results for Permafrost Investigations

    Marescot, L.; Loke, M.; Abbet, D.; Delaloye, R.; Hauck, C.; Hilbich, C.; Lambiel, C.; Reynard, E.


    The effects of global climate change on mountain permafrost are of increasing concern; warming thaws permafrost, thereby increasing the risk of slope instabilities. Consequently, knowledge of the extent and location of permafrost are important for construction and other geotechnical and land-management activities in mountainous areas. Geoelectric imaging is a useful tool for mapping and characterizing permafrost occurrences. To overcome the generally poor electrical contacts in the active layer, geoelectric surveys usually involve coupling the electrodes to the ground via sponges soaked in salt water. The data are processed and inverted in terms of resistivity models of the subsurface. To monitor the evolution of mountain permafrost, time-lapse geoelectric imaging may be employed. A challenging aspect in geoelectric imaging of permafrost is the very large resistivity contrast between frozen and unfrozen material. Such a contrast makes inversion and interpretation difficult. To assess whether features at depth are required by the data or are artifacts of the inversion process, the reliability of models needs to be evaluated. We use two different approaches to assess the reliability of resistivity images in permafrost investigations: (i) depth of investigation (DOI) and (ii) resolution matrix maps. To compute the DOI, two inversions of the same data set using quite different reference resistivity models are carried out. At locations where the resistivity is well constrained by the data, the inversions yield the same results. At other locations, the inversions yield different values that are controlled by the reference models. The resolution matrix, which is based on the sensitivity matrix calculated during the inversion, quantifies the degree to which each resistivity cell in the model can be resolved by the data. Application of these two approaches to field data acquired in the Swiss Alps and Jura Mountains suggests that it is very difficult to obtain dependable
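
The DOI computation described above can be sketched as follows. The normalization by the two reference resistivities follows the commonly used Oldenburg-Li style index; the cell values below are invented for illustration.

```python
import numpy as np

def doi_index(model_a, model_b, ref_a, ref_b):
    """Depth-of-investigation index from two inversions of one data set.

    model_a/model_b: inverted (log-)resistivity models obtained with two
    different homogeneous reference resistivities ref_a/ref_b.
    DOI ~ 0 where the data constrain the model (both inversions agree);
    DOI ~ 1 where the result merely echoes its reference model.
    """
    return (np.asarray(model_a) - np.asarray(model_b)) / (ref_a - ref_b)

# Toy example with two model cells: the first is data-constrained (same
# value in both inversions), the second just follows its reference model.
ref_a, ref_b = 2.0, 4.0                  # log10 reference resistivities
m_a = np.array([3.0, 2.0])               # inversion using ref_a
m_b = np.array([3.0, 4.0])               # inversion using ref_b
doi = doi_index(m_a, m_b, ref_a, ref_b)  # -> [0.0, 1.0]
```

Cells with DOI near 1, typically at depth or outside the electrode array, are exactly the places where apparent permafrost features should be treated as possible inversion artifacts.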

  8. Reliability analysis of ceramic matrix composite laminates

    Thomas, David J.; Wetherhold, Robert C.


    At a macroscopic level, a composite lamina may be considered as a homogeneous orthotropic solid whose directional strengths are random variables. Incorporation of these random variable strengths into failure models, either interactive or non-interactive, allows for the evaluation of the lamina reliability under a given stress state. Using a non-interactive criterion for demonstration purposes, laminate reliabilities are calculated assuming previously established load sharing rules for the redistribution of load as the failure of laminae occur. The matrix cracking predicted by ACK theory is modeled to allow a loss of stiffness in the fiber direction. The subsequent failure in the fiber direction is controlled by a modified bundle theory. Results using this modified bundle model are compared with previous models which did not permit separate consideration of matrix cracking, as well as to results obtained from experimental data.
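
Under a non-interactive criterion with Weibull-distributed directional strengths, the lamina reliability at a given stress state is the product of the per-direction survival probabilities. A minimal sketch follows; the Weibull parameters and stresses are assumptions for illustration, not values from the paper.

```python
import math

def lamina_reliability(stresses, scales, shapes):
    """Non-interactive (max-stress type) lamina reliability.

    Each directional strength is a two-parameter Weibull random variable
    with survival function P(S > s) = exp(-(s/scale)**shape). Under a
    non-interactive criterion the lamina survives only if every direction
    survives, so (assuming independence) the reliabilities multiply.
    """
    r = 1.0
    for s, scale, shape in zip(stresses, scales, shapes):
        r *= math.exp(-((s / scale) ** shape))
    return r

# Illustrative numbers only: fiber, transverse, and shear directions.
R = lamina_reliability(stresses=[300.0, 20.0, 30.0],
                       scales=[1200.0, 60.0, 90.0],
                       shapes=[20.0, 15.0, 12.0])
```

Laminate-level reliability then follows by re-evaluating these lamina reliabilities under the load-sharing rules as successive laminae fail.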

  9. Representative Sampling for reliable data analysis

    Petersen, Lars; Esbensen, Kim Harry


    The Theory of Sampling (TOS) provides a description of all errors involved in sampling of heterogeneous materials as well as all necessary tools for their evaluation, elimination and/or minimization. This tutorial elaborates on—and illustrates—selected central aspects of TOS. The theoretical...... regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...

  10. Reliability Analysis of Adhesive Bonded Scarf Joints

    Kimiaeifar, Amin; Toft, Henrik Stensgaard; Lund, Erik;


    A probabilistic model for the reliability analysis of adhesive bonded scarfed lap joints subjected to static loading is developed. It is representative for the main laminate in a wind turbine blade subjected to flapwise bending. The structural analysis is based on a three dimensional (3D) finite...... the FEA model, and a sensitivity analysis on the influence of various geometrical parameters and material properties on the maximum stress is conducted. Because the yield behavior of many polymeric structural adhesives is dependent on both deviatoric and hydrostatic stress components, different ratios...... of the compressive to tensile adhesive yield stresses in the failure criterion are considered. It is shown that the chosen failure criterion, the scarf angle and the load are significant for the assessment of the probability of failure....

  11. Integrated Reliability and Risk Analysis System (IRRAS)

    Russell, K.D.; McKay, M.K.; Sattison, M.B.; Skinner, N.L.; Wood, S.T. [EG and G Idaho, Inc., Idaho Falls, ID (United States)]; Rasmuson, D.M. [Nuclear Regulatory Commission, Washington, DC (United States)]


    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance.

  12. Advancing Usability Evaluation through Human Reliability Analysis

    Ronald L. Boring; David I. Gertman


    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
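
The quantification step described above can be sketched as a nominal human error probability scaled by performance-shaping-factor multipliers, in the SPAR-H style. The nominal value and the heuristic-as-PSF multipliers below are illustrative assumptions, not official SPAR-H assignments.

```python
def usability_error_probability(nominal_hep, psf_multipliers):
    """SPAR-H style quantification: a nominal human error probability is
    scaled by the product of performance shaping factor multipliers,
    capped at 1.0 since the result must remain a probability.
    """
    uep = nominal_hep
    for m in psf_multipliers:
        uep *= m
    return min(uep, 1.0)

# Hypothetical usability evaluation: an action-type task where two
# violated heuristics are treated as degrading PSFs (x10 and x2).
uep = usability_error_probability(0.001, [10, 2])   # -> 0.02
```

Ranking tasks by UEP, and crossing it with a consequence matrix, gives the prioritization of usability issues the abstract describes.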

  13. Notes on numerical reliability of several statistical analysis programs

    Landwehr, J.M.; Tasker, Gary D.


    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.

  14. On reliability analysis of multi-categorical forecasts

    J. Bröcker


    Reliability analysis of probabilistic forecasts, in particular through the rank histogram or Talagrand diagram, is revisited. Two shortcomings are pointed out: firstly, a uniform rank histogram is but a necessary condition for reliability. Secondly, if the forecast is assumed to be reliable, an indication is needed of how far a histogram is expected to deviate from uniformity merely due to randomness. Concerning the first shortcoming, it is suggested that forecasts be grouped or stratified along suitable criteria, and that reliability be analyzed individually for each forecast stratum. A reliable forecast should have uniform histograms for all individual forecast strata, not only for all forecasts as a whole. As to the second shortcoming, instead of the observed frequencies, the probability of the observed frequency is plotted, providing an indication of the likelihood of the result under the hypothesis that the forecast is reliable. Furthermore, a goodness-of-fit statistic is discussed which is essentially the reliability term of the Ignorance score. The discussed tools are applied to medium-range forecasts for 2 m temperature anomalies at several locations and lead times. The forecasts are stratified along the expected ranked probability score. Those forecasts which feature a high expected score turn out to be particularly unreliable.
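
The rank histogram underlying this analysis can be computed as follows. This is a generic sketch with synthetic data, not the paper's temperature forecasts: for each case the observation's rank among the sorted ensemble members is tallied, and a reliable ensemble yields a roughly uniform histogram.

```python
import numpy as np

def rank_histogram(ensemble, observations, rng=None):
    """Rank (Talagrand) histogram for an ensemble forecast.

    ensemble: (n_cases, n_members); observations: (n_cases,).
    Returns counts over the n_members + 1 possible ranks, with ties
    broken at random.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_cases, n_members = ensemble.shape
    counts = np.zeros(n_members + 1, dtype=int)
    for members, obs in zip(ensemble, observations):
        below = np.sum(members < obs)
        ties = np.sum(members == obs)
        counts[below + rng.integers(0, ties + 1)] += 1
    return counts

# Synthetic check: observations drawn from the same distribution as the
# members should give a roughly flat histogram (about 1200 per bin here).
rng = np.random.default_rng(42)
ens = rng.standard_normal((6000, 4))
obs = rng.standard_normal(6000)
counts = rank_histogram(ens, obs, rng)
```

The paper's stratification amounts to calling such a routine separately on each forecast stratum and checking uniformity stratum by stratum, rather than once on the pooled forecasts.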

  15. Statistical models and methods for reliability and survival analysis

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo


    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  16. Reliability Analysis of Tubular Joints in Offshore Structures

    Thoft-Christensen, Palle; Sørensen, John Dalsgaard


    Reliability analysis of single tubular joints and offshore platforms with tubular joints is" presented. The failure modes considered are yielding, punching, buckling and fatigue failure. Element reliability as well as systems reliability approaches are used and illustrated by several examples....... Finally, optimal design of tubular.joints with reliability constraints is discussed and illustrated by an example....

  17. Software Architecture Reliability Analysis using Failure Scenarios

    Tekinerdogan, B.; Sözer, Hasan; Aksit, Mehmet

    With the increasing size and complexity of software in embedded systems, software has now become a primary threat to reliability. Several mature conventional reliability engineering techniques exist in the literature, but traditionally these have primarily addressed failures in hardware components

  18. Reliability of the Emergency Severity Index: Meta-analysis

    Amir Mirhaghi


    Objectives: Although triage systems based on the Emergency Severity Index (ESI) have many advantages in terms of simplicity and clarity, previous research has questioned their reliability in practice. Therefore, the aim of this meta-analysis was to determine the reliability of ESI triage scales. Methods: This meta-analysis was performed in March 2014. Electronic research databases were searched and articles conforming to the Guidelines for Reporting Reliability and Agreement Studies were selected. Two researchers independently examined selected abstracts. Data were extracted in the following categories: version of scale (latest/older), participants (adult/paediatric), raters (nurse, physician or expert), method of reliability (intra/inter-rater), reliability statistics (weighted/unweighted kappa) and the origin and publication year of the study. The effect size was obtained by the Z-transformation of reliability coefficients. Data were pooled with random-effects models and a meta-regression was performed based on the method of moments estimator. Results: A total of 19 studies from six countries were included in the analysis. The pooled coefficient for the ESI triage scales was substantial at 0.791 (95% confidence interval: 0.787‒0.795). Agreement was higher with the latest and adult versions of the scale and among expert raters, compared to agreement with older and paediatric versions of the scales and with other groups of raters, respectively. Conclusion: ESI triage scales showed an acceptable level of overall reliability. However, ESI scales require more development in order to see full agreement from all rater groups. Further studies concentrating on other aspects of reliability assessment are needed.
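
The pooling step can be illustrated with Fisher's z-transformation of the reliability coefficients. The sketch below uses simple fixed inverse-variance weights rather than the paper's random-effects model and method-of-moments meta-regression, and the coefficients and sample sizes are invented.

```python
import math

def pool_reliability(kappas, ns):
    """Pool reliability coefficients via Fisher's z-transformation.

    Each coefficient is z-transformed, averaged with weights n_i - 3
    (the inverse of the z-scale variance), and the pooled z is
    back-transformed to the coefficient scale.
    """
    zs = [0.5 * math.log((1 + k) / (1 - k)) for k in kappas]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)

# Illustrative coefficients and sample sizes (not the 19 pooled studies).
pooled = pool_reliability([0.75, 0.82, 0.79], [50, 120, 80])
```

A random-effects version would additionally estimate a between-study variance and add it to each study's z-scale variance before weighting, which is what widens the confidence interval relative to this fixed-effect sketch.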

  19. Human Reliability Analysis for Computerized Procedures

    Ronald L. Boring; David I. Gertman; Katya Le Blanc


    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  20. Human Reliability Analysis for Small Modular Reactors

    Ronald L. Boring; David I. Gertman


    Because no human reliability analysis (HRA) method was specifically developed for small modular reactors (SMRs), the application of any current HRA method to SMRs represents tradeoffs. A first-generation HRA method like THERP provides clearly defined activity types, but these activity types do not map to the human-system interface or concept of operations confronting SMR operators. A second-generation HRA method like ATHEANA is flexible enough to be used for SMR applications, but there is currently insufficient guidance for the analyst, requiring considerably more first-of-a-kind analyses and extensive SMR expertise in order to complete a quality HRA. Although no current HRA method is optimized to SMRs, it is possible to use existing HRA methods to identify errors, incorporate them as human failure events in the probabilistic risk assessment (PRA), and quantify them. In this paper, we provided preliminary guidance to assist the human reliability analyst and reviewer in understanding how to apply current HRA methods to the domain of SMRs. While it is possible to perform a satisfactory HRA using existing HRA methods, ultimately it is desirable to formally incorporate SMR considerations into the methods. This may require the development of new HRA methods. More practicably, existing methods need to be adapted to incorporate SMRs. Such adaptations may take the form of guidance on the complex mapping between conventional light water reactors and small modular reactors. While many behaviors and activities are shared between current plants and SMRs, the methods must adapt if they are to perform a valid and accurate analysis of plant personnel performance in SMRs.

  1. [Qualitative analysis: theory, steps and reliability].

    Minayo, Maria Cecília de Souza


    This essay seeks to conduct in-depth analysis of qualitative research, based on benchmark authors and the author's own experience. The hypothesis is that in order for an analysis to be considered reliable, it needs to be based on structuring terms of qualitative research, namely the verbs 'comprehend' and 'interpret', and the nouns 'experience', 'common sense' and 'social action'. The 10 steps begin with the construction of the scientific object by its inclusion on the national and international agenda; the development of tools that make the theoretical concepts tangible; conducting field work that involves the researcher empathetically with the participants in the use of various techniques and approaches, making it possible to build relationships, observations and a narrative with perspective. Finally, the author deals with the analysis proper, showing how the object, which has already been studied in all the previous steps, should become a second-order construct, in which the logic of the actors in their diversity and not merely their speech predominates. The final report must be a theoretic, contextual, concise and clear narrative.

  2. Task Decomposition in Human Reliability Analysis

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory


    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down (defined as a subset of the PSA), whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up (derived from a task analysis conducted by human factors experts). The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  3. Reliability analysis of retaining walls with multiple failure modes

    张道兵; 孙志彬; 朱川曲


    In order to reduce errors in the reliability assessment of retaining wall structures arising from the choice of performance function, parameter estimation and algorithm, two new reliability and stability models for anti-sliding and anti-overturning, based on the upper-bound theory of limit analysis, were first established, and the two failure modes were treated as a series system with multiple correlated failure modes. Then, statistical characteristics of the parameters of the retaining wall structure were inferred by the maximum entropy principle. Finally, the structural reliabilities for the single failure mode and for multiple failure modes were calculated by the Monte Carlo method in MATLAB, and the results were compared and their sensitivity analyzed. The method, with high precision, is not only easy to program and quick in calculation, but is also free of the limitations of nonlinear functions and non-normal random variables. The results obtained by combining limit analysis theory, the maximum entropy principle and the Monte Carlo method in analyzing the reliability of retaining wall structures are more scientific, accurate and reliable than those calculated by the traditional method.
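
The series-system Monte Carlo calculation can be sketched as follows. The limit states and distributions are hypothetical stand-ins for the paper's upper-bound anti-sliding and anti-overturning models; the point of the sketch is that sampling both margins from the same draws carries the correlation between failure modes automatically.

```python
import numpy as np

def series_system_pf(n_samples=200_000, seed=1):
    """Monte Carlo failure probability for a two-mode series system.

    g1 and g2 are illustrative safety margins standing in for
    anti-sliding and anti-overturning; the system fails if either
    margin is negative. Sharing the load variable between the two
    margins makes the failure modes correlated, as in the paper.
    """
    rng = np.random.default_rng(seed)
    # Hypothetical basic variables: two resistances and a common load.
    r1 = rng.normal(12.0, 1.5, n_samples)
    r2 = rng.normal(15.0, 2.0, n_samples)
    s = rng.normal(8.0, 1.0, n_samples)
    g1 = r1 - s                 # sliding margin
    g2 = r2 - s                 # overturning margin
    fail = (g1 < 0) | (g2 < 0)  # series system: any mode failing fails it
    return fail.mean()

pf = series_system_pf()
```

For these numbers the sliding mode dominates (its reliability index is about 2.2 versus 3.1 for overturning), so the system failure probability lands near the single-mode sliding value, around 1.4%.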

  4. Classification using least squares support vector machine for reliability analysis

    Zhi-wei GUO; Guang-chen BAI


    In order to improve the efficiency of the support vector machine (SVM) for classification when dealing with a large number of samples, the least squares support vector machine (LSSVM) for classification is introduced into reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic programming problem to a group of linear equations. The numerical results indicate that the reliability method based on the LSSVM for classification has higher accuracy and requires less computational cost than the SVM method.
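
The key computational point, replacing the SVM's quadratic program with a single linear system, follows the standard Suykens LSSVM formulation. The toy data below are illustrative stand-ins for samples of a limit-state function; the kernel and regularization settings are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LSSVM classifier: one linear solve instead of a QP.

    Solves [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    where Omega_ij = y_i y_j K(x_i, x_j). Returns (alpha, b) for the
    decision function sign(sum_i alpha_i y_i K(x_i, x) + b).
    """
    n = len(y)
    omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]

def lssvm_predict(X_train, y, alpha, b, X_new, sigma=1.0):
    K = rbf_kernel(X_new, X_train, sigma)
    return np.sign(K @ (alpha * y) + b)

# Toy separable samples standing in for safe (-1) / failed (+1) points.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.2, 0.9]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha, b = lssvm_train(X, y)
pred = lssvm_predict(X, y, alpha, b, X)
```

In a reliability loop the trained classifier approximates the limit-state boundary, so the expensive performance function is evaluated only to label training samples, not for every Monte Carlo draw.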

  5. Reliability analysis of an associated system

    陈长杰; 魏一鸣; 蔡嗣经


    Based on the engineering reliability of large complex systems and the distinct characteristics of soft systems, some new concepts and theory concerning medium elements and the associated system are introduced. At the same time, a reliability logic model of the associated system is provided. In this paper, through field investigation of the trial operation, the engineering reliability of the paste fill system in No. 2 mine of Jinchuan Non-ferrous Metallic Corporation is analyzed using the theory of the associated system.

  6. Sensitivity Analysis for the System Reliability Function


    ... reliabilities. The unique feature of the approach is that sample data collected on K independent replications using a specified component reliability ... Carlo method. The polynomial time algorithm of Agrawal and Satyanarayana for the exact reliability computation for series-parallel systems exemplifies ... consideration. As an example for the s-t connectedness problem, let ... denote edge-disjoint minimal s-t paths of G and let ... denote edge-disjoint

  7. Solving reliability analysis problems in the polar space

    Ghasem Ezzati; Musa Mammadov; Siddhivinayak Kulkarni


    An optimization model that is widely used in engineering problems is Reliability-Based Design Optimization (RBDO). Input data of the RBDO are non-deterministic and its constraints are probabilistic. The RBDO aims at minimizing cost while ensuring that reliability is at least at an accepted level. Reliability analysis is an important step in two-level RBDO approaches. Although many methods have been introduced for the reliability analysis loop of the RBDO, there are still many drawbacks in their efficie...

  8. Reliability Analysis and Optimal Design of Monolithic Vertical Wall Breakwaters

    Sørensen, John Dalsgaard; Burcharth, Hans F.; Christiani, E.


    Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of the most important failure modes, sliding failure, failure of the foundation and overturning failure are described . Relevant design variables are identified and relia......Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of the most important failure modes, sliding failure, failure of the foundation and overturning failure are described . Relevant design variables are identified...

  9. Reliability in Cross-National Content Analysis.

    Peter, Jochen; Lauf, Edmund


    Investigates how coder characteristics such as language skills, political knowledge, coding experience, and coding certainty affected inter-coder and coder-training reliability. Shows that language skills influenced both reliability types. Suggests that cross-national researchers should pay more attention to cross-national assessments of…

  10. Software architecture reliability analysis using failure scenarios

    Tekinerdogan, Bedir; Sozer, Hasan; Aksit, Mehmet


    With the increasing size and complexity of software in embedded systems, software has now become a primary threat to reliability. Several mature conventional reliability engineering techniques exist in the literature, but traditionally these have primarily addressed failures in hardware components a

  11. Reliability Analysis of Slope Stability by Central Point Method

    Li, Chunge; WU Congliang


    Given the uncertainty and variability of slope stability analysis parameters, this paper proceeds from the perspective of probability theory and statistics, based on reliability theory. Through the central point method of reliability analysis, a performance function for the reliability of slope stability analysis is established. Furthermore, the central point method and conventional limit equilibrium methods are compared through a calculation example. The approach's numerical ...
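
The central point method (mean-value first-order second-moment) linearizes the performance function at the mean point, so the reliability index is the mean margin divided by its linearized standard deviation. A minimal sketch with a hypothetical safety margin follows; the function and numbers are illustrative, not the paper's slope model.

```python
import math

def central_point_beta(g, mu, sigma, h=1e-6):
    """Central point (mean-value FOSM) reliability index.

    The performance function g is linearized at the mean point:
    beta = g(mu) / sqrt(sum_i (dg/dx_i * sigma_i)^2), with the basic
    variables assumed independent. Derivatives are taken numerically
    by central differences.
    """
    g0 = g(mu)
    var = 0.0
    for i, (m, s) in enumerate(zip(mu, sigma)):
        x_plus = list(mu); x_plus[i] = m + h
        x_minus = list(mu); x_minus[i] = m - h
        dg = (g(x_plus) - g(x_minus)) / (2 * h)
        var += (dg * s) ** 2
    return g0 / math.sqrt(var)

# Hypothetical margin: resisting moment minus sliding/overturning moment.
g = lambda x: x[0] - x[1]
beta = central_point_beta(g, mu=[9.0, 5.0], sigma=[1.2, 0.9])  # -> 4/1.5
```

For a nonlinear slope performance function the method keeps this form but evaluates g and its gradient at the mean of the shear strength and load parameters, which is what distinguishes it from design-point methods such as FORM.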

  12. Individual Differences in Human Reliability Analysis

    Jeffrey C. Joe; Ronald L. Boring


    While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.

  13. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin


    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  14. Generating function approach to reliability analysis of structural systems


    The generating function approach is an important tool for performance assessment in multi-state systems. Aiming at strength reliability analysis of structural systems, the generating function approach is introduced and developed. Static reliability models of statically determinate and indeterminate systems and fatigue reliability models are built by constructing special generating functions, which are used to describe probability distributions of strength (resistance), stress (load) and fatigue life, and by defining composite operators of generating functions and the performance structure functions thereof. When composition operators are executed, computational costs can be reduced by a large margin by collecting like terms. The results of theoretical analysis and numerical simulation show that the generating function approach can be widely used for probability modeling of large complex systems with hierarchical structures due to its unified form, compact expression, computer program realizability and high universality. Because the new method considers twin loads giving rise to component failure dependency, it can provide a theoretical reference and act as a powerful tool for static and dynamic reliability analysis of civil engineering structures and mechanical equipment systems with multi-mode damage coupling.
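    As a toy illustration of the generating-function composition described above, discrete strength and stress distributions can be encoded as value-to-probability maps and combined with an operator that collects like terms; the distributions below are invented, not the paper's models.

```python
# "Generating functions" here are dicts mapping a discrete value to its
# probability. The composite operator evaluates P(strength > stress),
# summing (and thereby collecting) all qualifying terms.
def compose_reliability(strength_pmf, stress_pmf):
    return sum(pr * ps
               for r, pr in strength_pmf.items()
               for s, ps in stress_pmf.items()
               if r > s)

strength = {4: 0.2, 5: 0.5, 6: 0.3}   # P(R = r), illustrative
stress   = {3: 0.6, 5: 0.4}           # P(S = s), illustrative
rel = compose_reliability(strength, stress)
```

    The qualifying terms are 0.2*0.6 + 0.5*0.6 + 0.3*0.6 + 0.3*0.4, so the sketch yields a reliability of 0.72.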


    Popescu V.S.


    Power distribution systems are basic parts of power systems, and the reliability of these systems is at present a key issue for power engineering development, requiring special attention. Operation of distribution systems is accompanied by a number of random factors that produce a large number of unplanned interruptions. Research has shown that the predominant factors with a significant influence on the reliability of distribution systems are weather conditions (39.7%), defects in equipment (25%) and unknown random factors (20.1%). The article studies the influence of this random behavior and presents reliability estimates for predominantly rural electrical distribution systems.

  16. Reliability Analysis on English Writing Test of SHSEE in Shanghai

    黄玉麒; 黄芳


    As a subjective test, the validity of the writing test is acceptable. What about the reliability? The writing test occupies a special position in the senior high school entrance examination (SHSEE for short), so it is important to ensure its reliability. By analysing recent years' English writing items in the SHSEE, the author offers suggestions on how to guarantee the reliability of writing tests.

  17. Analysis on Some of Software Reliability Models


    The software reliability and maintainability evaluation tool (SRMET 3.0), developed by the Software Evaluation and Test Center of China Aerospace Mechanical Corporation, is introduced in detail in this paper. SRMET 3.0 is supported by seven software reliability models and four software maintainability models. The numerical characteristics of all those models are studied in depth in this paper, and the corresponding numerical algorithms for each model are also given.

  18. Reliability analysis method for slope stability based on sample weight

    Zhi-gang YANG


    The single safety factor criteria for slope stability evaluation, derived from the rigid limit equilibrium method or finite element method (FEM), may not include some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of random variables, the minimal sample size of every random variable is extracted according to a small sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is considered to be its contribution to the random variables. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and different sample combinations are taken as the input for slope stability analysis. According to one-to-one mapping between the input sample combination and the output safety coefficient, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the analysis results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.

  19. System reliability analysis for kinematic performance of planar mechanisms

    ZHANG YiMin; HUANG XianZhen; ZHANG XuFang; HE XiangDong; WEN BangChun


    Based on the reliability and mechanism kinematic accuracy theories, we propose a general methodology for system reliability analysis of kinematic performance of planar mechanisms. The loop closure equations are used to estimate the kinematic performance errors of planar mechanisms. Reliability and system reliability theories are introduced to develop the limit state functions (LSF) for failure of kinematic performance qualities. The statistical fourth moment method and the Edgeworth series technique are used on system reliability analysis for kinematic performance of planar mechanisms, which relax the restrictions of probability distribution of design variables. Finally, the practicality, efficiency and accuracy of the proposed method are demonstrated by numerical examples.

  20. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Ronald Laurids Boring


    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  1. Optimization Based Efficiencies in First Order Reliability Analysis

    Peck, Jeffrey A.; Mahadevan, Sankaran


    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's Quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
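    A minimal sketch of the Broyden rank-one gradient update that BFORM substitutes for finite differences; the limit state function and the crude starting gradient below are invented for illustration.

```python
import numpy as np

# Broyden rank-one update of an approximate gradient of a scalar limit
# state function g(x): correct the gradient so it reproduces the observed
# secant change dg over the step dx, at the cost of one function evaluation.
def broyden_update(grad, x_old, x_new, g_old, g_new):
    dx = x_new - x_old
    dg = g_new - g_old
    return grad + (dg - grad @ dx) / (dx @ dx) * dx

g = lambda x: 3.0 * x[0] - 2.0 * x[1] + 1.0   # assumed linear limit state
x0, x1 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
grad0 = np.array([2.0, -1.0])                  # crude initial gradient estimate
grad1 = broyden_update(grad0, x0, x1, g(x0), g(x1))
```

    After the update, grad1 satisfies the secant condition grad1 . (x1 - x0) = g(x1) - g(x0), which is what makes it a cheap stand-in for a finite difference gradient inside FORM iterations.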

  2. Credible Mechanism for More Reliable SearchEngine Results

    Mohammed Abdel Razek


    The number of websites on the Internet is growing rapidly, thanks to the HTML language. Consequently, a diversity of information is available on the Web; however, its content may be neither valuable nor trusted. This leads to the problem of the credibility of the information on these websites. This paper investigates aspects affecting website credibility and then uses them, along with the dominant meaning of the query, to improve information retrieval capabilities and to effectively manage content. It presents the design and development of a credible mechanism that queries a Web search engine and then ranks sites according to their reliability. Our experiments show that credibility terms on websites can affect the ranking of the Web search engine and greatly improve retrieval effectiveness.

  3. Reliability estimation in a multilevel confirmatory factor analysis framework.

    Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J


    Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
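    For context, a single-level coefficient such as Cronbach's alpha can be computed directly; the simulated data below are illustrative stand-ins (the paper's multilevel estimators extend such coefficients to two levels of analysis).

```python
import numpy as np

# Single-level Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
# variance of the total score). Data are simulated, not from the study.
def cronbach_alpha(items):                    # items: (n_persons, k_items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(500, 1))                     # common factor
data = true_score + rng.normal(scale=0.8, size=(500, 4))   # 4 parallel items
alpha = cronbach_alpha(data)
```

    With four parallel items of single-item reliability about 0.61, the Spearman-Brown prediction puts alpha near 0.86, which the simulated estimate should approximate.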

  4. Application of Reliability Analysis for Optimal Design of Monolithic Vertical Wall Breakwaters

    Burcharth, H. F.; Sørensen, John Dalsgaard; Christiani, E.


    Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of some of the most important failure modes are described. The failures are sliding and slip surface failure of a rubble mound and a clay foundation. Relevant design variables are identified and a reliability-based design optimization procedure is formulated. Results from an illustrative example are given.

  5. Analysis of Reliability of CET Band 4



    CET Band 4 has been carried out for more than a decade. It has become so large-scaled, so popular and so influential that many testing experts and foreign language teachers are willing to do research on it. In this paper, I will mainly analyse its reliability from the perspectives of the writing test and the speaking test.

  6. Bypassing BDD Construction for Reliability Analysis

    Williams, Poul Frederick; Nikolskaia, Macha; Rauzy, Antoine


    In this note, we propose a Boolean Expression Diagram (BED)-based algorithm to compute the minimal p-cuts of Boolean reliability models such as fault trees. BEDs make it possible to bypass the Binary Decision Diagram (BDD) construction, which is the main cost of fault tree assessment...
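    The minimal cut sets that such algorithms compute can be illustrated by brute force on a tiny, made-up fault tree (real BED/BDD tools avoid this exponential enumeration, which is exactly their point):

```python
from itertools import combinations

# Brute-force minimal cut sets of an illustrative fault tree with basic
# events A, B, C and top event TOP = A OR (B AND C).
BASIC = ["A", "B", "C"]

def top(failed):
    return "A" in failed or ("B" in failed and "C" in failed)

def minimal_cut_sets(basic, top_event):
    cuts = []
    for r in range(1, len(basic) + 1):
        for combo in combinations(basic, r):
            s = set(combo)
            # keep only sets that cause TOP and contain no smaller cut set
            if top_event(s) and not any(c <= s for c in cuts):
                cuts.append(s)
    return cuts

mcs = minimal_cut_sets(BASIC, top)   # [{'A'}, {'B', 'C'}]
```

    For this toy tree the minimal cut sets are {A} and {B, C}; every larger failing combination contains one of them and is discarded.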

  7. Reliability Analysis of an Offshore Structure

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Rackwitz, R.


    A jacket type offshore structure from the North Sea is considered. The time variant reliability is estimated for failure defined as brittle fracture and crack through the tubular member walls. The stochastic modelling is described. The hot spot stress spectral moments as function of the stochastic...

  8. Reliability analysis for new technology-based transmitters

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Charpentier, Dominique [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France)


    The reliability analysis of new technology-based transmitters has to deal with specific issues: various interactions between both material elements and functions, undefined behaviours under faulty conditions, several transmitted data, and little reliability feedback. To handle these particularities, a '3-step' model is proposed, based on goal tree-success tree (GTST) approaches to represent both the functional and material aspects, and includes the faults and failures as a third part for supporting reliability analyses. The behavioural aspects are provided by relationship matrices, also denoted master logic diagrams (MLD), with stochastic values which represent direct relationships between system elements. Relationship analyses are then proposed to assess the effect of any fault or failure on any material element or function. Taking these relationships into account, the probabilities of malfunction and failure modes are evaluated according to time. Furthermore, uncertainty analyses tend to show that even if the input data and system behaviour are not well known, these previous results can be obtained in a relatively precise way. An illustration is provided by a case study on an infrared gas transmitter. These properties make the proposed model and corresponding reliability analyses especially suitable for intelligent transmitters (or 'smart sensors').

  9. Using functional analysis diagrams to improve product reliability and cost

    Ioannis Michalakoudis


    Failure mode and effects analysis and value engineering are well-established methods in the manufacturing industry, commonly applied to optimize product reliability and cost, respectively. Both processes, however, require cross-functional teams to identify and evaluate the product/process functions and are resource-intensive; hence their application is mostly limited to large organizations. In this article, we present a methodology involving the concurrent execution of failure mode and effects analysis and value engineering, assisted by a set of hierarchical functional analysis diagram models, along with the outcomes of a pilot application in a UK-based manufacturing small and medium enterprise. Analysis of the results indicates that this new approach could significantly enhance the resource efficiency and effectiveness of both failure mode and effects analysis and value engineering processes.

  10. DFTCalc: Reliability centered maintenance via fault tree analysis (tool paper)

    Guck, Dennis; Spel, Jip; Stoelinga, Mariëlle Ida Antoinette; Butler, Michael; Conchon, Sylvain; Zaïdi, Fatiha


    Reliability, availability, maintenance and safety (RAMS) analysis is essential in the evaluation of safety critical systems like nuclear power plants and the railway infrastructure. A widely used methodology within RAMS analysis are fault trees, representing failure propagations throughout a system.


  12. Reliability analysis of PLC safety equipment

    Yu, J.; Kim, J. Y. [Chungnam Nat. Univ., Daejeon (Korea, Republic of)


    This work covers FMEA analysis for nuclear safety grade PLCs, failure rate prediction for nuclear safety grade PLCs, sensitivity analysis of component failure rates of nuclear safety grade PLCs, and unavailability analysis support for nuclear safety systems.

  13. Analytical reliability analysis of soil-water characteristic curve

    Johari A.


    The Soil Water Characteristic Curve (SWCC), also known as the soil water-retention curve, is an important part of any constitutive relationship for unsaturated soils. Deterministic assessment of the SWCC has received considerable attention in the past few years. However, the uncertainties of the parameters which affect the SWCC dictate that the problem is of a probabilistic nature rather than being deterministic. In this research, a Gene Expression Programming (GEP)-based SWCC model is employed to assess the reliability of the SWCC. For this purpose, the Jointly Distributed Random Variables (JDRV) method is used as an analytical method for reliability analysis. All input parameters of the model, which are initial void ratio, initial water content, and silt and clay contents, are set to be stochastic and modelled using truncated normal probability density functions. The results are compared with those of Monte Carlo (MC) simulation. It is shown that the initial water content is the most effective parameter in the SWCC.
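    The Monte Carlo comparison mentioned above can be sketched with truncated-normal sampling; the limit state g below is a made-up stand-in, not the paper's GEP-based SWCC model, and all means, deviations and bounds are illustrative.

```python
import numpy as np

# Rejection-style sampling from a truncated normal: oversample the parent
# normal, keep values inside [lo, hi], then take the first n draws.
def truncated_normal(rng, mu, sigma, lo, hi, n):
    x = rng.normal(mu, sigma, size=4 * n)
    x = x[(x >= lo) & (x <= hi)]
    return x[:n]

rng = np.random.default_rng(1)
e = truncated_normal(rng, mu=0.8, sigma=0.1, lo=0.5, hi=1.1, n=100_000)
w = truncated_normal(rng, mu=0.25, sigma=0.05, lo=0.1, hi=0.4, n=100_000)

g = 1.2 - e - w           # assumed limit state; failure when g < 0
pf = np.mean(g < 0)       # Monte Carlo estimate of failure probability
```

    With these assumed inputs the estimate lands near 9%; an analytical method such as JDRV would be compared against exactly this kind of MC figure.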

  14. Earth slope reliability analysis under seismic loadings using neural network

    PENG Huai-sheng; DENG Jian; GU De-sheng


    A new method was proposed to cope with the earth slope reliability problem under seismic loadings. The algorithm integrates the concepts of artificial neural networks, the first order second moment reliability method and the deterministic stability analysis method of earth slopes. The performance function and its derivatives in slope stability analysis under seismic loadings were approximated by a trained multi-layer feed-forward neural network with differentiable transfer functions. The statistical moments calculated from the performance function values and the corresponding gradients using the neural network were then used in the first order second moment method for the calculation of the reliability index in slope safety analysis. Two earth slope examples were presented to illustrate the applicability of the proposed approach. The new method is effective in slope reliability analysis, and it has potential application to other reliability problems of complicated engineering structures with a considerably large number of random variables.

  15. Strength Reliability Analysis of Stiffened Cylindrical Shells Considering Failure Correlation

    Xu Bai; Liping Sun; Wei Qin; Yongkun Lv


    The stiffened cylindrical shell is commonly used for the pressure hull of submersibles and the legs of offshore platforms. There are various failure modes because of uncertainty with the structural size and material properties, uncertainty of the calculation model and machining errors. Correlations among failure modes must be considered with the structural reliability of stiffened cylindrical shells. However, the traditional method cannot consider the correlations effectively. The aim of this study is to present a method of reliability analysis for stiffened cylindrical shells which considers the correlations among failure modes. Firstly, the joint failure probability calculation formula of two related failure modes is derived through use of the 2D joint probability density function. Secondly, the full probability formula of the tandem structural system is given with consideration to the correlations among failure modes. At last, the accuracy of the system reliability calculation is verified through use of the Monte Carlo simulation. Result of the analysis shows the failure probability of stiffened cylindrical shells can be gained through adding the failure probability of each mode.

  16. Design and Analysis for Reliability of Wireless Sensor Network

    Yongxian Song


    Reliability is an important performance indicator of wireless sensor networks; for application fields with high reliability demands, it is particularly important to ensure the reliability of the network. At present there are many research findings on wireless sensor network reliability, but they mainly improve network reliability through network topology, reliable protocols, and application-layer fault correction, while comprehensive consideration of reliability from both hardware and software aspects is much less common. This paper adopts bionic hardware to implement bionic reconfiguration of wireless sensor network nodes, so that the nodes are able to change their structure and behavior autonomously and dynamically when part of the hardware fails, and can thus realize bionic self-healing. Secondly, Markov state diagrams and probability analysis methods are adopted to solve a functional model for reliability, establish the relationship between reliability and characteristic parameters of sink nodes, and analyze the sink node reliability model, so as to determine reasonable model parameters and ensure the reliability of sink nodes.

  17. Reliability-Analysis of Offshore Structures using Directional Loads

    Sørensen, John Dalsgaard; Bloch, Allan; Sterndorff, M. J.


    Reliability analyses of offshore structures such as steel jacket platforms are usually performed using stochastic models for the wave loads based on the omnidirectional wave height. However, reliability analyses with respect to structural failure modes such as total collapse of a structure...... heights from the central part of the North Sea. It is described how the stochastic model for the directional wave heights can be used in a reliability analysis where total collapse of offshore steel jacket platforms is considered....


    C.L. Liu; Z.Z. Lü; Y.L. Xu


    Reliability analysis methods based on the linear damage accumulation law (LDAL) and the load-life interference model are studied in this paper. According to the equal probability rule, the equivalent loads are derived, and a reliability analysis method based on the load-life interference model and a recurrence formula is constructed. In conjunction with a finite element analysis (FEA) program, the reliability of an aero engine turbine disk under low cycle fatigue (LCF) conditions has been analyzed. The results show that the turbine disk is safe and that the above reliability analysis methods are feasible.
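    A minimal sketch of the linear damage accumulation law (Miner's rule) underlying LDAL-based methods; the cycle counts and fatigue lives below are invented for illustration, not disk data.

```python
# Miner's rule: damage D = sum over load blocks of n_i / N_i, where n_i is
# the number of applied cycles at a load level and N_i the cycles-to-failure
# at that level. Failure is predicted when D reaches 1.
blocks = [            # (applied cycles n_i, cycles-to-failure N_i) - assumed
    (2_000, 50_000),
    (500,   10_000),
    (100,    2_000),
]
damage = sum(n / N for n, N in blocks)
life_fraction_remaining = 1.0 - damage
```

    Here D = 0.04 + 0.05 + 0.05 = 0.14, well below 1, so this assumed duty cycle would be judged safe under the deterministic rule; the paper's methods treat the loads and lives as random instead.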

  19. A reliability analysis of the revised competitiveness index.

    Harris, Paul B; Houston, John M


    This study examined the reliability of the Revised Competitiveness Index by investigating the test-retest reliability, inter-item reliability, and factor structure of the measure based on a sample of 280 undergraduates (200 women, 80 men) ranging in age from 18 to 28 years (M = 20.1, SD = 2.1). The findings indicate that the Revised Competitiveness Index has high test-retest reliability, high inter-item reliability, and a stable factor structure. The results support the assertion that the Revised Competitiveness Index assesses competitiveness as a stable trait rather than a dynamic state.
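    Test-retest reliability is simply the Pearson correlation between scores from two administrations; the toy scores below are invented, not the study's data.

```python
from math import sqrt

# Pearson correlation between two testing sessions; a value near 1 supports
# interpreting the construct as a stable trait.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17, 11, 14, 18]   # session 1 scores (illustrative)
time2 = [13, 14, 10, 19, 18, 10, 15, 17]  # session 2 scores (illustrative)
r_tt = pearson(time1, time2)
```

    For these toy scores the coefficient is about 0.96, the kind of value that would be read as high test-retest reliability.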

  20. Modified Bayesian Kriging for Noisy Response Problems for Reliability Analysis


    A surrogate model is used to perform the MCS prediction for the reliability analysis in the sampling-based reliability-based design optimization (RBDO) method... The record also references sampling-based stochastic sensitivity analysis using score functions for RBDO problems with correlated variables, and sampling-based RBDO using stochastic sensitivity analysis and the dynamic Kriging method (Structural and...).

  1. Reliability analysis of large, complex systems using ASSIST

    Johnson, Sally C.


    The SURE reliability analysis program is discussed as well as the ASSIST model generation program. It is found that semi-Markov modeling using model reduction strategies with the ASSIST program can be used to accurately solve problems at least as complex as other reliability analysis tools can solve. Moreover, semi-Markov analysis provides the flexibility needed for modeling realistic fault-tolerant systems.
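    The flavor of such Markov reliability models can be shown with the simplest two-state (up/down) chain; the failure and repair rates below are illustrative assumptions, and tools like SURE/ASSIST handle far larger semi-Markov models.

```python
from math import exp

# Two-state Markov availability model: a component fails at rate lam and is
# repaired at rate mu (both per hour, illustrative values).
lam, mu = 1e-4, 1e-2

availability = mu / (lam + mu)   # steady-state probability of the "up" state
mttf = 1.0 / lam                 # mean time to first failure from "up"

def p_up(t):
    """Transient P(up at time t) starting from the up state."""
    s = lam + mu
    return mu / s + (lam / s) * exp(-s * t)
```

    Starting from up, p_up(0) = 1 and p_up(t) decays toward the steady-state availability of about 0.9901; model reduction strategies matter precisely because realistic fault-tolerant systems have vastly more states than this.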


    Hong-Zhong Huang


    Engineering design under uncertainty has gained considerable attention in recent years. A great multitude of new design optimization methodologies and reliability analysis approaches have been put forth with the aim of accommodating various uncertainties. Uncertainties in practical engineering applications are commonly classified into two categories, i.e., aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty arises because of unpredictable variation in the performance and processes of systems; it is irreducible even when more data or knowledge is added. On the other hand, epistemic uncertainty stems from lack of knowledge of the system due to limited data, measurement limitations, or simplified approximations in modeling system behavior, and it can be reduced by obtaining more data or knowledge. More specifically, aleatory uncertainty is naturally represented by a statistical distribution whose associated parameters can be characterized by sufficient data. If, however, the data is limited and cannot be quantified in a statistical sense, epistemic uncertainty can be considered as an alternative tool in such a situation. Of the several optional treatments for epistemic uncertainty, possibility theory and evidence theory have proved to be the most computationally efficient and stable for reliability analysis and engineering design optimization. This study first attempts to provide a better understanding of uncertainty in engineering design by giving a comprehensive overview of its classifications, theories and design considerations. Then a review is conducted of general topics such as the foundations and applications of possibility theory and evidence theory. This overview includes the most recent results from theoretical research, computational developments and performance improvement of possibility theory and evidence theory, with an emphasis on revealing the capability and characteristics of quantifying uncertainty from different perspectives.

  3. Reliability Distribution of Numerical Control Lathe Based on Correlation Analysis

    Xiaoyan Qi; Guixiang Shen; Yingzhi Zhang; Shuguang Sun; Bingkun Chen


    Combining reliability distribution with correlation analysis, a new method is proposed for reliability distribution that considers the structural correlation and failure correlation of subsystems. Firstly, the subsystems are ranked by means of TOPSIS, which comprehends the considerations of reliability allocation; then a Copula connecting function is introduced to set up a distribution model based on structural correlation, failure correlation and target correlation, and the reliability target regions of all subsystems are acquired with Matlab. In this method, not only the traditional distribution considerations are addressed, but correlation influences are also involved, so as to achieve complementary information and optimized distribution.

  4. A Bayesian Framework for Reliability Analysis of Spacecraft Deployments

    Evans, John W.; Gallo, Luis; Kaminsky, Mark


    Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a two stage sequential Bayesian framework for reliability estimation of spacecraft deployments was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction for the reliability of the complex Sunshield deployment, with credibility limits, within this two stage Bayesian framework.
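    The two-stage update described above has the shape of a conjugate Beta-binomial analysis, which can be sketched in closed form; the deployment counts below are invented placeholders, not NASA heritage data.

```python
# Conjugate Beta-binomial updating: a Beta(a, b) prior on deployment success
# probability updated with observed successes and failures. All counts are
# illustrative assumptions.
def beta_update(a, b, successes, failures):
    return a + successes, b + failures

a0, b0 = 1.0, 1.0                      # non-informative (uniform) prior
a1, b1 = beta_update(a0, b0, 45, 2)    # stage 1: "heritage" deployment record
a2, b2 = beta_update(a1, b1, 9, 0)     # stage 2: ground-test deployments

posterior_mean = a2 / (a2 + b2)        # point estimate of deployment reliability
```

    With these placeholder counts the posterior is Beta(55, 3), giving a mean reliability of about 0.948; credibility limits would come from quantiles of that same posterior (the study samples such posteriors with MCMC rather than using the closed form).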

  5. Reliability and risk analysis using artificial neural networks

    Robinson, D.G. [Sandia National Labs., Albuquerque, NM (United States)


    This paper discusses preliminary research at Sandia National Laboratories into the application of artificial neural networks for reliability and risk analysis. The goal of this effort is to develop a reliability based methodology that captures the complex relationship between uncertainty in material properties and manufacturing processes and the resulting uncertainty in life prediction estimates. The inputs to the neural network model are probability density functions describing system characteristics and the output is a statistical description of system performance. The most recent application of this methodology involves the comparison of various low-residue, lead-free soldering processes with the desire to minimize the associated waste streams with no reduction in product reliability. Model inputs include statistical descriptions of various material properties such as the coefficients of thermal expansion of solder and substrate. Consideration is also given to stochastic variation in the operational environment to which the electronic components might be exposed. Model output includes a probabilistic characterization of the fatigue life of the surface mounted component.


    G. W. Parry; J. A. Forester; V. N. Dang; S. M. L. Hendrickson; M. Presley; E. Lois; J. Xing


    This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System) that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA) that is based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis that is documented in a crew response tree (CRT), and the development of the associated time-line to identify the critical tasks, i.e. those whose failure results in a human failure event (HFE), and an approach to quantification that is based on explanations of why the HFE might occur.

  7. Simulation Approach to Mission Risk and Reliability Analysis Project

    National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

  8. Reliability analysis of ship structure system with multi-defects


    This paper analyzes the influence of multiple defects, including initial distortions, welding residual stresses, cracks and local dents, on the ultimate strength of the plate element, and works out expressions for the reliability calculation and sensitivity analysis of the plate element. Reliability analysis is carried out for the system of plate elements with multiple defects. The failure mechanism, failure paths and the calculation approach to the global reliability index are also worked out. After plate elements with multiple defects fail, the formula for the reverse node forces which affect the residual structure is deduced, as are the sensitivity expressions of the system reliability index. This ensures calculation accuracy and rationality for the reliability analysis, and makes it convenient to find the weak plate elements which affect the reliability of the structure system. Finally, to validate the proposed approach, a numerical example of a ship cabin is used to compare the reliability and sensitivity analysis of the structure system with multiple defects with those of the structure system without defects. The approach has implications for structure design and for rational maintenance and renewal strategies.

  9. Tailoring a Human Reliability Analysis to Your Industry Needs

    DeMott, D. L.


    Companies at risk of accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk and reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed.

  10. Requalification of offshore structures. Reliability analysis of platform

    Bloch, A.; Dalsgaard Soerensen, J. [Aalborg Univ. (Denmark)


    A preliminary reliability analysis has been performed for an example platform. In order to model the structural response such that it is possible to calculate reliability indices, approximate quadratic response surfaces have been determined for cross-sectional forces. Based on a deterministic, code-based analysis the elements and joints which can be expected to be the most critical are selected and response surfaces are established for the cross-sectional forces in those. A stochastic model is established for the uncertain variables. The reliability analysis shows that with this stochastic model the smallest reliability indices for elements are about 3.9. The reliability index for collapse (pushover) is estimated at 6.7, and the reliability index for fatigue failure using a crude model is estimated at 3.2 for the expected most critical detail, corresponding to the accumulated damage during the design lifetime of the platform. These reliability indices are considered to be reasonable compared with values recommended by e.g. ISO. The most important stochastic variables are found to be the wave height and the drag coefficient (including the model uncertainty related to estimation of wave forces on the platform). (au)
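    For a single linear limit state g = R - S with independent normal variables, a first-order reliability index of the kind reported above can be computed directly; the capacity and load statistics below are illustrative, not values from the platform study.

```python
from math import sqrt, erf

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical resistance R and load effect S, both normal and independent;
# for the linear limit state g = R - S this first-order index is exact.
mu_R, sd_R = 520.0, 40.0   # assumed capacity statistics (kN)
mu_S, sd_S = 300.0, 45.0   # assumed load-effect statistics (kN)

beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)
pf = phi(-beta)            # failure probability implied by the index
print(round(beta, 2))
```

    In the study itself, the limit state comes from a quadratic response surface rather than a closed form, so the index is found iteratively, but the index-to-probability relation is the same.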

  11. Reliability analysis of the bulk cargo loading system including dependent components

    Blokus-Roszkowska, Agnieszka


    In the paper an innovative approach to the reliability analysis of multistate series-parallel systems assuming their components' dependency is presented. The reliability function of a multistate series system with components dependent according to the local load sharing rule is determined. Linking these results for series systems with results for parallel systems with independent components, we obtain the reliability function of a multistate series-parallel system assuming dependence of components' departures from the reliability state subsets within each series subsystem and independence between these subsystems. As a particular case, the reliability function of a multistate series-parallel system composed of dependent components having exponential reliability functions is determined. The theoretical results are applied to the reliability evaluation of a bulk cargo transportation system, whose main task is to load bulk cargo on board ships. The reliability function and other reliability characteristics of the loading system are determined for the case where its components have exponential reliability functions with interdependent departure rates from the subsets of their reliability states. Finally, the obtained results are compared with results for the bulk cargo transportation system composed of independent components.
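    As a sketch of the series-parallel structure discussed above, the reliability function for independent exponential components has a simple closed form (the paper's local load sharing dependency, which this ignores, would modify it); all rates and dimensions are hypothetical.

```python
from math import exp

def series_parallel_reliability(t, lam, m, k):
    """Reliability at time t of m series subsystems, each a parallel
    group of k independent components with exponential failure rate lam.
    Independent-component sketch only; component dependency as in the
    paper would lower these figures."""
    r_comp = exp(-lam * t)                 # one component survives to t
    r_group = 1.0 - (1.0 - r_comp) ** k    # parallel group: at least one survives
    return r_group ** m                    # series: every group must survive

print(round(series_parallel_reliability(t=100.0, lam=0.01, m=3, k=2), 4))
```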

  12. Maritime shipping as a high reliability industry: A qualitative analysis

    Mannarelli, T.; Roberts, K.; Bea, R.


    The maritime oil shipping industry has great public demands for safe and reliable organizational performance. Researchers have identified a set of organizations and industries that operate at extremely high levels of reliability, and have labelled them High Reliability Organizations (HRO). Following the Exxon Valdez oil spill disaster of 1989, public demands for HRO-level operations were placed on the oil industry. It will be demonstrated that, despite enormous improvements in safety and reliability, maritime shipping is not operating as an HRO industry. An analysis of the organizational, environmental, and cultural history of the oil industry will help to provide justification and explanation. The oil industry will be contrasted with other HRO industries and the differences will inform the shortfalls maritime shipping experiences with regard to maximizing reliability. Finally, possible solutions for the achievement of HRO status will be offered.

  13. Reliability Analysis of OMEGA Network and Its Variants

    Suman Lata


    The performance of a computer system depends directly on the time required to perform a basic operation and the number of these basic operations that can be performed concurrently. High-performance computing systems can be designed using parallel processing. Parallel processing is achieved by using more than one processor or computer that communicate with each other to solve a given problem. MINs provide a better way for communication between different processors or memory modules with less complexity, fast communication, good fault tolerance, high reliability and low cost. Reliability of a system is the probability that it will successfully perform its intended operations for a given time under stated operating conditions. From the reliability analysis it has been observed that the addition of one stage to Omega networks provides higher terminal reliability than the addition of two stages in the corresponding network.

  14. Discrete event simulation versus conventional system reliability analysis approaches

    Kozine, Igor


    Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models...... and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author’s experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches...

  15. Fatigue Reliability Analysis of Wind Turbine Cast Components

    Hesam Mirzaei Rafsanjani


    The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability to be used for decision-making if additional cost considerations are added. In this paper, a statistical approach is presented based on statistical hypothesis testing and analysis of covariance (ANCOVA) which can be applied to compare different groups (manufacturers, suppliers, test facilities, etc.) and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for cast test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress) for fatigue assessment are estimated based on the statistical analyses and by introduction of physical, model and statistical uncertainties used for the illustration of reliability assessment.

  16. Reliability Analysis of Bearing Capacity of Large-Diameter Piles under Osterberg Test

    Lei Nie


    This study gives the reliability analysis of the bearing capacity of large-diameter piles under the Osterberg test. The limit state equation of dimensionless random variables is utilized in the reliability analysis of the vertical bearing capacity of large-diameter piles based on Osterberg loading tests, and the reliability index and the resistance partial coefficient under the current specifications are calculated using the calibration method. The results show that the reliability index of large-diameter piles is correlated with the load effect ratio and is smaller than that of ordinary piles, and that a resistance partial coefficient of 1.53 is appropriate in the design of large-diameter piles.

  17. Reliability Analysis of Dynamic Stability in Waves

    Søborg, Anders Veldt


    exhibit sufficient characteristics with respect to slope at zero heel (GM value), maximum leverarm, positive range of stability and area below the leverarm curve. The rule-based requirements to calm water leverarm curves are entirely based on experience obtained from vessels in operation and recorded......-4 per ship year such brute force Monte-Carlo simulations are not always feasible due to the required computational resources. Previous studies of dynamic stability of ships in waves typically focused on the capsizing event. In this study the objective is to establish a procedure that can identify...... the distribution of the exceedance probability may be established by an estimation of the out-crossing rate of the "safe set" defined by the utility function. This out-crossing rate will be established using the so-called Madsen's Formula. A bi-product of this analysis is a set of short wave time series...

  18. Reliability Analysis for Tunnel Supports System by Using Finite Element Method

    E. Bukaçi


    Reliability analysis is a method that can be used in almost any geotechnical engineering problem. Using this method requires knowledge of parameter uncertainties, which can be expressed by their standard deviation. By applying reliability analysis to tunnel support design, a range of safety factors can be obtained and, from them, the probability of failure can be calculated. The problem becomes more complex when this analysis is performed with numerical methods such as the Finite Element Method. This paper shows how reliability analysis can be performed in the design of tunnel supports, using the Point Estimate Method to calculate the reliability index. As a case study, one of the energy tunnels at the Fan Hydropower plant in Rrëshen, Albania, is chosen. Values of the factor of safety and the probability of failure are calculated, and some suggestions for using reliability analysis with numerical methods are given.
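    Rosenblueth's two-point estimate method named above can be sketched as follows; the factor-of-safety function and its input statistics are hypothetical stand-ins for a real tunnel support model.

```python
from itertools import product
from math import sqrt

def rosenblueth_pem(g, means, sds):
    """Rosenblueth's two-point estimate method: evaluate g at every
    mu_i +/- sigma_i combination (equal weights, uncorrelated inputs)
    and return the estimated mean and standard deviation of g."""
    pts = [g(*vals)
           for vals in product(*[(m - s, m + s) for m, s in zip(means, sds)])]
    mean = sum(pts) / len(pts)
    var = sum((p - mean) ** 2 for p in pts) / len(pts)
    return mean, sqrt(var)

# Hypothetical factor of safety for a tunnel lining: FS = capacity / demand.
fs = lambda capacity, demand: capacity / demand
mu_fs, sd_fs = rosenblueth_pem(fs, means=[2.4, 1.2], sds=[0.3, 0.15])

# Reliability index against the FS = 1 limit state (failure when FS < 1).
beta = (mu_fs - 1.0) / sd_fs
print(round(beta, 2))
```

    In the paper the function g is a finite element run rather than a closed form, so each of the 2^n evaluation points costs one FEM analysis; the statistics of the factor of safety are assembled the same way.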

  19. Reliability and Sensitivity Analysis of Cast Iron Water Pipes for Agricultural Food Irrigation

    Yanling Ni


    This study aims to investigate the reliability and sensitivity of cast iron water pipes for agricultural food irrigation. The Monte Carlo simulation method is used for fracture assessment and reliability analysis of cast iron pipes for agricultural food irrigation. Fracture toughness is considered as a limit state function for corrosion-affected cast iron pipes. The influence of the failure mode on the probability of pipe failure is then discussed. Sensitivity analysis is also carried out to show the effect of changing basic parameters on the reliability and lifetime of the pipe. The analysis results show that the applied methodology can consider different random variables for estimating the lifetime of the pipe and can also provide scientific guidance for rehabilitation and maintenance plans for agricultural food irrigation. In addition, the results of the failure and reliability analysis in this study can be useful for the design of more reliable new pipeline systems for agricultural food irrigation.

  20. Analysis on Operation Reliability of Generating Units in 2009



    This paper presents the data on operation reliability indices and relevant analyses of China's conventional power generating units in 2009. The units brought into the statistical analysis include thermal generating units of 100 MW or above, hydro generating units of 40 MW or above, and all nuclear generating units. The reliability indices covered include utilization hours, times and hours of scheduled outages, times and hours of unscheduled outages, equivalent forced outage rate and equivalent availability factor.
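    The two equivalent indices named above follow standard availability arithmetic; the hour counts below are hypothetical, not figures from the 2009 statistics.

```python
# Sketch of the standard availability indices named in the report;
# all hour counts are hypothetical.
period_hours = 8760.0          # one calendar year
service_hours = 7200.0
planned_outage_hours = 900.0   # scheduled outages
forced_outage_hours = 660.0    # unscheduled outages
equiv_derated_hours = 120.0    # partial-capacity running, full-outage equivalent

# Equivalent availability factor: share of the period the unit was
# effectively available, counting deratings as equivalent outage time.
eaf = (period_hours - planned_outage_hours - forced_outage_hours
       - equiv_derated_hours) / period_hours

# Equivalent forced outage rate: forced (plus equivalent derated) hours
# relative to service plus forced-outage time.
efor = (forced_outage_hours + equiv_derated_hours) / (
    service_hours + forced_outage_hours)

print(round(eaf * 100, 1), round(efor * 100, 1))
```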


    WANG Yan-ping; LÜ Zhen-zhou; YUE Zhu-feng


    In order to obtain the failure probability of an implicit limit state equation accurately, an advanced mean value second order (AMVSO) method was presented, and the advanced mean value (AMV) method in conjunction with the response surface method (RSM) was also presented. The implementations were constructed on the basis of the advanced mean value first order (AMVFO) method and the RSM. The examples show that the accuracy of the AMVSO is higher than that of the AMVFO. The results of the AMV in conjunction with the RSM are not sensitive to the positions of the sampling points used for determining the response surface equation, which illustrates the robustness of the presented method.

  2. Reliability Analysis and Modeling of ZigBee Networks

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adapted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes each layer's reliability and mean time to failure (MTTF). Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In the star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve their reliability problem. For the mesh network, a division technique is applied to overcome the problem because its complexity is higher than that of the others. A mesh network using division is classified into several non-reducible series systems and edge parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that the reliability will increase for mesh networks when the number of edges in parallel systems increases, while the reliability quickly drops when the number of edges and the number of nodes increase for all three networks. Greater use of resources is another factor that decreases reliability. However, lower network reliability will occur due to
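    The series reliability-block-diagram reasoning described above can be sketched for a chain of protocol-layer blocks with exponential failure rates; the per-hour rates are hypothetical, not values from the paper.

```python
# Minimal series-RBD sketch: the stack works only if every layer block
# works. All per-hour failure rates are hypothetical.
layer_rates = {
    "physical": 2e-6,
    "mac": 1e-6,
    "network": 4e-6,
    "application": 3e-6,
}

# For a series system of exponential blocks the rates add, so
# MTTF = 1 / sum(lambda_i).
total_rate = sum(layer_rates.values())
mttf_hours = 1.0 / total_rate
print(round(mttf_hours))
```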

  3. Reliability analysis and initial requirements for FC systems and stacks

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
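    The stack-configuration analysis described above can be sketched as a series of redundant groups of Weibull-distributed stacks; the Weibull parameters and the k-out-of-5 success criterion are assumptions for illustration, not Wärtsilä data.

```python
from math import comb, exp

def weibull_R(t, eta, beta):
    # Weibull survival function.
    return exp(-((t / eta) ** beta))

def stack_system_R(t, eta=60000.0, beta=2.0, sets=5, per_set=5, k=4):
    """Reliability of 'sets' series groups, each needing at least k of
    'per_set' i.i.d. Weibull stacks working: a k-out-of-5 sketch of the
    paper's 5 x 5 configuration with hypothetical parameters."""
    r = weibull_R(t, eta, beta)
    group = sum(comb(per_set, j) * r**j * (1 - r) ** (per_set - j)
                for j in range(k, per_set + 1))
    return group ** sets

print(round(stack_system_R(20000.0), 4))
```

    Varying k illustrates the operating-strategy question in the abstract: requiring all five stacks per group (k = 5) gives a markedly lower system reliability than tolerating one failed stack (k = 4).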

  4. Coverage Modeling and Reliability Analysis Using Multi-state Function


    Fault tree analysis is an effective method for predicting the reliability of a system. It gives a pictorial representation and logical framework for analyzing reliability, and it has long been used as an effective method for the quantitative and qualitative analysis of the failure modes of critical systems. In this paper, we propose a new general coverage model (GCM) based on hardware-independent faults. Using this model, an effective software tool can be constructed to detect, locate and recover faults in the faulty system. This model can be applied to identify the key component that can cause the failure of the system using failure mode and effect analysis (FMEA).

  5. Reliability Analysis of Structural Timber Systems

    Sørensen, John Dalsgaard; Hoffmeyer, P.


    characteristics of the load-bearing capacity is estimated in the form of a characteristic value and a coefficient of variation. These two values are of primary importance for codes of practice based on the partial safety factor format since the partial safety factor is closely related to the coefficient...... the above stochastic models, statistical characteristics (distribution function, 5% quantile and coefficient of variation) are determined. Generally, the results show that taking the system effects into account the characteristic load bearing capacity can be increased and the partial safety factor decreased...... of variation. In the paper a stochastic model is described for the strength of a single piece of timber taking into account the stochastic variation of the strength and stiffness with length. Also stochastic models for different types of loads are formulated. First, simple representative systems with different...

  6. Wind turbine reliability : a database and analysis approach.

    Linsday, James (ARES Corporation); Briand, Daniel; Hill, Roger Ray; Stinebaugh, Jennifer A.; Benjamin, Allan S. (ARES Corporation)


    The US wind industry has experienced remarkable growth since the turn of the century. At the same time, the physical size and electrical generation capabilities of wind turbines have also experienced remarkable growth. As the market continues to expand, and as wind generation continues to gain a significant share of the generation portfolio, the reliability of wind turbine technology becomes increasingly important. This report addresses how operations and maintenance costs are related to unreliability, that is, the failures experienced by systems and components. Reliability tools are demonstrated, the data needed to understand and catalog failure events are described, and practical wind turbine reliability models are illustrated, including preliminary results. This report also presents a continuing process of how to proceed with controlling industry requirements, needs, and expectations related to Reliability, Availability, Maintainability, and Safety. A simply stated goal of this process is to better understand and to improve the operable reliability of wind turbine installations.

  7. Reliability analysis of flood defence systems in the Netherlands

    Lassing, B.L.; Vrouwenvelder, A.C.W.M.; Waarts, P.H.


    In recent years an advanced program for reliability analysis of dike systems has been under development in the Netherlands. This paper describes the global data requirements for application and the set-up of the models in the Netherlands. The analysis generates an estimate of the probability of sys




    The introduction of pervasive devices and mobile devices has led to immense growth of real-time distributed processing. In such a context the reliability of the computing environment is very important. Reliability is the probability that the devices, links, processes, programs and files work efficiently for the specified period of time and under the specified conditions. Distributed systems are available as conventional ring networks, clusters and agent-based systems, and the reliability of such systems is the focus here. These networks are heterogeneous and scalable in nature. There are several factors which are to be considered for reliability estimation. These include application-related factors like algorithms, data-set sizes, memory usage patterns, input-output, communication patterns, task granularity and load-balancing. They also include hardware-related factors like processor architecture, memory hierarchy, input-output configuration and network. The software-related factors concerning reliability are operating systems, compilers, communication protocols, libraries and preprocessor performance. In estimating the reliability of a system, performance estimation is an important aspect. Reliability analysis is approached using probability.

  9. A survey on reliability and safety analysis techniques of robot systems in nuclear power plants

    Eom, H.S.; Kim, J.H.; Lee, J.C.; Choi, Y.R.; Moon, S.S.


    Reliability and safety analysis techniques were surveyed for the purpose of overall quality improvement of the reactor inspection system which is under development in our current project. The contents of this report are: 1. Survey of reliability and safety analysis techniques - the reviewed techniques are generally accepted in many industries including the nuclear industry, and from them we selected a few techniques which are suitable for our robot system: fault tree analysis, failure mode and effect analysis, reliability block diagram, Markov model, combinational method, and simulation method. 2. Survey of the characteristics of robot systems which distinguish them from other systems and which are important to the analysis. 3. Survey of the nuclear environmental factors which affect the reliability and safety analysis of robot systems. 4. Collection of case studies of robot reliability and safety analysis performed in foreign countries. The analysis results of this survey will be applied to the improvement of the reliability and safety of our robot system and will also be used for the formal qualification and certification of our reactor inspection system.

  10. The development of a reliable amateur boxing performance analysis template.

    Thomson, Edward; Lamb, Kevin; Nicholas, Ceri


    The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and assess its reliability using analysts of varying experience of the sport and performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%) regardless of whether exact or the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.
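    The proportion-of-agreement criterion used above is simple to compute; the punch counts below are hypothetical observations from two analysts, not data from the study.

```python
def agreement_within(obs1, obs2, tol=1):
    """Proportion of paired observations agreeing within +/- tol
    (the reference-value criterion described in the abstract)."""
    hits = sum(1 for a, b in zip(obs1, obs2) if abs(a - b) <= tol)
    return hits / len(obs1)

# Hypothetical counts of one performance indicator from two analysts
# over ten observation windows.
analyst_a = [12, 15, 9, 22, 18, 14, 11, 20, 16, 13]
analyst_b = [12, 14, 10, 25, 18, 15, 11, 19, 16, 12]

print(agreement_within(analyst_a, analyst_b, tol=0))  # exact agreement
print(agreement_within(analyst_a, analyst_b, tol=1))  # within the +/-1 reference
```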

  11. Test results of reliable and very high capillary multi-evaporators / condenser loop

    Van Oost, S.; Dubois, M.; Bekaert, G. [Societe Anonyme Belge de Construction Aeronautique - SABCA (Belgium)


    The paper presents the results of various SABCA activities in the field of two-phase heat transport systems. These results are based on a critical review and analysis of existing two-phase loops and of future loop needs in space applications. The research and development of a high-capillary wick (capillary pressure up to 38 000 Pa) are described. These activities have led to the development of a reliable high-performance capillary loop concept (HPCPL), which is discussed in detail. Several loop configurations, mono/multi-evaporator, have been ground tested. The presented results of various tests clearly show the viability of this concept for future applications. Proposed flight demonstrations as well as potential applications conclude this paper. (authors) 7 refs.

  12. Reliability analysis of cluster-based ad-hoc networks

    Cook, Jason L. [Quality Engineering and System Assurance, Armament Research Development Engineering Center, Picatinny Arsenal, NJ (United States); Ramirez-Marquez, Jose Emmanuel [School of Systems and Enterprises, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States)]


    The mobile ad-hoc wireless network (MAWN) is a new and emerging network scheme that is being employed in a variety of applications. The MAWN varies from traditional networks because it is a self-forming and dynamic network. The MAWN is free of infrastructure and, as such, only the mobile nodes comprise the network. Pairs of nodes communicate either directly or through other nodes. To do so, each node acts, in turn, as a source, destination, and relay of messages. The virtue of a MAWN is the flexibility this provides; however, the challenge for reliability analyses is also brought about by this unique feature. The variability and volatility of the MAWN configuration makes typical reliability methods (e.g. reliability block diagram) inappropriate because no single structure or configuration represents all manifestations of a MAWN. For this reason, new methods are being developed to analyze the reliability of this new networking technology. New published methods adapt to this feature by treating the configuration probabilistically or by inclusion of embedded mobility models. This paper joins both methods together and expands upon these works by modifying the problem formulation to address the reliability analysis of a cluster-based MAWN. The cluster-based MAWN is deployed in applications with constraints on networking resources such as bandwidth and energy. This paper presents the problem's formulation, a discussion of applicable reliability metrics for the MAWN, and illustration of a Monte Carlo simulation method through the analysis of several example networks.
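    A minimal version of the Monte Carlo approach illustrated in the paper samples random edge failures and checks source-destination connectivity; the four-node topology and edge reliability below are hypothetical.

```python
import random

def two_terminal_reliability(edges, n_nodes, p_edge, src, dst,
                             trials=20000, seed=1):
    """Monte Carlo estimate of two-terminal reliability: each edge is
    independently 'up' with probability p_edge; return the fraction of
    sampled configurations in which src can still reach dst."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = [e for e in edges if rng.random() < p_edge]
        # Depth-first search over the surviving edges.
        adj = {i: [] for i in range(n_nodes)}
        for a, b in up:
            adj[a].append(b)
            adj[b].append(a)
        seen, stack = {src}, [src]
        while stack:
            node = stack.pop()
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        hits += dst in seen
    return hits / trials

# Hypothetical 4-node cluster with a redundant path between nodes 0 and 3.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]
print(round(two_terminal_reliability(edges, 4, p_edge=0.9, src=0, dst=3), 2))
```

    A configuration-per-sample scheme like this extends naturally to the paper's setting by drawing each sample from a mobility or cluster-formation model instead of a fixed edge list.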

  13. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Jin Zhu


    Full Text Available This paper investigates reliability analysis of wireless sensor networks whose topology is switching among possible connections which are governed by a Markovian chain. We give the quantized relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying Lyapunov method, sufficient conditions of network reliability are proposed for such topology switching networks with constant or varying data acquisition rate. With the conditions satisfied, the quantity of data transported over wireless network node will not exceed node capacity such that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, which may find its application in the fields of network design and topology control.

  14. New Mathematical Derivations Applicable to Safety and Reliability Analysis

    Cooper, J.A.; Ferson, S.


    Boolean logic expressions are often derived in safety and reliability analysis. Since the values of the operands are rarely exact, accounting for uncertainty with the tightest justifiable bounds is important. Accurate determination of result bounds is difficult when the inputs have constraints. One example of a constraint is that an uncertain variable that appears multiple times in a Boolean expression must always have the same value, although the value cannot be exactly specified. A solution for this repeated variable problem is demonstrated for two Boolean classes. The classes, termed functions with unate variables (including, but not limited to unate functions), and exclusive-or functions, frequently appear in Boolean equations for uncertain outcomes portrayed by logic trees (event trees and fault trees).
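
    A toy illustration of the repeated-variable problem (the expression and numbers are hypothetical, not from the paper): for the unate expression f = (a AND b) OR (a AND c) with independent events, treating the two occurrences of a as independent distorts the probability, while factoring a out handles the repetition consistently.

```python
def p_naive(a, b, c):
    # P(ab) + P(ac) - P(ab)P(ac): treats the two occurrences of 'a'
    # as independent events, producing an a*a*b*c cross term
    return a * b + a * c - a * a * b * c

def p_exact(a, b, c):
    # inclusion-exclusion with the repeated 'a' handled consistently:
    # P(ab or ac) = a*b + a*c - a*b*c = a * (b + c - b*c)
    return a * (b + c - b * c)

a, b, c = 0.3, 0.5, 0.5
exact = p_exact(a, b, c)   # ~0.225
naive = p_naive(a, b, c)   # ~0.2775, an overestimate

# because f is unate (monotone) in a, tight bounds for a in [0.2, 0.4]
# follow from evaluating the exact form at the interval endpoints
bounds = (p_exact(0.2, b, c), p_exact(0.4, b, c))  # ~ (0.15, 0.30)
```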

  15. Time-dependent reliability analysis and condition assessment of structures

    Ellingwood, B.R. [Johns Hopkins Univ., Baltimore, MD (United States)


    Structures generally play a passive role in assurance of safety in nuclear plant operation, but are important if the plant is to withstand the effect of extreme environmental or abnormal events. Relative to mechanical and electrical components, structural systems and components would be difficult and costly to replace. While the performance of steel or reinforced concrete structures in service generally has been very good, their strengths may deteriorate during an extended service life as a result of changes brought on by an aggressive environment, excessive loading, or accidental loading. Quantitative tools for condition assessment of aging structures can be developed using time-dependent structural reliability analysis methods. Such methods provide a framework for addressing the uncertainties attendant to aging in the decision process.

  16. Reliability analysis for the quench detection in the LHC machine

    Denz, R; Vergara-Fernández, A


    The Large Hadron Collider (LHC) will incorporate a large number of superconducting elements that require protection in case of a quench. Key elements in the quench protection system are the electronic quench detectors. Their reliability will have an important impact on the downtime as well as on the operational cost of the collider. The expected rates of both false and missed quenches have been computed for several redundant detection schemes. The developed model takes into account the maintainability of the system in order to optimise the frequency of the foreseen checks and to evaluate their influence on the performance of different detection topologies. Given the uncertainty in the failure rates of the components combined with the LHC tunnel environment, the study has been completed with a sensitivity analysis of the results. The chosen detection scheme and the maintainability strategy for each detector family are given.
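
    For a flavor of the trade-off such a study quantifies, the sketch below computes per-demand probabilities of a missed quench and of a spurious trigger for a hypothetical 2-out-of-3 voting scheme; the channel failure probabilities are made-up inputs, not LHC figures.

```python
from math import comb

def k_out_of_n(n, k, p):
    """P(at least k of n independent channels are in a given state,
    each with probability p of being in that state)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

q = 1e-2   # assumed per-demand probability that one detector misses a quench
f = 1e-4   # assumed per-demand probability that one detector triggers falsely

# 2-out-of-3 voting: a real quench is missed if at least 2 of 3 detectors
# miss it; a spurious beam dump requires at least 2 false triggers
p_missed = k_out_of_n(3, 2, q)   # 3*q^2*(1-q) + q^3 = 2.98e-4
p_false = k_out_of_n(3, 2, f)
```

    Redundancy with voting suppresses both failure modes at once, which is why the relative magnitudes of q and f drive the choice of detection topology.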

  17. System Reliability Analysis of Redundant Condition Monitoring Systems

    YI Pengxing; HU Youming; YANG Shuzi; WU Bo; CUI Feng


    The development and application of new reliability models and methods are presented to analyze the system reliability of complex condition monitoring systems. The methods include a method for analyzing the failure modes of a type of redundant condition monitoring system (RCMS) using a fault tree model, Markov modeling techniques for analyzing the system reliability of RCMS, and methods for estimating Markov model parameters. Furthermore, a computational case is investigated and conclusions from this case are summarized. Results show that the method proposed here is practical and valuable for designing condition monitoring systems and their maintenance.
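
    A minimal Markov-model sketch in the spirit of this record (the rates are illustrative, not the paper's estimates): steady-state availability of a single repairable monitoring channel, and of a duplicated (1-out-of-2) channel with independent failures and repairs.

```python
def availability(lam, mu):
    """Steady-state availability of a two-state Markov unit with
    constant failure rate lam and repair rate mu (same time units)."""
    return mu / (lam + mu)

lam = 1e-3   # assumed failures per hour
mu = 1e-1    # assumed repairs per hour

a_single = availability(lam, mu)            # one monitoring channel
a_redundant = 1 - (1 - a_single) ** 2       # 1-out-of-2 pair, independent units
```

    The closed form follows from balancing flow between the up and down states of the two-state chain; larger redundant configurations require solving the full generator matrix for its stationary distribution.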

  18. Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

    Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William


    This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision-making environment that is being sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).

  19. Distribution System Reliability Analysis for Smart Grid Applications

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face utilities and cost them tens of millions in repairs and losses. To address these reliability concerns, power utilities and interested parties have spent an extensive amount of time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection point between power providers and consumers, where most electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in conducting this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and to quantify their proper installation based on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of installing Distributed Generators (DGs) on the utility's distribution system and to measure the potential improvement in its reliability. The software used in this work is DISREL, an intelligent power distribution software package developed by General Reliability Co.
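
    The reliability indices named above are simple ratios over the outage history; a sketch with invented outage records (not the IEEE 34-node results):

```python
def distribution_indices(outages, n_customers):
    """SAIFI, SAIDI, CAIDI from a list of sustained interruptions.

    outages: list of (customers_interrupted, duration_hours) tuples
    n_customers: total customers served by the feeder
    """
    total_interruptions = sum(c for c, _ in outages)
    total_customer_hours = sum(c * d for c, d in outages)
    saifi = total_interruptions / n_customers      # interruptions/customer/yr
    saidi = total_customer_hours / n_customers     # hours/customer/yr
    caidi = saidi / saifi if saifi else 0.0        # hours per interruption
    return saifi, saidi, caidi

# hypothetical year of outages on a 1000-customer feeder
outages = [(200, 1.5), (50, 4.0), (1000, 0.5)]
saifi, saidi, caidi = distribution_indices(outages, 1000)
# saifi = 1.25, saidi = 1.0 h, caidi = 0.8 h
```

    EUE additionally requires the unserved load (kW) during each event, so it is omitted from this sketch.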

  20. Reliability Analysis of a Green Roof Under Different Storm Scenarios

    William, R. K.; Stillwell, A. S.


    Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof, compared to a conventional roof, under different storm scenarios. We used the 2D distributed, coupled surface water-groundwater model MIKE SHE to simulate the impact that a real green roof might have on runoff in different storm events. We then employed a multiple regression analysis to generate an algebraic demand model, which was input into the MATLAB-based reliability analysis model FERUM and used to calculate the probability of failure. The use of reliability analysis as part of a green infrastructure design code can provide insights into green roof weaknesses and areas for improvement. It also supports the design of codes that are more resilient than current standards and easily testable for failure. Finally, an understanding of the reliability of a single green roof module under different scenarios can support holistic testing of system reliability.
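
    The probability-of-failure computation at the core of each fragility-curve point can be sketched as follows; the limit state, distributions, and moments are illustrative stand-ins for the FERUM analysis, not values from the study.

```python
import math
import random

def pf_monte_carlo(mu_c, sd_c, mu_d, sd_d, trials=200000, seed=7):
    """P(capacity < demand) for independent normal capacity and demand."""
    rng = random.Random(seed)
    fails = sum(
        rng.gauss(mu_c, sd_c) < rng.gauss(mu_d, sd_d) for _ in range(trials)
    )
    return fails / trials

def pf_exact(mu_c, sd_c, mu_d, sd_d):
    """Closed form via the reliability index
    beta = (mu_C - mu_D) / sqrt(sd_C^2 + sd_D^2); Pf = Phi(-beta)."""
    beta = (mu_c - mu_d) / math.hypot(sd_c, sd_d)
    return 0.5 * math.erfc(beta / math.sqrt(2))

# e.g. roof retention capacity vs. storm runoff demand (made-up moments);
# repeating this over a range of storm intensities traces a fragility curve
est = pf_monte_carlo(10.0, 1.0, 7.0, 1.0)
ref = pf_exact(10.0, 1.0, 7.0, 1.0)
```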

  1. Semigroup Method for a Mathematical Model in Reliability Analysis

    Geni Gupur; LI Xue-zhi


    A system consisting of a reliable machine, an unreliable machine, and a storage buffer with infinitely many workpieces has been studied. The existence of a unique positive time-dependent solution of the model corresponding to the system has been obtained by using the C0-semigroup theory of linear operators in functional analysis.

  2. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard


    ... A complex timber structure with a large number of failure modes is modelled with only a few dominant failure modes. First, a component-based robustness analysis is performed based on the reliability indices of the remaining elements after the removal of selected critical elements. The robustness...

  3. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard


    This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness and a simplified mechanical system modelling of a timber truss system. A comp...

  4. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Adbel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.


    This report collects the work performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  5. Test-retest reliability of trunk accelerometric gait analysis

    Henriksen, Marius; Lund, Hans; Moe-Nilssen, R


    The purpose of this study was to determine the test-retest reliability of a trunk accelerometric gait analysis in healthy subjects. Accelerations were measured during walking using a triaxial accelerometer mounted on the lumbar spine of the subjects. Six men and 14 women (mean age 35.2; range 18...

  6. Reliability analysis of repairable systems using system dynamics modeling and simulation

    Srinivasa Rao, M.; Naikan, V. N. A.


    The study and analysis of repairable standby systems is an important topic in reliability. Analytical techniques become very complicated and unrealistic, especially for modern complex systems. There have been attempts in the literature to evolve more realistic techniques using a simulation approach for the reliability analysis of systems. This paper proposes a hybrid approach called the Markov system dynamics (MSD) approach, which combines the Markov approach with system dynamics simulation for reliability analysis and for studying the dynamic behavior of systems. This approach has the advantages of both the Markov and system dynamics methodologies. The proposed framework is illustrated for a standby system with repair. The results of the simulation, when compared with those obtained by traditional Markov analysis, clearly validate the MSD approach as an alternative approach for reliability analysis.

  7. Human Reliability Analysis for Digital Human-Machine Interfaces

    Ronald L. Boring


    This paper addresses the fact that existing human reliability analysis (HRA) methods do not provide guidance on digital human-machine interfaces (HMIs). Digital HMIs are becoming ubiquitous in nuclear power operations, whether through control room modernization or new-build control rooms. Legacy analog technologies like instrumentation and control (I&C) systems are costly to support, and vendors no longer develop or support analog technology, which is considered technologically obsolete. Yet, despite the inevitability of digital HMI, no current HRA method provides guidance on how to treat human reliability considerations for digital technologies.

  8. Modelling application for cognitive reliability and error analysis method

    Fabio De Felice


    The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the automatic factory model has led to partially automated configurations of production systems. In this scenario, the centrality and responsibility of the role entrusted to human operators are heightened, because the role requires problem-solving and decision-making ability. Thus, the human operator is the core of a cognitive process that leads to decisions, influencing the safety of the whole system as a function of his or her reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  9. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    Nikulin, M; Mesbah, M; Limnios, N


    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  10. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Swain, A.D.


    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

  11. Strength Reliability Analysis of Turbine Blade Using Surrogate Models

    Wei Duan


    There are many stochastic parameters that affect the reliability of steam turbine blade performance in practical operation. In order to improve the reliability of blade design, it is necessary to take these stochastic parameters into account. In this study, a variable cross-section twisted blade is investigated, and geometrical parameters, material parameters, and load parameters are considered as random variables. A reliability analysis method combining the Finite Element Method (FEM), a surrogate model, and Monte Carlo Simulation (MCS) is applied to the blade reliability analysis. Based on the blade finite element parametric model and the experimental design, two kinds of surrogate models, Polynomial Response Surface (PRS) and Artificial Neural Network (ANN), are applied to construct approximate analytical expressions between the blade responses (maximum stress and deflection) and the random input variables, which act as a surrogate of the finite element solver to drastically reduce the number of simulations required. The surrogate is then used for most of the samples needed in the Monte Carlo method, and the statistical parameters and cumulative distribution functions of the maximum stress and deflection are obtained by Monte Carlo simulation. Finally, a probabilistic sensitivity analysis, which combines the magnitude of the gradient and the width of the scatter range of the random input variables, is applied to evaluate how much the maximum stress and deflection of the blade are influenced by the random nature of the input parameters.
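
    The surrogate-plus-MCS workflow can be sketched in one dimension; the "stress" function, design points, and stress limit below are hypothetical, and an exact quadratic interpolant stands in for the fitted PRS.

```python
import random

def expensive_stress(x):
    """Stand-in for a finite element solve: maximum stress as a function
    of one standardized random input x (hypothetical response)."""
    return 100.0 + 20.0 * x + 5.0 * x * x

def quadratic_surrogate(xs, ys):
    """Lagrange interpolation through 3 design points: a minimal PRS."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def s(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return s

# experimental design: only 3 runs of the "expensive" solver
xs = (-2.0, 0.0, 2.0)
surrogate = quadratic_surrogate(xs, tuple(expensive_stress(x) for x in xs))

# Monte Carlo on the cheap surrogate: P(max stress > 140) for x ~ N(0, 1)
rng = random.Random(3)
trials = 100000
pf = sum(surrogate(rng.gauss(0, 1)) > 140.0 for _ in range(trials)) / trials
```

    Because the true response here is itself quadratic, the surrogate is exact; for a real blade model the PRS or ANN fit introduces an approximation error that must be checked against held-out FEM runs.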

  12. Identifying Sources of Difference in Reliability in Content Analysis

    Elizabeth Murphy


    This paper reports on a case study which identifies and illustrates sources of difference in inter-coder agreement, in relation to reliability, in a context of quantitative content analysis of a transcript of an online asynchronous discussion (OAD). Transcripts of 10 students in a month-long online asynchronous discussion were coded by two coders using an instrument with two categories, five processes, and 19 indicators of Problem Formulation and Resolution (PFR). Sources of difference were identified in relation to coders, tasks, and students. Reliability values were calculated at the levels of categories, processes, and indicators. At the most detailed level of coding, on the basis of the indicator, findings revealed that the overall level of reliability between coders was .591 when measured with Cohen's kappa. The difference between tasks at the same level ranged from .349 to .664, and the difference between participants ranged from .390 to .907. Implications for training and research are discussed.
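
    Cohen's kappa, as used above, corrects the observed agreement for the agreement expected by chance; a minimal implementation with toy label sequences (not the study's data):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(codes_a) == len(codes_b) > 0
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    counts_a, counts_b = Counter(codes_a), Counter(codes_b)
    # expected agreement if coders labelled independently at their own rates
    p_exp = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# toy example: 4 coded units, 3 agreements
kappa = cohens_kappa(["P1", "P1", "R2", "R2"], ["P1", "R2", "R2", "R2"])
# p_obs = 0.75, p_exp = 0.5, so kappa = 0.5
```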

  13. Reliability Analysis of Free Jet Scour Below Dams

    Chuanqi Li


    Current formulas for calculating scour depth below a free overfall are mostly deterministic in nature and do not adequately consider the uncertainties of the various scouring parameters. A reliability-based assessment of scour, taking into account the uncertainties of the parameters and coefficients involved, should therefore be performed. This paper studies the reliability of a dam foundation under the threat of scour. A model for calculating the reliability of scour and estimating the probability of failure of the dam foundation subjected to scour is presented. The Maximum Entropy Method is applied to construct the probability density function (PDF) of the performance function subject to the moment constraints. Monte Carlo simulation (MCS) is applied for the uncertainty analysis. An example is considered, the reliability of its scour is computed, and the influence of various random variables on the probability of failure is analyzed.

  14. Modeling and Analysis of Component Faults and Reliability

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter;


    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  15. Reliability analysis of two unit parallel repairable industrial system

    Mohit Kumar Kakkar


    The aim of this work is to present a reliability and profit analysis of a two-dissimilar-unit parallel system, under the assumptions that the operative unit cannot fail after post-repair inspection and replacement and that there is only one repair facility. Failure and repair times of each unit are assumed to be uncorrelated. Using the regenerative point technique, various reliability characteristics are obtained which are useful to system designers and industrial managers. The graphical behaviour of the mean time to system failure (MTSF) and the profit function has also been studied. In this paper, some important reliability characteristics of a two non-identical unit standby system model with repair, inspection, and post-repair are obtained using the regenerative point technique.

  16. Analysis of the Reliability of the "Alternator- Alternator Belt" System

    Ivan Mavrin


    Before starting and also during the exploitation of various systems, it is very important to know how the system and its parts will behave during operation regarding breakdowns, i.e. failures. It is possible to predict the service behaviour of a system by determining the functions of reliability, as well as the frequency and intensity of failures. The paper considers the theoretical basics of the functions of reliability, frequency and intensity of failures for the two main approaches: one using 6 equal intervals and the other 13 unequal intervals, for a concrete case taken from practice. The reliability of the "alternator - alternator belt" system installed in buses has been analysed according to the empirical data on failures. The empirical data on failures provide empirical functions of reliability and of frequency and intensity of failures, which are presented in tables and graphically. The first analysis, performed by dividing the mean time between failures into 6 equal time intervals, has given forms of the empirical functions of failure frequency and intensity that approximately correspond to the typical functions. By dividing the failure phase into 13 unequal intervals with two failures in each interval, these functions indicate explicit transitions from the early failure interval into the random failure interval, i.e. into the ageing interval. The functions thus obtained are more accurate and represent a better solution for the given case. In order to estimate the reliability of these systems with greater accuracy, a greater number of failures needs to be analysed.

  17. Reliability and maintainability analysis of electrical system of drum shearers

    SEYED Hadi Hoseinie; MOHAMMAD Ataei; REZA Khalokakaie; UDAY Kumar


    The reliability and maintainability of the electrical system of the drum shearer at the Parvade.l Coal Mine in central Iran were analyzed. The maintenance and failure data were collected during 19 months of shearer operation. According to trend and serial correlation tests, the data were independent and identically distributed (iid), and therefore statistical techniques were used for modeling. The data analysis shows that the time between failures (TBF) and time to repair (TTR) data obey the lognormal and three-parameter Weibull distributions, respectively. Reliability-based preventive maintenance time intervals for the electrical system of the drum shearer were calculated from the reliability plot. The reliability-based maintenance intervals for the 90%, 80%, 70% and 50% reliability levels are 9.91, 17.96, 27.56 and 56.1 h, respectively. The calculations also show that the time to repair of this system varies in the range 0.17-4 h, with a mean time to repair (MTTR) of 1.002 h. There is an 80% chance that a repair of the shearer's electrical system at the Parvade.l mine will be accomplished within 1.45 h.
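
    Given a fitted three-parameter Weibull TBF distribution, the reliability-based maintenance interval is simply the time at which reliability drops to the target level. The shape, scale, and location values below are placeholder assumptions, not the fitted Parvade.l parameters.

```python
from math import log

def pm_interval(r_target, shape, scale, loc=0.0):
    """Time t at which Weibull reliability reaches r_target, inverting
    R(t) = exp(-(((t - loc) / scale)) ** shape) for t >= loc."""
    return loc + scale * (-log(r_target)) ** (1.0 / shape)

# placeholder three-parameter Weibull fit for the TBF data (hours)
shape, scale, loc = 1.3, 60.0, 2.0
intervals = {r: pm_interval(r, shape, scale, loc) for r in (0.9, 0.8, 0.7, 0.5)}
for r, t in intervals.items():
    print(f"R = {r:.0%}: maintain every {t:.1f} h")
```

    As in the paper, lowering the target reliability level lengthens the allowable interval between preventive maintenance actions.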

  18. Semantic Web for Reliable Citation Analysis in Scholarly Publishing

    Ruben Tous


    Analysis of the impact of scholarly artifacts is constrained by current unreliable practices in cross-referencing, citation discovery, and citation indexing and analysis, which have not kept pace with the technological advances occurring in areas such as knowledge management and security. Because citation analysis has become the primary component in scholarly impact factor calculation, and considering the relevance of this metric within both the scholarly publishing value chain and (especially importantly) the professional curriculum evaluation of scholarly professionals, we argue that current practices need to be revised. This paper describes a reference architecture that aims to provide openness and reliability to the citation-tracking lifecycle. The solution relies on the use of digitally signed semantic metadata in the different stages of the scholarly publishing workflow, in such a manner that authors, publishers, repositories, and citation-analysis systems will have access to independent, reliable evidence that is resistant to forgery, impersonation, and repudiation. As far as we know, this is the first paper to combine Semantic Web technologies and public-key cryptography to achieve reliable citation analysis in scholarly publishing.

  19. Reliability test and failure analysis of high power LED packages*

    Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng


    A new type of application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared. It is found that the test methods and failure criteria are quite different, and rapid reliability assessment standards are urgently needed by the LED industry. An 85 °C/85% RH test at 700 mA is used to test our LED modules alongside those of three other vendors for 1000 h: our modules show no visible degradation in optical performance, while the modules of two of the other vendors show significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT, and optical microscopy are used on the LED packages. Failure mechanisms such as delaminations and cracks are detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for the failure analysis and for the reliability design of the LED packaging. One example is used to show that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing.

  20. Reliable Classification of Geologic Surfaces Using Texture Analysis

    Foil, G.; Howarth, D.; Abbey, W. J.; Bekker, D. L.; Castano, R.; Thompson, D. R.; Wagstaff, K.


    Communication delays and bandwidth constraints are major obstacles for remote exploration spacecraft. Due to such restrictions, spacecraft could make use of onboard science data analysis to maximize scientific gain, through capabilities such as the generation of bandwidth-efficient representative maps of scenes, autonomous instrument targeting to exploit targets of opportunity between communications, and downlink prioritization to ensure fast delivery of tactically-important data. Of particular importance to remote exploration is the precision of such methods and their ability to reliably reproduce consistent results in novel environments. Spacecraft resources are highly oversubscribed, so any onboard data analysis must provide a high degree of confidence in its assessment. The TextureCam project is constructing a "smart camera" that can analyze surface images to autonomously identify scientifically interesting targets and direct narrow field-of-view instruments. The TextureCam instrument incorporates onboard scene interpretation and mapping to assist these autonomous science activities. Computer vision algorithms map scenes such as those encountered during rover traverses. The approach, based on a machine learning strategy, trains a statistical model to recognize different geologic surface types and then classifies every pixel in a new scene according to these categories. We describe three methods for increasing the precision of the TextureCam instrument. The first uses ancillary data to segment challenging scenes into smaller regions having homogeneous properties. These subproblems are individually easier to solve, preventing uncertainty in one region from contaminating those that can be confidently classified. The second involves a Bayesian approach that maximizes the likelihood of correct classifications by abstaining from ambiguous ones. We evaluate these two techniques on a set of images acquired during field expeditions in the Mojave Desert. Finally, the

  1. Fatigue damage reliability analysis for Nanjing Yangtze river bridge using structural health monitoring data

    HE Xu-hui; CHEN Zheng-qing; YU Zhi-wu; HUANG Fang-lin


    To evaluate the fatigue damage reliability of critical members of the Nanjing Yangtze river bridge, the corresponding expressions for calculating structural fatigue damage reliability were derived according to the stress-number (S-N) curve and Miner's rule. Fatigue damage reliability analysis of some critical members of the bridge was carried out using the strain-time histories measured by the structural health monitoring system of the bridge. The corresponding stress spectra were obtained by the real-time rain-flow counting method. Fatigue damage results were calculated by the reliability method at different reliability levels and compared with Miner's rule. The results show that the fatigue damage of critical members of the Nanjing Yangtze river bridge is very small due to its low live-load stress level.
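
    The Miner's-rule accumulation behind such results can be sketched for a rain-flow counted stress spectrum; the S-N curve constants and cycle counts here are illustrative, not the bridge's measured spectrum.

```python
def miner_damage(spectrum, m, C):
    """Cumulative fatigue damage D = sum(n_i / N_i), with the S-N curve
    N_i = C / S_i**m giving cycles to failure at stress range S_i (MPa)."""
    return sum(n / (C / s**m) for s, n in spectrum)

# hypothetical one-year rain-flow spectrum: (stress range MPa, cycles counted)
spectrum = [(10.0, 2_000_000), (20.0, 300_000), (40.0, 10_000)]
m, C = 3.0, 2.0e12            # assumed detail-category constants

d_year = miner_damage(spectrum, m, C)   # damage accumulated in one year
life_years = 1.0 / d_year               # years until D reaches 1 (Miner)
```

    A small annual damage increment, as found for the bridge, translates directly into a fatigue life far longer than the monitoring period; the reliability formulation then treats the spectrum and S-N constants as random rather than fixed.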

  2. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.


    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measures related to safety probability. This study focuses on two commonly used approximate probability integration methods, i.e., the Reliability Index Approach (RIA) and the Performance Measure Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
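
    The MPP search in RIA can be sketched with the classic Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration in standard normal space; the limit state below is a made-up linear example whose reliability index has a closed form for checking.

```python
import math

def hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    """HL-RF search for the most probable point of g(u) = 0 in standard
    normal space; returns (beta, u_star)."""
    u = list(u0)
    for _ in range(max_iter):
        gv = g(u)
        gr = grad_g(u)
        norm2 = sum(c * c for c in gr)
        # standard HL-RF update: u_new = ((grad.u - g) / |grad|^2) * grad
        scale = (sum(c * x for c, x in zip(gr, u)) - gv) / norm2
        u_new = [scale * c for c in gr]
        if max(abs(a - b) for a, b in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(x * x for x in u))
    return beta, u

# linear limit state g(u) = 3 - u1 - u2: exact beta = 3 / sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad = lambda u: [-1.0, -1.0]
beta, u_star = hlrf(g, grad, [0.0, 0.0])
```

    The sensitivity derivations in the paper differentiate exactly such optimal solutions (beta and u_star) with respect to the distribution and design parameters.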

  3. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Barberis Negra, Nicola; Bak-Jensen, Birgitte; Holmstrøm, O.


Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. Results are used to analyse how each mentioned factor influences the assessment, and why and when they should be included in the model....

  5. Fatigue Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune


In this paper, a fatigue reliability analysis of a Mono-tower platform is presented. The failure mode, fatigue failure in the butt welds, is investigated with two different models: one with the fatigue strength expressed through SN relations, the other with the fatigue strength expressed thro...... of the natural period, damping ratio, current, stress spectrum and parameters describing the fatigue strength. Further, soil damping is shown to be significant for the Mono-tower.

  6. Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm

    Raj Kumar


Full Text Available In this paper, we illustrate the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood-based inferential procedures, both classical and Bayesian. The quasi-Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov Chain Monte Carlo (MCMC) simulation method in OpenBUGS (an established software package for Bayesian analysis using Markov Chain Monte Carlo methods). R functions are developed to study the statistical properties, model validation and comparison tools of the model, and to analyse the output of the MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated, and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.

  7. Reliability analysis method applied in slope stability: slope prediction and forecast on stability analysis

    Wenjuan ZHANG; Li CHEN; Ning QU; Hai'an LIANG


Landslides are one kind of geologic hazard that happens all over the world. They bring huge losses of human life and property, so research on them is very important. This study focused on combining single and regional landslide analysis, and the traditional slope stability analysis method with the reliability analysis method. Methods for slope prediction and for reliability analysis are also discussed.

  8. Reliability analysis based on the losses from failures.

    Todinov, M T


The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed.
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the

  9. A Sensitivity Analysis on Component Reliability from Fatigue Life Computations


AD-A247 430; MTL TR 92-5. Donald M. Neal, William T. Matthews, Mark G. Vangel, and Trevor Rudalevige.


    彭世济; 卢明银; 张达贤


It is stipulated in the Chinese national document "The Economical Appraisal Methods for Construction Projects" that dynamic analysis should dominate project economic appraisal methods. This paper sets up a dynamic investment forecast model for the Yuanbaoshan Surface Coal Mine. Based on this model, investment reliability is analysed using simulation and analytic methods, and the probability that the designed internal rate of return can reach 8.4% is also studied from an economic point of view.

  11. Reliability analysis of the control system of large-scale vertical mixing equipment


The control system of the vertical mixing equipment is a concentrated distributed monitoring system (CDMS). A reliability analysis model was built and analysed based on reliability modeling theories such as graph theory, Markov processes, and redundancy theory. Analysis and operational results show that the control system can meet all technical requirements for high-energy composite solid propellant manufacturing. The reliability performance of the control system can be considerably improved by adopting a control strategy that combines hot-spared redundancy of the primary system with cold-spared redundancy of the emergency one. It can also be improved by adopting a redundancy strategy or by improving the quality of each component and cable of the system.

  12. Analysis and Reliability Performance Comparison of Different Facial Image Features

    J. Madhavan


Full Text Available This study performs reliability analysis on different facial features, with weighted retrieval accuracy, on facial databases of increasing size. Many methods in the existing literature are analyzed on constant-size facial databases, but little work has been carried out to study performance in terms of reliability, or how a method performs as the database grows. In this study, certain feature extraction methods are analyzed on the regular performance measure, and the performance measures are also modified to fit real-time requirements by giving weights to the closer matches. Four facial feature extraction methods are evaluated: DWT with PCA, LWT with PCA, HMM with SVD, and Gabor wavelet with HMM. The reliability of these methods is analyzed and reported; among them, Gabor wavelet with HMM gives higher reliability than the other three. Experiments are carried out to evaluate the proposed approach on the Olivetti Research Laboratory (ORL) face database.




RHIC has been successfully operated for 5 years as a collider for different species, ranging from heavy ions, including gold and copper, to polarized protons. We present a critical analysis of reliability data for RHIC that not only identifies the principal factors limiting availability but also evaluates critical choices made at design time and assesses their impact on present machine performance. RHIC availability data are typical when compared to similar high-energy colliders. The critical analysis of operations data is the basis for studies and plans to improve RHIC machine availability beyond the 50-60% typical of high-energy colliders.

  14. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    Taheriyoun, Masoud; Moradinejad, Saber


The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability of the Tehran West Town wastewater treatment plant was studied. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analysis uses minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence; mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
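The two evaluation routes named here, minimal cut sets and Monte Carlo simulation, can be sketched on a toy fault tree; the basic events and probabilities below are invented, not data from the Tehran plant.

```python
import math
import random

# Toy fault tree: the top event occurs if every basic event in any
# minimal cut set has occurred. Events/probabilities are illustrative.
basic = {"operator_error": 0.05, "physical_damage": 0.01, "design_problem": 0.02}
cut_sets = [{"operator_error"}, {"physical_damage", "design_problem"}]

def top_event(state):
    """True if any minimal cut set has all of its basic events failed."""
    return any(all(state[e] for e in cs) for cs in cut_sets)

def mc_top_probability(n=200_000, seed=1):
    """Monte Carlo estimate of the top-event probability."""
    rng = random.Random(seed)
    hits = sum(top_event({e: rng.random() < p for e, p in basic.items()})
               for _ in range(n))
    return hits / n

# Rare-event (sum of cut-set probabilities) upper bound vs. simulation:
bound = sum(math.prod(basic[e] for e in cs) for cs in cut_sets)
est = mc_top_probability()
print(f"rare-event bound = {bound:.5f}, MC estimate = {est:.5f}")
```

With small basic-event probabilities the cut-set sum and the simulation agree closely; real studies additionally rank cut sets by importance to find the dominant failure paths.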

  15. Reliability of Foundation Pile Based on Settlement and a Parameter Sensitivity Analysis

    Shujun Zhang; Luo Zhong; Zhijun Xu


Based on an uncertainty analysis of the settlement calculation model, a formula for the reliability index of a foundation pile is derived. Using this formula, the influence on reliability of the coefficient of variation of the calculated settlement at the pile head, the coefficient of variation of the permissible settlement limit, the coefficient of variation of the measured settlement, the safety coefficient, and the mean value of the calculation model coefficient is analyzed. The results indicate that (1) hig...

  16. Mutation Analysis Approach to Develop Reliable Object-Oriented Software

    Monalisa Sarma


Full Text Available In general, modern programs are large and complex, and it is essential that they be highly reliable. To support the development of highly reliable software, the Java programming language provides a rich set of exceptions and exception handling mechanisms, which are intended to help developers build robust programs. Given a program with exception handling constructs, effective testing requires detecting whether all possible exceptions are raised and caught. However, complex exception handling constructs make it tedious to trace which exceptions are handled where, and which are passed on. In this paper, we address this problem and propose a mutation analysis approach to developing reliable object-oriented programs. We apply a number of mutation operators to create a large set of mutant programs with different types of faults, and then generate test cases and test data to uncover exception-related faults. The test suite so obtained is applied to the mutant programs, measuring the mutation score and hence verifying the effectiveness of the test suite. We have tested our approach on a number of case studies to substantiate the efficacy of the proposed mutation analysis technique.

  17. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method

    XI Jia-mi; YANG Geng-she


The advantages of an improved Monte-Carlo method, and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability, are discussed. On the basis of deterministic parsing of the tunnel surrounding rock, a method for computing the reliability of surrounding rock stability was derived from the improved Monte-Carlo method. The computing method considers the randomness of the related parameters and therefore satisfies the correlations among them. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific means of discriminating and checking surrounding rock stability.
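The improved Monte-Carlo procedure itself is not given in the record, but the basic Monte-Carlo reliability estimate it builds on can be sketched as follows; the normal capacity and load parameters are invented for illustration, not taken from the tunnel study.

```python
import random

# Sketch of a Monte-Carlo reliability estimate for a stability problem:
# failure when the factor of safety FS = R/S drops below 1, i.e. when the
# resisting capacity R falls below the driving load S. Parameters are
# illustrative placeholders.
def mc_stability_reliability(n=100_000, seed=7):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        R = rng.gauss(2.0, 0.3)   # resisting capacity (random parameter)
        S = rng.gauss(1.2, 0.2)   # driving load (random parameter)
        if R < S:                 # FS < 1
            failures += 1
    return 1.0 - failures / n

rel = mc_stability_reliability()
print(f"estimated reliability = {rel:.4f}")
```

Correlated parameters, as the abstract emphasizes, would be handled by sampling from a joint distribution (e.g. via a Cholesky factor of the covariance matrix) instead of independent draws.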

  18. Reliability Analysis of Penetration Systems Using Nondeterministic Methods



    Device penetration into media such as metal and soil is an application of some engineering interest. Often, these devices contain internal components and it is of paramount importance that all significant components survive the severe environment that accompanies the penetration event. In addition, the system must be robust to perturbations in its operating environment, some of which exhibit behavior which can only be quantified to within some level of uncertainty. In the analysis discussed herein, methods to address the reliability of internal components for a specific application system are discussed. The shock response spectrum (SRS) is utilized in conjunction with the Advanced Mean Value (AMV) and Response Surface methods to make probabilistic statements regarding the predicted reliability of internal components. Monte Carlo simulation methods are also explored.

  19. Issues in benchmarking human reliability analysis methods : a literature review.

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)


    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  20. Methodological Approach for Performing Human Reliability and Error Analysis in Railway Transportation System

    Fabio De Felice


Full Text Available Today, billions of dollars are being spent annually worldwide to develop, manufacture, and operate transportation systems such as trains, ships, aircraft, and motor vehicles. Around 70 to 90 percent of transportation crashes are, directly or indirectly, the result of human error. In fact, with the development of technology, system reliability has increased dramatically during the past decades, while human reliability has remained unchanged over the same period. Accordingly, human error is now considered the most significant source of accidents or incidents in safety-critical systems. The aim of this paper is to propose a methodological approach to improve transportation system reliability, in particular for railway transportation. The methodology presented is based on Failure Modes, Effects and Criticality Analysis (FMECA) and Human Reliability Analysis (HRA).

  1. A Reliability-Based Analysis of Bicyclist Red-Light Running Behavior at Urban Intersections

    Mei Huan


Full Text Available This paper describes the red-light running behavior of bicyclists at urban intersections based on a reliability analysis approach. Bicyclists' crossing behavior was collected by video recording. Four proportional hazard models, using the Cox, exponential, Weibull, and Gompertz distributions, were proposed to analyze the covariate effects on safety crossing reliability. The influential variables include personal characteristics, movement information, and situation factors. The results indicate that the Cox hazard model gives the best description of bicyclists' red-light running behavior. Bicyclists' safety crossing reliabilities decrease as their waiting times increase. About 15.5% of bicyclists have negligible waiting times; they are at high risk of red-light running and have very low safety crossing reliabilities. The proposed reliability models can capture the covariates' effects on bicyclists' crossing behavior at signalized intersections. Both personal characteristics and traffic conditions have significant effects on bicyclists' safety crossing reliability. A bicyclist is more likely to have low safety crossing reliability and high violation risk when more riders are crossing against the red light and when they wait closer to the motorized lane. These findings provide valuable insights into understanding bicyclists' violation behavior, and their implications for assessing bicyclists' safety crossing reliability are discussed.
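As a sketch of how such parametric hazard models express safety crossing reliability as a function of waiting time, here is a Weibull survival curve; the scale and shape parameters are invented for illustration (the study's fitted coefficients are not reproduced in the record).

```python
import math

# Weibull survival (reliability) function R(t) = exp(-(t/lam)^k):
# the probability a bicyclist is still waiting safely at time t.
# lam (scale, seconds) and k (shape) are illustrative placeholders.
def crossing_reliability(t, lam=60.0, k=1.5):
    return math.exp(-((t / lam) ** k))

for t in (0, 30, 60, 90):   # waiting time in seconds
    print(t, round(crossing_reliability(t), 3))
```

A proportional hazards model scales the baseline hazard by exp(β·x) for covariates x (rider characteristics, traffic conditions), which shifts this curve down for higher-risk groups.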

  2. Reliability and risk analysis data base development: an historical perspective

    Fragola, Joseph R


    Collection of empirical data and data base development for use in the prediction of the probability of future events has a long history. Dating back at least to the 17th century, safe passage events and mortality events were collected and analyzed to uncover prospective underlying classes and associated class attributes. Tabulations of these developed classes and associated attributes formed the underwriting basis for the fledgling insurance industry. Much earlier, master masons and architects used design rules of thumb to capture the experience of the ages and thereby produce structures of incredible longevity and reliability (Antona, E., Fragola, J. and Galvagni, R. Risk based decision analysis in design. Fourth SRA Europe Conference Proceedings, Rome, Italy, 18-20 October 1993). These rules served so well in producing robust designs that it was not until almost the 19th century that the analysis (Charlton, T.M., A History Of Theory Of Structures In The 19th Century, Cambridge University Press, Cambridge, UK, 1982) of masonry voussoir arches, begun by Galileo some two centuries earlier (Galilei, G. Discorsi e dimostrazioni mathematiche intorno a due nuove science, (Discourses and mathematical demonstrations concerning two new sciences, Leiden, The Netherlands, 1638), was placed on a sound scientific basis. Still, with the introduction of new materials (such as wrought iron and steel) and the lack of theoretical knowledge and computational facilities, approximate methods of structural design abounded well into the second half of the 20th century. To this day structural designers account for material variations and gaps in theoretical knowledge by employing factors of safety (Benvenuto, E., An Introduction to the History of Structural Mechanics, Part II: Vaulted Structures and Elastic Systems, Springer-Verlag, NY, 1991) or codes of practice (ASME Boiler and Pressure Vessel Code, ASME, New York) originally developed in the 19th century (Antona, E., Fragola, J. 
and

  3. Test-Retest Reliability and Validity Results of the Youth Physical Activity Supports Questionnaire

    Sandy Slater


Full Text Available As youth obesity rates remain at unacceptably high levels, particularly across underserved populations, the promotion of physical activity has become a focus of youth obesity prevention across the United States. The purpose of this study was therefore to develop and test the reliability and validity of a self-reported questionnaire on home, school, and neighborhood physical activity environments for youth located in low-income urban minority neighborhoods and rural areas. Third-, fourth-, and fifth-grade students and their parents were recruited from six purposively selected elementary schools (three urban and three rural). A total of 205 parent/child dyads completed two waves of a 160-item take-home survey. Test-retest reliability was calculated for the student survey, and validity was determined through a combination of parental and school administrator responses and environmental audits. The majority of measures had good reliability (90%) and validity (74%; defined as ≥70% agreement). These measures collected information on the presence of electronic and play equipment in youth participants' bedrooms and homes and outdoor play equipment at schools, as well as who youth are active with and what people close to them think about being active. Measures that consistently had poor reliability and validity (≤70% agreement) were the weekly activities youth participated in and household rules. Principal components analysis was also used to identify 11 sub-scales. This survey can be used to help identify opportunities and develop strategies to encourage underserved youth to be more physically active.

  4. Application of FTA Method to Reliability Analysis of Vacuum Resin Shot Dosing Equipment


Faults of the vacuum resin shot dosing equipment are studied systematically, and the fault tree of the system is constructed using the fault tree analysis (FTA) method. Qualitative and quantitative analyses of the tree are then carried out, and according to the results of the analyses, measures to improve the system are worked out and implemented. As a result, the reliability of the equipment is greatly enhanced.


    Dustin Lawrence


    Full Text Available The purpose of this study was to inform decision makers at state and local levels, as well as property owners about the amount of water that can be supplied by rainwater harvesting systems in Texas so that it may be included in any future planning. Reliability of a rainwater tank is important because people want to know that a source of water can be depended on. Performance analyses were conducted on rainwater harvesting tanks for three Texas cities under different rainfall conditions and multiple scenarios to demonstrate the importance of optimizing rainwater tank design. Reliability curves were produced and reflect the percentage of days in a year that water can be supplied by a tank. Operational thresholds were reached in all scenarios and mark the point at which reliability increases by only 2% or less with an increase in tank size. A payback period analysis was conducted on tank sizes to estimate the amount of time it would take to recoup the cost of installing a rainwater harvesting system.
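The reliability measure described, the percentage of days a tank can supply the demand, can be sketched with a simple daily water balance; the roof area, tank size, daily demand, and synthetic rainfall series below are illustrative placeholders, not Texas data.

```python
# Minimal daily water-balance sketch for rainwater-tank reliability:
# the fraction of days on which demand is fully met. All inputs are
# illustrative; real analyses use measured daily rainfall records.
def tank_reliability(rain_mm, roof_m2, tank_l, demand_l):
    storage, met = 0.0, 0
    for mm in rain_mm:
        inflow = mm * roof_m2                 # 1 mm on 1 m^2 = 1 litre
        storage = min(storage + inflow, tank_l)  # spill above tank capacity
        if storage >= demand_l:
            storage -= demand_l
            met += 1
        else:
            storage = 0.0                     # partial supply; day not met
    return met / len(rain_mm)

rain = [0, 0, 12, 0, 5, 0, 0, 20, 0, 0] * 36  # synthetic 360-day series (mm)
r = tank_reliability(rain, roof_m2=100.0, tank_l=5000.0, demand_l=200.0)
print(f"reliability = {r:.3f}")
```

Sweeping `tank_l` over a range of sizes and plotting `r` against it reproduces the kind of reliability curve the study describes, including the threshold beyond which larger tanks add only 2% or less.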

  6. A Research Roadmap for Computation-Based Human Reliability Analysis

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  7. Report on the analysis of field data relating to the reliability of solar hot water systems.

    Menicucci, David F. (Building Specialists, Inc., Albuquerque, NM)


Utilities are overseeing the installation of thousands of solar hot water (SHW) systems. Utility planners have begun to ask for quantitative measures of the expected lifetimes of these systems so that they can properly forecast their loads. This report, which augments a 2009 reliability analysis effort by Sandia National Laboratories (SNL), addresses this need. Additional reliability data have been collected, added to the existing database, and analyzed, and the results are presented. Additionally, formal reliability theory is described, including the bathtub curve, the most common model for characterizing the lifetime reliability of systems and for predicting failures in the field. Reliability theory is used to assess the SNL reliability database. This assessment shows that the database is heavily weighted with data that describe the reliability of SHW systems early in their lives, during the warranty period, but contains few measured data describing the ends of SHW systems' lives. End-of-life data are the most critical for defining the reliability of SHW systems sufficiently to answer the questions that the utilities pose. Several ideas are presented for collecting the required data, including photometric analysis of aerial photographs of installed collectors, statistical and neural network analysis of energy bills from solar homes, and the development of simple algorithms to allow conventional SHW controllers to announce system failures and record the details of the event, similar to how aircraft black box recorders perform. Some information is also presented about public expectations for the longevity of a SHW system, information that is useful in developing reliability goals.

  8. Fifty Years of THERP and Human Reliability Analysis

    Ronald L. Boring


    In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of the NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the most well known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament of its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain’s pioneering work.

  9. Reliability and Robustness Analysis of the Masinga Dam under Uncertainty

    Hayden Postle-Floyd


    Full Text Available Kenya’s water abstraction must meet the projected growth in municipal and irrigation demand by the end of 2030 in order to achieve the country’s industrial and economic development plan. The Masinga dam, on the Tana River, is the key to meeting this goal to satisfy the growing demands whilst also continuing to provide hydroelectric power generation. This study quantitatively assesses the reliability and robustness of the Masinga dam system under uncertain future supply and demand using probabilistic climate and population projections, and examines how long-term planning may improve the longevity of the dam. River flow and demand projections are used alongside each other as inputs to the dam system simulation model linked to an optimisation engine to maximise water availability. Water availability after demand satisfaction is assessed for future years, and the projected reliability of the system is calculated for selected years. The analysis shows that maximising power generation on a short-term year-by-year basis achieves 80%, 50% and 1% reliability by 2020, 2025 and 2030 onwards, respectively. Longer term optimal planning, however, has increased system reliability to up to 95% in 2020, 80% in 2025, and more than 40% in 2030 onwards. In addition, increasing the capacity of the reservoir by around 25% can significantly improve the robustness of the system for all future time periods. This study provides a platform for analysing the implication of different planning and management of Masinga dam and suggests that careful consideration should be given to account for growing municipal needs and irrigation schemes in both the immediate and the associated Tana River basin.

  10. Reliability, risk and availability analysis and evaluation of a port oil pipeline transportation system in constant operation conditions

    Kolowrocki, Krzysztof [Gdynia Maritime University, Gdynia (Poland)


    In the paper the multi-state approach to the analysis and evaluation of systems' reliability, risk and availability is practically applied. Theoretical definitions and results are illustrated by the example of their application to the reliability, risk and availability evaluation of an oil pipeline transportation system. The pipeline transportation system is considered under operation conditions that are constant in time: the system reliability structure and its components' reliability functions do not change. The system reliability structure is fixed with high accuracy, whereas the input reliability characteristics of the pipeline components are not sufficiently exact because of the lack of statistical data necessary for their estimation. The results may be considered as an illustration of the applicability of the proposed methods to pipeline system reliability analysis. (author)

  11. Human Performance Modeling for Dynamic Human Reliability Analysis

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory


    As part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  12. Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Enevoldsen, I.; Sørensen, John Dalsgaard;

    In this paper a reliability analysis of a Mono-tower platform is presented. The failure modes considered are yielding in the tube cross-sections and fatigue failure in the butt welds. The fatigue failure mode is investigated with a fatigue model, where the fatigue strength is expressed through SN...... that the fatigue limit state is a significant failure mode for the Mono-tower platform. Further, it is shown for the fatigue failure mode that the largest contributions to the overall uncertainty are due to the damping ratio, the inertia coefficient, the stress concentration factor, the model uncertainties...

  13. Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Enevoldsen, I.; Sørensen, John Dalsgaard;


    In this paper, a reliability analysis of a Mono-tower platform is presented. The failure modes considered are yielding in the tube cross-sections and fatigue failure in the butt welds. The fatigue failure mode is investigated with a fatigue model, where the fatigue strength is expressed through SN...... that the fatigue limit state is a significant failure mode for the Mono-tower platform. Further, it is shown for the fatigue failure mode that the largest contributions to the overall uncertainty are due to the damping ratio, the inertia coefficient, the stress concentration factor, the model uncertainties...

  14. Fault Diagnosis and Reliability Analysis Using Fuzzy Logic Method

    Miao Zhinong; Xu Yang; Zhao Xiangyu


    A new fuzzy logic fault diagnosis method is proposed. In this method, fuzzy equations are employed to estimate the component state of a system based on the measured system performance and the relationship between component state and system performance, which is called the "performance-parameter" knowledge base and is constructed by experts. Compared with traditional fault diagnosis methods, this fuzzy logic method can use human intuitive knowledge and does not need a precise mapping between system performance and component state. Simulation proves its effectiveness in fault diagnosis. The reliability analysis is then performed based on the fuzzy logic method.
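
    The core idea, expert membership functions mapping measured performance to an estimated component state, can be sketched as below; the rule base and state scores are invented for illustration, not taken from the paper:

```python
# Minimal fuzzy-inference sketch: triangular membership functions encode a
# hypothetical expert "performance-parameter" relation, and component state
# is estimated by weighted (centroid-style) aggregation of rule outputs.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rules: low/medium/high system performance implies
# degraded/partial/healthy component state, encoded as state scores.
RULES = [
    (lambda p: tri(p, -0.01, 0.0, 0.5), 0.2),   # low performance -> degraded
    (lambda p: tri(p, 0.2, 0.5, 0.8),   0.6),   # medium -> partially degraded
    (lambda p: tri(p, 0.5, 1.0, 1.01),  1.0),   # high -> healthy
]

def estimate_state(performance):
    weights = [(mu(performance), score) for mu, score in RULES]
    total = sum(w for w, _ in weights)
    return sum(w * s for w, s in weights) / total if total else None

print(estimate_state(0.9))  # high performance -> healthy (1.0)
print(estimate_state(0.1))  # low performance -> degraded (0.2)
```

    No precise performance-to-state mapping is required; the rules only need to capture the expert's qualitative judgement.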

  15. Asymptotic Sampling for Reliability Analysis of Adhesive Bonded Stepped Lap Composite Joints

    Kimiaeifar, Amin; Lund, Erik; Thomsen, Ole Thybo


    Reliability analysis coupled with finite element analysis (FEA) of composite structures is computationally very demanding and requires a large number of simulations to achieve an accurate prediction of the probability of failure with a small standard error. In this paper Asymptotic Sampling, which....... Three dimensional (3D) FEA is used for the structural analysis together with a design equation that is associated with a deterministic code-based design equation where reliability is secured by partial safety factors. The Tsai-Wu and the maximum principal stress failure criteria are used to predict...... failure in the composite and adhesive layers, respectively, and the results are compared with the target reliability level implicitly used in the wind turbine standard IEC 61400-1. The accuracy and efficiency of Asymptotic Sampling is investigated by comparing the results with predictions obtained using...
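
    Asymptotic Sampling itself (following Bucher's formulation) can be sketched on a toy limit state: inflate the input standard deviations by 1/f so failures occur often enough for crude Monte Carlo, then extrapolate the scaled reliability index β(f)/f = A + B/f² back to f = 1. The limit state below is a stand-in, not the paper's FEA-based design equation:

```python
import random
from statistics import NormalDist

# Sketch of Asymptotic Sampling on a toy limit state g(x) = 5 - (x1 + x2)
# with standard normal inputs. Inflating the input standard deviation by 1/f
# (f < 1) makes failures frequent enough for crude Monte Carlo.
def beta_at_scale(f, n=200_000, seed=0):
    rng = random.Random(seed)
    g = lambda x1, x2: 5.0 - (x1 + x2)          # fails when g < 0
    fails = sum(g(rng.gauss(0, 1 / f), rng.gauss(0, 1 / f)) < 0
                for _ in range(n))
    pf = max(fails / n, 1e-12)
    return -NormalDist().inv_cdf(pf)            # beta at this scale

# Solve beta(f)/f = A + B/f^2 from two support points, extrapolate to f = 1.
f1, f2 = 0.4, 0.6
y1, y2 = beta_at_scale(f1) / f1, beta_at_scale(f2) / f2
B = (y1 - y2) / (1 / f1**2 - 1 / f2**2)
A = y1 - B / f1**2
beta_est = A + B                                # extrapolation to f = 1
print("estimated beta:", round(beta_est, 2))    # exact value is 5/sqrt(2)
```

    In the paper's setting each `beta_at_scale` evaluation would involve expensive 3D FEA runs, which is exactly why the extrapolation from easier (inflated-variance) problems pays off.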

  16. Integration of human reliability analysis into the high consequence process

    Houghton, F.K.; Morzinski, J.


    When performing a hazards analysis (HA) for a high consequence process, human error often plays a significant role. In order to integrate human error into the hazards analysis, a human reliability analysis (HRA) is performed. Human reliability is the probability that a person will correctly perform a system-required activity in a required time period and will perform no extraneous activity that will affect the correct performance. Even though human error is a very complex subject that can only approximately be addressed in risk assessment, an attempt must be made to estimate the effect of human errors. The HRA provides data that can be incorporated in the hazard analysis event. This paper will discuss the integration of HRA into a HA for the disassembly of a high explosive component. The process was designed to use a retaining fixture to hold the high explosive in place during a rotation of the component. This tool was designed as a redundant safety feature to help prevent a drop of the explosive. This paper will use the retaining fixture to demonstrate the phases of the HRA methodology. The first phase is to perform a task analysis. The second phase is the identification of the potential human functions, both cognitive and psychomotor, performed by the worker. During the last phase the human errors are quantified. In reality, the HRA process is an iterative process in which the stages overlap and information gathered in one stage may be used to refine a previous stage. The rationale for the decision to use or not use the retaining fixture, and the role the HRA played in that decision, will be discussed.

  17. Dynamic Scapular Movement Analysis: Is It Feasible and Reliable in Stroke Patients during Arm Elevation?

    De Baets, Liesbet; Van Deun, Sara; Desloovere, Kaat; Jaspers, Ellen


    Knowledge of three-dimensional scapular movements is essential to understand post-stroke shoulder pain. The goal of the present work is to determine the feasibility and the within and between session reliability of a movement protocol for three-dimensional scapular movement analysis in stroke patients with mild to moderate impairment, using an optoelectronic measurement system. Scapular kinematics of 10 stroke patients and 10 healthy controls was recorded on two occasions during active anteflexion and abduction from 0° to 60° and from 0° to 120°. All tasks were executed unilaterally and bilaterally. The protocol’s feasibility was first assessed, followed by within and between session reliability of scapular total range of motion (ROM), joint angles at start position and of angular waveforms. Additionally, measurement errors were calculated for all parameters. Results indicated that the protocol was generally feasible for this group of patients and assessors. Within session reliability was very good for all tasks. Between sessions, scapular angles at start position were measured reliably for most tasks, while scapular ROM was more reliable during the 120° tasks. In general, scapular angles showed higher reliability during anteflexion compared to abduction, especially for protraction. Scapular lateral rotations resulted in smallest measurement errors. This study indicates that scapular kinematics can be measured reliably and with precision within one measurement session. In case of multiple test sessions, further methodological optimization is required for this protocol to be suitable for clinical decision-making and evaluation of treatment efficacy. PMID:24244414

  18. Dynamic scapular movement analysis: is it feasible and reliable in stroke patients during arm elevation?

    Liesbet De Baets

    Full Text Available Knowledge of three-dimensional scapular movements is essential to understand post-stroke shoulder pain. The goal of the present work is to determine the feasibility and the within and between session reliability of a movement protocol for three-dimensional scapular movement analysis in stroke patients with mild to moderate impairment, using an optoelectronic measurement system. Scapular kinematics of 10 stroke patients and 10 healthy controls was recorded on two occasions during active anteflexion and abduction from 0° to 60° and from 0° to 120°. All tasks were executed unilaterally and bilaterally. The protocol's feasibility was first assessed, followed by within and between session reliability of scapular total range of motion (ROM), joint angles at start position and of angular waveforms. Additionally, measurement errors were calculated for all parameters. Results indicated that the protocol was generally feasible for this group of patients and assessors. Within session reliability was very good for all tasks. Between sessions, scapular angles at start position were measured reliably for most tasks, while scapular ROM was more reliable during the 120° tasks. In general, scapular angles showed higher reliability during anteflexion compared to abduction, especially for protraction. Scapular lateral rotations resulted in smallest measurement errors. This study indicates that scapular kinematics can be measured reliably and with precision within one measurement session. In case of multiple test sessions, further methodological optimization is required for this protocol to be suitable for clinical decision-making and evaluation of treatment efficacy.

  19. Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis

    Zhou, Z.-G.; Tang, P.; Zhou, M.


    Normally, the status of land cover is inherently dynamic and changes continuously on the temporal scale. However, disturbances or abnormal changes of land cover, caused for example by forest fire, flood, deforestation, and plant disease, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series analysis methods have been developed for near-real-time or online disturbance detection using satellite image time series. However, most existing methods only label the detection results as "change/no change", and few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood near the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite image time series with an estimation error of less than 5% and an overall accuracy of up to 90%.
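
    The reliability-estimation step (3) can be illustrated with a much simplified stand-in for BFAST: fit a seasonal model to history, then convert a new observation's forecast deviation into a two-sided confidence level. The series and the values tested are hypothetical:

```python
from statistics import NormalDist, mean, stdev

# Illustrative sketch (not the paper's exact algorithm): fit a seasonal model
# to a historical NDVI-like series (BFAST additionally fits a trend), forecast
# the next value, and report a confidence level for a detected disturbance
# from how far the new observation falls outside the prediction interval.
history = [0.52, 0.58, 0.66, 0.71, 0.69, 0.60,   # two hypothetical years,
           0.53, 0.59, 0.67, 0.72, 0.68, 0.61]   # six observations per year

period = 6
seasonal = [mean(history[i::period]) for i in range(period)]
residuals = [y - seasonal[i % period] for i, y in enumerate(history)]
sigma = stdev(residuals)                          # forecast error estimate

def disturbance_confidence(t, observed):
    """Confidence (0..1) that `observed` at time t is a real disturbance."""
    predicted = seasonal[t % period]
    z = abs(observed - predicted) / sigma
    return 2 * NormalDist().cdf(z) - 1            # two-sided confidence level

print(round(disturbance_confidence(12, 0.20), 3))   # flood-like drop: near 1
print(round(disturbance_confidence(12, 0.526), 3))  # near seasonal mean: low
```

    Thresholding this confidence level is what separates "reliable" disturbances from detections that could plausibly be forecast noise.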

  20. Inclusion of fatigue effects in human reliability analysis

    Griffith, Candice D. [Vanderbilt University, Nashville, TN (United States); Mahadevan, Sankaran, E-mail: [Vanderbilt University, Nashville, TN (United States)


    The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. This creates difficulties in including fatigue effects in probabilistic risk assessments (PRA) of complex engineering systems that seek to include human reliability analysis (HRA). Thus the objectives of this paper are to discuss (1) the importance of the effects of fatigue on performance, (2) the difficulties associated with defining and measuring fatigue, (3) the current status of inclusion of fatigue in HRA methods, and (4) the future directions and challenges for the inclusion of fatigue, specifically sleep deprivation, in HRA. - Highlights: > We highlight the need for fatigue and sleep deprivation effects on performance to be included in human reliability analysis (HRA) methods; current methods do not explicitly include sleep deprivation effects. > We discuss the difficulties in defining and measuring fatigue. > We review sleep deprivation research, and discuss the limitations and future needs of the current HRA methods.

  1. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Ronald L. Boring


    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management, since the need to update hardcopy procedures is eliminated. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  2. Task analysis and computer aid development for human reliability analysis in nuclear power plants

    Yoon, W. C.; Kim, H.; Park, H. S.; Choi, H. H.; Moon, J. M.; Heo, J. Y.; Ham, D. H.; Lee, K. K.; Han, B. T. [Korea Advanced Institute of Science and Technology, Taejeon (Korea)


    The importance of human reliability analysis (HRA), which predicts the possibility of error occurrence in both quantitative and qualitative manners, has gradually increased because of the effects of human errors on system safety. HRA requires a task analysis as a prerequisite step, but extant task analysis techniques have the problem that the collection of information about the situation in which a human error occurs depends entirely on the HRA analysts. This problem makes the results of task analysis inconsistent and unreliable. To address this problem, KAERI developed the structural information analysis (SIA) method, which helps to analyze task structure and situations systematically. In this study, the SIA method was evaluated by HRA experts, and a prototype computerized support system named CASIA (Computer Aid for SIA) was developed to support performing HRA using the SIA method. Additionally, by applying the SIA method to emergency operating procedures, we derived generic task types used in emergencies and accumulated the analysis results in the CASIA database. The CASIA is expected to help HRA analysts perform analyses more easily and consistently. As more analyses are performed and more data accumulate in the CASIA database, HRA analysts can freely share and disseminate their analysis experience, thereby improving the quality of HRA. 35 refs., 38 figs., 25 tabs. (Author)

  3. Transient Reliability Analysis Capability Developed for CARES/Life

    Nemeth, Noel N.


    The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of the failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, and graphite) structures in a wide variety of 21st century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth failure conditions CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications applied loads are rarely that simple but vary with time in more complex ways, such as engine startup, shutdown, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology developed is generalized to account for material property variation (on strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, components undergoing thermal shock. In addition, the capability has

  4. Gearbox Reliability Collaborative Phase 1 and 2: Testing and Modeling Results; Preprint

    Keller, J.; Guo, Y.; LaCava, W.; Link, H.; McNiff, B.


    The Gearbox Reliability Collaborative (GRC) investigates root causes of premature wind turbine gearbox failures and validates design assumptions that affect gearbox reliability using a combined testing and modeling approach. Knowledge gained from the testing and modeling of the GRC gearboxes builds an understanding of how the selected loads and events translate into internal responses of three-point mounted gearboxes. This paper presents some testing and modeling results of the GRC research during Phases 1 and 2. Non-torque loads from the rotor, including shaft bending and thrust, traditionally assumed to be uncoupled from the gearbox, affect gear and bearing loads and the resulting gearbox responses. Bearing clearance increases bearing loads and causes cyclic loading, which could contribute to a reduced bearing life. Including the flexibilities of key drivetrain subcomponents is important in order to reproduce the measured gearbox response during the tests using modeling approaches.

  5. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.


    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
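
    A first-order second-moment calculation of the kind the PFEM statistics feed can be sketched for a fracture limit state; the input statistics and the edge-crack geometry factor are illustrative assumptions, not values from the paper:

```python
import math
from statistics import NormalDist

# Hedged sketch of a mean-value first-order second-moment estimate for a
# fracture limit state g = K_Ic - Y * s * sqrt(pi * a): linearize g at the
# means and form beta = mean(g) / sd(g). All input statistics are hypothetical.
def g(kic, s, a):
    return kic - 1.12 * s * math.sqrt(math.pi * a)   # Y = 1.12 edge crack

means = {"kic": 60.0, "s": 100.0, "a": 0.02}         # MPa*sqrt(m), MPa, m
sds   = {"kic": 6.0,  "s": 15.0,  "a": 0.005}

def second_moment_beta(g, means, sds, h=1e-6):
    g0 = g(**means)
    var = 0.0
    for name, sd in sds.items():
        shifted = dict(means)
        shifted[name] += h
        dg = (g(**shifted) - g0) / h                 # numerical partial
        var += (dg * sd) ** 2                        # first-order variance
    return g0 / math.sqrt(var)

beta = second_moment_beta(g, means, sds)
pf = NormalDist().cdf(-beta)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```

    In the PFEM setting the same linearization is carried out through the finite element sensitivity analysis rather than a closed-form limit state.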

  6. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Barberis Negra, Nicola; Bak-Jensen, Birgitte; Holmstrøm, O.


    a significant role in this assessment and different models have been created for it, but a representation which includes all of them has not been developed yet. This paper deals with this issue. First, a list of nine influencing Factors is presented and discussed. Secondly, these Factors are included...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. Results are used to analyse how each mentioned Factor influences the assessment, and why and when they should be included in the model....
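
    Sequential Monte Carlo assessment of windfarm generation can be sketched in miniature; the wind model, power curve, and outage rates below are hypothetical, and only two of the nine influencing factors (wind variability and turbine outages) are represented:

```python
import random

# Toy sequential Monte Carlo sketch of windfarm generation adequacy: an
# hourly autoregressive wind-speed model drives a turbine power curve, and
# each (1 MW) turbine fails and repairs with constant hourly probabilities.
def simulate_energy(hours=8760, turbines=10, fail=1e-3, repair=2e-2, seed=42):
    rng = random.Random(seed)
    wind, up = 8.0, [True] * turbines
    energy = 0.0
    for _ in range(hours):
        wind = max(0.0, 0.9 * wind + rng.gauss(0.8, 1.5))  # AR(1), mean ~8 m/s
        if wind < 4 or wind > 25:                          # cut-in / cut-out
            power = 0.0
        else:
            power = min(1.0, ((wind - 4) / 8) ** 3)        # rated at 12 m/s
        for i in range(turbines):
            up[i] = rng.random() > fail if up[i] else rng.random() < repair
        energy += power * sum(up)                          # MWh this hour
    return energy

print(f"annual energy: {simulate_energy():.0f} MWh")
```

    Because the simulation is sequential (hour by hour), time-correlated factors such as wind persistence and repair durations can be represented, which is the property the paper exploits.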

  7. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Negra, Nicola Barberis; Holmstrøm, Ole; Bak-Jensen, Birgitte


    a significant role in this assessment and different models have been created for it, but a representation which includes all of them has not been developed yet. This paper deals with this issue. First, a list of nine influencing Factors is presented and discussed. Secondly, these Factors are included...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. Results are used to analyse how each mentioned Factor influences the assessment, and why and when they should be included in the model....

  8. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.


    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  9. Productivity enhancement and reliability through AutoAnalysis

    Garetto, Anthony; Rademacher, Thomas; Schulz, Kristian


    The decreasing size and increasing complexity of photomask features, driven by the push to ever smaller technology nodes, place more and more challenges on the mask house, particularly in terms of yield management and cost reduction. Particularly challenging for mask shops is the inspection, repair and review cycle, which requires more time and skill from operators due to the higher number of masks required per technology node and larger nuisance defect counts. While the measurement throughput of the AIMS™ platform has been improved in order to keep pace with these trends, the analysis of aerial images has seen little advancement and remains largely a manual process. This manual analysis of aerial images is time-consuming, dependent on the skill level of the operator, and contributes significantly to the overall mask manufacturing process flow. AutoAnalysis, the first application available for the FAVOR® platform, offers a solution to these problems by providing fully automated analysis of AIMS™ aerial images. Direct communication with the AIMS™ system allows automated data transfer and analysis in parallel with the measurements. User-defined report templates allow the relevant data to be output in a manner that can be tailored to various internal needs and customer requests. Productivity is significantly improved due to the fast analysis, operator time is saved and made available for other tasks, and reliability is no longer a concern as the most defective region is always and consistently captured. In this paper the concept and approach of AutoAnalysis will be presented as well as an update to the status of the project. The benefits arising from the use of AutoAnalysis will be discussed in more detail and a study will be performed in order to demonstrate.

  10. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    Clayson, Peter E; Miller, Gregory A


    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue.
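
    The G-theory calculation the toolbox automates can be sketched by hand for a persons x trials design: estimate variance components from a two-way ANOVA and form a dependability coefficient for the trial-averaged score. The ERP amplitudes below are made up:

```python
from statistics import mean

# Minimal G-theory sketch for a fully crossed persons x trials design:
# variance components from a two-way random-effects ANOVA yield the
# proportion of observed-score variance attributable to persons.
scores = [                         # hypothetical ERP amplitudes (uV),
    [4.1, 4.5, 3.9, 4.3],          # rows = subjects, columns = trials
    [6.0, 5.6, 6.2, 5.9],
    [5.1, 4.9, 5.3, 5.0],
    [7.2, 7.0, 7.5, 7.1],
]
p, t = len(scores), len(scores[0])
grand = mean(v for row in scores for v in row)
person_means = [mean(row) for row in scores]
trial_means = [mean(col) for col in zip(*scores)]

ss_p = t * sum((m - grand) ** 2 for m in person_means)
ss_t = p * sum((m - grand) ** 2 for m in trial_means)
ss_tot = sum((v - grand) ** 2 for row in scores for v in row)
ss_res = ss_tot - ss_p - ss_t

ms_p = ss_p / (p - 1)
ms_t = ss_t / (t - 1)
ms_res = ss_res / ((p - 1) * (t - 1))
var_person = (ms_p - ms_res) / t                 # person variance component
var_trial = max(0.0, (ms_t - ms_res) / p)        # trial variance component
var_error = ms_res                               # residual component

# Dependability (phi-style) of a score averaged over t trials.
phi = var_person / (var_person + (var_trial + var_error) / t)
print(f"dependability over {t} trials: {phi:.3f}")
```

    Rerunning the last step with different `t` shows how reliability grows with the number of trials retained for averaging, one of the questions the ERA Toolbox is designed to answer.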

  11. Reliability analysis and updating of deteriorating systems with subset simulation

    Schneider, Ronald; Thöns, Sebastian; Straub, Daniel


    Bayesian updating of the system deterioration model. The updated system reliability is then obtained through coupling the updated deterioration model with a probabilistic structural model. The underlying high-dimensional structural reliability problems are solved using subset simulation, which...

  12. Suitability Analysis of Continuous-Use Reliability Growth Projection Models


    exists for all types, shapes, and sizes. The primary focus of this study is a comparison of reliability growth projection models designed for...requirements to use reliability growth models, recent studies have noted trends in reliability failures throughout the DoD. In [14] Dr. Michael a strict exponential distribution was used to stay within their assumptions. In reality, however, reliability growth models often must be used

  13. Applicability of simplified human reliability analysis methods for severe accidents

    Boring, R.; St Germain, S. [Idaho National Lab., Idaho Falls, Idaho (United States); Banaseanu, G.; Chatri, H.; Akl, Y. [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)


    Most contemporary human reliability analysis (HRA) methods were created to analyse design-basis accidents at nuclear power plants. As part of a comprehensive expansion of risk assessments at many plants internationally, HRAs will begin considering severe accident scenarios. Severe accidents, while extremely rare, constitute high consequence events that significantly challenge successful operations and recovery. Challenges during severe accidents include degraded and hazardous operating conditions at the plant, the shift in control from the main control room to the technical support center, the unavailability of plant instrumentation, and the need to use different types of operating procedures. Such shifts in operations may also test key assumptions in existing HRA methods. This paper discusses key differences between design basis and severe accidents, reviews efforts to date to create customized HRA methods suitable for severe accidents, and recommends practices for adapting existing HRA methods that are already being used for HRAs at the plants. (author)

  14. Reliability analysis of production ships with emphasis on load combination and ultimate strength

    Wang, Xiaozhi


    This thesis deals with ultimate strength and reliability analysis of offshore production ships, accounting for stochastic load combinations, using a typical North Sea production ship for reference. A review of methods for structural reliability analysis is presented. Probabilistic methods are established for the still water and vertical wave bending moments. Linear stress analysis of a midships transverse frame is carried out; four different finite element models are assessed. Upon verification of the general finite element code ABAQUS with a typical ship transverse girder example, for which test results are available, ultimate strength analysis of the reference transverse frame is made to obtain the ultimate load factors associated with the specified pressure loads in Det norske Veritas Classification rules for ships and rules for production vessels. Reliability analysis is performed to develop appropriate design criteria for the transverse structure. It is found that the transverse frame failure mode does not seem to contribute to the system collapse. Ultimate strength analysis of the longitudinally stiffened panels is performed, accounting for the combined biaxial and lateral loading. Reliability-based design of the longitudinally stiffened bottom and deck panels is accomplished regarding the collapse mode under combined biaxial and lateral loads. 107 refs., 76 figs., 37 tabs.

  15. Reliability of the ATD Angle in Dermatoglyphic Analysis.

    Brunson, Emily K; Hohnan, Darryl J; Giovas, Christina M


    The "ATD" angle is a dermatoglyphic trait formed by drawing lines between the triradii below the first and last digits and the most proximal triradius on the hypothenar region of the palm. This trait has been widely used in dermatoglyphic studies, but several researchers have questioned its utility, specifically whether or not it can be measured reliably. The purpose of this research was to examine the measurement reliability of this trait. Finger and palm prints were taken using the carbon paper and tape method from the right and left hands of 100 individuals. Each "ATD" angle was read twice, at different times, by Reader A, using a goniometer and a magnifying glass, and three times by Reader B, using Adobe Photoshop. Intraclass correlation coefficients were estimated for the intra- and inter-reader measurements of the "ATD" angles. Reader A was able to quantify ATD angles on 149 out of 200 prints (74.5%), and Reader B on 179 out of 200 prints (89.5%). Both readers agreed on whether an angle existed on a print 89.8% of the time for the right hand and 78.0% for the left. Intra-reader correlations were 0.97 or greater for both readers. Inter-reader correlations for "ATD" angles measured by both readers ranged from 0.92 to 0.96. These results suggest that the "ATD" angle can be measured reliably, and further imply that measurement using a software program may provide an advantage over other methods.

  16. Vibration reliability analysis for aeroengine compressor blade based on support vector machine response surface method

    GAO Hai-feng; BAI Guang-chen


    To improve the efficiency of reliability analysis for aeroengine components such as the compressor blade, a support vector machine response surface method (SRSM) is proposed. SRSM integrates the advantages of the support vector machine (SVM) and the traditional response surface method (RSM), using experimental samples to construct a response surface function (RSF) that replaces the complicated and abstract finite element model. Moreover, the randomness of material parameters, structural dimensions and operating conditions is considered when extracting data, so that the response surface function agrees more closely with the practical model. The results indicate that, based on the same experimental data, the reliability estimate from SRSM approximates that of the Monte Carlo method (MCM) more closely than the RSM estimate does, while SRSM (17.296 s) needs far less running time than MCM (10958 s) and RSM (9840 s). Therefore, under the same simulation conditions, SRSM has the highest analysis efficiency and can be considered a feasible and valid method for structural reliability analysis.
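    The surrogate workflow described above can be sketched in miniature: train a cheap response-surface function on a handful of "expensive" model evaluations, then estimate the failure probability by Monte Carlo on the surrogate. Everything below is a toy illustration, not the authors' SRSM: the limit state g(x) = 3 - x stands in for a finite element model, and a 1-D least-squares line stands in for the SVM-based RSF.

    ```python
    import math
    import random

    random.seed(0)

    def g_expensive(x):
        """Hypothetical limit state standing in for an FE model; failure when g < 0."""
        return 3.0 - x

    # Step 1: a few "expensive" evaluations to train the response surface.
    xs = [random.gauss(0.0, 1.0) for _ in range(30)]
    ys = [g_expensive(x) for x in xs]

    # 1-D least-squares line y = a + b*x (stands in for the SVM-based RSF).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    g_surrogate = lambda x: a + b * x

    # Step 2: cheap Monte Carlo on the surrogate instead of the expensive model.
    N = 200_000
    failures = sum(1 for _ in range(N) if g_surrogate(random.gauss(0.0, 1.0)) < 0.0)
    pf = failures / N

    # Analytic reference for this toy case: P(X > 3) for a standard normal X.
    pf_exact = 0.5 * (1.0 - math.erf(3.0 / math.sqrt(2.0)))
    print(pf, pf_exact)  # both near 1.35e-3
    ```

    The trade the abstract quantifies is visible here: the surrogate is trained on 30 expensive calls, after which the 200,000 Monte Carlo evaluations cost almost nothing.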

  17. Method and Application for Reliability Analysis of Measurement Data in Nuclear Power Plant

    Yun, Hun; Hwang, Kyeongmo; Lee, Hyoseoung [KEPCO E and C, Seoungnam (Korea, Republic of); Moon, Seungjae [Hanyang University, Seoul (Korea, Republic of)


    Pipe wall-thinning caused by flow-accelerated corrosion and various types of erosion is a significant form of damage in the secondary system piping of nuclear power plants (NPPs). All NPPs in Korea have management programs to ensure pipe integrity against these degradation mechanisms. Ultrasonic testing (UT) is widely used for pipe wall thickness measurement, and numerous UT measurements have been performed during scheduled outages. Wall-thinning rates are determined conservatively according to several evaluation methods developed by the Electric Power Research Institute (EPRI). The reliability issue caused by measurement error should be considered in the evaluation process. A reliability analysis method was developed for single and multiple measurement data in previous research. This paper describes the application of that method to real measurement data from a scheduled outage and demonstrates its benefits.

  18. Reliability Index for Reinforced Concrete Frames using Nonlinear Pushover and Dynamic Analysis

    Ahmad A. Fallah


    In conventional design and analysis methods, the affecting parameters (loads, material strength, etc.) are not treated as probabilistic variables. Safety factors in the current codes and standards are usually obtained on the basis of judgment and experience, which may be improper or uneconomical. In the technical literature, a method based on nonlinear static analysis is suggested to set a reliability index on the strength of structural systems. In this paper, a method based on nonlinear dynamic analysis with rising acceleration (Incremental Dynamic Analysis) is introduced; its results are compared with those of the previous (static pushover analysis) method, and two concepts, Redundancy Strength and Redundancy Variation, are proposed as indices to these effects. The Redundancy Variation Factor and Redundancy Strength Factor indices for reinforced concrete frames with varying numbers of bays and stories and different ductility potentials are computed, and ultimately the Reliability Index is determined using these two indices.

  19. Reliability analysis on a shell and tube heat exchanger

    Lingeswara, S.; Omar, R.; Mohd Ghazi, T. I.


    A reliability study of a shell and tube heat exchanger was performed using historical data from a carbon black manufacturing plant. Heat exchanger reliability studies are vital in all related industries, as inappropriate maintenance and operation of a heat exchanger will lead to major Process Safety Events (PSE) and loss of production. The overall heat transfer coefficient/effectiveness (Uo) and Mean Time Between Failures (MTBF) were analyzed and calculated. The Aspen and downtime data were taken from a typical carbon black shell and tube heat exchanger. The analysis showed that Uo declined over time, caused by severe fouling and heat exchanger limitations; this limitation also requires additional burn-out periods, which lead to loss of production. The calculated MTBF is 649.35 hours, which is very low compared with the standard 6000 hours for good operation of a shell and tube heat exchanger. Guidelines on heat exchanger repair and on preventive and predictive maintenance were identified and highlighted to improve heat exchanger inspection and repair in the future. Fouling of the heat exchanger and the associated production loss will continue if proper operation and repair according to standard operating procedures are not followed.
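    As a sketch of the MTBF arithmetic behind the figure above: MTBF is total operating time divided by the number of failures. The operating hours and failure count below are hypothetical, chosen only so the quotient reproduces the 649.35 h value quoted in the abstract; the plant's actual tallies are not given.

    ```python
    # Illustrative MTBF calculation; the operating hours and failure count are
    # hypothetical values chosen to reproduce the 649.35 h figure quoted above.
    operating_hours = 12987.0   # total time in service between failures (assumed)
    failure_count = 20          # recorded failures over that period (assumed)

    mtbf = operating_hours / failure_count
    benchmark = 6000.0          # quoted benchmark for a well-run shell and tube unit

    print(f"MTBF = {mtbf:.2f} h (benchmark {benchmark:.0f} h)")
    # → MTBF = 649.35 h (benchmark 6000 h)
    ```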

  20. Electrogastrographic norms in children: toward the development of standard methods, reproducible results, and reliable normative data.

    Levy, J; Harris, J; Chen, J; Sapoznikov, D; Riley, B; De La Nuez, W; Khaskelberg, A


    Surface electrogastrography (EGG) is a noninvasive technique that detects gastric myoelectrical activity, principally the underlying pacemaker activity generated by the specialized interstitial cells of Cajal. Interest in the use of this methodology has grown because of its potential applications in describing functional gastrointestinal disorders, particularly as a tool in the evaluation of nausea, anorexia, and other dyspeptic symptoms. Fifty-five healthy volunteers (27 female), ranging in age from 6 to 18 years (mean, 11.7 years), were studied for a 1-hour baseline preprandial period and a 1-hour postprandial period after consumption of a standard 448-kcal meal. Recordings were obtained with an EGG Digitrapper or a modified Polygraph (Medtronic-Synectics, Shoreview, MN). Spectral analysis by an autoregressive moving average method was used to extract numerical data on the power and frequency of gastric electrical activity from the EGG signal. The authors present normative data for healthy children and adolescents studied under a standardized protocol. Mean dominant frequency was found to be 2.9 +/- 0.40 cycles per minute preprandially and 3.1 +/- 0.35 postprandially, with 80% +/- 13% of test time spent in the normogastric range (2-4 cycles per minute) before and 85% +/- 11% after the test meal. The response of several key parameters to meal consumption was considered, and the effects of age, gender, and body mass index (BMI) on the EGG were sought. There is a postprandial increase in the rhythmicity and amplitude of gastric slow waves, as other investigators have shown in adults. Key normative values are not dependent on age, gender, or BMI. The authors discuss limitations in the data set and its interpretability. The authors establish a normative data set after developing a standardized recording protocol and test meal and show that EGG recordings can be obtained reliably in the pediatric population. Development of similar norms by investigators using...
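    The dominant-frequency extraction step can be illustrated with a toy signal. The sketch below uses a plain DFT peak search rather than the autoregressive moving average method of the study, and the 3 cycles-per-minute sine is synthetic, not EGG data.

    ```python
    import math

    # Synthesize a 4-minute EGG-like trace dominated by a 3 cycles-per-minute
    # slow wave, sampled at 1 Hz (all values hypothetical).
    fs = 1.0                    # sampling rate, Hz
    n = 240                     # 4 minutes of samples
    f_slow = 3.0 / 60.0         # 3 cpm expressed in Hz
    signal = [math.sin(2 * math.pi * f_slow * t / fs) for t in range(n)]

    def dft_mag(x, k):
        """Magnitude of DFT bin k, computed directly (O(n) per bin)."""
        re = sum(v * math.cos(-2 * math.pi * k * i / len(x)) for i, v in enumerate(x))
        im = sum(v * math.sin(-2 * math.pi * k * i / len(x)) for i, v in enumerate(x))
        return math.hypot(re, im)

    # Search positive frequencies only; bin k corresponds to k * fs / n Hz.
    peak_k = max(range(1, n // 2), key=lambda k: dft_mag(signal, k))
    dominant_cpm = peak_k * fs / n * 60.0
    print(dominant_cpm)  # → 3.0
    ```

    With this 240-sample record the frequency resolution is 1/240 Hz, i.e. 0.25 cpm, which is why the 3 cpm component falls exactly on bin 12.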

  1. The Stress and Reliability Analysis of HTR’s Graphite Component

    Xiang Fang


    The high temperature gas cooled reactor (HTR) is developing rapidly toward a modular, compact, and integral direction. As the main structural material, graphite plays a very important role in HTR engineering, and the reliability of the graphite components is closely related to the integrity of the reactor core. The graphite components are subjected to high temperature and fast neutron irradiation simultaneously during normal operation of the reactor. With the stress accumulation induced by high temperature and irradiation, the failure risk of graphite components increases constantly. It is therefore necessary to study and simulate the mechanical behavior of graphite components under in-core working conditions and to forecast the internal stress accumulation history and the variation of reliability. The work of this paper focuses on the mechanical analysis of the pebble-bed type HTR's graphite brick. The analysis comprises two procedures, stress analysis and reliability analysis. Three different creep models and two different reliability models are reviewed and taken into account in the simulation. The stress and failure probability calculation results are obtained and discussed. The results gained with the various models are highly consistent, and the discrepancies are acceptable.


    Dars, P.; Ternisien D'Ouville, T.; Mingam, H.; Merckel, G.


    Statistical analysis of asymmetry in LDD NMOSFET electrical characteristics shows the influence of implantation angles on the non-overlap variation observed on devices realized on a 100 mm wafer and within the wafers of a batch. The study of the consequences of this dispersion on aging behaviour illustrates the importance of this parameter for reliability and the necessity of taking it into account for accurate analysis of stress results.

  3. An Efficient Approach for the Reliability Analysis of Phased-Mission Systems with Dependent Failures

    Xing, Liudong; Meshkat, Leila; Donahue, Susan K.


    We consider the reliability analysis of phased-mission systems with common-cause failures in this paper. Phased-mission systems (PMS) are systems supporting missions characterized by multiple, consecutive, and nonoverlapping phases of operation. System components may be subject to different stresses as well as different reliability requirements throughout the course of the mission. As a result, component behavior and relationships may need to be modeled differently from phase to phase when performing a system-level reliability analysis. This consideration poses unique challenges to existing analysis methods. The challenges increase when common-cause failures (CCF) are incorporated in the model. CCF are multiple dependent component failures within a system that are a direct result of a shared root cause, such as sabotage, flood, earthquake, power outage, or human error. Many reliability studies have shown that CCF tend to increase a system's joint failure probabilities and thus contribute significantly to the overall unreliability of systems subject to them. We propose a separable phase-modular approach to the reliability analysis of phased-mission systems with dependent common-cause failures as one way to meet the above challenges in an efficient and elegant manner. Our methodology is twofold: first, we separate the effects of CCF from the PMS analysis using the total probability theorem and a common-cause event space developed from the elementary common causes; next, we apply an efficient phase-modular approach to analyze the reliability of the PMS. The phase-modular approach employs both combinatorial binary decision diagram and Markov-chain solution methods as appropriate. We provide an example of a reliability analysis of a PMS with both static and dynamic phases as well as CCF as an illustration of our proposed approach. The example is based on information extracted from a Mars orbiter project. The reliability model for this orbiter considers...

  4. Modified Core Wash Cytology: A reliable same day biopsy result for breast clinics.

    Bulte, J P; Wauters, C A P; Duijm, L E M; de Wilt, J H W; Strobbe, L J A


    Fine Needle Aspiration Biopsy (FNAB), Core Needle Biopsy (CNB) and hybrid techniques including Core Wash Cytology (CWC) are available for same-day diagnosis of breast lesions. In CWC, a washing of the biopsy core is processed for a provisional cytological diagnosis, after which the core is processed like a regular CNB. This study focuses on the reliability of CWC in daily practice. All consecutive CWC procedures performed in a referral breast centre between May 2009 and May 2012 were reviewed, correlating CWC results with the CNB result, the definitive diagnosis after surgical resection, and/or follow-up. Symptomatic as well as screen-detected lesions undergoing CNB were included. In total, 1253 CWC procedures were performed. Definitive histology showed 849 (68%) malignant and 404 (32%) benign lesions. 80% of CWC procedures yielded a conclusive diagnosis; this percentage was higher for malignant lesions and lower for benign lesions (89% and 62%, respectively). Sensitivity and specificity of a conclusive CWC result were 98.3% and 90.4%, respectively. The eventual incidence of malignancy in the cytological 'atypical' group (5%) was similar to that in the cytological 'benign' group (6%). CWC can be used to make a reliable provisional diagnosis of breast lesions within the hour. The high probability of conclusive results in malignant lesions makes CWC well suited for high-risk populations. Copyright © 2016 Elsevier Ltd, BASO ~ the Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
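    For reference, sensitivity and specificity follow directly from a 2x2 confusion matrix. The counts below are hypothetical, chosen only so the quotients land near the 98.3% and 90.4% values quoted above; they are not the study's actual tallies.

    ```python
    # Hypothetical confusion-matrix counts for conclusive CWC results; the
    # numbers are illustrative, not taken from the study.
    tp, fn = 755, 13    # malignant lesions: correctly flagged / missed
    tn, fp = 235, 25    # benign lesions: correctly cleared / falsely flagged

    sensitivity = tp / (tp + fn)    # fraction of malignant lesions detected
    specificity = tn / (tn + fp)    # fraction of benign lesions correctly cleared
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
    # → sensitivity 98.3%, specificity 90.4%
    ```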

  5. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian


    Reliability analysis of fiber-reinforced composite structures is a relatively unexplored field, and it is therefore expected that engineers and researchers trying to apply such an approach will meet certain challenges until more knowledge is accumulated. While doing the analyses included in the present paper, the authors have experienced some of the possible pitfalls on the way to completing a precise and robust reliability analysis for layered composites. Results showed that in order to obtain accurate reliability estimates it is necessary to account for the various failure modes described by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  6. Signal Quality Outage Analysis for Ultra-Reliable Communications in Cellular Networks

    Gerardino, Guillermo Andrés Pocovi; Alvarez, Beatriz Soret; Lauridsen, Mads


    ..., we investigate the potential of several techniques to combat these main threats. The analysis shows that traditional microscopic multiple-input multiple-output schemes with 2x2 or 4x4 antenna configurations are not enough to fulfil stringent reliability requirements. It is revealed how such antenna schemes must be complemented with macroscopic diversity as well as interference management techniques in order to ensure the necessary SINR outage performance. Based on the obtained performance results, it is discussed which of the feasible options fulfilling the ultra-reliable criteria are most promising...

  7. Failure Analysis towards Reliable Performance of Aero-Engines

    T. Jayakumar


    Aero-engines are critical components whose reliable performance decides the primary safety of an aircraft/helicopter. This is met by a rigorous maintenance schedule with periodic inspection/nondestructive testing of various engine components. In spite of these measures, failures of aero-engines do occur rather frequently in comparison with failures of other components. Systematic failure analysis helps one to identify the root cause of the failure, thus enabling remedial measures to prevent recurrence of such failures. Turbine blades made of nickel- or cobalt-based alloys are used in aero-engines. These blades are subjected to complex loading conditions at elevated temperatures. The main causes of blade failure are attributed to creep, thermal fatigue and hot corrosion. Premature failure of blades in the combustion zone was reported in one of the aero-engines. The engine had both the compressor and the free turbine on a common shaft. Detailed failure analysis revealed the presence of creep voids in the blades that failed. Failure of turbine blades was also detected in another aero-engine operating in a coastal environment. In this failure, the protective coating on the blades was cracked at many locations. Grain boundary spikes were observed at these locations. The primary cause of this failure was hot corrosion followed by creep damage.

  8. Multi-Unit Considerations for Human Reliability Analysis

    St. Germain, S.; Boring, R.; Banaseanu, G.; Akl, Y.; Chatri, H.


    This paper uses the insights from the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) methodology to help identify human actions currently modeled in the single unit PSA that may need to be modified to account for additional challenges imposed by a multi-unit accident as well as identify possible new human actions that might be modeled to more accurately characterize multi-unit risk. In identifying these potential human action impacts, the use of the SPAR-H strategy to include both errors in diagnosis and errors in action is considered as well as identifying characteristics of a multi-unit accident scenario that may impact the selection of the performance shaping factors (PSFs) used in SPAR-H. The lessons learned from the Fukushima Daiichi reactor accident will be addressed to further help identify areas where improved modeling may be required. While these multi-unit impacts may require modifications to a Level 1 PSA model, it is expected to have much more importance for Level 2 modeling. There is little currently written specifically about multi-unit HRA issues. A review of related published research will be presented. While this paper cannot answer all issues related to multi-unit HRA, it will hopefully serve as a starting point to generate discussion and spark additional ideas towards the proper treatment of HRA in a multi-unit PSA.
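    The SPAR-H quantification step the paper builds on can be sketched as follows. The adjustment formula is the one usually attributed to the SPAR-H method documentation (NUREG/CR-6883), applied when three or more PSFs are negative; treat both the formula placement and the PSF multipliers below as assumptions to verify against the method documentation.

    ```python
    def spar_h_hep(nhep, psf_multipliers):
        """Sketch of the SPAR-H quantification step (assumed form of the
        NUREG/CR-6883 adjustment; verify against the method documentation).

        nhep: nominal human error probability (0.01 diagnosis, 0.001 action).
        psf_multipliers: one multiplier per performance shaping factor (PSF).
        """
        composite = 1.0
        for m in psf_multipliers:
            composite *= m
        negative = sum(1 for m in psf_multipliers if m > 1.0)
        if negative >= 3:
            # Adjustment keeps the result below 1.0 when several PSFs
            # simultaneously degrade performance.
            return nhep * composite / (nhep * (composite - 1.0) + 1.0)
        return nhep * composite

    # Diagnosis task under multi-unit stress: extreme stress (5), highly complex
    # scenario (5), poor ergonomics (10) -- illustrative multipliers only.
    print(spar_h_hep(0.01, [5, 5, 10]))
    ```

    Without the adjustment the product 0.01 x 250 would exceed 1; the adjusted value stays a valid probability, which is the kind of modification a multi-unit scenario with several degraded PSFs would exercise.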

  9. Fuzzy Reliability Analysis of the Shaft of a Steam Turbine


    Field surveying shows that the failure of the steam turbine's coupling is due to fatigue that is caused by compound stress. Fuzzy mathematics was applied to get the membership function of the fatigue strength rule. A formula of fuzzy reliability of the coupling was derived and a theory of coupling's fuzzy reliability is set up. The calculating method of the fuzzy reliability is explained by an illustrative example.

  10. Reliability of videotaped observational gait analysis in patients with orthopedic impairments

    Brunnekreef, J.J.; Uden, C. van; Moorsel, S. van; Kooloos, J.G.M.


    BACKGROUND: In clinical practice, visual gait observation is often used to determine gait disorders and to evaluate treatment. Several reliability studies on observational gait analysis have been described in the literature and generally showed moderate reliability. However, patients with orthopedic

  11. Wind energy Computerized Maintenance Management System (CMMS) : data collection recommendations for reliability analysis.

    Peters, Valerie A.; Ogilvie, Alistair; Veers, Paul S.


    This report, written by Sandia National Laboratories, addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. It is intended to help the reader develop a basic understanding of what data are needed from a Computerized Maintenance Management System (CMMS) and other data systems for reliability analysis. The report provides: (1) a list of the data needed to support reliability and availability analysis; and (2) specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment and a wider variety of analysis and reporting needs.

  12. Milk urea analytical result reliability and its methodical possibilities in the Czech Republic

    Jan Říha


    Control of milk urea concentration (MUC) can be used in the diagnosis of the energy-nitrogen metabolism of cows. Several analytical methods exist for MUC estimation, and the reliability of their results has been debated. The aim of this work was to obtain information for improving the reliability of MUC results. MUC and MUN (milk urea nitrogen) were investigated in 5 milk sample sets and in 7 calibration/comparison experiments. The positions of the reference and indirect methods were changed between experiments. The following analytical methods were used for MUC or MUN (in mg.100 ml−1): the photometric method (PH), as reference, based on the paradimethylaminobenzaldehyde reaction; the Ureakvant method (UK), as reference, based on difference measurement of the electrical conductivity change during ureolysis; the Chemspec method (CH), as reference, based on photometric measurement of ammonia concentration after ureolysis; and the spectroscopic method in the mid-infrared range of the spectrum (FT-MIR), an indirect routine method. In all methodical combinations the correlation coefficients (r) varied from 0.8803 to 0.9943 (P −1) and comparable values of repeatability (from 0.65 to 1.83 mg.100 ml−1) as compared to the FT-MIR MUC or MUN methods (from 1.39 to 5.6 and from 0.76 to 1.92 mg.100 ml−1) in the performed experiments.

  13. SMART empirical approaches for predicting field performance of PV modules from results of reliability tests

    Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata


    Gaining an understanding of degradation mechanisms and their characterization is critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, the Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely damp heat and thermal cycling. The method is based on the design of an accelerated testing scheme with the intent of developing relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
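    One common ingredient of such acceleration factor models is an Arrhenius temperature term relating chamber time to field time. The sketch below is generic, not the authors' validated model; the 0.7 eV activation energy and the 25 °C field / 85 °C damp-heat condition pair are assumed values for illustration.

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius_af(ea_ev, t_use_k, t_test_k):
        """Arrhenius acceleration factor of a test at t_test_k relative to
        field use at t_use_k, for an assumed activation energy ea_ev (eV)."""
        return math.exp(ea_ev / K_B * (1.0 / t_use_k - 1.0 / t_test_k))

    # Hypothetical conditions: 25 C field use vs an 85 C damp-heat chamber.
    af = arrhenius_af(0.7, 298.15, 358.15)
    hours_in_field = 1000.0 * af   # 1000 chamber hours scaled to field time
    print(af, hours_in_field)
    ```

    The sensitivity of the result to the assumed activation energy is exactly why the abstract stresses validating the acceleration factor model against test data before using it to predict field performance.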

  14. The design and use of reliability data base with analysis tool

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.


    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is not standardized, and the statistical methods used in analyzing these data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possibly dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risks and clustering of failure-repair events. These ideas have been implemented in an analysis tool for grazing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs.

  15. A model for reliability analysis and calculation applied in an example from chemical industry

    Pejović Branko B.


    The subject of this paper is reliability design in polymerization processes that occur in reactors of the chemical industry. The designed model is used to determine the characteristics and indicators of reliability, which enables the determination of the basic factors that result in poor development of a process. This would reduce the anticipated losses through the ability to control them, as well as enabling improvement of the quality of production, which is the major goal of the paper. The reliability analysis and calculation use the deductive method, based on designing a fault tree scheme for the system and drawing inductive conclusions from it. It involves the use of standard logical symbols and the rules of Boolean algebra and mathematical logic. The paper finally gives the results of the work in the form of a quantitative and qualitative reliability analysis of the observed process, which served to obtain complete information on the probability of the top event in the process, as well as to support objective decision making and alternative solutions.
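    The gate logic behind such a fault tree quantification follows directly from Boolean algebra under an independence assumption. The tree below, a hypothetical runaway-reaction top event with made-up basic-event probabilities, is purely illustrative and is not the paper's polymerization model.

    ```python
    # Minimal fault-tree quantification sketch: the tree structure and the
    # basic-event probabilities are hypothetical, not those of the paper.
    def p_and(*ps):
        """AND gate: all independent inputs must fail."""
        out = 1.0
        for p in ps:
            out *= p
        return out

    def p_or(*ps):
        """OR gate: any independent input failing triggers the output."""
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out

    # Top event: runaway reaction = (cooling fails) OR
    # (both the sensor AND the operator miss the temperature excursion).
    p_cooling = 0.01
    p_sensor = 0.05
    p_operator = 0.1
    p_top = p_or(p_cooling, p_and(p_sensor, p_operator))
    print(p_top)
    ```

    Here the AND branch contributes 0.05 x 0.1 = 0.005, and the OR gate combines it with the cooling failure as 1 - (1 - 0.01)(1 - 0.005) = 0.01495.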

  16. Validity and reliability of patient reported outcomes used in Psoriasis: results from two randomized clinical trials

    Koo John


    Background: Two Phase III randomized controlled clinical trials were conducted to assess the efficacy, safety, and tolerability of weekly subcutaneous administration of efalizumab for the treatment of psoriasis. Patient-reported measures of psoriasis-related functionality, health-related quality of life, and psoriasis-related symptoms were included as part of the trials. Objective: To assess the reliability, validity, and responsiveness of the patient-reported outcome measures used in the trials: the Dermatology Life Quality Index (DLQI), the Psoriasis Symptom Assessment (PSA) Scale, and two itch measures, a Visual Analog Scale (VAS) and the National Psoriasis Foundation (NPF) itch measure. Methods: Subjects aged 18 to 70 years with moderate to severe psoriasis for at least 6 months were recruited into the two clinical trials (n = 1095). Internal consistency reliability was evaluated for all patient-reported outcomes at baseline and at 12 weeks. Construct validity was evaluated by the relations among the different patient-reported outcomes and between the patient-reported outcomes and the clinical assessments (Psoriasis Area and Severity Index; Overall Lesion Severity Scale; Physician's Global Assessment of Change) assessed at baseline and at 12 weeks, as was the change over the course of the 12-week portion of the trial. Results: Internal consistency reliability ranged from 0.86 to 0.95 for the patient-reported outcome measures. The patient-reported outcome measures were all shown to have significant construct validity with respect to each other and to the clinical assessments. The four measures also demonstrated significant responsiveness to change in the underlying clinical status of the patients over the course of the trial, as measured by the independently assessed clinical outcomes. Conclusions: The DLQI, the PSA, the VAS, and the NPF are considered useful tools for the measurement of dermatology...

  17. Effectiveness and reliability analysis of emergency measures for flood prevention

    Lendering, K.T.; Jonkman, S.N.; Kok, M.


    During flood events emergency measures are used to prevent breaches in flood defences. However, there is still limited insight in their reliability and effectiveness. The objective of this paper is to develop a method to determine the reliability and effectiveness of emergency measures for flood

  19. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

    Bell, B.J.; Swain, A.D.


    This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study.

  20. A hybrid algorithm for reliability analysis combining Kriging and subset simulation importance sampling

    Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin [Northeastern University, Shenyang (China); Wang, Shuang [Jiangxi University of Science and Technology, Ganzhou (China)


    To reduce the large computational cost of calculating failure probability with a time-consuming numerical model, we propose an improved active learning reliability method called AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion for active learning is presented so that the number of iterations decreases dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active learning reliability calculation, and a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is demonstrated on two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth contact fatigue reliability analysis.

  1. Advanced response surface method for mechanical reliability analysis

    LÜ Zhen-zhou; ZHAO Jie; YUE Zhu-feng


    Based on the classical response surface method (RSM), a novel RSM using improved experimental points (EPs) is presented for reliability analysis. Two novel points are included in the presented method. One is the use of linear interpolation, from which the total EPs for determining the RS are selected to be closer to the actual failure surface; the other is the application of sequential linear interpolation to control the distance between the surrounding EPs and the center EP, by which the presented method can ensure that the RS fits the actual failure surface in the region of maximum likelihood as the center EPs converge to the actual most probable point (MPP). Since the fitting precision of the RS to the actual failure surface in the vicinity of the MPP, which has a significant contribution to the probability of the failure surface being exceeded, is increased by the presented method, the precision of the failure probability calculated by the RS is increased as well. Numerical examples illustrate the accuracy and efficiency of the presented method.

  2. Dimensionality and reliability of the self-care of heart failure index scales: further evidence from confirmatory factor analysis.

    Barbaranelli, Claudio; Lee, Christopher S; Vellone, Ercole; Riegel, Barbara


    The Self-Care of Heart Failure Index (SCHFI) is used widely, but issues with reliability have been evident. The Cronbach alpha coefficient is usually used to assess reliability, but this approach assumes a unidimensional scale. The purpose of this article is to address the dimensionality and internal consistency reliability of the SCHFI. This was a secondary analysis of data from 629 adults with heart failure enrolled in three separate studies conducted in the northeastern and northwestern United States. Following testing for scale dimensionality using confirmatory factor analysis, reliability was tested using coefficient alpha and alternative options. Confirmatory factor analysis demonstrated that: (a) the Self-Care Maintenance Scale has a multidimensional four-factor structure; (b) the Self-Care Management Scale has a two-factor structure, but the primary factors loaded on a common higher-order factor; and (c) the Self-Care Confidence Scale is unidimensional. Reliability estimates for the three scales, obtained with methods compatible with each scale's dimensionality, were adequate or high. The results of the analysis demonstrate that issues of dimensionality and reliability cannot be separated. Appropriate estimates of reliability that are consistent with the dimensionality of the scale must be used. In the case of the SCHFI, coefficient alpha should not be used to assess reliability of the self-care maintenance and the self-care management scales, due to their multidimensionality. When performing psychometric evaluations, we recommend testing dimensionality before assessing reliability, as well as using multiple indices of reliability, such as model-based internal consistency, composite reliability, and omega and maximal reliability coefficients. © 2014 Wiley Periodicals, Inc.
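
    For reference, coefficient alpha itself is a one-line computation; the caveat raised in the abstract is that it is only meaningful for a unidimensional scale. A minimal sketch on synthetic item data (all values assumed):

```python
import numpy as np

# Cronbach's alpha for a set of scale items
# (rows = respondents, columns = items).
def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(200, 1))                 # common latent factor
items = true_score + 0.8 * rng.normal(size=(200, 4))   # 4 noisy items
print(round(cronbach_alpha(items), 3))
```

    On multidimensional scales like the SCHFI maintenance scale this statistic understates reliability, which is why the authors recommend model-based indices such as composite reliability or omega instead.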

  3. Extending Failure Modes and Effects Analysis Approach for Reliability Analysis at the Software Architecture Design Level

    Sozer, Hasan; Tekinerdogan, Bedir; Aksit, Mehmet; Lemos, de Rogerio; Gacek, Cristina


    Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.


    Z.-G. Zhou


    Normally, the status of land cover is inherently dynamic, changing continuously on the temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fires, floods, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near-real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results with “Change/No change”, while few methods focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
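
    Step (3) reduces to a confidence-interval check: a new observation counts as a disturbance at confidence level CL when it falls outside the forecast interval y_hat +/- z_CL * sigma. A minimal sketch, where the forecast and sigma would in practice come from the BFAST model (all numbers here are assumed):

```python
import math

# Two-sided confidence level at which an observation leaves the
# forecast interval y_hat +/- z * sigma; e.g. z = 1.96 -> ~0.95.
def disturbance_reliability(obs: float, forecast: float, sigma: float) -> float:
    z = abs(obs - forecast) / sigma
    return math.erf(z / math.sqrt(2.0))

# assumed NDVI-like values: forecast 0.45, observed 0.21, sigma 0.08
print(round(disturbance_reliability(obs=0.21, forecast=0.45, sigma=0.08), 3))
```

    An observation three forecast standard deviations away would thus be flagged as a disturbance with roughly 99.7% confidence, which is the kind of per-pixel reliability figure the paper reports.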

  5. Spinal appearance questionnaire: factor analysis, scoring, reliability, and validity testing.

    Carreon, Leah Y; Sanders, James O; Polly, David W; Sucato, Daniel J; Parent, Stefan; Roy-Beaudry, Marjolaine; Hopkins, Jeffrey; McClung, Anna; Bratcher, Kelly R; Diamond, Beverly E


    Cross-sectional. This study presents the factor analysis of the Spinal Appearance Questionnaire (SAQ) and its psychometric properties. Although the SAQ has been administered to a large sample of patients with adolescent idiopathic scoliosis (AIS) treated surgically, its psychometric properties have not been fully evaluated. This study presents the factor analysis and scoring of the SAQ and evaluates its psychometric properties. The SAQ and the Scoliosis Research Society-22 (SRS-22) were administered to AIS patients who were being observed, braced, or scheduled for surgery. Standard demographic data and radiographic measures including Lenke type and curve magnitude were also collected. Of the 1802 patients, 83% were female, with a mean age of 14.8 years and a mean initial Cobb angle of 55.8° (range, 0°-123°). Of the 32 items of the SAQ, 15 loaded on two factors with consistent and significant correlations across all Lenke types: an Appearance factor (items 1-10) and an Expectations factor (items 12-15). Responses are summed, giving a range of 5 to 50 for the Appearance domain and 5 to 20 for the Expectations domain. Cronbach's α was 0.88 for both domains and the Total score, with a test-retest reliability of 0.81 for Appearance and 0.91 for Expectations. Correlations with major curve magnitude were higher for the SAQ Appearance and SAQ Total scores than for the SRS Appearance and SRS Total scores. The SAQ and SRS-22 scores were statistically significantly different in patients who were scheduled for surgery compared to those who were observed or braced. The SAQ is a valid measure of self-image in patients with AIS, with greater correlation to curve magnitude than the SRS Appearance and Total scores. It also discriminates between patients who require surgery and those who do not.

  6. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.


    This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
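
    The two-parameter Weibull model named above gives component survival probability in closed form; a minimal sketch with an assumed Weibull modulus m and characteristic strength sigma_0, not values from the manual:

```python
import numpy as np

# Two-parameter Weibull strength model for a uniformly stressed element:
# P(survival) = exp[-(stress / sigma_0)^m].
def weibull_reliability(stress, m, sigma_0):
    return np.exp(-(stress / sigma_0) ** m)

m, sigma_0 = 10.0, 300.0   # modulus and characteristic strength (MPa), assumed
for s in (150.0, 250.0, 300.0):
    print(f"stress {s:5.1f} MPa -> R = {weibull_reliability(s, m, sigma_0):.4f}")
```

    At the characteristic strength the survival probability is exp(-1) by definition; a high modulus m means strength scatter is small, so reliability falls off sharply near sigma_0. CARES/LIFE layers multiaxial-stress and subcritical-crack-growth corrections on top of this basic form.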

  7. Reliability Modeling and Analysis of SCI Topological Network

    Hongzhe Xu


    The problem of reliability modeling of Scalable Coherent Interface (SCI) rings and topological networks is studied. The reliability models of three SCI rings are developed, and the factors which influence the reliability of SCI rings are studied. By calculating the shortest-path matrix and the path-quantity matrix of different types of SCI network topologies, the communication characteristics of SCI networks are obtained. For node-damage and edge-damage situations, the survivability of the SCI topological network is studied.

  8. Reliability of muscle strength assessment in chronic post-stroke hemiparesis: a systematic review and meta-analysis.

    Rabelo, Michelle; Nunes, Guilherme S; da Costa Amante, Natália Menezes; de Noronha, Marcos; Fachin-Martins, Emerson


    Muscle weakness is the main cause of motor impairment among stroke survivors and is associated with reduced peak muscle torque. The aim was to systematically investigate and organize the evidence on the reliability of muscle strength evaluation measures in post-stroke survivors with chronic hemiparesis. Two assessors independently searched four electronic databases in January 2014 (Medline, Scielo, CINAHL, Embase). Inclusion criteria comprised studies on the reliability of muscle strength assessment in adult post-stroke patients with chronic hemiparesis. We extracted reliability data from the included studies, measured by the intraclass correlation coefficient (ICC) and/or similar. Meta-analyses were conducted only with isokinetic data. Of 450 articles, eight were included in this review. After quality analysis, two studies were considered of high quality. Five different joints were analyzed within the included studies (knee, hip, ankle, shoulder, and elbow), with reliability results varying from low to very high (ICCs from 0.48 to 0.99). Meta-analysis results varied from high to very high reliability for knee extension (pooled ICCs from 0.89 to 0.97) and knee flexion (pooled ICCs from 0.84 to 0.91), and showed high reliability for ankle plantar flexion (pooled ICC = 0.85). Objective muscle strength assessment can be reliably used in the lower and upper extremities of post-stroke patients with chronic hemiparesis.
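
    The ICC reported in such studies can be computed from a one-way ANOVA decomposition; a minimal sketch of ICC(1,1) on synthetic repeated strength measurements (all data assumed):

```python
import numpy as np

# One-way random-effects intraclass correlation, ICC(1,1).
# ratings: rows = subjects, columns = repeated strength measurements.
def icc_1_1(ratings: np.ndarray) -> float:
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)               # between subjects
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(2)
subject = rng.normal(50, 10, size=(30, 1))           # "true" torque per subject
ratings = subject + rng.normal(0, 3, size=(30, 2))   # two noisy trials
print(round(icc_1_1(ratings), 3))
```

    With between-subject SD 10 and trial noise SD 3, the expected ICC is about 100/109, i.e. "very high" on the scale the review uses.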

  9. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)


    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.

  10. Reliability analysis of wind turbines exposed to dynamic loads

    Sørensen, John Dalsgaard


    Wind turbines are exposed to highly dynamic loads that cause fatigue and extreme load effects which are subject to significant uncertainties. Further, reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive compared to other energy sources… Therefore the turbine components should be designed to have sufficient reliability with respect to both extreme and fatigue loads and also not be too costly (and safe). This paper presents models for uncertainty modeling and reliability assessment of especially the structural components such as tower, blades… the reliability of the structural components. Illustrative examples are presented considering uncertainty modeling and reliability assessment for structural wind turbine components exposed to extreme loads and fatigue, respectively.

  11. Operation of Reliability Analysis Center (FY85-87)


    …environmental conditions at the time of the reported failure as well as the exact nature of the failure. The diskette format (FMDR-21A) contains… based upon the reliability and maintainability standards and tasks delineated in NAC R&M-STD-ROO010 (Reliability Program Requirements Selection). These… characteristics, environmental conditions at the time of the reported failure, and the exact nature of the failure, which has been categorized as follows…

  12. Reliability


    The design variables for the design of the sla. The design ... The presence of uncertainty in the analysis and de of engineering .... however, for certain complex elements, the methods ..... Standard BS EN 1990, CEN, European Committee for.


    Zhao Jingyi; Zhuoru; Wang Yiqun


    According to the demand for high reliability of the primary cylinder of a hydraulic press, the reliability model of the primary cylinder is built after its reliability analysis. The stress of the primary cylinder is analyzed with the finite element software MARC, and the structural reliability of the cylinder is predicted based on the stress-strength model, which provides a reference for the design.
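
    The stress-strength model has a closed form when stress and strength are independent normal variables: R = Phi(beta) with beta = (mu_strength - mu_stress) / sqrt(sd_strength^2 + sd_stress^2). A minimal sketch with assumed cylinder values, not the MARC finite-element results:

```python
import math

# Stress-strength interference: R = P(strength > stress) for
# independent normal stress and strength distributions.
def interference_reliability(mu_strength: float, sd_strength: float,
                             mu_stress: float, sd_stress: float) -> float:
    beta = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))   # Phi(beta)

# all values in MPa, assumed for illustration
R = interference_reliability(mu_strength=620.0, sd_strength=40.0,
                             mu_stress=450.0, sd_stress=30.0)
print(f"R = {R:.5f}")
```

    In practice the stress statistics on the right-hand side would come from the finite-element results, and the strength statistics from material test data.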

  14. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.


    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.

  15. Uncertainty analysis with reliability techniques of fluvial hydraulic simulations

    Oubennaceur, K.; Chokmani, K.; Nastev, M.


    Flood inundation models are commonly used to simulate hydraulic and floodplain inundation processes, a prerequisite to successful floodplain management and preparation of appropriate flood risk mitigation plans. Selecting statistically significant ranges of the variables involved in the inundation modelling is crucial for the model performance. This involves various levels of uncertainty, which due to their cumulative nature can lead to considerable uncertainty in the final results. Therefore, in addition to the validation of the model results, there is a need for a clear understanding and identification of the sources of uncertainty and for measuring the model uncertainty. A reliability approach called the Point Estimate Method is presented to quantify the uncertainty effects of the input data and to calculate the propagation of uncertainty in the inundation modelling process. The Point Estimate Method is a special case of numerical quadrature based on orthogonal polynomials. It allows evaluating the low-order moments of performance functions of independent random variables, such as the water depth. The variables considered in the analyses include elevation data, flow rate, and Manning's coefficient n, each given with its own probability distribution. The approach is applied to a 45 km reach of the Richelieu River, Canada, between Rouses Point and Fryers Rapids. The finite element hydrodynamic model H2D2 was used to solve the shallow water equations (SWE) and provide maps of expected water depths and associated spatial distributions of standard deviations as a measure of uncertainty. The results indicate that for the simulated flow rates of 1113, 1206, and 1282, the uncertainties in water depths have a range of 25 cm, 30 cm, and 60 cm, respectively. This kind of information is useful for decision-making and risk management in the context of flood risk assessment.
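
    Rosenblueth's two-point estimate method, the classic form of the Point Estimate Method, evaluates the model at mu +/- sigma for each input and weights the 2^n combinations equally (for independent inputs with symmetric distributions). The toy water-depth model below stands in for an H2D2 run; all numbers are assumed:

```python
import math
from itertools import product

# Rosenblueth two-point estimate of the mean and standard deviation
# of a model output, using 2^n model evaluations at mu +/- sigma.
def two_point_estimate(model, means, sds):
    vals = []
    for signs in product((-1.0, 1.0), repeat=len(means)):
        x = [m + s * sd for m, s, sd in zip(means, signs, sds)]
        vals.append(model(*x))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, math.sqrt(var)

# toy depth model: Manning-like dependence on flow Q, roughness n, elevation z
depth = lambda Q, n_man, z: 0.3 * (Q * n_man) ** 0.6 - 0.001 * z
m, s = two_point_estimate(depth, means=[1200.0, 0.03, 30.0],
                          sds=[80.0, 0.005, 0.5])
print(f"depth mean = {m:.3f} m, std = {s:.3f} m")
```

    The resulting standard deviation plays the role of the per-node uncertainty maps described in the abstract, at a cost of only 2^n model runs instead of a full Monte Carlo campaign.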

  16. Suitability review of FMEA and reliability analysis for digital plant protection system and digital engineered safety features actuation system

    Kim, I. S.; Kim, T. K.; Kim, M. C.; Kim, B. S.; Hwang, S. W.; Ryu, K. C. [Hanyang Univ., Seoul (Korea, Republic of)


    Of the many items that should be checked during the review stage of the licensing application for the I and C system of Ulchin units 5 and 6, this report relates to a suitability review of the reliability analysis of the Digital Plant Protection System (DPPS) and Digital Engineered Safety Features Actuation System (DESFAS). In the reliability analysis performed by the system designer, ABB-CE, fault tree analysis was used as the main method, along with Failure Modes and Effects Analysis (FMEA). However, the present regulatory technique does not allow the system reliability analysis and its results to be appropriately evaluated. Hence, this study was carried out focusing on the following four items: development of general review items by which to check the validity of a reliability analysis, and the subsequent review of suitability of the reliability analysis for the Ulchin 5 and 6 DPPS and DESFAS; development of detailed review items by which to check the validity of an FMEA, and the subsequent review of suitability of the FMEA for the Ulchin 5 and 6 DPPS and DESFAS; development of detailed review items by which to check the validity of a fault tree analysis, and the subsequent review of suitability of the fault tree for the Ulchin 5 and 6 DPPS and DESFAS; and an integrated review of the safety and reliability of the Ulchin 5 and 6 DPPS and DESFAS based on the results of the various reviews above and also of a reliability comparison between the digital systems and the comparable analog systems, i.e., an analog Plant Protection System (PPS) and an analog Engineered Safety Features Actuation System (ESFAS). According to the review mentioned above, the reliability analysis of the Ulchin 5 and 6 DPPS and DESFAS generally satisfies the review requirements.
However, some shortcomings of the analysis were identified in our review, such that the assumed test periods for several pieces of equipment were not properly incorporated in the analysis, and failures of some equipment were not included in the

  17. Application of the Simulation Based Reliability Analysis on the LBB methodology

    Pečínka L.; Švrček M.


    Guidelines on how to demonstrate the existence of Leak Before Break (LBB) have been developed in many western countries. These guidelines, partly based on NUREG/CR-6765, define the steps that should be fulfilled to get a conservative assessment of LBB acceptability. As a complement, and also to help identify the key parameters that influence the resulting leakage and failure probabilities, the application of Simulation Based Reliability Analysis is under development. The methodology used will ...

  18. Analysis methods for structure reliability of piping components

    Schimpfke, T.; Grebner, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)


    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

  19. Reliability Analysis for the Fatigue Limit State of the ASTRID Offshore Platform

    Vrouwenvelder, A.C.W.M.; Gostelie, E.M.


    A reliability analysis with respect to fatigue failure was performed for a concrete gravity platform designed for the Troll field. The reliability analysis was incorporated in the practical design-loop to gain more insight into the complex fatigue problem. In the analysis several parameters relating

  20. Application of Support Vector Machine to Reliability Analysis of Engine Systems

    Zhang Xinfeng


    Reliability analysis plays a very important role in assessing the performance and making maintenance plans of engine systems. This research presents a comparative study of the predictive performance of support vector machines (SVM), least-squares support vector machines (LSSVM), and neural network time series models for forecasting failures and reliability in engine systems. Further, the reliability indexes of engine systems are computed with Weibull probability paper programmed in Matlab. The results show that the probability distribution of the forecasting outcomes is consistent with the distribution of the actual data, both following the Weibull distribution, and that the predictions by SVM and LSSVM provide accurate estimates of the characteristic life. Thus SVM and LSSVM are both suitable choices for engine system reliability analysis. Moreover, the predictive precision of the method based on LSSVM is higher than that of SVM. For small samples, prediction by LSSVM is preferable, because its computational cost is lower and its precision is more satisfactory.
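
    The Weibull probability paper step amounts to a least-squares fit of ln(-ln(1-F)) against ln(t) using median ranks: the slope is the shape parameter beta and the intercept yields the characteristic life eta. A sketch on synthetic failure times (not the engine data from the paper):

```python
import numpy as np

# Weibull probability-paper fit: linearize F(t) = 1 - exp(-(t/eta)^beta)
# as ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta) and fit by least squares.
def weibull_paper_fit(times):
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank estimates of F
    x, y = np.log(t), np.log(-np.log(1.0 - f))
    beta, c = np.polyfit(x, y, 1)                    # slope, intercept
    eta = np.exp(-c / beta)
    return beta, eta

rng = np.random.default_rng(3)
times = rng.weibull(2.0, size=200) * 1000.0          # true beta = 2, eta = 1000
beta, eta = weibull_paper_fit(times)
print(f"beta ~ {beta:.2f}, eta ~ {eta:.0f} h")
```

    The characteristic life eta recovered this way is the quantity the abstract says SVM and LSSVM forecasts reproduce accurately.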

  1. Reliability and Sensitivity Analysis of Transonic Flutter Using Improved Line Sampling Technique

    Song Shufang; Lu Zhenzhou; Zhang Weiwei; Ye Zhengyin


    The improved line sampling (LS) technique, an effective numerical simulation method, is employed to analyze the probabilistic characteristics and reliability sensitivity of flutter with random structural parameters in transonic flow. The improved LS technique is a novel methodology for reliability and sensitivity analysis of high-dimensionality, low-probability problems with implicit limit state functions, and it does not require any approximating surrogate of the implicit limit state equation. The improved LS is used to estimate the flutter reliability and sensitivity of a two-dimensional wing, in which some structural properties, such as frequency, gravity-center parameters, and mass ratio, are considered as random variables. A computational fluid dynamics (CFD) based unsteady aerodynamic reduced order model (ROM) method is used to construct the aerodynamic state equations. Coupling the structural state equations with the aerodynamic state equations, the safety margin of flutter is defined using the critical flutter velocity. The results show that the improved LS technique can effectively decrease the computational cost of the random uncertainty analysis of flutter. The reliability sensitivity, defined as the partial derivative of the failure probability with respect to the distribution parameter of a random variable, can help to identify the important parameters and guide the structural optimization design.

  2. Evaluating the safety risk of roadside features for rural two-lane roads using reliability analysis.

    Jalayer, Mohammad; Zhou, Huaguo


    The severity of roadway departure crashes mainly depends on the roadside features, including the sideslope, fixed-object density, offset from fixed objects, and shoulder width. Common engineering countermeasures to improve roadside safety include cross-section improvements, hazard removal or modification, and delineation. It is not always feasible to maintain an object-free and smooth roadside clear zone as recommended in design guidelines. Currently, clear zone width and sideslope are used to determine roadside hazard ratings (RHRs) to quantify the roadside safety of rural two-lane roadways on a seven-point pictorial scale. Since these two variables are continuous and can be treated as random, probabilistic analysis can be applied as an alternative method to address existing uncertainties. Specifically, using reliability analysis, it is possible to quantify roadside safety levels by treating the clear zone width and sideslope as two continuous, rather than discrete, variables. The objective of this manuscript is to present a new approach for defining the reliability index for measuring roadside safety on rural two-lane roads. To evaluate the proposed approach, we gathered five years (2009-2013) of Illinois run-off-road (ROR) crash data and identified the roadside features (i.e., clear zone widths and sideslopes) of 4500 300-ft roadway segments. Based on the obtained results, we confirm that reliability indices can serve as indicators to gauge safety levels, such that the greater the reliability index value, the lower the ROR crash rate.

  3. Assessing validity and reliability of Resting Metabolic Rate in six gas analysis systems

    Cooper, Jamie A.; Watras, Abigail C.; O’Brien, Matthew J.; Luke, Amy; Dobratz, Jennifer R.; Earthman, Carrie P.; Schoeller, Dale A.


    The Deltatrac Metabolic Monitor (DTC), one of the most popular indirect calorimetry systems for measuring resting metabolic rate (RMR) in human subjects, is no longer being manufactured. This study compared five different gas analysis systems to the DTC. Resting metabolic rate was measured by the DTC and at least one other instrument at three study sites for a total of 38 participants. The five indirect calorimetry systems included the MedGraphics CPX Ultima, MedGem, Vmax Encore 29 System, TrueOne 2400, and Korr ReeVue. Validity was assessed using paired t-tests to compare means, while reliability was assessed using both paired t-tests and root mean square calculations with F-tests for significance. Within-subject comparisons for validity of RMR revealed a significant difference between the DTC and the Ultima. Bland-Altman plot analysis showed significant bias with increasing RMR values for the Korr and MedGem. Respiratory exchange ratio (RER) analysis showed a significant difference between the DTC and the Ultima and a trend for a difference with the Vmax (p = 0.09). Reliability assessment for RMR revealed that all instruments had a significantly larger coefficient of variation (CV) for RMR (ranging from 4.8% to 10.9%) than the 3.0% CV of the DTC. Reliability assessment for RER data showed that none of the instrument CVs were significantly larger than the DTC CV. The results were quite disappointing, with none of the instruments equaling the within-person reliability of the DTC. The TrueOne and Vmax were the most valid instruments in comparison with the DTC for both RMR and RER assessment. Further testing is needed to identify an instrument with the reliability and validity of the DTC. PMID:19103333

  4. Space Shuttle Rudder Speed Brake Actuator-A Case Study Probabilistic Fatigue Life and Reliability Analysis

    Oswald, Fred B.; Savage, Michael; Zaretsky, Erwin V.


    The U.S. Space Shuttle fleet was originally intended to have a life of 100 flights for each vehicle, lasting over a 10-year period, with minimal scheduled maintenance or inspection. The first space shuttle flight was that of the Space Shuttle Columbia (OV-102), launched April 12, 1981. The disaster that destroyed Columbia occurred on its 28th flight, February 1, 2003, nearly 22 years after its first launch. In order to minimize risk of losing another Space Shuttle, a probabilistic life and reliability analysis was conducted for the Space Shuttle rudder/speed brake actuators to determine the number of flights the actuators could sustain. A life and reliability assessment of the actuator gears was performed in two stages: a contact stress fatigue model and a gear tooth bending fatigue model. For the contact stress analysis, the Lundberg-Palmgren bearing life theory was expanded to include gear-surface pitting for the actuator as a system. The mission spectrum of the Space Shuttle rudder/speed brake actuator was combined into equivalent effective hinge moment loads including an actuator input preload for the contact stress fatigue and tooth bending fatigue models. Gear system reliabilities are reported for both models and their combination. Reliability of the actuator bearings was analyzed separately, based on data provided by the actuator manufacturer. As a result of the analysis, the reliability of one half of a single actuator was calculated to be 98.6 percent for 12 flights. Accordingly, each actuator was subsequently limited to 12 flights before removal from service in the Space Shuttle.

  5. Methods for communication-network reliability analysis - Probabilistic graph reduction

    Shooman, Andrew M.; Kershenbaum, Aaron

    The authors have designed and implemented a graph-reduction algorithm for computing the k-terminal reliability of an arbitrary network with possibly unreliable nodes. The two contributions of the present work are a version of the delta-y transformation for k-terminal reliability and an extension of Satyanarayana and Wood's polygon-to-chain transformations to handle graphs with imperfect vertices. The exact algorithm is faster than or equal to both that of Satyanarayana and Wood and the simple algorithm without delta-y and polygon-to-chain transformations for every problem considered. The exact algorithm runs in linear time on series-parallel graphs and is faster than the above-stated algorithms for large problems, which otherwise run in exponential time. The approximate algorithms reduce the computation time for the network reliability problem by two to three orders of magnitude for large problems, while providing reasonably accurate answers in most cases.
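
    The series-parallel special case that the exact algorithm handles in linear time reduces a network by two rules: edges in series must all work, while edges in parallel need at least one to work. A minimal sketch with assumed edge reliabilities:

```python
# Series/parallel reduction rules for two-terminal network reliability.
def series(*p):
    # all edges in a series chain must work
    out = 1.0
    for r in p:
        out *= r
    return out

def parallel(*p):
    # at least one of the parallel edges must work
    out = 1.0
    for r in p:
        out *= (1.0 - r)
    return 1.0 - out

# source -> ((a in series with b) in parallel with c) -> sink
r = parallel(series(0.9, 0.95), 0.8)
print(round(r, 4))  # 0.971
```

    Repeatedly applying these two rules collapses any series-parallel graph to a single equivalent edge; the delta-y and polygon-to-chain transformations in the paper extend the same reduction idea to graphs that are not series-parallel.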

  6. Reliability Analysis of Random Vibration Transmission Path Systems

    Wei Zhao


    Full Text Available A vibration transmission path system generally comprises the vibration source, the vibration transfer path, and the vibration receiving structure. The transfer path is the medium of the vibration transmission, and its randomness strongly influences the transfer reliability. In this paper, based on matrix calculus, the generalized second-moment technique, and stochastic finite element theory, an effective approach to the transfer reliability of vibration transfer path systems is provided. The transfer reliability of a vibration transfer path system with uncertain path parameters, including path mass and path stiffness, is analyzed theoretically and computed numerically, and the corresponding mathematical expressions are derived. This provides a theoretical foundation for the dynamic design of vibration systems in practical projects, so that random path parameters can be accounted for when solving random problems for vibration transfer path systems, helping to avoid system resonance failure.

  7. Assessing the Reliability of Digitalized Cephalometric Analysis in Comparison with Manual Cephalometric Analysis

    Farooq, Mohammed Umar; Khan, Mohd. Asadullah; Imran, Shahid; Qureshi, Arshad; Ahmed, Syed Afroz; Kumar, Sujan; Rahman, Mohd. Aziz Ur


    Introduction For more than seven decades, orthodontists have used cephalometric analysis as one of their main diagnostic tools; it can be performed manually or by software. The use of computers in treatment planning is expected to avoid errors and to make the process less time consuming, with effective evaluation and high reproducibility. Aim This study was done to evaluate and compare the accuracy and reliability of cephalometric measurements between a computerized method using direct digital radiographs and conventional tracing. Materials and Methods Digital and conventional hand-traced cephalometric analyses of 50 patients were done. Thirty anatomical landmarks were defined on each radiograph by a single investigator, and 5 skeletal analyses (Steiner, Wits, Tweed, McNamara, Rakosi/Jarabak) and 28 variables were calculated. Results The variables showed consistency between the two methods except for the 1-NA, Y-axis, and interincisal angle measurements, which were higher in manual tracing, and the facial axis angle, which was higher in digital tracing. Conclusion Most of the commonly used measurements were accurate, apart from some differences between digital tracing with FACAD® and the manual method. The advantages of digital imaging, such as enhancement, transmission, archiving, and low radiation dosage, make it preferable to the conventional method in daily use. PMID:27891451

  8. Efficient Approximate Method of Global Reliability Analysis for Offshore Platforms in the Ice Zone


    Ice load is the dominant load in the design of offshore platforms in the ice zone, and the extreme ice load is the key factor affecting platform safety. The present paper studies the statistical properties of the global resistance and the extreme responses of jacket platforms in Bohai Bay, considering the randomness of ice load, dead load, steel elastic modulus, yield strength, and structural member dimensions. Based on these results, an efficient approximate method for the global reliability analysis of offshore platforms is proposed, which converts the implicit nonlinear performance function of conventional reliability analysis into an explicit linear one. Finally, numerical examples of the JZ20-2 MSW, JZ20-2NW, and JZ20-2 MUQ offshore jacket platforms in Bohai Bay demonstrate the efficiency, accuracy, and applicability of the proposed method.

  9. Technical information report: Plasma melter operation, reliability, and maintenance analysis

    Hendrickson, D.W. [ed.]


    This document provides a technical report on the operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A process description is provided for a design that minimizes maintenance and downtime; it includes material and energy balances, equipment sizes and arrangement, startup/operation/maintenance/shutdown cycle descriptions, and the basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided, including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided, including a list of failure modes, responses, and consequences.

  10. Reliability modeling and analysis of smart power systems

    Karki, Rajesh; Verma, Ajit Kumar


    The volume presents research work on understanding, modeling, and quantifying the risks associated with different ways of implementing smart grid technology in power systems, in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes, creating new challenges for system planning and operation in providing reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important driver in mitigating these problems and requires considerable research activity…

  11. Embedded mechatronic systems 1 analysis of failures, predictive reliability

    El Hami, Abdelkhalak


    In operation, embedded mechatronic systems are stressed by loads from different sources: climatic (temperature, humidity), vibrational, electrical, and electromagnetic. The failure mechanisms these stresses induce in components should be identified and modeled for better control. AUDACE is a collaborative project of the Mov'eo cluster that addresses issues specific to the reliability of embedded mechatronic systems, analyzing the causes of failure of the components of onboard mechatronic systems. The goal of the project is to optimize the design of mechatronic devices for reliability. The project…


    Bing Xue,


    Full Text Available Laminated Veneer Lumber (LVL) panels made from poplar (Populus ussuriensis Kom.) and birch (Betula platyphylla Suk.) veneers were tested for mechanical properties. The effects of the assembly pattern on the modulus of elasticity (MOE) and modulus of rupture (MOR) of the LVL under vertical load testing were investigated. Three analytical methods were used: composite material mechanics, computer simulation, and static testing. The reliability of the different LVL assembly patterns was assessed using the Monte Carlo method. The results showed that the theoretical and ANSYS analysis results for the LVL MOE and MOR were very close to the static test results, with the largest proportional error no greater than 5%. For the same number of veneers, the strength and reliability of LVL made with birch veneers on the top and bottom were much greater than those of LVL made with poplar veneers. Good assembly patterns can improve the utility value of wood.

  13. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    Bucknor, Matthew D.; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin


    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Centering on an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive reactor cavity cooling system following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. While this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability for the reactor cavity cooling system (and the reactor system in general) for the postulated transient event.

  14. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    Matthew Bucknor


    Full Text Available Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  15. Advanced reactor passive system reliability demonstration analysis for an external event

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin [Argonne National Laboratory, Argonne (United States)


    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  16. The extent of food waste generation across EU-27: different calculation methods and the reliability of their results.

    Bräutigam, Klaus-Rainer; Jörissen, Juliane; Priefer, Carmen


    The reduction of food waste is seen as an important societal issue with considerable ethical, ecological and economic implications. The European Commission aims at cutting down food waste to one-half by 2020. However, implementing effective prevention measures requires knowledge of the reasons and the scale of food waste generation along the food supply chain. The available data basis for Europe is very heterogeneous and doubts about its reliability are legitimate. This mini-review gives an overview of available data on food waste generation in EU-27 and discusses their reliability against the results of own model calculations. These calculations are based on a methodology developed on behalf of the Food and Agriculture Organization of the United Nations and provide data on food waste generation for each of the EU-27 member states, broken down to the individual stages of the food chain and differentiated by product groups. The analysis shows that the results differ significantly, depending on the data sources chosen and the assumptions made. Further research is much needed in order to improve the data stock, which builds the basis for the monitoring and management of food waste.

  17. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming


    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated, a common transformation being the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has previously been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports a negative binomial ICC estimate that includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared with the LMM-based ICC, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison targeting a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
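    As background to the comparison above, the classical one-way ICC (the transformed-data baseline the paper improves on) can be computed from ANOVA mean squares. A minimal sketch with illustrative, hypothetical pen-level values (not the study's data):

```python
import numpy as np

def icc_oneway(groups):
    """ICC(1): between-unit share of variance, from one-way ANOVA mean squares.

    groups: list of equal-length replicate arrays, one per sampled unit (e.g. pen).
    """
    k = len(groups)                      # number of units
    n = len(groups[0])                   # replicates per unit
    data = np.array(groups, dtype=float)
    grand = data.mean()
    ms_between = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

# Illustrative log-transformed culture counts, two samples per pen
pens = [[2.1, 2.3], [3.0, 2.8], [1.2, 1.4], [4.1, 4.4]]
print(round(icc_oneway(pens), 3))
```

The negative binomial ICC replaces this normal-theory variance decomposition with variance components from a generalized linear mixed model, avoiding the transformation step altogether.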

  18. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    Iskandar, Ismed; Satria Gondokaryono, Yudi


    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed, whereas in many real situations the failures may arise from many causes, depending on the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analysis is more beneficial than classical analysis in such cases: it allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that changing the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given accurate prior information, the Bayesian estimation methods perform better than maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations, and also show the robustness of the Bayesian analysis within the range…
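    The competing-risks setup with independent Weibull causes can be sketched as follows. This shows only the maximum-likelihood side (a Bayesian treatment would additionally place priors on the shape and scale); the sample size, seed, and Weibull parameters are illustrative, not those of the paper's simulation study. The key point is that, with independent causes, each cause-specific fit treats failures from the other cause as right-censored observations.

```python
import numpy as np

rng = np.random.default_rng(7)

def weibull_mle_censored(t, failed):
    """Weibull (shape beta, scale eta) MLE with right censoring, solving the
    profile-likelihood equation for beta by bisection (g is increasing)."""
    t = np.asarray(t, float)
    d = np.asarray(failed, bool)
    r = d.sum()
    logs = np.log(t)
    def g(beta):
        tb = t ** beta
        return (tb * logs).sum() / tb.sum() - 1.0 / beta - logs[d].mean()
    lo, hi = 0.05, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = ((t ** beta).sum() / r) ** (1.0 / beta)
    return beta, eta

# Competing risks: two independent Weibull causes; only the earlier failure is seen
n = 4000
t1 = rng.weibull(2.0, n) * 100.0     # cause 1: beta = 2.0, eta = 100
t2 = rng.weibull(1.2, n) * 150.0     # cause 2: beta = 1.2, eta = 150
t = np.minimum(t1, t2)
cause1 = t1 <= t2
# cause-specific fit: failures from cause 2 act as right-censored times
b1, e1 = weibull_mle_censored(t, cause1)
print(round(b1, 2), round(e1, 1))
```

With a large sample, the cause-1 estimates should land near the generating values (beta close to 2, eta close to 100), illustrating that the censoring trick recovers the marginal cause-specific model.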

  19. Architecture-Based Reliability Analysis of Web Services

    Rahmani, Cobra Mariam


    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  20. Windfarm generation assessment for reliability analysis of power systems

    Negra, N.B.; Holmstrøm, O.; Bak-Jensen, B.;


    Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays...

  1. Reliability analysis of common hazardous waste treatment processes

    Waters, R.D. [Vanderbilt Univ., Nashville, TN (United States)


    Five hazardous waste treatment processes are analyzed probabilistically using Monte Carlo simulation to elucidate the relationships between process safety factors and reliability levels. The treatment processes evaluated are packed tower aeration, reverse osmosis, activated sludge, upflow anaerobic sludge blanket, and activated carbon adsorption.
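    The safety-factor/reliability relationship being probed can be sketched with a generic Monte Carlo model. The lognormal effluent model and all numbers below are hypothetical illustrations, not the paper's process-specific models for aeration, osmosis, sludge, or adsorption; the sketch only shows how simulated performance variability converts a design safety factor into a reliability level (the fraction of time the effluent meets its limit).

```python
import numpy as np

rng = np.random.default_rng(42)

def process_reliability(design_removal, cv, limit_fraction, n=100_000):
    """Monte Carlo reliability of a treatment process.

    Achieved effluent (fraction of influent remaining) is modeled as lognormal
    about its design value with coefficient of variation cv; the process
    'succeeds' when effluent stays at or below the permitted fraction.
    """
    design_effluent = 1.0 - design_removal
    sigma = np.sqrt(np.log(1.0 + cv ** 2))
    mu = np.log(design_effluent) - 0.5 * sigma ** 2   # mean equals design value
    effluent = rng.lognormal(mu, sigma, n)
    return (effluent <= limit_fraction).mean()

# Safety factor = permitted effluent / design effluent; larger factor, higher reliability
for sf in (1.0, 1.5, 2.0):
    r = process_reliability(design_removal=0.99, cv=0.5, limit_fraction=sf * 0.01)
    print(sf, round(r, 3))
```

A safety factor of 1.0 (designing exactly to the limit) leaves reliability near the probability of being at or below the mean, which is why designs carry explicit safety margins.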

  2. Fiber Access Networks: Reliability Analysis and Swedish Broadband Market

    Wosinska, Lena; Chen, Jiajia; Larsen, Claus Popp

    Fiber access network architectures such as active optical networks (AONs) and passive optical networks (PONs) have been developed to support the growing bandwidth demand. Whereas Swedish operators in particular prefer AON, this may not be the case for operators in other countries; the choice depends on a combination of technical requirements, practical constraints, business models, and cost. Given the increasing importance of reliable access to network services, connection availability is becoming one of the most crucial issues for access networks, and it should be reflected in the network owner's architecture decision. In many cases protection against failures is realized by adding backup resources, but there is a trade-off between the cost of protection and the level of service reliability, since improving reliability performance by duplicating network resources (and capital expenditures, CAPEX) may be too expensive. In this paper we present the evolution of fiber access networks and compare reliability performance in relation to investment and management cost for some representative cases. We consider both standard and novel architectures for deployment in both sparsely and densely populated areas. While some recent works have focused on PON protection schemes with reduced CAPEX, current and future effort should be put into minimizing the operational expenditures (OPEX) during the access network lifetime.

  3. Statistical Analysis of Human Reliability of Armored Equipment

    LIU Wei-ping; CAO Wei-guo; REN Jing


    Human errors occurring during field tests of seven types of armored equipment are statistically analyzed. The ratio of human errors to armored equipment failures is obtained, the causes of human errors are analyzed, and the distribution law of human errors is acquired. The human error ratio and the human reliability index are also calculated.

  4. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Bruce Weaver


    Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well-known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
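    For readers outside SPSS, the EM estimation step the macros rely on can be sketched directly. The following is a minimal NumPy implementation of EM for a multivariate-normal sample with values missing at random, producing the EM covariance and correlation matrices that would serve as input to factor or reliability analysis; it is an illustrative sketch of the algorithm, not the SPSS MVA procedure.

```python
import numpy as np

def em_mvn(X, n_iter=100, tol=1e-8):
    """EM estimates of the mean vector and covariance matrix for a
    multivariate-normal sample with NaN entries (missing at random)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)                  # initialize from observed values
    sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        Xhat = X.copy()
        C = np.zeros((p, p))                    # accumulated conditional covariances
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            Soo = sigma[np.ix_(o, o)]
            Smo = sigma[np.ix_(m, o)]
            Smm = sigma[np.ix_(m, m)]
            B = Smo @ np.linalg.inv(Soo)        # regression of missing on observed
            Xhat[i, m] = mu[m] + B @ (X[i, o] - mu[o])   # E-step: conditional mean
            C[np.ix_(m, m)] += Smm - B @ Smo.T  # conditional covariance correction
        mu_new = Xhat.mean(axis=0)              # M-step
        D = Xhat - mu_new
        sigma_new = (D.T @ D + C) / n
        if np.max(np.abs(sigma_new - sigma)) < tol:
            mu, sigma = mu_new, sigma_new
            break
        mu, sigma = mu_new, sigma_new
    sd = np.sqrt(np.diag(sigma))
    corr = sigma / np.outer(sd, sd)             # EM correlation matrix
    return mu, sigma, corr
```

Unlike mean substitution, the conditional-covariance term `C` keeps the covariance estimate from being biased toward zero by the imputed values.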

  5. Mapping Green Spaces in Bishkek—How Reliable can Spatial Analysis Be?

    Peter Hofmann


    Full Text Available Within urban areas, green spaces play a critically important role in the quality of life. They have a remarkable impact on the local microclimate and the regional climate of the city. Quantifying the ‘greenness’ of urban areas allows comparing urban areas at several levels, as well as monitoring the evolution of green spaces in urban areas, thus serving as a tool for urban and developmental planning. Different categories of vegetation have different impacts on recreation potential and microclimate, as well as on the individual perception of green spaces. However, when quantifying the ‘greenness’ of urban areas, the reliability of the underlying information is important in order to qualify analysis results. The reliability of geo-information derived from remote sensing data is usually assessed by ground truth validation or by comparison with other reference data. When applying methods of object based image analysis (OBIA) and fuzzy classification, the degrees of fuzzy membership per object in general describe to what degree an object fits the (prototypical) class descriptions. Thus, analyzing the fuzzy membership degrees can contribute to the estimation of reliability and stability of classification results, even when no reference data are available. This paper presents an object based method using fuzzy class assignments to outline and classify three different classes of vegetation from GeoEye imagery. The classification result, its reliability and stability are evaluated using the reference-free parameters Best Classification Result and Classification Stability as introduced by Benz et al. in 2004 and implemented in the software package eCognition. To demonstrate the application potential of the results, a scenario for quantifying urban ‘greenness’ is presented.
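    The two reference-free parameters can be sketched directly from an object-by-class membership matrix: Best Classification Result is the highest membership degree of an object, and Classification Stability is its gap to the second-best class. The membership values below are illustrative, not derived from the GeoEye imagery.

```python
import numpy as np

def best_and_stability(memberships):
    """Per-object Best Classification Result and Classification Stability
    from fuzzy membership degrees (rows: objects, columns: classes)."""
    m = np.sort(np.asarray(memberships, float), axis=1)[:, ::-1]  # sort descending
    best = m[:, 0]                 # highest membership degree
    stability = m[:, 0] - m[:, 1]  # gap between best and second-best class
    return best, stability

# Illustrative membership degrees for three vegetation classes
mem = [[0.90, 0.05, 0.05],   # clear assignment
       [0.50, 0.45, 0.05],   # ambiguous between two classes
       [0.40, 0.35, 0.25]]   # weak and ambiguous
best, stab = best_and_stability(mem)
print(best.round(2), stab.round(2))
```

A high best value with a low stability value flags objects that are classified confidently in absolute terms but could easily flip to another class, which is exactly the reliability signal usable without reference data.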

  6. Tackling reliability and construct validity: the systematic development of a qualitative protocol for skill and incident analysis.

    Savage, Trevor Nicholas; McIntosh, Andrew Stuart


    It is important to understand the factors contributing to and directly causing sports injuries in order to improve the effectiveness and safety of sports skills. The characteristics of injury events must be evaluated and described meaningfully and reliably. However, many complex skills cannot be effectively investigated quantitatively because of ethical, technological and validity considerations. Increasingly, qualitative methods are being used to investigate human movement for research purposes, but there are concerns about the reliability and measurement bias of such methods. Using the tackle in Rugby union as an example, we outline a systematic approach for developing a skill analysis protocol with a focus on improving objectivity, validity and reliability. Characteristics for analysis were selected using qualitative analysis, biomechanical theoretical models, and the epidemiological and coaching literature. An expert panel comprising subject matter experts provided feedback, and the inter-rater reliability of the protocol was assessed using ten trained raters. The inter-rater reliability results were reviewed by the expert panel, and the protocol was revised and assessed in a second inter-rater reliability study. Mean agreement in the second study improved and was comparable (52-90% agreement; ICC between 0.6 and 0.9) with other studies that have reported the inter-rater reliability of qualitative analysis of human movement.

  7. Geothermal industry employment: Survey results & analysis


    The Geothermal Energy Association (GEA) is often asked about the socioeconomic and employment impact of the industry. Since the available literature dealing with employment in the geothermal sector appeared relatively outdated, unduly focused on certain activities of the industry (e.g. operation and maintenance of geothermal power plants), or poorly reliable, GEA, in consultation with the DOE, decided to conduct a new employment survey to provide better answers to these questions. The main objective of this survey is to assess and characterize the current workforce involved in geothermal activities in the US. Several initiatives have therefore been undertaken to reach as many organizations involved in geothermal activities as possible and assess their current workforce. The first section of this document describes the methodology used to contact the companies involved in the geothermal sector. The second section presents the survey results and analyzes them. This analysis includes two major parts: the first analyzes the survey responses, presents the employment numbers that were captured, and describes the major characteristics of the industry that were identified; the second estimates the number of workers in companies that are active in the geothermal business but did not respond to the survey or could not be reached. Preliminary conclusions and the study's limits and restrictions are then presented. The third section addresses the potential employment impact of manufacturing and constructing new geothermal power facilities. Indirect and induced economic impacts associated with such investment are also investigated.

  8. Reliability Analysis of Ice-Induced Fatigue and Damage in Offshore Engineering Structures


    In the Bohai Gulf, offshore platforms and other installations have collapsed under sea ice because of fatigue and fracture of the main supporting components in ice environments. This paper presents some results on the fatigue reliability of these structures in the Gulf, obtained by investigating the distributions of ice parameters such as floating direction and speed, sheet thickness, compressive strength, ice forces on the structures, and hot spot stress in the structure. Low temperature, ice breaking modes and component fatigue failure modes are also taken into account in the analysis of the fatigue reliability of offshore structures experiencing both random ice loading and low temperatures. The results can be applied to the design and operation of offshore platforms in the Bohai Gulf.

  9. Probability maps as a measure of reliability for intervisibility analysis

    Joksić Dušan


    Full Text Available Digital terrain models (DTMs) represent segments of spatial databases related to the presentation of terrain features and landforms. Square-grid elevation models (DEMs) have emerged as the most widely used structure during the past decade because of their simplicity and simple computer implementation. They have become an important segment of Topographic Information Systems (TIS), storing natural and artificial landscapes in the form of digital models. This kind of data structure is especially suitable for morphometric terrain evaluation and analysis, which is very important in environmental and urban planning and Earth surface modeling applications. One of the most frequently used functionalities of Geographical Information System software packages is intervisibility, or viewshed, analysis of terrain. Intervisibility determination from analog topographic maps can be very exhausting because of the large number of profiles that have to be extracted and compared. Terrain representation in the form of DEM databases facilitates this task. This paper describes a simple algorithm for terrain viewshed analysis using DEM database structures, taking into consideration the influence of the uncertainties of such data on the results obtained. The concept of probability maps is introduced as a means of evaluating results, and is presented as a thematic display.
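    The core of any DEM viewshed computation is a line-of-sight test: sample the terrain along the sight line and check whether it ever rises above the line's elevation. The following is a minimal sketch (nearest-neighbour sampling, no Earth-curvature or refraction corrections, synthetic DEM), not the paper's algorithm or its probability-map extension.

```python
import numpy as np

def line_of_sight(dem, p0, p1, h0=1.7):
    """True if target cell p1 is visible from observer cell p0.

    dem: 2-D elevation array; h0: observer height above the terrain.
    Compares sampled terrain elevation with the sight line at each step.
    """
    (r0, c0), (r1, c1) = p0, p1
    z0 = dem[r0, c0] + h0
    z1 = dem[r1, c1]
    steps = int(max(abs(r1 - r0), abs(c1 - c0)))
    for k in range(1, steps):
        f = k / steps
        r, c = r0 + f * (r1 - r0), c0 + f * (c1 - c0)
        terrain = dem[int(round(r)), int(round(c))]   # nearest-neighbour sample
        sight = z0 + f * (z1 - z0)                    # sight-line elevation here
        if terrain > sight:
            return False
    return True

def viewshed(dem, p0, h0=1.7):
    """Boolean visibility grid from observer cell p0."""
    vis = np.zeros(dem.shape, dtype=bool)
    for r in range(dem.shape[0]):
        for c in range(dem.shape[1]):
            vis[r, c] = line_of_sight(dem, p0, (r, c), h0)
    return vis

# Synthetic DEM: a north-south ridge hides the far side from an observer on the left
dem = np.zeros((5, 9))
dem[:, 4] = 50.0
vis = viewshed(dem, (2, 0))
print(vis[2, 2], vis[2, 8])
```

A probability map in the paper's sense would rerun this test with elevations perturbed according to the DEM's error model and record, per cell, the fraction of runs in which it remained visible.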

  10. Reliability and Security Analysis on Two-Cell Dynamic Redundant System

    Hongsheng Su


    Full Text Available This paper analyzes the reliability and security of three types of two-cell dynamic redundant systems, which are widely applied in modern railway signal systems, and establishes an isomorphic Markov model for them. Several important factors were considered in the modeling, including common-cause failure, the coverage of diagnostic systems, online maintainability, and periodic inspection maintenance, as well as many failure modes, which makes the established model more credible. Through analysis and calculation of the reliability and security indexes of the three types of two-module dynamic redundant structures, the paper reaches a significant conclusion: the safety and reliability of this kind of structure have an upper limit and cannot be improved without bound through hardware and software comparison methods when the failure and repair rates are fixed. Finally, the paper performs simulation investigations, compares the calculated results for the three redundant systems, analyzes the advantages and disadvantages of each, and gives the scope of application of each, providing theoretical and technical support for the selection of railway signal equipment.
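    The Markov modeling step can be sketched with a deliberately simplified three-state continuous-time chain for a 1-out-of-2 redundant pair with a common-cause failure fraction. The rates are illustrative and the model omits diagnostic coverage and periodic inspection, so this is a toy version of the paper's model, showing only how steady-state availability falls out of the generator matrix.

```python
import numpy as np

lam, mu, beta = 1e-4, 0.1, 0.05   # per-hour failure rate, repair rate, common-cause fraction

# CTMC states: 0 = both units up, 1 = one up, 2 = both down (1-out-of-2 system)
Q = np.array([
    [-(2 * (1 - beta) * lam + beta * lam),  2 * (1 - beta) * lam,  beta * lam],
    [ mu,                                  -(mu + lam),            lam       ],
    [ 0.0,                                  mu,                   -mu        ],
])

# steady state: pi @ Q = 0 subject to sum(pi) = 1 (solved as least squares)
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0] + pi[1]      # system is up in states 0 and 1
print(availability)
```

Note how the common-cause term `beta * lam` feeds state 2 directly from state 0: however much redundancy is added, this path remains, which is the mechanism behind the upper limit on achievable safety and reliability that the paper identifies.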

  11. A new approach for interexaminer reliability data analysis on dental caries calibration

    Andréa Videira Assaf


    Full Text Available Objectives: (a) to evaluate the interexaminer reliability in caries detection considering different diagnostic thresholds and (b) to indicate, using Kappa statistics, the best way of measuring interexaminer agreement during the calibration process in dental caries surveys. Methods: Eleven dentists participated in the initial training, which was divided into theoretical discussions and practical activities, and in calibration exercises performed at baseline and at 3 and 6 months after the initial training. For the examinations of 6-7-year-old schoolchildren, the World Health Organization (WHO) recommendations were followed and different diagnostic thresholds were used: WHO (decayed/missing/filled teeth, DMFT index) and WHO + IL (initial lesion) diagnostic thresholds. The interexaminer reliability was calculated by Kappa statistics, according to the WHO and WHO+IL thresholds, considering: (a) the entire dentition; (b) upper/lower jaws; (c) sextants; (d) each tooth individually. Results: Interexaminer reliability was high for both diagnostic thresholds; nevertheless, it decreased in all calibration sessions when considering teeth individually. Conclusion: Interexaminer reliability could be maintained over the period of 6 months under both caries diagnosis thresholds. However, great disagreement was observed for posterior teeth, especially using the WHO+IL criteria. Analysis considering dental elements individually was the best way of detecting interexaminer disagreement during the calibration sessions.
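    The Kappa statistic used throughout the calibration can be sketched for the two-examiner case: observed agreement corrected for the agreement expected by chance from each examiner's marginal distribution. The tooth-level calls below are hypothetical illustrations, not the study's data.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two examiners' categorical scores."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[c] / n * c2[c] / n for c in set(c1) | set(c2))  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical tooth-level caries calls (S = sound, D = decayed, F = filled)
ex1 = ["S", "S", "D", "D", "F", "S", "D", "S", "S", "F"]
ex2 = ["S", "S", "D", "S", "F", "S", "D", "S", "D", "F"]
print(round(cohens_kappa(ex1, ex2), 3))
```

Computing kappa per tooth rather than over the whole dentition shrinks each comparison to few observations with skewed marginals, which is one reason tooth-level agreement looks worse than whole-mouth agreement.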

  12. Reliability analysis of the objective structured clinical examination using generalizability theory

    Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián


    Background The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. Methods An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. Results The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Conclusions Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements. PMID:27543188
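    The generalizability coefficient reported above can be illustrated with the simplest persons-by-stations (p x s) design formula. The variance components below are hypothetical placeholders, not the study's estimates:

```python
# Generalizability coefficient for a p x s design:
# G = var_persons / (var_persons + var_residual / n_stations)
var_persons = 0.42      # hypothetical true-score variance between students
var_residual = 0.55     # hypothetical station-by-student interaction + error
n_stations = 18         # matching the 18-station OSCE

g_coef = var_persons / (var_persons + var_residual / n_stations)
```

    Because the residual is divided by the number of stations, the same formula shows how decision makers can trade off the number of stations against the target reliability.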

  13. Reliability analysis of the objective structured clinical examination using generalizability theory

    Juan Andrés Trejo-Mejía


    Full Text Available Background: The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. Methods: An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. Results: The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Conclusions: Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.

  14. Reliability Analysis of Aircraft Condition Monitoring Network Using an Enhanced BDD Algorithm

    ZHAO Changxiao; CHEN Yao; WANG Hailiang; XIONG Huagang


    The aircraft condition monitoring network is responsible for collecting the status of each component in an aircraft. The reliability of this network has a significant effect on the safety of the aircraft. The network operates in a real-time manner: all data must be transmitted within a deadline so that the control center can make proper decisions in time. Connectedness between the source node and the destination alone cannot guarantee that the data are transmitted in time. In this paper, we take the time deadline into account and build a task-based reliability model. The binary decision diagram (BDD), which has the merit of efficiency in computation and storage space, is introduced to calculate the reliability of the network and to identify the essential variables. A case is analyzed using the algorithm proposed in this paper. The experimental results show that our method is efficient and suitable for the reliability analysis of real-time networks.
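    The quantity a BDD evaluates compactly can be sketched by brute-force state enumeration on a toy three-node network (topology and link reliabilities are hypothetical; a real BDD avoids the exponential enumeration that this sketch performs):

```python
from itertools import product

# Two-terminal reliability of a small network a-b-c by exhaustive
# enumeration of link states. Links: a-b, b-c, and a direct link a-c.
links = {"ab": 0.99, "bc": 0.95, "ac": 0.90}
paths = [("ab", "bc"), ("ac",)]     # minimal paths from source a to sink c

def connected(state):
    # The sink is reachable if every link of some minimal path is up.
    return any(all(state[e] for e in p) for p in paths)

reliability = 0.0
for bits in product([True, False], repeat=len(links)):
    state = dict(zip(links, bits))
    prob = 1.0
    for e, up in state.items():
        prob *= links[e] if up else 1 - links[e]
    if connected(state):
        reliability += prob
```

    For this network the exact value is 0.90 + 0.10 * 0.99 * 0.95 = 0.99405; deadline-aware models of the paper's kind would additionally discard states whose surviving paths are too slow.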

  15. Automated migration analysis based on cell texture: method & reliability

    Chittenden Thomas W


    Full Text Available Abstract Background In this paper, we present and validate a way to measure automatically the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison with manual placement of the leading edge shows complete equivalence of automated vs. manual leading-edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.

  16. Preventive Replacement Decisions for Dragline Components Using Reliability Analysis

    Nuray Demirel


    Full Text Available Reliability-based maintenance policies allow qualitative and quantitative evaluation of system downtimes by revealing the main causes of breakdowns and identifying the preventive activities required against failures. Application of preventive maintenance is especially important for mining machinery, since production is highly affected by machinery breakdowns. Overburden stripping operations are an integral part of surface coal mine production. Draglines are extensively utilized in overburden stripping and perform earthmoving with bucket capacities of up to 168 m³. The massive structure and operational severity of these machines increase the importance of performance awareness for individual working components. Research on draglines is rare in the literature, and maintenance studies for these earthmovers have generally been ignored. On this basis, this paper offers a comprehensive reliability assessment for two draglines currently operating in the Tunçbilek coal mine and discusses preventive replacement of the draglines' wear-out components considering cost factors.

  17. Reliability Analysis and Standardization of Spacecraft Command Generation Processes

    Meshkat, Leila; Grenander, Sven; Evensen, Ken


    • In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes. • The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions. • Applicable human reliability metrics for performing these atomic-level tasks are available. • The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated. • The PRA models are executed using data from human reliability data banks. • The Periodic Table is related to the PRA models via Fault Links.

  18. Analysis on Operation Reliability of Generating Units in 2005

    Zuo Xiaowen; Chu Xue


    The weighted average equivalent availability factor of thermal power units in 2005 was 92.34%, an increase of 0.64 percentage points as compared to that in 2004. The average equivalent availability factor in 2005 was 92.22%, a decrease of 0.95 percentage points as compared to that in 2004. The nationwide operation reliability of generating units in 2005 was analyzed completely in this paper.

  19. Reliability importance analysis of Markovian systems at steady state using perturbation analysis

    Phuc Do Van [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France); Barros, Anne [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)], E-mail:; Berenguer, Christophe [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)


    Sensitivity analysis has been primarily defined for static systems, i.e. systems described by combinatorial reliability models (fault or event trees). Several structural and probabilistic measures have been proposed to assess component importance. For dynamic systems including inter-component and functional dependencies (cold spare, shared load, shared resources, etc.), and described by Markov models or, more generally, by discrete event dynamic systems models, the problem of sensitivity analysis remains widely open. In this paper, the perturbation method is used to estimate an importance factor, called the multi-directional sensitivity measure, in the framework of Markovian systems. Some numerical examples are introduced to show that this method offers a promising tool for steady-state sensitivity analysis of Markov processes in reliability studies.
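    The idea of perturbing a Markov model's parameters and observing the change in a steady-state measure can be sketched with a finite-difference stand-in for the paper's multi-directional sensitivity measure. The system, rates, and step size below are hypothetical:

```python
# Finite-difference sensitivity d(availability)/d(mu) for a two-unit
# repairable parallel system (states: both up, one up, both down).
def availability(lam, mu):
    r = lam / mu
    z = 1.0 + 2*r + 2*r**2          # normalizing constant of the chain
    return (1.0 + 2*r) / z          # P(at least one unit up)

lam, mu, h = 1e-3, 1e-1, 1e-6
# Central difference: perturb the repair rate in both directions.
sens = (availability(lam, mu + h) - availability(lam, mu - h)) / (2 * h)
```

    The sensitivity is positive, as expected: a faster repair rate raises steady-state availability. The perturbation approach in the paper generalizes this to directional derivatives of the generator matrix rather than one parameter at a time.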

  20. A New 3-Dimensional Dynamic Quantitative Analysis System of Facial Motion: An Establishment and Reliability Test

    Feng, Guodong; Zhao, Yang; Tian, Xu; Gao, Zhiqiang


    This study aimed to establish a 3-dimensional dynamic quantitative facial motion analysis system and to determine its accuracy and test-retest reliability. The system automatically reconstructs the motion of observational points. Standardized T-shaped and L-shaped rods were used to evaluate the static and dynamic accuracy of the system, and nineteen healthy volunteers were recruited to test its reliability. The average static distance error was 0.19 mm, and the average angular error was 0.29°. Measurement accuracy decreased as the distance between the cameras and the object increased; a distance of 80 cm was considered optimal. It took only 58 seconds to perform the full facial measurement process. The average intra-class correlation coefficients for distance and angular measurements were 0.973 and 0.794, respectively. The results demonstrate that we successfully established a practical 3-dimensional dynamic quantitative analysis system that is accurate and reliable enough to meet both clinical and research needs. PMID:25390881

  1. Stress and Reliability Analysis of a Metal-Ceramic Dental Crown

    Anusavice, Kenneth J; Sokolowski, Todd M.; Hojjatie, Barry; Nemeth, Noel N.


    Interaction of mechanical and thermal stresses with the flaws and microcracks within the ceramic region of metal-ceramic dental crowns can result in catastrophic or delayed failure of these restorations. The objective of this study was to determine the combined influence of induced functional stresses and pre-existing flaws and microcracks on the time-dependent probability of failure of a metal-ceramic molar crown. A three-dimensional finite element model of a porcelain-fused-to-metal (PFM) molar crown was developed using the ANSYS finite element program. The crown consisted of a body porcelain, an opaque porcelain, and a metal substrate. Three load cases were applied: a 300 N load perpendicular to one cusp, a 300 N load at 30 degrees from the perpendicular directed toward the center, and a 600 N vertical load. Ceramic specimens were subjected to a biaxial flexure test and the load-to-failure of each specimen was measured. The results of the finite element stress analysis and the flexure tests were incorporated in the NASA-developed CARES/LIFE program to determine the Weibull and fatigue parameters and the time-dependent fracture reliability of the PFM crown. CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program.

  2. Structural characterization of genomes by large scale sequence-structure threading: application of reliability analysis in structural genomics

    Brunham Robert C


    Full Text Available Abstract Background We establish that the occurrence of protein folds among genomes can be accurately described with a Weibull function. Systems which exhibit Weibull character can be interpreted with reliability theory commonly used in engineering analysis. For instance, Weibull distributions are widely used in reliability, maintainability and safety work to model time-to-failure of mechanical devices, mechanisms, building constructions and equipment. Results We have found that the Weibull function describes protein fold distribution within and among genomes more accurately than conventional power functions which have been used in a number of structural genomic studies reported to date. It has also been found that the Weibull reliability parameter β for protein fold distributions varies between genomes and may reflect differences in rates of gene duplication in evolutionary history of organisms. Conclusions The results of this work demonstrate that reliability analysis can provide useful insights and testable predictions in the fields of comparative and structural genomics.
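    A Weibull fit of the kind the study relies on can be sketched with the classical log-log linearization, ln(-ln(1-F)) = beta*ln(x) - beta*ln(eta). The occurrence data below are hypothetical, not the paper's genomic counts:

```python
import math

# Least-squares Weibull fit via median-rank regression.
# F(x) = 1 - exp(-(x/eta)**beta), linearized in (ln x, ln(-ln(1-F))).
data = sorted([12, 18, 25, 31, 40, 55, 70, 92, 120, 160])
n = len(data)

xs, ys = [], []
for i, x in enumerate(data, start=1):
    f = (i - 0.3) / (n + 0.4)              # median-rank plotting position
    xs.append(math.log(x))
    ys.append(math.log(-math.log(1 - f)))

mx = sum(xs) / n
my = sum(ys) / n
beta = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
       sum((a - mx) ** 2 for a in xs)      # slope = shape parameter
eta = math.exp(mx - my / beta)             # intercept gives the scale
```

    The shape parameter beta is the "reliability parameter" the abstract refers to; in the paper its variation between genomes is interpreted in terms of gene-duplication rates.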

  3. Analysis of System Reliability in Manufacturing Cell Based on Triangular Fuzzy Number

    ZHANG Caibo; HAN Botang; SUN Changsen; XU Chunjie


    Test data and field data are lacking in reliability research during the design stage of a manufacturing cell system, which increases the difficulty of studying its reliability. In order to deal with deficient data and the uncertainty arising from analysis and judgment, this paper discusses a method for studying the reliability of a manufacturing cell system through fuzzy fault tree analysis based on triangular fuzzy numbers. Finally, a calculation case indicates that the method has great significance for ascertaining reliability indexes and for establishing maintenance and upkeep strategies for manufacturing cell systems.
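    The triangular-fuzzy fault-tree combination can be sketched with the common component-wise approximation for gate operations, applied to the (l, m, u) vertices of each fuzzy probability. The events and values below are hypothetical:

```python
# Triangular fuzzy probabilities (lower, modal, upper) combined through
# fault-tree gates, using the component-wise approximation.
def t_and(p, q):   # AND gate: both basic events occur
    return tuple(a * b for a, b in zip(p, q))

def t_or(p, q):    # OR gate: at least one occurs
    return tuple(1 - (1 - a) * (1 - b) for a, b in zip(p, q))

motor  = (0.01, 0.02, 0.04)      # hypothetical basic-event probabilities
sensor = (0.02, 0.03, 0.05)
power  = (0.001, 0.002, 0.004)

# Top event: (motor fails AND sensor fails) OR power fails
top = t_or(t_and(motor, sensor), power)
```

    The result is itself a triangular fuzzy number whose spread reflects the input uncertainty; defuzzification (e.g. taking the modal value) then yields a crisp reliability index.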

  4. Reliability Analysis of Repairable Systems Using Stochastic Point Processes

    TAN Fu-rong; JIANG Zhi-bin; BAI Tong-shuo


    In order to analyze failure data from repairable systems, the homogeneous Poisson process (HPP) is usually used. In general, the HPP cannot be applied to analyze the entire life cycle of a complex, repairable system because the rate of occurrence of failures (ROCOF) of the system changes over time rather than remaining stable. However, from a practical point of view, it is always preferable to apply the simplest method to address problems and to obtain useful practical results. Therefore, we attempted to use the HPP model to analyze the failure data from real repairable systems. A graphical method and the Laplace test were also used in the analysis. Results of numerical applications show that the HPP model may be a useful tool for the entire life cycle of repairable systems.
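    The Laplace test mentioned above checks whether failure times of a repairable system are consistent with an HPP (no trend). A minimal sketch for a time-terminated observation window, on hypothetical failure times:

```python
import math

# Laplace trend test, time-terminated at T.
# U ~ N(0,1) under the HPP hypothesis; U >> 0 suggests deterioration
# (failures clustering late), U << 0 suggests reliability growth.
times = [52, 110, 201, 295, 370, 440, 490, 530, 560, 585]  # hypothetical
T = 600.0
n = len(times)

u = (sum(times) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))
```

    Here U is about 1.16, below the usual 1.96 two-sided critical value at the 5% level, so the HPP would not be rejected for these data; this is exactly the kind of check the paper uses to justify the simple model.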

  5. Interpretation of correlation analysis results

    Kılıç, Selim


    Correlation analysis is used to quantify the degree of linear association between two variables. The correlation coefficient is denoted by "r" and may take values between (-)1 and (+)1. The sign (-) or (+) in front of the r coefficient shows the direction of the correlation; the direction of association does not affect its strength. An r coefficient equal to or greater than 0.70 is accepted as indicating a good association. The correlation coefficient only describes the strength of associat...
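    The Pearson correlation coefficient described above can be computed directly from its definition, covariance divided by the product of standard deviations. The data below are illustrative:

```python
import math

# Pearson correlation coefficient between two variables.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]     # nearly linear in x

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
```

    For this almost perfectly linear data r comes out just below +1, i.e. a strong positive association by the 0.70 rule of thumb cited in the abstract.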

  6. Acquisition and statistical analysis of reliability data for I and C parts in plant protection system

    Lim, T. J.; Byun, S. S.; Han, S. H.; Lee, H. J.; Lim, J. S.; Oh, S. J.; Park, K. Y.; Song, H. S. [Soongsil Univ., Seoul (Korea)


    This project has been performed in order to construct I and C part reliability databases for detailed analysis of the plant protection system and to develop a methodology for analysing trip set point drifts. A reliability database for the I and C parts of the plant protection system is required to perform the detailed analysis. First, we developed an electronic part reliability prediction code based on MIL-HDBK-217F. Then we collected generic reliability data for the I and C parts in the plant protection system. A statistical analysis procedure was developed to process the data, and the generic reliability database was constructed. We also collected plant-specific reliability data for the I and C parts in the plant protection systems of the YGN 3,4 and UCN 3,4 units. The plant-specific reliability database for I and C parts was developed by the Bayesian procedure. We also developed a statistical analysis procedure for set point drift and performed an analysis of drift effects on the trip set point. The reliability database for the PPS I and C parts provides the basis for the detailed analysis. The safety of the KSNP and succeeding NPPs can be demonstrated by reducing the uncertainty of the PSA, and economic and efficient operation of NPPs is possible by optimizing the test period to reduce the utility's burden. 14 refs., 215 figs., 137 tabs. (Author)

  7. Quasi-Monte Carlo Simulation-Based SFEM for Slope Reliability Analysis

    Yu Yuzhen; Xie Liquan; Zhang Bingyin


    Considering the stochastic spatial variation of geotechnical parameters over the slope, a Stochastic Finite Element Method (SFEM) is established based on the combination of the Shear Strength Reduction (SSR) concept and quasi-Monte Carlo simulation. The shear strength reduction FEM is superior to the slice method based on limit equilibrium theory in many ways, so it is more powerful for assessing the reliability of global slope stability when combined with probability theory. To illustrate the performance of the proposed method, it is applied to a simple slope example. The simulation results show that the proposed method is effective for performing reliability analysis of global slope stability without presupposing a potential slip surface.
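    The quasi-Monte Carlo part of the method can be sketched with a low-discrepancy Halton sequence estimating a failure probability P[g(X) < 0]. The limit state below is a toy analytic stand-in for the paper's SSR finite element model, with hypothetical parameter distributions:

```python
from statistics import NormalDist

# Quasi-Monte Carlo failure-probability estimate using 2-D Halton points
# mapped to normal variables through the inverse CDF.
def halton(i, base):
    f, h = 1.0, 0.0
    while i > 0:
        f /= base
        h += f * (i % base)
        i //= base
    return h

nd = NormalDist()

def g(c, phi):
    # Hypothetical limit state: "resistance minus demand"; failure when < 0.
    return c + phi - 1.0

n_pts, fails = 4096, 0
for i in range(1, n_pts + 1):
    u1, u2 = halton(i, 2), halton(i, 3)       # coprime bases per dimension
    c   = 0.6 + 0.1 * nd.inv_cdf(u1)          # cohesion-like term, N(0.6, 0.1)
    phi = 0.5 + 0.1 * nd.inv_cdf(u2)          # friction-like term, N(0.5, 0.1)
    if g(c, phi) < 0:
        fails += 1

pf = fails / n_pts
```

    For this linear limit state the exact failure probability is Phi(-0.1/sqrt(0.02)), about 0.24, which the low-discrepancy estimate approaches faster than plain pseudo-random sampling; in the paper each g-evaluation is a full SSR finite element run.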

  8. Non-probabilistic fuzzy reliability analysis of pile foundation stability by interval theory


    Randomness and fuzziness are among the attributes of the factors influencing the stability assessment of pile foundations. According to these two characteristics, the triangular fuzzy number analysis approach was introduced to determine the probability distribution function of mechanical parameters. The performance function for reliability analysis was then constructed based on a study of the bearing mechanism of pile foundations, and a way to calculate interval values of the performance function was developed using an improved interval-truncation approach and the operation rules of interval numbers. Afterwards, the non-probabilistic fuzzy reliability analysis method was applied to assessing the pile foundation, yielding a method for non-probabilistic fuzzy reliability analysis of pile foundation stability by interval theory. Finally, the probability distribution curve of the non-probabilistic fuzzy reliability indexes of a practical pile foundation was obtained. Its failure possibility is 0.91%, which shows that the pile foundation is stable and reliable.

  9. Structural Reliability Analysis for Implicit Performance with Legendre Orthogonal Neural Network Method

    Lirong Sha; Tongyu Wang


    In order to evaluate the failure probability of a complicated structure, the structural responses usually need to be estimated by numerical analysis methods such as the finite element method (FEM). The response surface method (RSM) can be used to reduce the computational effort required for reliability analysis when the performance functions are implicit. However, the conventional RSM is time-consuming or cumbersome if the number of random variables is large. This paper proposes a Legendre orthogonal neural network (LONN)-based RSM to estimate structural reliability. In this method, the relationship between the random variables and structural responses is established by a LONN model. The LONN model is then connected to a reliability analysis method, i.e., the first-order reliability method (FORM), to calculate the failure probability of the structure. Numerical examples show that the proposed approach is applicable to structural reliability analysis, including structures with implicit performance functions.
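    The FORM step the paper connects its surrogate to can be sketched with the Hasofer-Lind (HL-RF) iteration on a simple explicit limit state g = R - S with independent normals, in place of the paper's LONN surrogate. For this linear case the reliability index has the closed form (muR - muS)/sqrt(sR^2 + sS^2), which the iteration should reproduce:

```python
import math

# HL-RF iteration for the reliability index beta of g = R - S,
# R ~ N(muR, sR), S ~ N(muS, sS), worked in standard normal space u.
muR, sR = 200.0, 20.0       # hypothetical resistance
muS, sS = 150.0, 10.0       # hypothetical load

def g(u):                   # limit state expressed in u-space
    return (muR + sR * u[0]) - (muS + sS * u[1])

def grad(u):                # gradient of g wrt u (constant: g is linear)
    return [sR, -sS]

u = [0.0, 0.0]
for _ in range(20):         # fixed-point update toward the design point
    gr = grad(u)
    norm2 = sum(c * c for c in gr)
    lam = (sum(c * x for c, x in zip(gr, u)) - g(u)) / norm2
    u = [lam * c for c in gr]

beta = math.sqrt(sum(x * x for x in u))                  # distance to origin
beta_exact = (muR - muS) / math.sqrt(sR**2 + sS**2)      # closed form
```

    The failure probability then follows as Phi(-beta). In the paper's method, g and its gradient come from the trained LONN instead of a closed-form expression.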

  10. Interrater reliability of schizoaffective disorder compared with schizophrenia, bipolar disorder, and unipolar depression - A systematic review and meta-analysis.

    Santelmann, Hanno; Franklin, Jeremy; Bußhoff, Jana; Baethge, Christopher


    Schizoaffective disorder is a common diagnosis in clinical practice, but its nosological status has been subject to debate ever since it was conceptualized. Although it is key that diagnostic reliability is sufficient, schizoaffective disorder has been reported to have low interrater reliability. Evidence based on systematic review and meta-analysis methods, however, is lacking. Using a highly sensitive literature search in Medline, Embase, and PsycInfo we identified studies measuring the interrater reliability of schizoaffective disorder in comparison to schizophrenia, bipolar disorder, and unipolar disorder. Out of 4126 records screened we included 25 studies reporting on 7912 patients diagnosed by different raters. The interrater reliability of schizoaffective disorder was moderate (meta-analytic estimate of Cohen's kappa 0.57 [95% CI: 0.41-0.73]), and substantially lower than that of its main differential diagnoses (difference in kappa between 0.22 and 0.19). Although there was considerable heterogeneity, analyses revealed that the interrater reliability of schizoaffective disorder was consistently lower in the overwhelming majority of studies. The results remained robust in subgroup and sensitivity analyses (e.g., diagnostic manual used) as well as in meta-regressions (e.g., publication year) and analyses of publication bias. Clinically, the results highlight the particular importance of diagnostic re-evaluation in patients diagnosed with schizoaffective disorder. They also quantify a widely held clinical impression of lower interrater reliability and agree with an earlier meta-analysis reporting low test-retest reliability.

  11. The European COPHES/DEMOCOPHES project: towards transnational comparability and reliability of human biomonitoring results.

    Schindler, Birgit Karin; Esteban, Marta; Koch, Holger Martin; Castano, Argelia; Koslitz, Stephan; Cañas, Ana; Casteleyn, Ludwine; Kolossa-Gehring, Marike; Schwedler, Gerda; Schoeters, Greet; Hond, Elly Den; Sepai, Ovnair; Exley, Karen; Bloemen, Louis; Horvat, Milena; Knudsen, Lisbeth E; Joas, Anke; Joas, Reinhard; Biot, Pierre; Aerts, Dominique; Lopez, Ana; Huetos, Olga; Katsonouri, Andromachi; Maurer-Chronakis, Katja; Kasparova, Lucie; Vrbík, Karel; Rudnai, Peter; Naray, Miklos; Guignard, Cedric; Fischer, Marc E; Ligocka, Danuta; Janasik, Beata; Reis, M Fátima; Namorado, Sónia; Pop, Cristian; Dumitrascu, Irina; Halzlova, Katarina; Fabianova, Eleonora; Mazej, Darja; Tratnik, Janja Snoj; Berglund, Marika; Jönsson, Bo; Lehmann, Andrea; Crettaz, Pierre; Frederiksen, Hanne; Nielsen, Flemming; McGrath, Helena; Nesbitt, Ian; De Cremer, Koen; Vanermen, Guido; Koppen, Gudrun; Wilhelm, Michael; Becker, Kerstin; Angerer, Jürgen


    between 18.9 and 45.3% for the phthalate metabolites. Plausibility control of the HBM results of all participating countries disclosed analytical shortcomings in the determination of Cd when using certain ICP/MS methods. The results were corrected by reanalyses. The COPHES/DEMOCOPHES project for the first time succeeded in performing a harmonized pan-European HBM project. All data raised can be regarded as highly reliable according to the international state of the art, since highly renowned laboratories functioned as reference laboratories. The procedure described here, which has proven successful, can be used as a blueprint for future transnational, multicentre HBM projects.

  12. Probabilistic Structural Analysis and Reliability Using NESSUS With Implemented Material Strength Degradation Model

    Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)


    This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI) along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor, and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis encompassing the effects of high temperature and high cycle fatigue yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness. The reliability did not change significantly for a change in

  13. Reliability Analysis of Timber Structures through NDT Data Upgrading

    Sousa, Hélder; Sørensen, John Dalsgaard; Kirkegaard, Poul Henning

    The first part of this document presents, in chapter 2, a description of timber characteristics and commonly used NDT and MDT for timber elements. Stochastic models for timber properties and damage accumulation models are also described. According to timber's properties, a framework is proposed ... for reliability calculation. In chapter 4, updating methods are conceptualized and defined. Special attention is drawn to Bayesian methods and their implementation. A topic on updating based on inspection of deterioration is also provided, together with state-of-the-art definitions and proposed measurement indices ...

  14. A disjoint algorithm for seismic reliability analysis of lifeline networks


    The algorithm is based on constructing a disjoint set of the minimal paths in a network system. In this paper, cubic notation was used to describe the logic function of a network in a well-balanced state, and the sharp-product operation was then used to construct the disjoint minimal path set of the network. A computer program has been developed, and when combined with decomposition technology, the reliability of a general lifeline network can be effectively and automatically calculated.
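    The quantity such a minimal-path algorithm computes, the probability that at least one minimal path survives, can be sketched by inclusion-exclusion over the path sets of a classic bridge network (component reliabilities are hypothetical). The paper's sharp-product/disjoint-products construction arrives at the same number with far fewer terms on large networks:

```python
from itertools import combinations

# Two-terminal reliability of the five-component bridge network from its
# minimal path sets, by inclusion-exclusion over path-union events.
rel = {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.85, "e": 0.95}
paths = [{"a", "d"}, {"b", "e"}, {"a", "c", "e"}, {"b", "c", "d"}]

reliability = 0.0
for k in range(1, len(paths) + 1):
    for subset in combinations(paths, k):
        comps = set().union(*subset)       # components that must all be up
        prob = 1.0
        for comp in comps:
            prob *= rel[comp]
        reliability += (-1) ** (k + 1) * prob
```

    Factoring on the bridge component c gives the same value (about 0.9639), which serves as an independent check of the path-based calculation.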

  15. Reliability and maintenance analysis of the CERN PS booster

    Staff, P S B


    The PS Booster Synchrotron being a complex accelerator with four superposed rings and substantial additional equipment for beam splitting and recombination, doubts were expressed at the time of project authorization as to its likely operational reliability. For 1975 and 1976, the average down time was 3.2% (at least one ring off) or 1.5% (all four rings off). The items analysed are: operational record, design features, maintenance, spare parts policy, operating temperature, effects of thunderstorms, fault diagnostics, role of operations staff and action by experts. (15 refs).

  16. The Reliability of Results from National Tests, Public Examinations, and Vocational Qualifications in England

    He, Qingping; Opposs, Dennis


    National tests, public examinations, and vocational qualifications in England are used for a variety of purposes, including the certification of individual learners in different subject areas and the accountability of individual professionals and institutions. However, there has been ongoing debate about the reliability and validity of their…

  17. Automated Energy Distribution and Reliability System: Validation Integration - Results of Future Architecture Implementation

    Buche, D. L.


    This report describes Northern Indiana Public Service Co.'s efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is the second in a series of reports detailing this effort.

  18. Using wavefront coding technique as an optical encryption system: reliability analysis and vulnerabilities assessment

    Konnik, Mikhail V.


    The wavefront coding paradigm can be used not only for compensation of aberrations and depth-of-field improvement but also for optical encryption. When a diffractive optical element (DOE) with a known point spread function (PSF) is placed in the optical path, an optical convolution of the image with the PSF occurs, and an optically encoded image is registered instead of the true image. Decoding of the registered image can be performed using standard digital deconvolution methods. In this class of optical-digital systems, the PSF of the DOE is used as an encryption key; therefore, the reliability and cryptographic resistance of such an encryption method depend on the size and complexity of the PSF used for optical encoding. This paper gives a preliminary analysis of the reliability and possible vulnerabilities of such an encryption method. Experimental results of a brute-force attack on the optically encrypted images are presented, the reliability of optical coding based on the wavefront coding paradigm is estimated, and an analysis of possible vulnerabilities is provided.

  19. Practical applications of age-dependent reliability models and analysis of operational data

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L


    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operating experience, and -) accelerated aging tests. In order to introduce the time-dependent aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating-experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it was demonstrated that a combination of operating-experience analysis with the results of accelerated aging tests on naturally aged equipment can provide a good basis for continued operation of instrumentation and control systems.

  20. Using a Hybrid Cost-FMEA Analysis for Wind Turbine Reliability Analysis

    Nacef Tazi


    Full Text Available Failure mode and effects analysis (FMEA has been proven to be an effective methodology to improve system design reliability. However, the standard approach reveals some weaknesses when applied to wind turbine systems. The conventional criticality assessment method has been criticized as having many limitations, such as the subjective weighting of severity and detection factors. In this paper, we aim to overcome these drawbacks and develop a hybrid cost-FMEA by integrating cost factors into the criticality assessment; these costs range from replacement costs to expected failure costs. Then, a quantitative comparative study is carried out to point out the average failure rate, the main causes of failure, expected failure costs and failure detection techniques. A dedicated reliability analysis of the gearbox and rotor blades is presented.
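
    The core idea of cost-based criticality can be sketched in a few lines: rank subsystems by failure rate times expected failure cost instead of the subjective severity/detection ratings of a classical RPN. The subsystem names, rates and cost figures below are illustrative assumptions, not values from the paper.

```python
# Hypothetical per-subsystem data: failure rate (failures/turbine/year) and
# cost terms (replacement + expected downtime loss, arbitrary cost units).
subsystems = {
    "gearbox":      {"rate": 0.10, "replacement": 230.0, "downtime": 120.0},
    "rotor_blades": {"rate": 0.17, "replacement": 150.0, "downtime": 90.0},
    "generator":    {"rate": 0.12, "replacement": 60.0,  "downtime": 40.0},
}

def cost_criticality(entry):
    # Cost-FMEA criticality: failure rate x total expected failure cost.
    return entry["rate"] * (entry["replacement"] + entry["downtime"])

ranked = sorted(subsystems, key=lambda s: cost_criticality(subsystems[s]),
                reverse=True)
```

    With these sample numbers the rotor blades outrank the gearbox despite their lower unit cost, because their failure rate is higher; this is exactly the kind of trade-off the cost weighting makes visible.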

  1. Statistics and Analysis on Reliability of HVDC Transmission Systems of SGCC



    The reliability level of HVDC power transmission systems has become an important factor affecting the entire power grid. The author analyzes the reliability of the HVDC power transmission systems owned by SGCC since 2003 with respect to forced outage times, forced energy unavailability, scheduled energy unavailability and energy utilization efficiency. The results show that the reliability level of the HVDC power transmission systems owned by SGCC is improving. By analyzing the different reliability indices of HVDC power transmission systems, the maximum asset benefit of the power grid can be achieved through building a scientific and reasonable reliability evaluation system.

  2. RELAP5/MOD3.3 Best Estimate Analyses for Human Reliability Analysis

    Andrej Prošek


    Full Text Available In conventional probabilistic safety assessment (PSA), a conservative approach was used to estimate the success criteria time windows of operator actions. The current PSA standard recommends the use of best-estimate codes. The purpose of the study was to estimate the operator action success criteria time windows in scenarios in which the human actions supplement safety system actuations, as needed for an updated human reliability analysis (HRA). The RELAP5/MOD3.3 best-estimate thermal-hydraulic computer code and a qualified RELAP5 input model representing a Westinghouse-type two-loop pressurized water reactor were used for the calculations. The results of the deterministic safety analysis were examined to determine the latest time at which the operator action can be performed while still satisfying the safety criteria. The results showed that an uncertainty analysis of the realistic calculation is in general not needed for human reliability analysis when additional time is available and/or the event is not a significant contributor to the risk.

  3. Fuzzy Reliability Analysis for Seabed Oil-Gas Pipeline Networks Under Earthquakes

    刘震; 潘斌


    The seabed oil-gas pipeline network is simplified to a network with stochastic edge weights by means of fuzzy graph theory. With the help of network analysis, fuzzy mathematics, and stochastic theory, the problem of reliability analysis for the seabed oil-gas pipeline network under earthquakes is transformed into the calculation of the transitive closure of the fuzzy matrix of the stochastic fuzzy network. In classical network reliability analysis, the nodes are supposed to be non-failing; in this paper, this premise is modified by introducing a disposal method which takes possible node failure into account. A good result is obtained by use of Monte Carlo simulation analysis.
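
    The transitive closure of a fuzzy relation matrix is computed by iterating max-min composition until a fixed point; the closure entry (i, j) then gives the fuzzy connectedness between nodes i and j. A minimal sketch, with edge membership values for a small 4-node network chosen purely for illustration:

```python
import numpy as np

def maxmin_compose(A, B):
    # Max-min composition of two fuzzy relation matrices:
    # (A o B)[i, j] = max_k min(A[i, k], B[k, j]).
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def transitive_closure(R, max_iter=100):
    # Iterate R <- max(R, R o R) until stable; the fixed point is the
    # max-min transitive closure of the fuzzy relation.
    for _ in range(max_iter):
        R2 = np.maximum(R, maxmin_compose(R, R))
        if np.array_equal(R2, R):
            return R
        R = R2
    return R

# Illustrative edge memberships of a 4-node pipeline network (1.0 on diagonal).
R = np.array([
    [1.0, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.6, 0.0],
    [0.0, 0.6, 1.0, 0.9],
    [0.0, 0.0, 0.9, 1.0],
])
C = transitive_closure(R)
```

    For example, nodes 0 and 3 are only connected through the path 0-1-2-3, so their closure value is the weakest link on that path, min(0.8, 0.6, 0.9) = 0.6.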

  4. A New Method for System Reliability Analysis of Tailings Dam Stability

    Liu, X.; Tang, H.; Xiong, C.; Ni, W.


    For the purpose of stability evaluation, a tailings dam can be considered as an artificial slope made of special soil materials which mainly come from mine tailings. As a particular engineering project, a tailings dam generally experiences multiple cycles of hydraulic sedimentation as well as long-term consolidation during construction. The characteristics of sedimentation and consolidation result in a unique distribution of the soil layers with significant uncertainties, which arise from both natural development and various human activities, and thus make the discreteness and variability of the physical-mechanical properties dramatically greater than those of natural geo-materials. Therefore, the location of the critical slip surface (CSS) of the dam usually presents a notable drift, which means that the reliability evaluation of a tailings dam is indeed a system reliability problem. Unfortunately, previous research on the reliability of tailings dams was mainly confined to the limit equilibrium method (LEM), which has three obvious drawbacks. First, it focused only on the variability along the slip surface rather than the whole space of the dam. Second, a fixed CSS, instead of a variable one, was considered in most cases. Third, the shape of the CSS was usually simplified to a circular arc. The present paper constructs a new reliability analysis model combining several advanced techniques: the finite difference method (FDM), Monte Carlo simulation (MCS), support vector machines (SVM) and particle swarm optimization (PSO). The new framework consists of four modules. The first one is the limit equilibrium finite difference module, which employs the FLAC3D code to generate stress fields and then uses the PSO algorithm to search for the location of the CSS and the corresponding minimum factor of safety (FOS). The key value of this module is that each realization of the stress field leads to a particular CSS and its FOS.
In other words, the consideration of the drift of

  5. Reliability and error analysis on xenon/CT CBF

    Zhang, Z. [Diversified Diagnostic Products, Inc., Houston, TX (United States)


    This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors, such as CT noise, motion artifacts, lower percentage of xenon supply, lower tissue enhancement, etc. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. The motion artifact is treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four levels of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments, and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies are fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of the individual error sources. Mixed error testing is also provided to inspect the combined effect of errors. The experiment shows that CT noise is still a major error source. The motion artifact affects the CBF results more geometrically than quantitatively. Lower xenon supply has a lesser effect on the results, but will reduce the signal/noise ratio. Lower xenon enhancement will lower the flow values in all areas of the brain. (author)

  6. Methodology for reliability allocation based on fault tree analysis and dualistic contrast

    TONG Lili; CAO Xuewu


    Reliability allocation is a difficult multi-objective optimization problem. This paper presents a methodology for reliability allocation that can be applied to determine the reliability characteristics of reactor systems or subsystems. The dualistic contrast, known as one of the most powerful tools for optimization problems, is applied to the reliability allocation model of a typical system in this article, and fault tree analysis, deemed to be one of the effective methods of reliability analysis, is also adopted. Thus a failure rate allocation model based on fault tree analysis and dualistic contrast is achieved. An application to the emergency diesel generator in a nuclear power plant is given to illustrate the proposed method.
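
    As a much simpler stand-in for the paper's dualistic-contrast optimization, the basic allocation idea can be sketched for a series system, where component failure rates sum to the system target and are apportioned by weight. The subsystem names, weights and target rate are illustrative assumptions.

```python
def allocate(lambda_sys, weights):
    # Series-system failure-rate allocation: component rates sum to the
    # system target, apportioned in proportion to each component's weight
    # (e.g. a relative complexity or importance factor).
    total = sum(weights.values())
    return {name: lambda_sys * w / total for name, w in weights.items()}

# Hypothetical weights for three subsystems and a target system failure rate.
weights = {"fuel_subsystem": 3.0, "cooling_subsystem": 2.0, "control_subsystem": 1.0}
alloc = allocate(1.2e-4, weights)
```

    The dualistic-contrast and fault-tree machinery in the paper refines exactly this step: it decides how the weights (and hence the allocated rates) should be chosen.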

  7. Copula-Based Slope Reliability Analysis Using the Failure Domain Defined by the g-Line

    Xiaoliang Xu


    Full Text Available The estimation of the cross-correlation of shear strength parameters (i.e., cohesion and internal friction angle) and the subsequent determination of the probability of failure have long been challenges in slope reliability analysis. Here, a copula-based approach is proposed to calculate the probability of failure by integrating the copula-based joint probability density function (PDF) over the slope failure domain delimited by the g-line. Copulas are used to construct the joint PDF of the shear strength parameters with specific marginal distributions and correlation structure. In the paper a failure (limit state) function approach is applied to investigate a system characterized by a homogeneous slope. The results show that the values obtained using the failure function approach are similar to those calculated by conventional methods, such as the first-order reliability method (FORM) and Monte Carlo simulation (MC). In addition, an entropy weight (EW) copula is proposed to address the discrepancies among the results calculated by different copulas and to avoid over- or underestimating the slope reliability.
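
    A minimal Monte Carlo sketch of the idea, using a Gaussian copula (one of several copula families the paper compares) to couple lognormal marginals for cohesion and friction, and counting samples falling in the failure domain g < 0. The marginal parameters, correlation, stresses and limit state are all illustrative assumptions, not the paper's case-study values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Gaussian copula with negative cross-correlation between cohesion c and
# tan(phi), a commonly reported feature of shear strength data (assumed here).
rho = -0.5
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = L @ rng.standard_normal((2, n))

c = np.exp(np.log(10.0) + 0.3 * z[0])         # cohesion [kPa], lognormal marginal
tan_phi = np.exp(np.log(0.45) + 0.15 * z[1])  # tan(phi), lognormal marginal

# Limit state (the "g-line" separates safe from failed states):
# g = resisting shear strength minus driving shear stress on the slip surface.
sigma_n, tau = 60.0, 30.0                     # normal / driving stress [kPa], assumed
g = c + sigma_n * tan_phi - tau
p_f = np.mean(g < 0.0)
```

    Swapping the Cholesky step for a different copula (e.g. Clayton or Frank) changes the tail dependence and hence p_f, which is the discrepancy the entropy-weight copula is meant to reconcile.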

  8. Reliability of sprinkler systems. Exploration and analysis of data from nuclear and non-nuclear installations

    Roenty, V.; Keski-Rahkonen, O.; Hassinen, J.P. [VTT Building and Transport, Espoo (Finland)


    Sprinkler systems are an important part of the fire safety of nuclear installations. As part of an effort to make the fire-PSA of our utilities more quantitative, a worldwide literature survey of available reliability data on sprinkler systems from open sources was carried out. Since the result of the survey was rather poor quantitatively, it was decided to mine the available original Finnish nuclear and non-nuclear data, since nuclear power plants represent a rather small device population. Sprinklers are becoming a key element of fire safety in modern, open non-nuclear buildings. Therefore, the study included both nuclear power plants and non-nuclear buildings protected by sprinkler installations. Data needed for estimating the reliability of sprinkler systems were collected from available sources in Finnish nuclear and non-nuclear installations. Population sizes of sprinkler system installations and components therein, as well as covered floor areas, were counted individually for Finnish nuclear power plants. For non-nuclear installations, corresponding data were estimated by counting the relevant items from drawings of 102 buildings and deriving the needed probability distributions from that sample. The total populations of sprinkler systems and components were compiled based on the available direct data and these distributions. From nuclear power plants, electronic maintenance reports were obtained; observed failures and other reliability-relevant data were selected, classified according to failure severity, and stored in spreadsheets for further analysis. A short summary of failures was made, which was hampered by the small sample size. For non-nuclear buildings, inspection statistics from the years 1985 to 1997 were surveyed, and observed failures were classified and stored in spreadsheets. Finally, a reliability model is proposed based on earlier formal work and the failure frequencies obtained by preliminary data analysis in this work. 
For a model utilising available information in the non

  9. Reliability of segmental accelerations measured using a new wireless gait analysis system.

    Kavanagh, Justin J; Morrison, Steven; James, Daniel A; Barrett, Rod


    The purpose of this study was to determine the inter- and intra-examiner reliability, and stride-to-stride reliability, of an accelerometer-based gait analysis system which measured 3D accelerations of the upper and lower body during self-selected slow, preferred and fast walking speeds. Eight subjects attended two testing sessions in which accelerometers were attached to the head, neck, lower trunk, and right shank. In the initial testing session, two different examiners attached the accelerometers and performed the same testing procedures. A single examiner repeated the procedure in a subsequent testing session. All data were collected using a new wireless gait analysis system, which features near real-time data transmission via a Bluetooth network. Reliability for each testing condition (4 locations, 3 directions, 3 speeds) was quantified using a waveform similarity statistic known as the coefficient of multiple determination (CMD). CMDs ranged from 0.60 to 0.98 across all test conditions and were not significantly different for inter-examiner (0.86), intra-examiner (0.87), and stride-to-stride reliability (0.86). The highest repeatability for the effects of location, direction and walking speed was for the shank segment (0.94), the vertical direction (0.91) and the fast walking speed (0.91), respectively. Overall, these results indicate that a high degree of waveform repeatability was obtained using the new gait system under test-retest conditions involving single and dual examiners. Furthermore, differences in acceleration waveform repeatability associated with the reapplication of accelerometers were small in relation to normal motor variability.
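
    A CMD-style waveform similarity statistic can be sketched as one minus the ratio of within-frame variance (across repeated strides) to total variance; identical waveforms score 1, unrelated ones score near 0. This is a simplified form of the CMD, and the sinusoidal "acceleration" curves are synthetic illustrations.

```python
import numpy as np

def cmd(waveforms):
    # Simplified coefficient of multiple determination for time-normalized
    # waveforms (rows = strides/sessions, columns = % of gait cycle):
    # 1 minus within-frame variance over total variance.
    Y = np.asarray(waveforms, dtype=float)
    mean_curve = Y.mean(axis=0)                   # ensemble-average waveform
    ss_within = np.sum((Y - mean_curve) ** 2)     # deviation from the mean curve
    ss_total = np.sum((Y - Y.mean()) ** 2)        # deviation from the grand mean
    return 1.0 - ss_within / ss_total

t = np.linspace(0.0, 1.0, 101)
base = np.sin(2.0 * np.pi * t)
strides = [base, base + 0.05, 0.95 * base]        # three similar synthetic strides
r = cmd(strides)
```

    Small offsets or amplitude changes (as above) barely lower the score, which is why the statistic suits test-retest comparisons of whole gait waveforms.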

  10. Reliability analysis of gravity dams by the response surface method

    Humar, Nina; Kryžanowski, Andrej; Brilly, Mitja; Schnabl, Simon


    Dam failure is one of the most important problems in the dam industry. Since the mechanical behavior of dams is usually a complex phenomenon, existing classical mathematical models are generally insufficient to adequately predict dam failure and thus the safety of dams. Therefore, numerical reliability methods are often used to model such complex mechanical phenomena. The main purpose of the present paper is thus to present the response surface method as a powerful mathematical tool used to study and forecast dam safety, considering a set of collected monitoring data. The derived mathematical model is applied to a case study, the Moste dam, which is the highest concrete gravity dam in Slovenia. Based on the derived model, the ambient/state variables are correlated with the dam deformation in order to obtain a forecasting tool able to define the critical thresholds for dam management.
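
    The response-surface step, fitting a low-order polynomial surrogate that maps ambient/state variables to dam deformation, can be sketched with least squares. The single predictor (reservoir level), the quadratic form and all coefficients below are synthetic assumptions, not Moste dam monitoring data.

```python
import numpy as np

# Synthetic monitoring data: reservoir level h [m] vs. crest displacement [mm].
rng = np.random.default_rng(1)
h = rng.uniform(0.0, 10.0, 200)
disp = 0.2 + 0.05 * h + 0.01 * h**2 + rng.normal(0.0, 0.02, 200)

# Quadratic response surface fitted by ordinary least squares.
X = np.column_stack([np.ones_like(h), h, h**2])
coef, *_ = np.linalg.lstsq(X, disp, rcond=None)

def predict(level):
    # Surrogate model: displacement predicted from a reservoir level.
    return float(np.dot(coef, [1.0, level, level**2]))
```

    Once fitted, the surrogate is cheap to evaluate, so it can be inverted to find the ambient conditions at which deformation crosses a management threshold.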

  11. [A systematic social observation tool: methods and results of inter-rater reliability].

    Freitas, Eulilian Dias de; Camargos, Vitor Passos; Xavier, César Coelho; Caiaffa, Waleska Teixeira; Proietti, Fernando Augusto


    Systematic social observation has been used as a health research methodology for collecting information on the neighborhood physical and social environment. The objectives of this article were to describe the operationalization of direct observation of the physical and social environment in urban areas and to evaluate the instrument's reliability. The systematic social observation instrument was designed to collect information in several domains. A total of 1,306 street segments belonging to 149 different neighborhoods in Belo Horizonte, Minas Gerais, Brazil, were observed. For the reliability study, 149 segments (1 per neighborhood) were re-audited, and Fleiss kappa was used to assess inter-rater agreement. Mean agreement was 0.57 (SD = 0.24); 53% had substantial or almost perfect agreement, and 20.4%, moderate agreement. The instrument appears to be appropriate for observing neighborhood characteristics that are not time-dependent, especially urban services, property characterization, pedestrian environment, and security.
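
    Fleiss kappa compares the observed per-item agreement with the agreement expected by chance from the overall category proportions. A self-contained sketch on a tiny made-up rating table (two auditors, five street segments, three categories; the counts are illustrative, not the study's data):

```python
import numpy as np

def fleiss_kappa(counts):
    # counts[i, j] = number of raters assigning item i to category j;
    # every row must sum to the same number of raters n.
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = counts[0].sum()
    p_j = counts.sum(axis=0) / (N * n)                       # category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)                # observed vs. chance
    return (P_bar - P_e) / (1.0 - P_e)

counts = [[2, 0, 0],
          [0, 2, 0],
          [0, 2, 0],
          [0, 0, 2],
          [1, 1, 0]]   # the two auditors disagree on the last segment
kappa = fleiss_kappa(counts)
```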

  12. Validity and reliability of patient reported outcomes used in Psoriasis: results from two randomized clinical trials

    Koo John; Thompson Christine; Stone Stephen P; Bresnahan Brian W; Shikiar Richard; Revicki Dennis A


    Abstract Background Two Phase III randomized controlled clinical trials were conducted to assess the efficacy, safety, and tolerability of weekly subcutaneous administration of efalizumab for the treatment of psoriasis. Patient reported measures of psoriasis-related functionality and health-related quality of life and of psoriasis-related symptom assessments were included as part of the trials. Objective To assess the reliability, validity, and responsiveness of the patient reported outcome m...

  13. Reliability of three-dimensional gait analysis in cervical spondylotic myelopathy.

    McDermott, Ailish


    Gait impairment is one of the primary symptoms of cervical spondylotic myelopathy (CSM). Detailed assessment is possible using three-dimensional gait analysis (3DGA), however the reliability of 3DGA for this population has not been established. The aim of this study was to evaluate the test-retest reliability of temporal-spatial, kinematic and kinetic parameters in a CSM population.


    R.K. Agnihotri


    Full Text Available The present paper deals with the reliability analysis of a boiler system used in the garment industry. The system consists of a single boiler unit, which plays an important role in the garment industry. Using the regenerative point technique with a Markov renewal process, various reliability characteristics of interest are obtained.

  15. Convergence among Data Sources, Response Bias, and Reliability and Validity of a Structured Job Analysis Questionnaire.

    Smith, Jack E.; Hakel, Milton D.


    Examined are questions pertinent to the use of the Position Analysis Questionnaire: Who can use the PAQ reliably and validly? Must one rely on trained job analysts? Can people having no direct contact with the job use the PAQ reliably and validly? Do response biases influence PAQ responses? (Author/KC)

  16. Risk and reliability analysis theory and applications : in honor of Prof. Armen Der Kiureghian


    This book presents a unique collection of contributions from some of the foremost scholars in the field of risk and reliability analysis. Combining the most advanced analysis techniques with practical applications, it is one of the most comprehensive and up-to-date books available on risk-based engineering. All the fundamental concepts needed to conduct risk and reliability assessments are covered in detail, providing readers with a sound understanding of the field and making the book a powerful tool for students and researchers alike. This book was prepared in honor of Professor Armen Der Kiureghian, one of the fathers of modern risk and reliability analysis.

  17. Investigation of Common Symptoms of Cancer and Reliability Analysis


    Objective: To identify cancer symptom distribution and treatment requirements, a questionnaire survey of cancer patients was conducted. Our objective was to validate a series of symptoms commonly used in traditional Chinese medicine (TCM). Methods: The M. D. Anderson Symptom Assessment Inventory (MDASI) was used with 10 TCM items added. Questions regarding the requested role of TCM in cancer care were also asked. A multi-center, cross-sectional study was conducted in 340 patients from 4 hospitals in Beijing and Dalian. SPSS and Excel software were adopted for the statistical analysis. Internal consistency of the questionnaire was evaluated with Cronbach's alpha. Results: The most common symptoms were fatigue 89.4%, sleep disturbance 74.4%, dry mouth 72.9%, poor appetite 72.9%, and difficulty remembering 71.2%. These symptoms affected work (89.8%), mood (82.6%), and activity (76.8%), resulting in poor quality of life. Eighty percent of the patients wanted to regulate the body with TCM. Almost 100% of the patients were interested in acquiring knowledge regarding integrated TCM and Western medicine (WM) in the treatment and rehabilitation of cancer. Cronbach's alpha indicated acceptable internal consistency within both the MDASI and the TCM items: 0.86 for the MDASI, 0.78 for the TCM items, and 0.90 for the MDASI-TCM (23 items). Conclusions: Fatigue, sleep disturbance, dry mouth, poor appetite, and difficulty remembering are the most common symptoms in cancer patients. These greatly affect the quality of life of these patients. Patients expressed a strong desire for TCM holistic regulation. The MDASI and its TCM-adapted model could be a critical tool for the quantitative study of TCM symptoms.
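
    The internal-consistency statistic used above, Cronbach's alpha, is straightforward to compute: k/(k-1) times one minus the ratio of summed item variances to the variance of the total score. The 6-patient, 4-item rating table below is invented for illustration, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    # scores[i, j] = respondent i's rating on item j.
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 0-10 symptom ratings from 6 patients on 4 related items.
scores = [[8, 7, 8, 9],
          [3, 4, 3, 2],
          [6, 6, 5, 6],
          [9, 8, 9, 9],
          [2, 3, 2, 3],
          [5, 5, 6, 5]]
alpha = cronbach_alpha(scores)
```

    Because the four items move together across patients, alpha comes out high; values around 0.78 to 0.90, as reported for the MDASI-TCM, indicate acceptable to good internal consistency.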


    Yao Chengyu; Zhao Jingyi


    To overcome the design limitations of the traditional hydraulic control system for synthetic rubber presses, and faults such as a high failure rate, low reliability and high energy consumption which often led to shutdowns of the synthetic rubber post-treatment product line, a brand-new hydraulic system combining PC control with two-way cartridge valves is developed for the press, and its reliability is analyzed. A reliability model of the hydraulic system for the press is established by analyzing the processing steps, and reliability simulation of each step and of the whole system is carried out with the software MATLAB, then verified through a reliability test. The fixed-time test proved not only that the theoretical analysis is sound, but also that the system is reasonably designed and highly reliable, and can lower the required power supply and operational energy cost.

  19. Low Carbon-Oriented Optimal Reliability Design with Interval Product Failure Analysis and Grey Correlation Analysis

    Yixiong Feng


    Full Text Available The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. The intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low carbon-oriented product reliability optimal design model is proposed in this paper: (1) The related expert evaluation information was prepared in interval numbers; (2) An improved product failure analysis considering the uncertain carbon emissions of the subsystems was performed to obtain subsystem weights taking the carbon emissions into consideration. An interval grey correlation analysis was conducted to obtain subsystem weights taking the uncertain correlations inside the product into consideration. Using these two kinds of subsystem weights and different caution indicators of the decision maker, a series of product reliability design schemes is available; (3) Interval-valued intuitionistic fuzzy sets (IVIFSs) were employed to select the optimal reliability design scheme based on three attributes, namely, low carbon, correlation and functions, and economic cost. A case study of a vertical CNC lathe proves the superiority and rationality of the proposed method.

  20. Reliability analysis of shallow foundations by means of limit analysis with random slip lines

    Pula, Wojciech; Chwała, Marcin


    In order to evaluate credible reliability measures when the bearing capacity of a shallow foundation is considered, it is reasonable to describe the soil strength properties in terms of random field theory. As a next step, the selected random field can be spatially averaged by means of the procedure introduced by Vanmarcke (1977). Earlier experience has shown that, without applying the spatial averaging procedure, reliability computations carried out in the context of a foundation's bearing capacity give unrealistically small values of reliability indices (large values of failure probability), even for foundations considered relatively safe. On the other hand, the extent of the averaged area strongly affects the results of the reliability computations. Hence the selection of the averaged area constitutes a vital problem and has to depend on the failure mechanism under consideration. In the present study, local averages associated with the kinematically admissible failure mechanism proposed by Prandtl (1920) are considered. Soil strength parameters are assumed to constitute anisotropic random fields with different values of the vertical and horizontal fluctuation scales. These fields are subjected to averaging along the potential slip lines within the mechanism under consideration. Due to random fluctuations of the angle of internal friction, the location of a slip line is changeable; therefore it was necessary to solve the problem of spatial averaging of the random field along varying slip lines. In order to incorporate the anisotropy of the soil property random fields, the vertical correlation length was assumed to be significantly shorter than the horizontal one. Finally, reliability indices were evaluated for foundations of various widths by means of Monte Carlo simulation. 
By numerical examples it is demonstrated that for reasonable proportions (from practical viewpoint) between horizontal and vertical fluctuation scales the reliability indices resulting in two
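
    The final Monte Carlo step, turning a sample of factor-of-safety realizations into a failure probability and a reliability index, can be sketched independently of the slip-line machinery. Here the FOS sample is simply drawn lognormal as a stand-in for the full random-field computation; the distribution parameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Stand-in for the full analysis: each Monte Carlo realization of the averaged
# strength field yields one factor of safety (FOS); assume lognormal FOS with
# median 1.5 and log-standard-deviation 0.15.
fos = rng.lognormal(mean=np.log(1.5), sigma=0.15, size=n)

p_f = np.mean(fos < 1.0)                        # probability of failure
beta = (fos.mean() - 1.0) / fos.std(ddof=1)     # Cornell-type reliability index
```

    The abstract's point is that what feeds this sample matters: averaging along the (randomly varying) slip lines changes the spread of the FOS sample and hence the resulting index.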

  1. Investigation on design and reliability analysis of a new deployable and lockable mechanism

    Lin, Qing; Nie, Hong; Ren, Jie; Chen, Jinbao


    The traditional structure of the deployable and lockable mechanism in a soft-landing gear system is complicated and unreliable. To overcome these defects, a new deployable and lockable mechanism for planetary probes is developed. The compression assembly shares a set of new mechanisms with the deployment assembly and the locking assembly. The new mechanism shows several advantages: steadier deployment, a simpler mechanism and higher reliability. This paper presents the deployment and locking theory of the new mechanism and constructs the fault tree, which supports qualitative and quantitative analyses. In addition, the probability importance and criticality importance of the new mechanism are derived and calculated. The reliability modeling and analysis of the mechanism are accomplished from the static torque margin, the torque and the work done by the torque. The analysis shows that the reliability of successful deployment of the new mechanism is 0.999334. The crucial problems concentrate on the insufficient stored torque of the high-strength spring, lubrication failure between the inner and outer cylinders of the strut, and jamming of the soft-landing gear system. The paper then presents some improvement approaches and suggestions for the problems discussed above.
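
    The probability importance mentioned above (the Birnbaum measure) is the sensitivity of the top-event probability to each basic event: P(TOP | event occurs) minus P(TOP | event does not occur). A sketch on a hypothetical two-branch deployment-failure tree; the tree structure and basic-event probabilities are invented, not the paper's model.

```python
from itertools import product

# Hypothetical fault tree for deployment failure:
# TOP = (spring_weak AND lubrication_fail) OR strut_jam
basic_probs = {"spring_weak": 1e-2, "lubrication_fail": 3e-2, "strut_jam": 2e-4}

def top(state):
    return (state["spring_weak"] and state["lubrication_fail"]) or state["strut_jam"]

def top_probability(probs):
    # Exact evaluation by enumerating all basic-event states (fine for small trees).
    p = 0.0
    for bits in product([0, 1], repeat=len(probs)):
        state = dict(zip(probs, bits))
        w = 1.0
        for name, b in state.items():
            w *= probs[name] if b else 1.0 - probs[name]
        if top(state):
            p += w
    return p

def birnbaum(name, probs):
    # Probability (Birnbaum) importance: dP(TOP)/dp_i
    # = P(TOP | x_i = 1) - P(TOP | x_i = 0).
    return (top_probability(dict(probs, **{name: 1.0}))
            - top_probability(dict(probs, **{name: 0.0})))

importances = {e: birnbaum(e, basic_probs) for e in basic_probs}
```

    Note how the single-point failure (strut_jam) dominates the importance ranking even though its probability is the smallest, which is what makes such measures useful for directing design improvements.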

  2. Mathematical modeling and reliability analysis of a 3D Li-ion battery



    Full Text Available The three-dimensional (3D) Li-ion battery presents an effective solution to issues affecting its two-dimensional counterparts, as it is able to attain high energy capacities for the same areal footprint without sacrificing power density. A 3D battery has key structural features extending in and fully utilizing 3D space, allowing it to achieve greater reliability and longevity. This study applies an electrochemical-thermal coupled model to a checkerboard array of alternating positive and negative electrodes in a 3D architecture with either square or circular electrodes. The mathematical model comprises the transient conservation of charge, species, and energy together with electroneutrality, constitutive relations and relevant initial and boundary conditions. A reliability analysis carried out to simulate malfunctioning of either a positive or negative electrode reveals that although there are deviations in electrochemical and thermal behavior for electrodes adjacent to the malfunctioning electrode as compared to that in a fully-functioning array, there is little effect on electrodes further away, demonstrating the redundancy that a 3D electrode array provides. The results demonstrate that implementation of 3D batteries allows them to reliably and safely deliver power even if a component malfunctions, a strong advantage over conventional 2D batteries.

  3. Reliability analysis of instrument design of noninvasive bone marrow disease detector

    Su, Yu; Li, Ting; Sun, Yunlong


    Bone marrow is an important hematopoietic organ, and bone marrow lesions (BMLs) may cause a variety of complications with a high death rate and short survival time. Early detection and follow-up care are particularly important. But the current diagnosis methods rely on bone marrow biopsy/puncture, with significant limitations: they are invasive, complex to perform, high-risk, and discontinuous. A non-invasive, safe, easily operated, and continuous monitoring technology is highly needed. We therefore proposed to design a device for detecting bone marrow lesions based on near-infrared spectroscopy. We then fully tested its reliability, including sensitivity, specificity, signal-to-noise ratio (SNR), stability, etc. Here, we report this sequence of reliability-test experiments, the experimental results, and the subsequent data analysis. The instrument was shown to be very sensitive, with a distinguishable concentration of less than 0.002, and with good linearity, stability and high SNR. Finally, these reliability-test data support the promise of our novel instrument for clinical diagnosis and surgery guidance in the detection of BMLs.


    Ronald L. Boring; David I. Gertman; Jeffrey C. Joe; Julie L. Marble


    An ongoing issue within human-computer interaction (HCI) is the need for simplified or “discount” methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI.

  5. Reactor scram experience for shutdown system reliability analysis. [BWR; PWR

    Edison, G.E.; Pugliese, S.L.; Sacramo, R.F.


    Scram experience in a number of operating light water reactors has been reviewed. The date and reactor power of each scram were compiled from monthly operating reports and personal communications with the operating plant personnel. The average scram frequency from ''significant'' power (defined as P_trip/P_max greater than approximately 20 percent) was determined as a function of operating life. This relationship was then used to estimate the total number of reactor trips from above approximately 20 percent of full power expected to occur during the life of a nuclear power plant. The shape of the scram frequency vs. operating life curve resembles a typical reliability bathtub curve (failure rate vs. time), but without a rising ''wearout'' phase, due to the lack of operating data near the end of plant design life. In this case the failures are represented by ''bugs'' in the plant system design, construction, and operation which lead to scram. The number of scrams appears to level out at an average of around three per year; the standard deviations from the mean value indicate an uncertainty of about 50 percent. The total number of scrams from significant power that could be expected in a plant designed for a 40-year life would be about 130 if no wearout phase develops near the end of life.
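
    The lifetime estimate follows from integrating a scram-frequency curve that starts high (early design/operation "bugs") and levels off near 3 per year. The decay parameters below are assumed purely to illustrate the arithmetic; they are not fitted to the plant data in the abstract.

```python
import numpy as np

# Assumed scram-rate curve: settles at 3/yr after an early-life transient.
years = np.linspace(0.0, 40.0, 4001)
rate = 3.0 + 5.0 * np.exp(-years / 2.0)         # scrams per year at power

# Expected scrams over a 40-year life: integrate the rate (trapezoidal rule).
total_scrams = float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(years)))
```

    With these parameters the early transient contributes about 10 extra scrams on top of the 3/yr baseline (120 over 40 years), giving a total of about 130, consistent with the abstract's estimate.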

  6. Intraobserver and intermethod reliability for using two different computer programs in preoperative lower limb alignment analysis

    Mohamed Kenawey


    Conclusion: Computer-assisted lower limb alignment analysis is reliable whether using a graphics editing program or specialized planning software. However, slightly higher variability can be expected for angles farther from the knee joint.

  7. Reliability of 3D upper limb motion analysis in children with obstetric brachial plexus palsy.

    Mahon, Judy; Malone, Ailish; Kiernan, Damien; Meldrum, Dara


    Kinematics, measured by 3D upper limb motion analysis (3D-ULMA), can potentially increase understanding of movement patterns by quantifying individual joint contributions. Reliability in children with obstetric brachial plexus palsy (OBPP) has not been established.

  8. Use of Fault Tree Analysis for Automotive Reliability and Safety Analysis

    Lambert, H


    Fault tree analysis (FTA) evolved from the aerospace industry in the 1960s. A fault tree is a deductive logic model that is generated with a top undesired event in mind. FTA answers the question "how can something occur?", as opposed to failure modes and effects analysis (FMEA), which is inductive and answers the question "what if?" FTA is used in risk, reliability and safety assessments. FTA is currently used by several industries, such as nuclear power and chemical processing. The automotive industry typically uses failure modes and effects analysis (FMEA), such as design FMEAs and process FMEAs, but the use of FTA has spread to the automotive industry as well. This paper discusses the use of FTA for automotive applications. With the addition of automotive electronics for applications in systems such as engine/power control, cruise control and braking/traction, FTA is well suited to address failure modes within these systems. FTA can determine the importance of these failure modes from various perspectives, such as cost, reliability and safety. A fault tree analysis of a car starting system is presented as an example.
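
    Quantitatively, a fault tree reduces to propagating basic-event probabilities up through AND/OR gates. The sketch below evaluates a small tree under the usual independence assumption; the "fails to crank" events and their probabilities are hypothetical, not from the paper's example.

```python
def evaluate(gate):
    """Recursively evaluate a fault tree given as nested tuples.
    Leaves are basic-event probabilities; gates assume independent events."""
    if isinstance(gate, (int, float)):
        return float(gate)
    kind, *children = gate
    probs = [evaluate(c) for c in children]
    if kind == "AND":                    # top fails only if all children fail
        p = 1.0
        for q in probs:
            p *= q
        return p
    if kind == "OR":                     # top fails if at least one child fails
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p
    raise ValueError(kind)

# Hypothetical top event: "engine fails to crank".
tree = ("OR",
        0.010,                           # battery discharged
        0.005,                           # starter motor fails
        ("AND", 0.020, 0.100))           # ignition-switch fault AND relay fault
print(evaluate(tree))
```

The OR gate uses the complement product 1 − ∏(1 − p_i) rather than a plain sum, so the result stays a valid probability even when event probabilities are not small.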

  9. Ex vivo normothermic machine perfusion is safe, simple, and reliable: results from a large animal model.

    Nassar, Ahmed; Liu, Qiang; Farias, Kevin; D'Amico, Giuseppe; Tom, Cynthia; Grady, Patrick; Bennett, Ana; Diago Uso, Teresa; Eghtesad, Bijan; Kelly, Dympna; Fung, John; Abu-Elmagd, Kareem; Miller, Charles; Quintini, Cristiano


    Normothermic machine perfusion (NMP) is an emerging preservation modality that holds the potential to prevent the injury associated with low temperature and to promote the organ repair that follows ischemic cell damage. While several animal studies have shown its superiority over cold storage (CS), few studies in the literature have focused on the safety, feasibility, and reliability of this technology, which represent key factors in its implementation into clinical practice. The aim of the present study is to report safety and performance data on NMP of DCD porcine livers. After 60 minutes of warm ischemia time, 20 pig livers were preserved using either NMP (n = 15; physiologic perfusion temperature) or CS (n = 5) for a preservation time of 10 hours. Livers were then tested on a transplant simulation model for 24 hours. Machine safety was assessed by measuring system failure events, the ability to monitor perfusion parameters, sterility, and vessel integrity. The ability of the machine to preserve injured organs was assessed by liver function tests, hemodynamic parameters, and histology. No system failures were recorded. Target hemodynamic parameters were easily achieved and vascular complications were not encountered. Liver function parameters as well as histology showed significant differences between the 2 groups: NMP livers showed preserved liver function and histological architecture, while CS livers presented postreperfusion parameters consistent with unrecoverable cell injury. Our study shows that NMP is safe, reliable, and provides superior graft preservation compared to CS in our DCD porcine model. © The Author(s) 2014.

  10. The reliability analysis of a separated, dual fail operational redundant strapdown IMU. [inertial measurement unit

    Motyka, P.


    A methodology for quantitatively analyzing the reliability of redundant avionics systems, in general, and the dual, separated Redundant Strapdown Inertial Measurement Unit (RSDIMU), in particular, is presented. The RSDIMU is described and a candidate failure detection and isolation system presented. A Markov reliability model is employed. The operational states of the system are defined and the single-step state transition diagrams discussed. Graphical results, showing the impact of major system parameters on the reliability of the RSDIMU system, are presented and discussed.
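
    A Markov reliability model of the kind described propagates a state-probability vector through a transition matrix and reads off the probability mass outside the failed states. The sketch below is a generic three-state illustration; the states and per-step transition probabilities are hypothetical, not the RSDIMU's actual model.

```python
def markov_reliability(P, p0, steps, failed):
    """Propagate a discrete-time Markov reliability model.
    P[i][j] is the per-step transition probability from state i to j;
    reliability is the probability mass outside the failed states."""
    state = list(p0)
    for _ in range(steps):
        state = [sum(state[i] * P[i][j] for i in range(len(P)))
                 for j in range(len(P))]
    return 1.0 - sum(state[j] for j in failed)

# Hypothetical 3-state model: 0 = fully operational,
# 1 = one channel failed (detected, still operational), 2 = system failed.
P = [[0.990, 0.009, 0.001],
     [0.000, 0.995, 0.005],
     [0.000, 0.000, 1.000]]
print(markov_reliability(P, [1.0, 0.0, 0.0], 100, {2}))
```

Because the failed state is absorbing, reliability is monotonically decreasing in mission time, which is the behaviour the graphical results in such studies plot against system parameters.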

  11. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Shirley, Rachel Elizabeth [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States)


    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  12. Development of A Standard Method for Human Reliability Analysis (HRA) of Nuclear Power Plants

    Kang, Dae Il; Jung, Won Dea; Kim, Jae Whan


    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study progressed as follows: assessing the quality of existing HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME and ANS PSA standards to ensure PSA quality, the standard HRA method was developed to meet the ASME and ANS HRA requirements at Capability Category II. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules and criteria, so as to minimize the deviation of analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method, and several case studies were undertaken interactively to verify its usability and applicability.

  13. Development of A Standard Method for Human Reliability Analysis of Nuclear Power Plants

    Jung, Won Dea; Kang, Dae Il; Kim, Jae Whan


    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study progressed as follows: assessing the quality of existing HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME PSA standard to ensure PSA quality, the standard HRA method was developed to meet the ASME HRA requirements at Capability Category II. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules and criteria, so as to minimize the deviation of analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method, and several case studies were undertaken interactively to verify its usability and applicability.

  14. Reliability analysis of a gravity-based foundation for wind turbines

    Vahdatirad, Mohammad Javad; Griffiths, D. V.; Andersen, Lars Vabbersgaard


    Deterministic code-based designs proposed for wind turbine foundations are typically biased on the conservative side and overestimate the probability of failure, which can lead to higher than necessary construction cost. In this study, reliability analysis of a gravity-based foundation concerning … technique to perform the reliability analysis. The calibrated code-based design approach leads to savings of up to 20% in the concrete foundation volume, depending on the target annual reliability level. The study can form the basis for future optimization on deterministic-based designs for wind turbine … foundations.

  15. Strength and Reliability of Wood for the Components of Low-cost Wind Turbines: Computational and Experimental Analysis and Applications

    Mishnaevsky, Leon; Freere, Peter; Sharma, Ranjan


    This paper reports the latest results of the comprehensive program of experimental and computational analysis of strength and reliability of wooden parts of low cost wind turbines. The possibilities of prediction of strength and reliability of different types of wood are studied in a series … of experiments and computational investigations. Low cost testing machines have been designed and employed for the systematic analysis of different sorts of Nepali wood, to be used for the wind turbine construction. At the same time, computational micromechanical models of deformation and strength of wood …

  16. Reliability Block Diagram (RBD) Analysis of NASA Dryden Flight Research Center (DFRC) Flight Termination System and Power Supply

    Morehouse, Dennis V.


    In order to perform public risk analyses for vehicles containing Flight Termination Systems (FTS), it is necessary for the analyst to know the reliability of each of the components of the FTS. These systems are typically divided into two segments: a transmitter system and associated equipment, typically in a ground station or on a support aircraft, and a receiver system and associated equipment on the target vehicle. This analysis estimates the reliability of the NASA DFRC flight termination system ground transmitter segment for use in the larger risk analysis, and compares the results against two established Department of Defense availability standards for such equipment.

  17. A Review: Passive System Reliability Analysis – Accomplishments and Unresolved Issues



    Reliability assessment of passive safety systems is an important issue, since the safety of advanced nuclear reactors relies on several passive features. In this context, a few methodologies such as Reliability Evaluation of Passive Safety System (REPAS), Reliability Methods for Passive Safety Functions (RMPS) and Analysis of Passive Systems ReliAbility (APSRA) have been developed in the past. These methodologies have been used to assess the reliability of various passive safety systems. While these methodologies have certain features in common, they differ in their treatment of certain issues, for example, model uncertainties and the deviation of geometric and process parameters from their nominal values. This paper presents the state of the art on passive system reliability assessment methodologies, the accomplishments and remaining issues. In this review three critical issues pertaining to passive systems performance and reliability have been identified. The first issue is the applicability of best estimate codes and model uncertainty. Best-estimate, phenomenological simulations of natural convection passive systems can carry a significant amount of uncertainty, and these uncertainties must be incorporated in an appropriate manner in the performance and reliability analysis of such systems. The second issue is the treatment of dynamic failure characteristics of components of passive systems. The REPAS, RMPS and APSRA methodologies do not consider dynamic failures of components or processes, which may have a strong influence on the failure of passive systems. The influence of dynamic failure characteristics of components on system failure probability is presented with the help of a dynamic reliability methodology based on Monte Carlo simulation. The analysis of a benchmark problem of a hold-up tank shows the error in failure probability estimation when the dynamism of components is not considered. It is thus suggested that dynamic reliability…
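
    The Monte Carlo flavour of dynamic reliability can be illustrated in miniature: sample component failure times and count how often the system fails within the mission time. The sketch below uses a single component with an exponential lifetime so the estimate can be checked against the closed form 1 − exp(−λT); the failure rate and mission time are illustrative, not from the hold-up tank benchmark.

```python
import random

def mc_failure_probability(rate, mission_time, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that a component with an
    exponentially distributed lifetime (failure rate `rate`) fails within
    the mission time.  A full dynamic-reliability code would sample every
    component along with the process dynamics; a single component keeps
    the sketch checkable against 1 - exp(-rate * T)."""
    rng = random.Random(seed)
    failures = sum(rng.expovariate(rate) < mission_time for _ in range(trials))
    return failures / trials

print(mc_failure_probability(rate=0.1, mission_time=5.0))
```

For the dynamic case, each trial would also advance the process variables (e.g. tank level) between sampled component events, which is exactly the coupling that static methodologies miss.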

  18. Stochastic Response and Reliability Analysis of Hysteretic Structures

    Mørk, Kim Jørgensen

    During the last 30 years, response analysis of structures under random excitation has been studied in detail. These studies are motivated by the fact that most of nature's excitations, such as earthquakes, wind and wave loads, exhibit randomly fluctuating characters. For safety reasons this randomness…

  19. Reliability analysis of a two span floor designed according to ...


    The structural analysis and design of the timber floor system was carried out using a deterministic approach … The cell structure of hardwoods is more complex than … [12] BS EN 1995-1-1: Eurocode 5: Design of Timber Structures, Part 1-1.

  20. Structured information analysis for human reliability analysis of emergency tasks in nuclear power plants

    Jung, Won Dea; Kim, Jae Whan; Park, Jin Kyun; Ha, Jae Joo [Korea Atomic Energy Research Institute, Taejeon (Korea)


    More than twenty HRA (Human Reliability Analysis) methodologies have been developed and used for the safety analysis in nuclear field during the past two decades. However, no methodology appears to have universally been accepted, as various limitations have been raised for more widely used ones. One of the most important limitations of conventional HRA is insufficient analysis of the task structure and problem space. To resolve this problem, we suggest SIA (Structured Information Analysis) for HRA. The proposed SIA consists of three parts. The first part is the scenario analysis that investigates the contextual information related to the given task on the basis of selected scenarios. The second is the goals-means analysis to define the relations between the cognitive goal and task steps. The third is the cognitive function analysis module that identifies the cognitive patterns and information flows involved in the task. Through the three-part analysis, systematic investigation is made possible from the macroscopic information on the tasks to the microscopic information on the specific cognitive processes. It is expected that analysts can attain a structured set of information that helps to predict the types and possibility of human error in the given task. 48 refs., 12 figs., 11 tabs. (Author)


  2. Investigating performance, reliability and safety parameters of photovoltaic module inverter: Test results and compliances with the standards

    Islam, Saiful; Belmans, Ronnie [Department of Electrical Engineering, Katholieke Universiteit Leuven, ESAT/ELECTA, Kasteelpark Arenberg 10, B-3001 Leuven (Belgium); Woyte, Achim [Verenigingsstraat 39, B-1000 Brussel (Belgium); Heskes, P.J.M.; Rooij, P.M. [Energy Research Centre of the Netherlands ECN, P.O. Box 1, 1755 ZG Petten (Netherlands)


    Reliability, safety and quality requirements for a new type of photovoltaic module inverter have been identified, and its performance has been evaluated on prototypes. The laboratory tests were to show whether the so-called second-generation photovoltaic module inverter complies with expectations and where improvements are still necessary. Afterwards, the test results were compared with the international standards. (author)

  3. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Xin-Jia Meng


    Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO), but it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with a combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of most probable failure point (MPP) search of inverse reliability, and the process of searching for the MPP of multidisciplinary inverse reliability is then performed within the framework of CLA-CO. The method improves the MPP search through two elements: one is treating the discipline analyses as equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as the replacement of the consistency equality constraint in system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline and achieves a higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with nonnormal distribution variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.
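
    The MPP search at the heart of such methods is commonly sketched with the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration in standard normal space. The limit state below is a hypothetical linear one (not from the paper), chosen because its answer is known in closed form: the reliability index is β = a/‖∇g‖.

```python
import math

def hlrf_mpp(grad_g, g, u0, iters=50):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for the most probable
    failure point (MPP) of a limit state g(u) = 0 in standard normal space.
    grad_g(u) returns the gradient vector; g(u) the limit-state value."""
    u = list(u0)
    for _ in range(iters):
        grad = grad_g(u)
        norm2 = sum(c * c for c in grad)
        # Standard HL-RF update: project onto the linearized limit state.
        scale = (sum(gi * ui for gi, ui in zip(grad, u)) - g(u)) / norm2
        u = [scale * gi for gi in grad]
    return u

# Hypothetical linear limit state g(u) = 3 - u1 - u2 (failure when g < 0).
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: [-1.0, -1.0]
mpp = hlrf_mpp(grad_g, g, [0.0, 0.0])
beta = math.hypot(*mpp)
print(mpp, beta)
```

For a linear limit state the iteration lands on the exact MPP in one step; for nonlinear limit states it iterates, and inverse-reliability variants instead fix β and search for the corresponding design point.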

  4. Sensitivity analysis for reliable design verification of nuclear turbosets

    Zentner, Irmela, E-mail: irmela.zentner@edf.f [Lamsid-Laboratory for Mechanics of Aging Industrial Structures, UMR CNRS/EDF, 1, avenue Du General de Gaulle, 92141 Clamart (France); EDF R and D-Structural Mechanics and Acoustics Department, 1, avenue Du General de Gaulle, 92141 Clamart (France); Tarantola, Stefano [Joint Research Centre of the European Commission-Institute for Protection and Security of the Citizen, T.P. 361, 21027 Ispra (Italy); Rocquigny, E. de [Ecole Centrale Paris-Applied Mathematics and Systems Department (MAS), Grande Voie des Vignes, 92 295 Chatenay-Malabry (France)


    In this paper, we present an application of sensitivity analysis for design verification of nuclear turbosets. Before the acquisition of a turbogenerator, energy power operators perform independent design assessment in order to assure safe operating conditions of the new machine in its environment. Variables of interest are related to the vibration behaviour of the machine: its eigenfrequencies and dynamic sensitivity to unbalance. In the framework of design verification, epistemic uncertainties are preponderant. This lack of knowledge is due to inexistent or imprecise information about the design as well as to interaction of the rotating machinery with supporting and sub-structures. Sensitivity analysis enables the analyst to rank sources of uncertainty with respect to their importance and, possibly, to screen out insignificant sources of uncertainty. Further studies, if necessary, can then focus on predominant parameters. In particular, the constructor can be asked for detailed information only about the most significant parameters.

  5. A Reliable Method for Rhythm Analysis during Cardiopulmonary Resuscitation

    U. Ayala


    Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because, although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies.

  6. A Reliable Method for Rhythm Analysis during Cardiopulmonary Resuscitation

    Ayala, U.; Irusta, U.; Ruiz, J.; Eftestøl, T.; Kramer-Johansen, J.; Alonso-Atienza, F.; Alonso, E.; González-Otero, D.


    Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies. PMID:24895621
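
    The sensitivity and specificity figures quoted in this kind of study are simply TP/(TP+FN) over shockable segments and TN/(TN+FP) over nonshockable ones. A minimal sketch, using toy counts (not the paper's data) chosen to land on comparable percentages:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = fraction of shockable rhythms correctly flagged 'shock';
    specificity = fraction of nonshockable rhythms correctly flagged 'no shock'."""
    return tp / (tp + fn), tn / (tn + fp)

# Toy confusion counts, not the paper's test-set data.
se, sp = sensitivity_specificity(tp=91, fn=9, tn=966, fp=34)
print(f"sensitivity {se:.1%}, specificity {sp:.1%}")
```

The asymmetry in the design goal follows from these definitions: every false positive (an unnecessary shock advice on a nonshockable rhythm) lowers specificity, which is why the SAA adds a detector specifically targeting low-electrical-activity rhythms.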

  7. Finite State Machine Based Evaluation Model for Web Service Reliability Analysis

    M, Thirumaran; Abarna, S; P, Lakshmi


    Nowadays there is strong interest in making changes within ever shorter times, since reaction-time requirements keep decreasing. The Business Logic Evaluation Model (BLEM) is the proposed solution, targeting business logic automation and enabling business experts to write sophisticated business rules and complex calculations without costly custom programming. BLEM is powerful enough to handle service manageability issues by analyzing and evaluating the computability, traceability and other criteria of modified business logic at run time. The value of a web service and its QoS depend heavily on the reliability of the service. Hence today's service providers regard reliability as a major factor, and any problem in the reliability of the service must be overcome promptly in order to achieve the expected level of reliability. In this paper we propose a business logic evaluation model for web service reliability analysis using a Finite State Machine (FSM), where the FSM will be extended to analy…
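
    The FSM idea can be sketched with a minimal deterministic machine that checks whether a sequence of service events follows a traceable path and ends in an acceptable state. The request-lifecycle states and transitions below are hypothetical, invented for illustration; they are not the paper's model.

```python
def fsm_accepts(transitions, start, accepting, events):
    """Run a deterministic FSM: `transitions` maps (state, event) -> state.
    Returns False if any event has no defined transition (an untraceable
    path), True if the run ends in an accepting state."""
    state = start
    for event in events:
        if (state, event) not in transitions:
            return False
        state = transitions[(state, event)]
    return state in accepting

# Hypothetical web-service request lifecycle.
t = {("idle", "request"): "validating",
     ("validating", "ok"): "executing",
     ("validating", "error"): "failed",
     ("executing", "done"): "idle"}
print(fsm_accepts(t, "idle", {"idle"}, ["request", "ok", "done"]))  # True
```

Evaluating modified business logic then amounts to re-running recorded event sequences against the updated machine: any sequence that falls off the transition map marks a traceability gap.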

  8. Reliability analysis and risk-based methods for planning of operation & maintenance of offshore wind turbines

    Sørensen, John Dalsgaard


    Reliability analysis and probabilistic models for wind turbines are considered with special focus on structural components and application for reliability-based calibration of partial safety factors. The main design load cases to be considered in design of wind turbine components are presented … for extreme and fatigue limit states, … including the effects of the control system and possible faults due to failure of electrical/mechanical components. Considerations are presented on the target reliability level for wind turbine structural components. Operation & Maintenance planning often follows corrective and preventive strategies based on information from condition monitoring and structural health monitoring systems. A reliability- and risk-based approach is presented where a life-cycle approach … Application is shown for reliability-based calibrations of partial safety factors …

  9. Reliability analysis of M/G/1 queues with general retrial times and server breakdowns

    WANG Jinting


    This paper concerns the reliability issues as well as queueing analysis of M/G/1 retrial queues with general retrial times and server subject to breakdowns and repairs. We assume that the server is unreliable and customers who find the server busy or down are queued in the retrial orbit in accordance with a first-come-first-served discipline. Only the customer at the head of the orbit queue is allowed for access to the server. The necessary and sufficient condition for the system to be stable is given. Using a supplementary variable method, we obtain the Laplace-Stieltjes transform of the reliability function of the server and a steady state solution for both queueing and reliability measures of interest. Some main reliability indexes, such as the availability, failure frequency, and the reliability function of the server, are obtained.
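
    One of the reliability indexes mentioned, the long-run availability of the server, has a simple closed form in the special case of exponential up times and exponential repair times (the alternating-renewal ratio). The sketch below shows only that simple case with illustrative rates; the paper's general-retrial-time, general-repair model requires the transform analysis it describes.

```python
def availability(breakdown_rate, repair_rate):
    """Long-run availability of a server with exponential up times
    (rate = breakdown_rate) and exponential repair times (rate = repair_rate):
    A = E[up] / (E[up] + E[down]) = repair_rate / (repair_rate + breakdown_rate)."""
    return repair_rate / (repair_rate + breakdown_rate)

print(availability(breakdown_rate=0.05, repair_rate=1.0))  # ≈ 0.952
```

The same ratio of mean up time to mean cycle time underlies the steady-state availability measures derived via the supplementary variable method, once the general distributions are reduced to their means.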

  10. Waste Feed Delivery System Phase 1 Preliminary Reliability and Availability and Maintainability Analysis [SEC 1 and 2



    The document presents updated results of the preliminary reliability, availability, and maintainability analysis performed for delivery of waste feed from tanks 241-AZ-101 and 241-AN-105 to British Nuclear Fuels Limited, Inc. under the Tank Waste Remediation System Privatization Contract. The operational schedule delay risk is estimated and contributing factors are discussed.

  11. Analysis of the influence of input data uncertainties on determining the reliability of reservoir storage capacity

    Marton Daniel


    The paper contains a sensitivity analysis of the influence of uncertainties in the input hydrological, morphological and operating data required for a proposal of active reservoir conservation storage capacity and its achieved values. By introducing uncertainties into the considered inputs of the water management analysis of a reservoir, the subsequently analysed reservoir storage capacity is also affected by uncertainties. The values of water outflows from the reservoir and the hydrological reliabilities are affected by uncertainties as well. A simulation model of reservoir behaviour incorporating this kind of calculation has been compiled, as described below. The model allows evaluation of the solution results, taking uncertainties into consideration, contributing to a reduction in the occurrence of failure or lack of water during reservoir operation in low-water and dry periods.

  12. Enforcing a system approach to composite failure criteria for reliability analysis

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian


    … challenges with the use of failure criteria, since composite materials are a discontinuous medium, which invokes multiple failure modes. Under deterministic conditions the material properties and the stress vector are constant and will result in a single dominating failure mode. When any of these input parameters are random, multiple failure modes may be identified, which will jeopardize the FORM analysis, and a system approach should be applied to assure a correct analysis. Although crude Monte Carlo simulation automatically may account for such effects, time constraints limit its usability in problems involving advanced FEM models. When applying more computationally efficient methods based on FORM/SORM it is important to carefully account for the multiple failure modes described by the failure criterion. The present paper discusses how to handle this problem and presents examples where reliability …

  13. A Comprehensive Comparison of Different Clustering Methods for Reliability Analysis of Microarray Data

    Kafieh, Rahele; Mehridehnavi, Alireza


    In this study, we considered some competitive learning methods, including hard competitive learning and soft competitive learning with/without fixed network dimensionality, for reliability analysis in microarrays. In order to have a more extensive view, and keeping in mind that competitive learning methods aim at error minimization or entropy maximization (different kinds of function optimization), we decided to investigate the abilities of mixture decomposition schemes. This study therefore covers algorithms based on function optimization, with particular emphasis on different competitive learning methods. The goal is to find the most powerful method according to a pre-specified criterion determined with numerical methods and matrix similarity measures. Furthermore, an indication of the intrinsic ability of the dataset to form clusters should be provided before a clustering algorithm is applied. We therefore propose the Hopkins statistic as a method for assessing the intrinsic ability of data to be clustered. The results show the remarkable ability of the Rayleigh mixture model in comparison with other methods in the reliability analysis task. PMID:24083134
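
    The Hopkins statistic mentioned above compares nearest-neighbour distances of uniform probe points with those of sampled data points. A minimal pure-Python sketch for 2-D data (with the convention H ≈ 0.5 for uniform data and H → 1 for clustered data; the toy blobs are invented for illustration):

```python
import math
import random

def hopkins(points, m=25, seed=0):
    """Hopkins statistic for clustering tendency.  Compare nearest-neighbour
    distances of m uniform probes (u_i, over the data's bounding box) with
    those of m sampled data points to the rest of the data (w_i):
    H = sum(u) / (sum(u) + sum(w))."""
    rng = random.Random(seed)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]

    def nearest(q, data):
        return min(math.dist(q, p) for p in data)

    probes = [(rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
              for _ in range(m)]
    sample_idx = rng.sample(range(len(points)), m)
    u = sum(nearest(q, points) for q in probes)
    w = sum(nearest(points[i], points[:i] + points[i + 1:]) for i in sample_idx)
    return u / (u + w)

# Two tight, well-separated blobs should look clustered (H well above 0.5).
rng = random.Random(1)
blob = [(rng.gauss(0, 0.1), rng.gauss(0, 0.1)) for _ in range(100)] + \
       [(rng.gauss(5, 0.1), rng.gauss(5, 0.1)) for _ in range(100)]
print(hopkins(blob))
```

Running a check like this before clustering guards against the failure mode the abstract warns about: every clustering algorithm will happily partition data that has no cluster structure at all.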

  14. Number of iterations needed in Monte Carlo Simulation using reliability analysis for tunnel supports

    E. Bukaçi


    Full Text Available There are many methods in geotechnical engineering which could take advantage of Monte Carlo simulation to establish the probability of failure, since closed-form solutions are almost impossible to use in most cases. The problem that arises with using Monte Carlo simulation is the number of iterations needed for a particular simulation. This article shows why it is important to calculate the number of iterations needed for a Monte Carlo simulation used in reliability analysis for tunnel supports with the convergence–confinement method. The number of iterations needed is calculated with two methods. In the first method, the analyst has to assume a distribution function for the performance function. The other method suggested by this article is to calculate the number of iterations based on the convergence of the quantity the analyst is interested in. A reliability analysis is performed for the diversion tunnel in Rrëshen, Albania, using both methods, and the results are compared.
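
Both ideas in the article can be sketched generically: the first method reduces to the classical sample-size rule for a target coefficient of variation of the failure-probability estimator, and the second runs the simulation in batches until the running estimate stops changing. The performance function and thresholds below are illustrative, not the tunnel-support model:

```python
import math
import random

def mc_iterations_for_cov(p_f, target_cov):
    """Classical rule of thumb: iterations needed so that the coefficient
    of variation of the failure-probability estimator stays below target_cov."""
    return math.ceil((1.0 - p_f) / (p_f * target_cov ** 2))

def mc_until_converged(g, sampler, tol=0.05, batch=5000, max_iter=10**6, seed=0):
    """Run Monte Carlo in batches and stop once the running estimate of the
    failure probability changes relatively by less than tol from one batch
    checkpoint to the next. g(x) <= 0 is taken to mean failure."""
    rng = random.Random(seed)
    fails, n, prev = 0, 0, None
    while n < max_iter:
        for _ in range(batch):
            if g(sampler(rng)) <= 0.0:
                fails += 1
        n += batch
        p = fails / n
        if prev is not None and prev > 0.0 and abs(p - prev) / prev < tol:
            break
        prev = p
    return fails / n, n
```

For g = 3 − x with x ~ N(0, 1) the true failure probability is about 1.35×10⁻³, so a 10% coefficient of variation already demands on the order of 10⁵ samples.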

  15. Estimating Between-Person and Within-Person Subscore Reliability with Profile Analysis.

    Bulut, Okan; Davison, Mark L; Rodriguez, Michael C


    Subscores are of increasing interest in educational and psychological testing due to their diagnostic function for evaluating examinees' strengths and weaknesses within particular domains of knowledge. Previous studies about the utility of subscores have mostly focused on the overall reliability of individual subscores and ignored the fact that subscores should be distinct and have added value over the total score. This study introduces a profile reliability approach that partitions the overall subscore reliability into within-person and between-person subscore reliability. The estimation of between-person reliability and within-person reliability coefficients is demonstrated using subscores from number-correct scoring, unidimensional and multidimensional item response theory scoring, and augmented scoring approaches via a simulation study and a real data study. The effects of various testing conditions, such as subtest length, correlations among subscores, and the number of subtests, are examined. Results indicate that there is a substantial trade-off between within-person and between-person reliability of subscores. Profile reliability coefficients can be useful in determining the extent to which subscores provide distinct and reliable information under various testing conditions.
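
The partition described above can be sketched numerically: split each examinee's subscore vector into a level (the person mean, carrying between-person information) and a pattern (deviations from that mean, carrying within-person information), and estimate each part's reliability by correlating it across two parallel forms. This is an illustrative simplification of the profile-reliability idea, not the authors' estimator:

```python
def profile_reliability(form_a, form_b):
    """Sketch of profile reliability: correlate the level (person mean) and
    the pattern (deviations from that mean) across two parallel test forms.
    form_a, form_b: lists of per-person subscore lists of equal shape."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    means_a = [sum(p) / len(p) for p in form_a]
    means_b = [sum(p) / len(p) for p in form_b]
    between = corr(means_a, means_b)            # reliability of the level
    dev_a = [v - m for p, m in zip(form_a, means_a) for v in p]
    dev_b = [v - m for p, m in zip(form_b, means_b) for v in p]
    within = corr(dev_a, dev_b)                 # reliability of the pattern
    return between, within
```

The trade-off reported in the study shows up here directly: subtests that correlate highly raise the between-person part while draining variance from the within-person part.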

  16. A new approach to real-time reliability analysis of transmission system using fuzzy Markov model

    Tanrioven, M.; Kocatepe, C. [University of Yildiz Technical, Istanbul (Turkey). Dept. of Electrical Engineering; Wu, Q.H.; Turner, D.R.; Wang, J. [Liverpool Univ. (United Kingdom). Dept. of Electrical Engineering and Economics


    To date the studies of power system reliability over a specified time period have used average values of the system transition rates in Markov techniques. [Singh C, Billinton R. System reliability modeling and evaluation. London: Hutchison Educational; 1977]. However, the level of power systems reliability varies from time to time due to weather conditions, power demand and random faults [Billinton R, Wojczynski E. Distributional variation of distribution system reliability indices. IEEE Trans Power Apparatus Systems 1985; PAS-104(11):3152-60]. It is essential to obtain an estimate of system reliability under all environmental and operating conditions. In this paper, fuzzy logic is used in the Markov model to describe both transition rates and temperature-based seasonal variations, which identifies multiple weather conditions such as normal, less stormy, very stormy, etc. A three-bus power system model is considered to determine the variation of system reliability in real-time, using this newly developed fuzzy Markov model (FMM). The results cover different aspects such as daily and monthly reliability changes during January and August. The reliability of the power transmission system is derived as a function of augmentation in peak load level. Finally the variation of the system reliability with weather conditions is determined. (author)
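
The core idea of weather-dependent reliability can be illustrated with a two-state Markov component whose failure rate is scaled per weather condition and whose availability is defuzzified by membership-weighted averaging. This is a toy blend of fuzzy weights with Markov steady states, not the paper's three-bus FMM:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Two-state Markov component: long-run fraction of time in the 'up' state."""
    return repair_rate / (failure_rate + repair_rate)

def fuzzy_weather_availability(base_failure, repair, memberships, multipliers):
    """Hedged sketch (not the paper's exact FMM): scale the failure rate per
    weather condition, weight the resulting availabilities by fuzzy
    memberships, and defuzzify by weighted average. memberships/multipliers
    are dicts keyed by weather label; memberships need not sum to 1."""
    total = sum(memberships.values())
    return sum(
        memberships[w] / total *
        steady_state_availability(base_failure * multipliers[w], repair)
        for w in memberships
    )
```

With, say, 80% membership in "normal" weather and 20% in "stormy" weather (the stormy failure rate five times higher), the blended availability falls between the two pure-weather values.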

  17. Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD): Development of Image Analysis Criteria and Examiner Reliability for Image Analysis

    Ahmad, Mansur; Hollender, Lars; Odont; Anderson, Quentin; Kartha, Krishnan; Ohrbach, Richard K.; Truelove, Edmond L.; John, Mike T.; Schiffman, Eric L.


    Introduction As a part of a multi-site RDC/TMD Validation Project, comprehensive TMJ diagnostic criteria were developed for image analysis using panoramic radiography, magnetic resonance imaging (MRI), and computed tomography (CT). Methods Inter-examiner reliability was estimated using the kappa (k) statistic, and agreement between rater pairs was characterized by overall, positive, and negative percent agreement. CT was the reference standard for assessing validity of other imaging modalities for detecting osteoarthritis (OA). Results For the radiological diagnosis of OA, reliability of the three examiners was poor for panoramic radiography (k = 0.16), fair for MRI (k = 0.46), and close to the threshold for excellent for CT (k = 0.71). Using MRI, reliability was excellent for diagnosing disc displacements (DD) with reduction (k = 0.78) and for DD without reduction (k = 0.94), and was good for effusion (k = 0.64). Overall percent agreement for pair-wise ratings was ≥ 82% for all conditions. Positive percent agreement for diagnosing OA was 19% for panoramic radiography, 59% for MRI, and 84% for CT. Using MRI, positive percent agreement for diagnoses of any DD was 95% and for effusion was 81%. Negative percent agreement was ≥ 88% for all conditions. Compared to CT, panoramic radiography and MRI had poor to marginal sensitivity, respectively, but excellent specificity, in detecting OA. Conclusion Comprehensive image analysis criteria for RDC/TMD Validation Project were developed, which can reliably be employed for assessing OA using CT, and for disc position and effusion using MRI. PMID:19464658
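
The kappa values quoted above come from the standard Cohen's kappa, which corrects raw agreement between two raters for the agreement expected by chance:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical judgements:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1.0 - pe)
```

Interpretation bands like those in the abstract (poor, fair, excellent) are conventions layered on top of this single number.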

  18. Strategy for Synthesis of Flexible Heat Exchanger Networks Embedded with System Reliability Analysis

    YI Dake; HAN Zhizhong; WANG Kefeng; YAO Pingjing


    System reliability can produce a strong influence on the performance of a heat exchanger network (HEN). In this paper, an optimization method with system reliability analysis for flexible HENs using genetic/simulated annealing algorithms (GA/SA) is presented. Initial flexible arrangements of the HEN are obtained from the pseudo-temperature enthalpy diagram. For determining the system reliability of the HEN, the connections of heat exchangers (HEXs) and independent subsystems in the HEN are analyzed by the connection sequence matrix (CSM), and the system reliability is measured by the independent subsystem containing the maximum number of HEXs in the HEN. For a HEN that does not meet the system reliability requirement, HEN decoupling is applied and the independent subsystems in the HEN are changed by removing the decoupling HEX, and thus the system reliability is raised. After that, heat duty redistribution based on the relevant elements of the heat load loops and the HEX areas is optimized in GA/SA. Then the favorable network configuration, which matches both the most economical cost and the system reliability criterion, is located. Moreover, particular features belonging to a suitable decoupling HEX are extracted from the calculations. A numerical example is presented to verify that the proposed strategy is effective in formulating an optimal flexible HEN with system reliability measurement.

  19. The Revised Child Anxiety and Depression Scale: A systematic review and reliability generalization meta-analysis.

    Piqueras, Jose A; Martín-Vivar, María; Sandin, Bonifacio; San Luis, Concepción; Pineda, David


    Anxiety and depression are among the most common mental disorders during childhood and adolescence. Among the instruments for the brief screening assessment of symptoms of anxiety and depression, the Revised Child Anxiety and Depression Scale (RCADS) is one of the more widely used. Previous studies have demonstrated the reliability of the RCADS for different assessment settings and different versions. The aims of this study were to examine the mean reliability of the RCADS and the influence of the moderators on the RCADS reliability. We searched in EBSCO, PsycINFO, Google Scholar, Web of Science, and NCBI databases and other articles manually from lists of references of extracted articles. A total of 146 studies were included in our meta-analysis. The RCADS showed robust internal consistency reliability in different assessment settings, countries, and languages. We only found that reliability of the RCADS was significantly moderated by the version of RCADS. However, these differences in reliability between different versions of the RCADS were slight and can be due to the number of items. We did not examine factor structure, factorial invariance across gender, age, or country, and test-retest reliability of the RCADS. The RCADS is a reliable instrument for cross-cultural use, with the advantage of providing more information with a low number of items in the assessment of both anxiety and depression symptoms in children and adolescents. Copyright © 2017. Published by Elsevier B.V.

  20. Performance and reliability analysis of water distribution systems under cascading failures and the identification of crucial pipes.

    Shuang, Qing; Zhang, Mingyuan; Yuan, Yongbo


    As a means of supplying water, the water distribution system (WDS) is one of the most important complex infrastructures. Its stability and reliability are critical for urban activities. WDSs can be characterized as networks of multiple nodes (e.g. reservoirs and junctions) interconnected by physical links (e.g. pipes). Instead of analyzing the highest failure rate or highest betweenness, the reliability of the WDS is evaluated by introducing hydraulic analysis and cascading failures (a conductive failure pattern) from complex network theory. The crucial pipes are identified eventually. The proposed methodology is illustrated by an example. The results show that the demand multiplier has a great influence on the peak of reliability and on how long the cascading failures persist as they propagate through the WDS. The time period when the system has the highest reliability is when the demand multiplier is less than 1. A threshold of the tolerance parameter exists: when the tolerance parameter is less than the threshold, the time period with the highest system reliability does not coincide with the minimum value of the demand multiplier. The results indicate that system reliability should be evaluated with both the properties of the WDS and the characteristics of cascading failures, so as to improve its ability to resist disasters.
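
The role of the tolerance parameter can be illustrated with a generic load-redistribution cascade on a graph: each node tolerates up to (1 + tol) times its initial load, and a failed node sheds its load onto surviving neighbours. This is an abstract sketch of cascading failure, not the paper's hydraulic WDS model:

```python
def cascade(adj, load, tol, start):
    """Generic cascading-failure sketch: node v carries load[v] and tolerates
    up to (1 + tol) * load[v]; a failed node sheds its current load equally
    onto surviving neighbours, which may overload and fail in turn.
    adj: dict node -> set of neighbours. Returns the set of failed nodes."""
    cap = {v: (1.0 + tol) * load[v] for v in load}
    cur = dict(load)
    failed = set()
    frontier = [start]
    while frontier:
        nxt = []
        for v in frontier:
            if v in failed:
                continue
            failed.add(v)
            alive = [u for u in adj[v] if u not in failed]
            if alive:
                share = cur[v] / len(alive)
                for u in alive:
                    cur[u] += share
                    if cur[u] > cap[u]:
                        nxt.append(u)
        frontier = nxt
    return failed
```

On a three-node line with unit loads, a low tolerance lets a single pipe failure take down the whole chain, while a high tolerance confines the damage to the initial failure, mirroring the threshold behaviour reported above.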

  1. Reliability and life-cycle analysis of deteriorating systems

    Sánchez-Silva, Mauricio


    This book compiles and critically discusses modern engineering system degradation models and their impact on engineering decisions. In particular, the authors focus on modeling the uncertain nature of degradation, considering both conceptual discussions and formal mathematical formulations. It also describes the basic concepts and the various modeling aspects of life-cycle analysis (LCA). It highlights the role of degradation in LCA and defines optimum design and operation parameters. Given the relationship between operational decisions and the performance of the system's condition over time, maintenance models are also discussed. The concepts and models presented have applications in a large variety of engineering fields such as Civil, Environmental, Industrial, Electrical and Mechanical engineering. However, special emphasis is given to problems related to large infrastructure systems. The book is intended to be used both as a reference resource for researchers and practitioners and as an academic text ...

  2. Reliability Analysis of Temperature Influence on Stresses in Rigid Pavement Made from Recycled Materials

    Aleš Florian


    Full Text Available A complex statistical and sensitivity analysis of principal stresses in concrete slabs of a real type of rigid pavement made from recycled materials is performed. The pavement is dominantly loaded by the temperature field acting on the upper and lower surfaces of the concrete slabs. The computational model of the pavement is designed as a spatial (3D) model, is based on a nonlinear variant of the finite element method that respects the structural nonlinearity, enables modeling of different arrangements of joints, and the entire model can be loaded by thermal load. Four concrete slabs separated by transverse and longitudinal joints and the additional structural layers, including soil to a depth of about 3 m, are modeled. The thicknesses of individual layers, physical and mechanical properties of materials, characteristics of joints, and the temperatures of the upper and lower surfaces of the slabs are treated as random variables. The simulation technique Updated Latin Hypercube Sampling with 20 simulations is used for the reliability analysis. As results of the statistical analysis, estimates of basic statistics of the principal stresses σ1 and σ3 at 106 points on the upper and lower surfaces of the slabs are obtained. For the sensitivity analysis, a sensitivity coefficient based on the Spearman rank correlation coefficient is used. As results of the sensitivity analysis, estimates of the influence of the random variability of individual input variables on the random variability of the principal stresses σ1 and σ3 are obtained.
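
Updated Latin Hypercube Sampling builds on plain LHS, which is what lets even 20 simulations cover every equal-probability stratum of each input variable exactly once. A basic (non-updated) LHS sketch on the unit hypercube:

```python
import random

def latin_hypercube(n, dims, rng=None):
    """Plain Latin Hypercube Sampling: n points in [0, 1)^dims such that each
    dimension has exactly one point in each of its n equal-probability strata."""
    rng = rng or random.Random(0)
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)            # random pairing of strata across dimensions
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))
```

Mapping each unit-interval coordinate through an inverse CDF then yields stratified samples of the physical random variables (layer thicknesses, material properties, temperatures).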

  3. Application of response surface method for contact fatigue reliability analysis of spur gear with consideration of EHL

    胡贇; 刘少军; 丁晟; 廖雅诗


    In order to consider the effects of elastohydrodynamic lubrication (EHL) on the contact fatigue reliability of a spur gear, an accurate and efficient method combining the response surface method (RSM) and the first order second moment method (FOSM) was developed for estimating the contact fatigue reliability of a spur gear under EHL. A mechanical model for contact stress analysis of the spur gear under EHL was established, in which the oil film pressure was mapped onto the Hertz contact zone. Considering the randomness of the EHL, material properties and fatigue strength correction factors, the proposed method was used to analyze the contact fatigue reliability of the spur gear under EHL. Compared with the results of 1.5×10⁵ runs of the traditional Monte Carlo method (MCM), the difference between the two failure probability results calculated by the above-mentioned methods is 2.2×10⁻⁴, the relative error of the failure probability results is 26.8%, and the time consumed accounts for only 0.14% of that of the traditional MCM. The sensitivity analysis results are in very good agreement with practical knowledge. The analysis results show that the proposed method is precise and efficient, and can correctly reflect the influence of EHL on the contact fatigue reliability of spur gears.
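
The FOSM half of the hybrid method can be sketched as the mean-value first-order second-moment estimate: linearize the performance function at the means and form the reliability index beta from the propagated variance (finite-difference gradients here; the gear-specific response surface is the paper's own contribution):

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fosm_beta(g, means, stds, h=1e-6):
    """Mean-value FOSM: reliability index beta = g(mu) / sigma_g, with
    sigma_g from first-order variance propagation and gradients computed
    by central finite differences."""
    var = 0.0
    for i, (m, s) in enumerate(zip(means, stds)):
        up, dn = list(means), list(means)
        up[i], dn[i] = m + h, m - h
        var += (((g(up) - g(dn)) / (2.0 * h)) * s) ** 2
    return g(means) / math.sqrt(var)
```

For the textbook margin g = R − S with R ~ (5, 1) and S ~ (3, 1), beta = 2/√2 ≈ 1.414 and the failure probability estimate is Φ(−beta) ≈ 0.079.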

  4. An Investigation on the Reliability of Deformation Analysis at Simulated Network Depending on the Precise Point Position Technique

    Durdag, U. M.; Erdogan, B.; Hekimoglu, S.


    Deformation analysis plays an important role in human life safety; hence, investigating the reliability of the results obtained from deformation analysis is crucial. A deformation monitoring network is established and the observations are analyzed periodically. The main problem in deformation analysis is that if there is more than one displaced point in the monitoring network, the analysis methods smear the disturbing effects of the displaced points over all other points which are not displaced. Therefore, only one displaced point can be detected successfully. Precise Point Positioning (PPP) gives the opportunity to prevent the smearing effect of the displaced points. In this study, we have simulated a monitoring network consisting of four object points and generated six different scenarios. The displacements were applied to the points by using a device on which the GPS antenna could easily be moved horizontally, and seven-hour static GPS measurements were carried out. The measurements were analyzed by using the online Automatic Precise Positioning Service (APPS) to obtain the coordinates and covariance matrices. The results of the APPS were used in the deformation analysis. The detected points and the true displaced points were compared with each other to assess the reliability of the method. According to the results, the analysis still detects stable points as displaced points. As the next step, we are going to investigate the reason for these wrong results and work on acquiring more reliable results.

  5. Multi-dimensional reliability assessment of fractal signature analysis in an outpatient sports medicine population.

    Jarraya, Mohamed; Guermazi, Ali; Niu, Jingbo; Duryea, Jeffrey; Lynch, John A; Roemer, Frank W


    The aim of this study has been to test the reproducibility of fractal signature analysis (FSA) in a young, active patient population, taking into account several parameters including intra- and inter-reader placement of regions of interest (ROIs) as well as various aspects of projection geometry. In total, 685 patients were included (135 athletes and 550 non-athletes, 18-36 years old). Regions of interest (ROIs) were situated beneath the medial tibial plateau. The reproducibility of texture parameters was evaluated using intraclass correlation coefficients (ICC). Multi-dimensional assessment included: (1) anterior-posterior (A.P.) vs. posterior-anterior (P.A.) (Lyon-Schuss technique) views on 102 knees; (2) unilateral (single knee) vs. bilateral (both knees) acquisition on 27 knees (acquisition technique otherwise identical; same A.P. or P.A. view); (3) repetition of the same image acquisition on 46 knees (same A.P. or P.A. view, and same unilateral or bilateral acquisition); and (4) intra- and inter-reader reliability with repeated placement of the ROIs in the subchondral bone area on 99 randomly chosen knees. ICC values on the reproducibility of texture parameters for A.P. vs. P.A. image acquisitions for horizontal and vertical dimensions combined were 0.72 (95% confidence interval (CI) 0.70-0.74), ranging from 0.47 to 0.81 for the different dimensions. For unilateral vs. bilateral image acquisitions, the ICCs were 0.79 (95% CI 0.76-0.82), ranging from 0.55 to 0.88. For the repetition of the identical view, the ICCs were 0.82 (95% CI 0.80-0.84), ranging from 0.67 to 0.85. Intra-reader reliability was 0.93 (95% CI 0.92-0.94) and inter-observer reliability was 0.96 (95% CI 0.88-0.99). A decrease in reliability was observed with increasing voxel sizes. Our study confirms excellent intra- and inter-reader reliability for FSA; however, results seem to be affected by acquisition technique, which has not been previously recognized.
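
The ICC figures above can be reproduced with the one-way random-effects form ICC(1,1); the study does not state here which ICC variant it used, so this sketch shows the simplest case:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1). ratings: list of subjects, each a row
    of k repeated measurements. ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n = len(ratings)
    k = len(ratings[0])
    row_means = [sum(r) / k for r in ratings]
    grand = sum(row_means) / n
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2 for r, m in zip(ratings, row_means) for x in r) \
        / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect repeat measurements give ICC = 1; small within-subject noise relative to between-subject spread keeps the ICC in the "excellent" band quoted above.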

  6. Increased turbo compressor reliability by analysis of fluid structure interaction

    Smeulers, J.P.M.; González Díez, N.


    The integrity of compressors and pumps is of paramount importance for the gas and oil industry. Failures may result in serious production losses that are in no proportion with the cost of the equipment involved. Besides, the equipment may be inaccessible for maintenance for a long period of time due

  8. Mechanical system reliability analysis using a combination of graph theory and Boolean function

    Tang, J


    A new method based on graph theory and Boolean functions for assessing the reliability of mechanical systems is proposed. The procedure for this approach consists of two parts. By using graph theory, the formula for the reliability of a mechanical system that considers the interrelations of subsystems or components is generated. By using the Boolean function to examine the failure interactions of two particular elements of the system, followed by demonstrations of how to incorporate such failure dependencies into the analysis of larger systems, a constructive algorithm for quantifying the genuine interconnections between the subsystems or components is provided. The combination of graph theory and Boolean functions provides an effective way to evaluate the reliability of a large, complex mechanical system. A numerical example demonstrates that this method is an effective approach to system reliability analysis.
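
For small systems, the Boolean-function part of such an approach can be made concrete by brute force: enumerate all component up/down states and sum the probabilities of the states in which the structure function reports the system as up. This is a sketch of the underlying idea, not the paper's constructive algorithm:

```python
from itertools import product

def system_reliability(structure, rel):
    """System reliability by full Boolean state enumeration. structure maps a
    tuple of component up/down booleans to system up/down; rel[i] is the
    reliability of component i. Exponential in the number of components,
    so this is for small systems only."""
    total = 0.0
    for state in product([True, False], repeat=len(rel)):
        if structure(state):
            p = 1.0
            for up, r in zip(state, rel):
                p *= r if up else (1.0 - r)
            total += p
    return total
```

Passing `all` as the structure function gives a series system, `any` a parallel system; arbitrary Boolean functions capture the richer interconnections the paper targets.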

  9. Analysis and Application of Mechanical System Reliability Model Based on Copula Function

    An Hai


    Full Text Available There are complicated correlations in mechanical systems. By using the advantages of the copula function in solving such correlation issues, this paper proposes a mechanical system reliability model based on the copula function. It makes a detailed study of the serial and parallel mechanical system models and derives their respective reliability functions. Finally, application research is carried out on the serial mechanical system reliability model to prove its validity by example. Using copula theory for mechanical system reliability modeling, and studying the distributions of the random variables (the marginal distributions of the mechanical product's life) and the associated structure of the variables separately, can reduce the difficulty of multivariate probabilistic modeling and analysis and make the modeling and analysis process clearer.
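
For a two-component series system, the copula idea reduces to evaluating the joint survival probability through the copula of the component failure probabilities: P(both survive) = 1 − f1 − f2 + C(f1, f2). A sketch with a Clayton copula (the abstract does not fix a particular copula family, so this choice is illustrative):

```python
def clayton(u, v, theta):
    """Clayton copula C(u, v); theta > 0 gives positive dependence."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def series_reliability(f1, f2, theta):
    """Reliability of a two-component series system whose component failure
    probabilities f1, f2 are coupled through a Clayton copula."""
    return 1.0 - f1 - f2 + clayton(f1, f2, theta)
```

As theta → 0 the Clayton copula approaches independence (C(u, v) → u·v); larger theta means stronger positive dependence between component failures, which raises a series system's reliability above the independent value.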

  10. Technology development of maintenance optimization and reliability analysis for safety features in nuclear power plants

    Kim, Tae Woon; Choi, Seong Soo; Lee, Dong Gue; Kim, Young Il


    The reliability data management system (RDMS) for the safety systems of PHWR-type plants has been developed and utilized in the reliability analysis of the special safety systems of Wolsong Units 1 and 2, whose plant overhaul period has been lengthened. The RDMS was developed for periodic, efficient reliability analysis of the safety systems of Wolsong Units 1 and 2. In addition, this system provides the function of analyzing the effects on safety system unavailability if the test period of a test procedure changes, as well as the function of optimizing the test periods of safety-related test procedures. The RDMS can be utilized in actively handling requests from the regulatory body with regard to the reliability validation of safety systems. (author)

  11. Tensile reliability analysis for gravity dam foundation surface based on FEM and response surface method

    Tong-chun LI; Dan-dan LI; Zhi-qiang WANG


    In this study, the limit state equation for tensile reliability analysis of the foundation surface of a gravity dam was established. The possible crack length was set as the action effect and the allowable crack length was set as the resistance in the limit state. The nonlinear FEM was used to obtain the crack length of the foundation surface of the gravity dam, and the linear response surface method based on the orthogonal test design method was used to calculate the reliability, providing a reasonable and simple method for calculating the reliability of the serviceability limit state. The Longtan RCC gravity dam was chosen as an example. An orthogonal test, including eleven factors and two levels, was conducted, and the tensile reliability was calculated. The analysis shows that this method is reasonable.

  12. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    Duffy, Stephen F.; Palko, Joseph L.


    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented in a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  13. Latency Analysis of Systems with Multiple Interfaces for Ultra-Reliable M2M Communication

    Nielsen, Jimmy Jessen; Popovski, Petar


    One of the ways to satisfy the requirements of ultra-reliable low-latency communication for mission-critical Machine-type Communications (MTC) applications is to integrate multiple communication interfaces. In order to estimate the performance in terms of latency and reliability of such an integrated communication system, we propose an analysis framework that combines traditional reliability models with technology-specific latency probability distributions. In our proposed model we demonstrate how failure correlation between technologies can be taken into account. We show for the considered...
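
The baseline (independence) case of such a multi-interface system is easy to state: the packet misses its deadline only if every interface misses it. The paper's point is precisely that correlated failures break this assumption, which the following independent sketch deliberately ignores:

```python
import math

def exp_latency_cdf(rate):
    """CDF of an exponential(rate) latency model (an illustrative choice)."""
    return lambda d: 1.0 - math.exp(-rate * d)

def parallel_interface_reliability(interfaces, deadline):
    """Probability that at least one interface both is available and delivers
    before the deadline, assuming interfaces fail independently.
    interfaces: list of (availability, latency_cdf) pairs."""
    p_all_miss = 1.0
    for avail, cdf in interfaces:
        p_all_miss *= 1.0 - avail * cdf(deadline)
    return 1.0 - p_all_miss
```

For two always-available interfaces that each meet the deadline with probability 0.5, the combined success probability under independence is 0.75; correlated outages would pull this figure down toward the single-interface value.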

  14. Reliability evaluation and analysis of sugarcane 7000 series harvesters in sugarcane harvesting

    P Najafi


    hours were used. Usually, two methods are used for machine reliability modeling. The first is Pareto analysis and the second is statistical modeling of failure distributions (Barabadi and Kumar, 2007). For failure distribution modeling it must be determined whether the data are independent and identically distributed (iid) or not. For this, a trend test and serial correlation tests are used. If the data have a trend, they are not iid and the parameters are computed from the power law process. For data that do not have a trend, a serial correlation test is performed. If the correlation coefficient is less than 0.05 the data are not iid; therefore, the parameters are obtained via the branching Poisson process or other similar methods. If the correlation coefficient is more than 0.05, the data are iid, and classical statistical methods are used for reliability modeling. Trend test results are compared with the statistical parameter. A test for serial correlation was also done by plotting the ith TBF against the (i−1)th TBF, i = 1, 2, ..., n. If the plotted points are randomly scattered without any pattern, it can be interpreted that there is no correlation in general among the TBF data and the data are independent. Next, the best-fit distribution for the TBF data must be chosen. A few tests can be used to select the best-fit distribution, including the chi-squared test and the Kolmogorov–Smirnov (K-S) test. The chi-squared test is not valid when there are fewer than 50 data points; therefore, when there are fewer than 50 TBF data, the K-S test must be used. Hence, the K-S test can be used for any number of TBF data. When the failure distribution has been determined, the reliability model may be computed by equation (2). Results and discussion: Results of the trend analysis for the TBF data of the sugarcane harvester machines showed that the calculated statistic U for all machines was greater than the chi-squared value extracted from the chi-square table with 2(n−1) degrees of freedom and a 5 percent level of significance.
Hence
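
The trend test described above can be illustrated with the Laplace test: transform the TBFs to cumulative failure times and compare their mean against the midpoint of the observation window. This is a generic failure-truncated form, not necessarily the exact statistic U used in the paper:

```python
import math

def laplace_trend(tbf):
    """Laplace trend test on times-between-failures (failure-truncated form):
    U near 0 means no trend (the data may be treated as iid for classical
    fitting); U well above 0 indicates reliability deterioration."""
    times = []
    acc = 0.0
    for x in tbf:
        acc += x
        times.append(acc)
    t_n = times[-1]
    n = len(times) - 1            # the last failure closes the observation window
    mean_prev = sum(times[:-1]) / n
    return (mean_prev - t_n / 2.0) / (t_n * math.sqrt(1.0 / (12.0 * n)))
```

Constant TBFs give U = 0 (no trend), while steadily shrinking TBFs pile the failure times toward the end of the window and push U positive.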

  15. Reliability of the Functional Reach Test and the influence of anthropometric characteristics on test results in subjects with hemiparesis.

    Martins, Emerson Fachin; de Menezes, Lidiane Teles; de Sousa, Pedro Henrique Côrtes; de Araujo Barbosa, Paulo Henrique Ferreira; Costa, Abraão Souza


    First designed as an alternative method of assessing balance and susceptibility to falls among the elderly, the Functional Reach Test (FR) has also been used among patients with hemiparesis. This study therefore aimed to describe the intra-rater, inter-rater, and test/re-test reliability of the FR measure in subjects with and without hemiparesis, while verifying anthropometric influences on the measurements. The FR was administered to a sample of subjects with hemiparesis and to a control group that was matched by gender and age. A two-way analysis of variance was used to verify the intra-rater reliability. It was calculated using the differences between the averages of the measures obtained during single, double or triple trials. The intra-class correlation coefficient (ICC) was utilized and the data were plotted using the Bland-Altman method. Associations were analyzed using Pearson's correlation coefficient. In general, the intra-rater analysis did not show significant differences between the measures for the single, double or triple trials. Excellent ICC values were observed, and there were no significant associations with anthropometric parameters for the hemiparesis and control subjects. The FR showed good reliability for patients with and without hemiparesis, and the test measurements were not significantly associated with the anthropometric characteristics of the subjects.

  16. Reliability analysis of flood embankments taking into account a stochastic distribution of hydraulic loading

    Amabile Alessia


    Full Text Available Flooding is a worldwide phenomenon. Over the last few decades the world has experienced a rising number of devastating flood events, and the trend in such natural disasters is increasing. Furthermore, escalations in both the probability and magnitude of flood hazards are expected as a result of climate change. Flood defence embankments are one of the major flood defence measures, and reliability assessment for these structures is therefore a very important process. Routine hydro-mechanical models for the stability of flood embankments are based on the assumptions of steady-state through-flow and zero pore pressures above the phreatic surface, i.e. negative capillary pressure (suction) is ignored. Despite common belief, these assumptions may not always lead to conservative design. In addition, hydraulic loading is stochastic in nature, and flood embankment stability should therefore be assessed in probabilistic terms. This cannot be accommodated by steady-state flow models. The paper presents an approach for reliability analysis of flood embankments taking into account transient water through-flow. The factor of safety of the embankment is assessed in probabilistic terms based on a stochastic distribution for the hydraulic loading. Two different probabilistic approaches are tested to compare and validate the results.

  17. Reliability and Validity of Quantitative Video Analysis of Baseball Pitching Motion.

    Oyama, Sakiko; Sosa, Araceli; Campbell, Rebekah; Correa, Alexandra


    Video recordings are used to quantitatively analyze pitchers' techniques. However, the reliability and validity of such analysis are unknown. The purpose of the study was to investigate the reliability and validity of joint and segment angles identified during a pitching motion using video analysis. Thirty high school baseball pitchers participated. The pitching motion was captured using 2 high-speed video cameras and a motion capture system. Two raters reviewed the videos to digitize the body segments and calculate 2-dimensional angles. The corresponding 3-dimensional angles were calculated from the motion capture data. Intrarater reliability, interrater reliability, and validity of the 2-dimensional angles were determined. The intrarater and interrater reliability of the 2-dimensional angles were high for most variables. The trunk contralateral flexion at maximum external rotation was the only variable with high validity. Trunk contralateral flexion at ball release, trunk forward flexion at foot contact and ball release, shoulder elevation angle at foot contact, and maximum shoulder external rotation had moderate validity. Two-dimensional angles at the shoulder, elbow, and trunk could be measured with high reliability. However, the angles are not necessarily anatomically correct, and thus the use of quantitative video analysis should be limited to angles that can be measured with good validity.

  18. Stochastic data-flow graph models for the reliability analysis of communication networks and computer systems

    Chen, D.J.


    The literature is abundant with combinatorial reliability analyses of communication networks and fault-tolerant computer systems. However, it is very difficult to formulate reliability indexes using combinatorial methods. These limitations have led to the development of time-dependent reliability analysis using stochastic processes. In this research, time-dependent reliability-analysis techniques using Data Flow Graphs (DFG) are developed. The chief advantages of DFG models over other models are their compactness, structural correspondence with the systems, and general amenability to direct interpretation. This makes it possible to verify that the data-flow graph representation corresponds to the actual system. Several DFG models are developed and used to analyze the reliability of communication networks and computer systems. Specifically, Stochastic Data Flow Graphs (SDFG), in both discrete-time and continuous-time form, are developed and used to compute the time-dependent reliability of communication networks and computer systems. The repair and coverage phenomena of communication networks are also analyzed using SDFG models.
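The time-dependent reliability computations described above rest on stochastic state models. As a minimal illustration of the idea (not the paper's SDFG formalism), the sketch below steps a discrete-time Markov chain for a two-component parallel system with an assumed per-step failure probability:

```python
# Discrete-time Markov sketch of time-dependent reliability for a
# two-component parallel system; p is the per-step failure probability
# of each component. States: 2 = both up, 1 = one up, 0 = failed
# (absorbing). Parameters are illustrative only.

def reliability(p, steps):
    """Probability the system is still up after `steps` time steps."""
    s2, s1, s0 = 1.0, 0.0, 0.0
    for _ in range(steps):
        s2, s1, s0 = (
            s2 * (1 - p) ** 2,                       # both survive
            s2 * 2 * p * (1 - p) + s1 * (1 - p),     # exactly one up
            s0 + s2 * p * p + s1 * p,                # absorbed (failed)
        )
    return s2 + s1

r50 = reliability(0.01, 50)
```

Because the two components here are independent, the chain reproduces the closed form 1 − (1 − (1 − p)ⁿ)², which is a convenient check on the transition probabilities.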

  19. Multiobject Reliability Analysis of Turbine Blisk with Multidiscipline under Multiphysical Field Interaction

    Chun-Yi Zhang


    Full Text Available To accurately study the influence of the deformation, stress, and strain of a turbine blisk on the performance of an aeroengine, a comprehensive reliability analysis of the turbine blisk with multiple disciplines and multiple objects was performed based on the multiple response surface method (MRSM) and a fluid-thermal-solid coupling technique. Firstly, the basic idea of the MRSM is introduced. Then the mathematical model of the MRSM is established with quadratic polynomials. Finally, the reliability analyses of the deformation, stress, and strain of the turbine blisk are completed under multiphysical field coupling by the MRSM, and the comprehensive performance of the turbine blisk is evaluated. The reliability analysis demonstrates that the reliability degrees of the deformation, stress, and strain of the turbine blisk are 0.9942, 0.9935, and 0.9954, respectively, when the allowable deformation, stress, and strain are 3.7 × 10⁻³ m, 1.07 × 10⁹ Pa, and 1.12 × 10⁻² m/m; the comprehensive reliability degree of the turbine blisk is 0.9919, which basically satisfies the engineering requirements of an aeroengine. The efforts of this paper provide a promising approach for multidiscipline, multiobject reliability analysis.

  20. Segmental analysis of indocyanine green pharmacokinetics for the reliable diagnosis of functional vascular insufficiency

    Kang, Yujung; Lee, Jungsul; An, Yuri; Jeon, Jongwook; Choi, Chulhee


    Accurate and reliable diagnosis of functional insufficiency of the peripheral vasculature is essential, since Raynaud phenomenon (RP), the most common form of peripheral vascular insufficiency, is commonly associated with systemic vascular disorders. We have previously demonstrated that dynamic imaging of the near-infrared fluorophore indocyanine green (ICG) can be a noninvasive and sensitive tool to measure tissue perfusion. In the present study, we demonstrated that combined analysis of multiple parameters, especially onset time and modified Tmax (the time from the onset of ICG fluorescence to Tmax), can be used as a reliable diagnostic tool for RP. To validate the method, we performed conventional thermographic analysis combined with cold challenge and rewarming, along with ICG dynamic imaging and segmental analysis. A case-control analysis demonstrated that the segmental pattern of ICG dynamics in both hands was significantly different between normal subjects and RP cases, suggesting the possibility of clinical application of this novel method for the convenient and reliable diagnosis of RP.

  1. Analysis of strain gage reliability in F-100 jet engine testing at NASA Lewis Research Center

    Holanda, R.


    A reliability analysis was performed on 64 strain gage systems mounted on the 3 rotor stages of the fan of a YF-100 engine. The strain gages were used in a 65 hour fan flutter research program which included about 5 hours of blade flutter. The analysis was part of a reliability improvement program. Eighty-four percent of the strain gages survived the test and performed satisfactorily. A post test analysis determined most failure causes. Five failures were caused by open circuits, three failed gages showed elevated circuit resistance, and one gage circuit was grounded. One failure was undetermined.

  2. Problems Related to Use of Some Terms in System Reliability Analysis

    Nadezda Hanusova


    Full Text Available The paper deals with problems of using dependability terms, defined in the current standard STN IEC 50(191): International Electrotechnical Vocabulary, Chapter 191: Dependability and quality of service (1993), in the dependability analysis of technical systems. The goal of the paper is to find a relation between the terms introduced in the mentioned standard and used in technical systems dependability analysis, and the rules and practices used in system analysis within systems theory. A description of the part of the system life cycle related to reliability is used as a starting point. This part of the system life cycle is described by a state diagram, and the reliability-relevant terms are assigned.

  3. Reliability analysis of onboard laser ranging systems for control systems by movement of spacecraft

    E. I. Starovoitov


    Full Text Available The purpose of this paper is to study ways to improve the reliability of the onboard laser ranging system (LRS) used to control spacecraft rendezvous and descent. The onboard LRS can be implemented with or without an optical-mechanical scanner. The paper analyses the key factors that influence the reliability of both types of LRS. The reliability of an LRS is largely determined by the reliability of the laser source and its radiation mode. Solid-state diode-pumped lasers are primarily used as the radiation source. Their reliability is affected by the radiation mode, which is defined by the requirements on the measurement errors of range and speed. The basic assumption is that the lifetime of solid-state lasers is determined by the number of pump-diode pulses. The paper investigates the influence of the radiation mode of the solid-state laser on the reliability function when measuring closing velocity during rendezvous with a passive spacecraft using a differential method. With measurement errors of 10 m for range and 0.6 m/s for velocity, a reliability function of 0.99 is achieved. Reducing the measurement error of velocity to 0.5 m/s either reduces the reliability function below 0.99, or requires reducing the initial error of range measurement to 3.5...5 m to reach a reliability function ≥ 0.995. For the scanner-based LRS, the maximum pulse repetition frequency versus range has been obtained, and this dependence has been used to define the reliability function. The paper investigates the influence of moving parts on the reliability of a scanning LRS with a sealed or unsealed optomechanical unit. As a result, it has been found that excluding moving parts is justified when manufacturing a sealed optomechanical LRS unit is impossible; in this case, the reliability function increases from 0.99 to 0.9999. When sealing the opto-mechanical unit, the same increase in reliability is achieved through

  4. Content Analysis in Mass Communication: Assessment and Reporting of Intercoder Reliability.

    Lombard, Matthew; Snyder-Duch, Jennifer; Bracken, Cheryl Campanella


    Reviews the importance of intercoder agreement for content analysis in mass communication research. Describes several indices for calculating this type of reliability (varying in appropriateness, complexity, and apparent prevalence of use). Presents a content analysis of content analyses reported in communication journals to establish how…

  5. The reliability of the Canadian triage and acuity scale: Meta-analysis

    Amir Mirhaghi


    Full Text Available Background: Although the Canadian Triage and Acuity Scale (CTAS) was developed two decades ago, its reliability has not been examined with respect to moderating variables. Aims: The study provides a meta-analytic review of the reliability of the CTAS in order to reveal to what extent the CTAS is reliable. Materials and Methods: Electronic databases were searched to March 2014. Only studies that reported sample size, reliability coefficients, and an adequate description of the CTAS reliability assessment were included. The Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method-of-moments estimator. Results: Fourteen studies were included. The pooled coefficient for the CTAS was substantial at 0.672 (95% CI: 0.599-0.735). Mistriage is less than 50%. Agreement on the adult version, among nurse-physician pairs, and in nearby countries is higher than for the pediatric version, other rater pairs, and more distant countries, respectively. Conclusion: The CTAS showed an acceptable level of overall reliability in the emergency department but needs further development to reach almost perfect agreement.
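The z-transformation step used in such meta-analyses can be sketched as follows. This is a simplified fixed-effect pooling (the study itself uses random-effects models); the coefficients and sample sizes below are invented for illustration:

```python
# Fixed-effect pooling of reliability coefficients via Fisher's
# z-transformation; var(z) is approximated by 1/(n - 3).
import math

def pool_reliability(coeffs, sizes):
    """Inverse-variance pooled coefficient, back-transformed to r."""
    zs = [math.atanh(r) for r in coeffs]          # r -> z
    ws = [n - 3 for n in sizes]                   # weights = 1/var(z)
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)                       # z -> r

pooled = pool_reliability([0.61, 0.70, 0.67], [120, 80, 200])
```

The pooled value always lies between the smallest and largest input coefficient, which is an easy sanity check on the transformation.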

  6. Microgrid Design Analysis Using Technology Management Optimization and the Performance Reliability Model

    Stamp, Jason E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jensen, Richard P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Munoz-Ramos, Karina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO) and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: * Mike Hightower, who has been the key driving force for Energy Surety Microgrids * Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations * Merrill Smith, U.S. Department of Energy SPIDERS Program Manager * Ross Roley and Rich Trundy from U.S. Pacific Command * Bill Waugaman and Bill Beary from U.S. Northern Command * Tarek Abdallah, Melanie


    Joanna Zuchewicz


    The objective of this paper is to indicate, on the one hand, the need for transformations in financial reporting as the basic source of information about the financial situation of an economic entity, indispensable in the decision-making process of its users, and on the other, to analyse the validity of the reporting reconstruction directions adopted, as suggested by the international financial community. On the basis of comments and reservations presented by practitioners and the a...

  8. Reliability centered maintenance (RCM): quantitative analysis of an induction electric furnace

    Diego Santos Cerveira


    Full Text Available The purpose of this article is to define a maintenance strategy for an electric induction furnace installed in a special steels foundry. The research method was quantitative modeling. The proposed method is based on Reliability-Centered Maintenance (RCM) applied to industrial equipment. Quantitative analysis of reliability, availability, and maintainability was used to support the definition of the maintenance strategy for the equipment. For the research, historical data were collected on time to repair (TTR) and time between failures (TBF) of the equipment under consideration. Supported by the ProConf 2000 software, the most appropriate distributions were identified and fitted to the TTR (lognormal) and TBF (Weibull) data. With these results, the equipment availability Av = 98.18% and the Weibull shape factor g = 1 were calculated. It was possible to place the equipment on the bathtub curve, in the maturity phase, and to define the best maintenance strategy for this case: predictive maintenance. Finally, the current strategy was discussed and development suggestions for it were presented.
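The availability figure reported above follows from the mean time between failures and mean time to repair, Av = MTBF / (MTBF + MTTR). A minimal sketch with invented TTR/TBF data (not the furnace data from the study):

```python
# Inherent availability from repair and between-failure times.
# The data below are illustrative placeholders, in hours.
ttr = [2.0, 4.5, 1.5, 3.0]           # times to repair
tbf = [180.0, 220.0, 160.0, 200.0]   # times between failures

mttr = sum(ttr) / len(ttr)           # mean time to repair
mtbf = sum(tbf) / len(tbf)           # mean time between failures
availability = mtbf / (mtbf + mttr)  # steady-state availability
```

With real maintenance records, one would additionally fit lifetime distributions (as the study does with ProConf) rather than use raw means, since a Weibull shape factor near 1 is what justifies treating the failure rate as constant.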


    姜年朝; 周光明; 张逊; 戴勇; 倪俊; 张志清


    A high-cycle fatigue reliability analysis approach for a helicopter rotor hub under a working load spectrum is proposed. Automatic calculation for the approach is implemented by writing calculation programs. In the system, the modification of the geometric model of the rotor hub is controlled by several parameters, and the finite element method and the S-N curve method are then employed to solve for the fatigue life from automatically assigned parameters. A database relating assigned parameters to fatigue life is obtained via Latin Hypercube Sampling (LHS) on the tolerance zone of the rotor hub. Different data-fitting technologies are used and compared to determine the highest-precision approximation for this database. The parameters are assumed to be independent of each other and to follow normal distributions. Fatigue reliability is then computed by the Monte Carlo (MC) method and the mean-value first-order second-moment (MFOSM) method. Results show that the approach has high efficiency and precision, and is suitable for engineering application.
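Latin Hypercube Sampling as used above can be sketched generically: each dimension is divided into equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently across dimensions. The function below is an illustrative unit-hypercube version, not the rotor-hub tolerance sampling itself:

```python
# Generic Latin Hypercube Sampling on the unit hypercube.
import random

def lhs(n_samples, n_dims, seed=0):
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # one uniform draw inside each of the n strata, then shuffle
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        for i, p in enumerate(points):
            samples[i][d] = p
    return samples

design = lhs(10, 3)
```

Each coordinate axis ends up with exactly one point per stratum, which is what gives LHS better space coverage than plain random sampling for the same number of runs; mapping the unit-cube points onto normal tolerances would use each distribution's inverse CDF.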

  10. Numerical simulation of soldered joints and reliability analysis of PLCC components with J-shape leads

    Zhang Liang; Xue Songbai; Lu Fangyan; Han Zongjie; Wang Jianxin


    This paper deals with a study of the SnPb and lead-free soldered joint reliability of PLCC devices with different lead counts under three kinds of temperature cycle profiles, based on the non-linear finite element method. By analyzing the stress of the soldered joints, it is found that the largest stress occurs at the area between the soldered joints and the leads, and the analysis results indicate that the von Mises stress at that location increases slightly with the number of leads. For a PLCC with 84 leads, the soldered joints were modeled for three typical loading conditions (273-398 K, 218-398 K and 198-398 K) in order to study the influence of acceleration factors on the reliability of the soldered joints. The equivalent plastic strain of three different solder alloys (Sn3.8Ag0.7Cu, Sn3.5Ag and Sn37Pb) was also estimated.

  11. Screen for child anxiety related emotional disorders: are subscale scores reliable? A bifactor model analysis.

    DeSousa, Diogo Araújo; Zibetti, Murilo Ricardo; Trentini, Clarissa Marceli; Koller, Silvia Helena; Manfro, Gisele Gus; Salum, Giovanni Abrahão


    The aim of this study was to investigate the utility of creating and scoring subscales for the self-report version of the Screen for Child Anxiety Related Emotional Disorders (SCARED) by examining whether subscale scores provide reliable information after accounting for a general anxiety factor in a bifactor model analysis. A total of 2420 children aged 9-18 answered the SCARED in their schools. Results suggested adequate fit of the bifactor model. The SCARED score variance was hardly influenced by the specific domains after controlling for the common variance in the general factor. The explained common variance (ECV) for the general factor was large (63.96%). After accounting for the general total score (ωh=.83), subscale scores provided very little reliable information (ωh ranged from .005 to .04). Practitioners that use the SCARED should be careful when scoring and interpreting the instrument subscales since there is more common variance to them than specific variance. Copyright © 2014 Elsevier Ltd. All rights reserved.
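The omega-hierarchical statistic reported above can be computed from standardized bifactor loadings as the share of total-score variance attributable to the general factor. A sketch with invented loadings (not the SCARED estimates):

```python
# Omega-hierarchical from standardized bifactor loadings.
# g_load / s_load: per-item loadings on the general factor and on the
# item's specific factor; groups: item indices per specific factor.

def omega_h(g_load, s_load, groups):
    var_g = sum(g_load) ** 2                               # general factor
    var_s = sum(sum(s_load[i] for i in idx) ** 2 for idx in groups)
    var_e = sum(1 - g * g - s * s for g, s in zip(g_load, s_load))
    return var_g / (var_g + var_s + var_e)

# hypothetical 6-item scale, two specific factors of 3 items each
w = omega_h([0.7] * 6, [0.3] * 6, [[0, 1, 2], [3, 4, 5]])
```

A high ωh with near-zero subscale ωh values is exactly the pattern the study reports: the general factor soaks up nearly all the reliable common variance.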

  12. Evaluating Proposed Investments in Power System Reliability and Resilience: Preliminary Results from Interviews with Public Utility Commission Staff

    LaCommare, Kristina [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Larsen, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Eto, Joseph [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)


    Policymakers and regulatory agencies are expressing renewed interest in the reliability and resilience of the U.S. electric power system in large part due to growing recognition of the challenges posed by climate change, extreme weather events, and other emerging threats. Unfortunately, there has been little or no consolidated information in the public domain describing how public utility/service commission (PUC) staff evaluate the economics of proposed investments in the resilience of the power system. Having more consolidated information would give policymakers a better understanding of how different state regulatory entities across the U.S. make economic decisions pertaining to reliability/resiliency. To help address this, Lawrence Berkeley National Laboratory (LBNL) was tasked by the U.S. Department of Energy Office of Energy Policy and Systems Analysis (EPSA) to conduct an initial set of interviews with PUC staff to learn more about how proposed utility investments in reliability/resilience are being evaluated from an economics perspective. LBNL conducted structured interviews in late May-early June 2016 with staff from the following PUCs: Washington D.C. (DCPSC), Florida (FPSC), and California (CPUC).

  13. Reliability reallocation models as a support tools in traffic safety analysis.

    Bačkalić, Svetlana; Jovanović, Dragan; Bačkalić, Todor


    One of the essential questions placed before a road authority is where to act first, i.e. which road sections should be treated in order to achieve the desired level of reliability of a particular road, while this is at the same time the subject of this research. The paper shows how the reliability reallocation theory can be applied in safety analysis of a road consisting of sections. The model has been successfully tested using two apportionment techniques - ARINC and the minimum effort algorithm. The given methods were applied in the traffic safety analysis as a basic step, for the purpose of achieving a higher level of reliability. The previous methods used for selecting hazardous locations do not provide precise values for the required frequency of accidents, i.e. the time period between the occurrences of two accidents. In other words, they do not allow for the establishment of a connection between a precise demand for increased reliability (expressed as a percentage) and the selection of particular road sections for further analysis. The paper shows that reallocation models can also be applied in road safety analysis, or more precisely, as part of the measures for increasing their level of safety. A tool has been developed for selecting road sections for treatment on the basis of a precisely defined increase in the level of reliability of a particular road, i.e. the mean time between the occurrences of two accidents.
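The ARINC apportionment technique mentioned above allocates a system-level failure-rate goal to sections in proportion to their current failure rates. A minimal sketch with hypothetical rates (not the road-section data from the study):

```python
# ARINC apportionment: allocated failure rates sum to the system
# goal and preserve the ratios of the current predicted rates.

def arinc_allocate(current_rates, goal_rate):
    total = sum(current_rates)
    return [r / total * goal_rate for r in current_rates]

# hypothetical per-section accident/failure rates (events per unit time)
alloc = arinc_allocate([2e-4, 5e-4, 3e-4], 6e-4)
```

In the road-safety setting, each allocated rate translates into a required mean time between accidents for that section, which is how a precise percentage increase in road reliability selects the sections to treat.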

  14. Reliability analysis of supporting pressure in tunnels based on three-dimensional failure mechanism

    罗卫华; 李闻韬


    Based on a nonlinear failure criterion, a three-dimensional failure mechanism of the possible collapse of a deep tunnel is presented with limit analysis theory. Support pressure is taken into consideration in the virtual work equation formulated under the upper bound theorem. It is necessary to point out that the properties of the surrounding rock mass play a vital role in the shape of the collapsing rock mass. The first-order reliability method and the Monte Carlo simulation method are then employed to analyze the stability of the presented mechanism. Different rock parameters are treated as random variables to evaluate the corresponding reliability index under an increasing applied support pressure. The reliability indexes calculated by the two methods are in good agreement. A sensitivity analysis was performed and the influence of the coefficient of variation of the rock parameters was discussed. It is shown that the tensile strength plays a much more important role in the reliability index than the dimensionless parameter, and that small changes in the coefficient of variation greatly influence the reliability index. Thus, significant attention should be paid to the properties of the surrounding rock mass, and the applied support pressure needed to maintain the stability of the tunnel can be determined for a given reliability index.
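The Monte Carlo side of such an analysis can be sketched for a toy limit state g = R − S with independent normal variables standing in for the rock-mass parameters; the reliability index is recovered from the estimated failure probability as β = −Φ⁻¹(Pf). All parameter values below are hypothetical:

```python
# Monte Carlo reliability index for a toy limit state g = R - S.
import random
from statistics import NormalDist

def beta_mc(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=1):
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sd_r) < rng.gauss(mu_s, sd_s)
                for _ in range(n))
    pf = fails / n                       # estimated failure probability
    return -NormalDist().inv_cdf(pf)     # beta = -Phi^(-1)(Pf)

beta = beta_mc(10.0, 1.5, 6.0, 1.0)
```

For this linear-normal case FORM gives the exact answer β = (μR − μS)/√(σR² + σS²), so the Monte Carlo estimate doubles as a check on the sampler, mirroring the agreement between the two methods reported in the abstract.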

  15. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    Zio, Enrico


    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
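As a small example of the kind of system-reliability estimate the book covers, the sketch below uses Monte Carlo sampling of exponential component lifetimes to estimate the mission reliability of a 2-out-of-3 system; the failure rate and mission time are illustrative:

```python
# Monte Carlo mission reliability of a 2-out-of-3 system with
# i.i.d. exponential component lifetimes (illustrative parameters).
import random

def system_reliability(rate, mission_time, n_trials=100_000, seed=7):
    """Fraction of trials in which at least 2 of 3 components survive."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_trials):
        up = sum(rng.expovariate(rate) > mission_time for _ in range(3))
        ok += up >= 2
    return ok / n_trials

r = system_reliability(rate=1e-3, mission_time=500.0)
```

The analytic answer, 3R²(1 − R) + R³ with R = e^(−0.5) ≈ 0.6065, is about 0.657, so a run of this size should land within roughly a percent of it; the value of simulation is that the same loop survives once repair, dependence, or non-exponential lifetimes make the closed form intractable.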




    A two-point adaptive nonlinear approximation (referred to as TANA4) suitable for reliability analysis is proposed. Transformed and normalized random variables in probabilistic analysis could become negative and pose a challenge to the earlier developed two-point approximations; thus a suitable method that can address this issue is needed. In the method proposed, the nonlinearity indices of intervening variables are limited to integers. Then, on the basis of the present method, an improved sequential approximation of the limit state surface for reliability analysis is presented. With the gradient projection method, the data points for the limit state surface approximation are selected on the original limit state surface, which effectively represents the nature of the original response function. On the basis of this new approximation, the reliability is estimated using a first-order second-moment method. Various examples, including both structural and non-structural ones, are presented to show the effectiveness of the method proposed.

  17. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes


    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice.

  18. Structure buckling and non-probabilistic reliability analysis of supercavitating vehicles

    AN Wei-guang; ZHOU Ling; AN Hai


    To perform structural buckling and reliability analysis on supercavitating vehicles moving at high velocity underwater, the supercavitating vehicle was first simplified as a variable cross-section beam. Structural buckling analysis of the vehicle with and without engine thrust was then conducted, and the structural buckling safety margin equation of the supercavitating vehicle was established. The indefinite information was described by interval sets and the structural reliability analysis was performed using a non-probabilistic reliability method. Considering the interval variables as random variables satisfying a uniform distribution, the Monte Carlo method was used to calculate the non-probabilistic failure degree. Numerical examples of supercavitating vehicles are presented. Under different ratios of base diameter to cavitator diameter, the change tendency of the non-probabilistic failure degree of structural buckling of supercavitating vehicles with and without engine thrust was studied along with the variation of speed.

  19. The application of emulation techniques in the analysis of highly reliable, guidance and control computer systems

    Migneault, Gerard E.


    Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.

  20. Reliability analysis of shoulder balance measures: comparison of the 4 available methods.

    Hong, Jae-Young; Suh, Seung-Woo; Yang, Jae-Hyuk; Park, Si-Young; Han, Ji-Hoon


    Observational study with 3 examiners. To compare the reliability of shoulder balance measurement methods. There are several measurement methods for shoulder balance, but no reliability analysis has been performed despite the clinical importance of this measurement. Whole spine posteroanterior radiographs (n = 270) were collected to compare the reliability of the 4 shoulder balance measures in patients with adolescent idiopathic scoliosis. Each radiograph was measured twice by each of the 3 examiners using the 4 measurement methods. The data were analyzed statistically to determine the inter- and intraobserver reliability. Overall, the 4 radiographical methods showed excellent intraclass correlation coefficients in intraobserver comparisons regardless of severity (>0.904), and acceptable coefficients in interobserver comparisons (>0.787). The mean absolute difference values of all methods were low and comparatively similar regardless of severity, and those of the clavicular angle method were lower, supporting the clavicular angle as a reliable shoulder balance measurement method for clinical use.
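Intraclass correlation coefficients such as those reported above are computed from an n-subjects × k-raters matrix. A sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement) using the usual ANOVA decomposition; the rating matrices are hypothetical:

```python
# ICC(2,1) for an n-subjects x k-raters matrix (list of rows).

def icc_2_1(data):
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # raters
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a constant offset between raters lowers it even when their rankings agree perfectly, which is why it pairs naturally with the mean absolute difference reported in the abstract.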

  1. Liquefaction of Tangier soils by using physically based reliability analysis modelling

    Dubujet P.


    Full Text Available Approaches that are widely used to characterize the propensity of soils to liquefaction are mainly empirical. The potential for liquefaction is assessed by using correlation formulas based on field tests such as the standard and cone penetration tests. These correlations depend, however, on the site where they were derived. In order to adapt them to other sites where seismic case histories are not available, further investigation is required. In this work, a rigorous one-dimensional modelling of the soil dynamics leading to the liquefaction phenomenon is considered. Field tests consisting of core sampling and cone penetration testing were performed. They provided the necessary data for numerical simulations performed with the DeepSoil software package. Using reliability analysis, the probability of liquefaction was estimated, and the obtained results were used to adapt the Juang method to the particular case of sandy soils located in Tangier.

  2. Application of the Simulation Based Reliability Analysis on the LBB methodology

    Pečínka L.


    Full Text Available Guidelines on how to demonstrate the existence of Leak Before Break (LBB) have been developed in many western countries. These guidelines, partly based on NUREG/CR-6765, define the steps that should be fulfilled to obtain a conservative assessment of LBB acceptability. As a complement, and also to help identify the key parameters that influence the resulting leakage and failure probabilities, the application of Simulation Based Reliability Analysis is under development. The methodology is demonstrated on the assessment of through-wall leakage crack stability according to the R6 method. R6 is a well-known engineering assessment procedure for evaluating the integrity of flawed structures. The influence of thermal ageing and seismic events has also been elaborated.

  3. Evaluation of reliability of Coats-Redfern method for kinetic analysis of non-isothermal TGA

    R. Ebrahimi-Kahrizsangi; M. H. Abbasi


    A critical examination was made of the reliability of kinetic parameters obtained from nonisothermal thermoanalytical rate measurements by the widely applied Coats-Redfern (CR) equation. For this purpose, simulated TGA curves were generated for reactions with different kinetic models, including chemical, diffusion (Jander) and mixed mechanisms at different heating rates. The results show that, for reactions controlled kinetically by one mechanism, all solid-state reaction models show linear trends under the CR method, so the method cannot identify the correct reaction model. For reactions with a mixed mechanism, the CR method shows nonlinear trends, and the reaction models and kinetic parameters cannot be extracted from CR curves. The overall conclusion from this comparative appraisal of the characteristics of the CR approach to kinetic analysis of TGA data is that the CR approach is generally unsuitable for the determination of kinetic parameters.
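The CR linearization itself is straightforward: for a first-order model g(α) = −ln(1 − α), plotting ln(g(α)/T²) against 1/T gives a slope of −E/R. The sketch below recovers an assumed activation energy from exactly linear synthetic data (the pitfall the paper identifies is that many wrong models g(α) produce equally straight lines):

```python
# Coats-Redfern fit for a first-order model, g(a) = -ln(1 - a).
import math

R_GAS = 8.314  # J/(mol K)

def coats_redfern_E(temps, alphas):
    """Least-squares slope of ln(g(a)/T^2) vs 1/T; returns E in J/mol."""
    xs = [1.0 / T for T in temps]
    ys = [math.log(-math.log(1.0 - a) / T ** 2)
          for T, a in zip(temps, alphas)]
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return -slope * R_GAS

# synthetic conversion data generated with a hypothetical E = 120 kJ/mol
temps = [600.0, 620.0, 640.0, 660.0, 680.0, 700.0]
alphas = [1 - math.exp(-T ** 2 * math.exp(10 - 120000.0 / (R_GAS * T)))
          for T in temps]
E_est = coats_redfern_E(temps, alphas)
```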


    J. S. Schroeder; R. W. Youngblood


    The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature) and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective is that margin relates to the probability of failure, not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze load and capacity spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
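The load-versus-capacity probability behind 'the logo' can be sketched with a simple Monte Carlo estimate. The normal spectra and every parameter value below are purely illustrative assumptions, not the program's data:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 1_000_000
# Hypothetical spectra (deg F): e.g. peak clad temperature as the 'load'
# and the temperature the cladding tolerates as the 'capacity'
load = rng.normal(1800.0, 150.0, n)
capacity = rng.normal(2200.0, 100.0, n)

# Failure probability = P(load > capacity), estimated by sampling
p_fail_mc = np.mean(load > capacity)

# Analytic check: load - capacity ~ Normal(-400, sqrt(150^2 + 100^2))
mu, sigma = -400.0, sqrt(150.0**2 + 100.0**2)
p_fail_exact = 1.0 - 0.5 * (1.0 + erf((0.0 - mu) / (sigma * sqrt(2.0))))
print(p_fail_mc, p_fail_exact)  # both ~0.013
```

With independent normal spectra the answer is available in closed form, which is exactly the simplification the abstract warns about: real load and capacity will generally be correlated and non-normal, which is why the simulation-based framing matters.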

  5. Developing a highly reliable cae analysis model of the mechanisms that cause bolt loosening in automobiles

    Ken Hashimoto


    Full Text Available In this study, we developed a highly reliable CAE analysis model of the mechanisms that cause loosening of bolt fasteners, which has been a bottleneck in automobile development and design, using a technical element model for highly accurate CAE that we had previously developed, and verified its validity. Specifically, drawing on knowledge gained from our clarification of the mechanisms that cause loosening of bolt fasteners in actual machine tests, we conducted an accelerated bench test consisting of a three-dimensional vibration load test of the loosening of bolt fasteners used in mounts and rear suspension arms, where interviews with personnel at an automaker indicated loosening was most pronounced, and reproduced the actual machine tests with CAE analysis based on the technical element model for highly accurate CAE analysis. Based on these results, we were able to reproduce the dynamic behavior in which larger screw pitches (lead angles) lead to greater non-uniformity of surface pressure, particularly around the nut seating surface, causing loosening to occur in the areas with the lowest surface pressure. Furthermore, we achieved highly accurate CAE analysis with no error (gap) relative to the actual machine tests.

  6. Reliability Analysis of Production System of Fully-Mechanized Face Based on Output Statistic

    CAI Qing-xiang; LI Nai-liang


    The production system of a fully-mechanized face is a complicated system composed of human, machine, and environment, and it is influenced by various random factors. Analyzing the reliability of such a system normally requires plentiful data obtained from system fault statistics. Based on the viewpoint that the shift output of a fully-mechanized face is the combined result of various random factors, this paper deduces how to analyze its reliability using probability theory, mathematical statistics, and systems reliability theory, combined with a concrete case study. The method is shown to be feasible and valuable.
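As a minimal illustration of the output-statistics viewpoint (the shift figures and target below are made up, not the paper's case data), reliability can be estimated directly as the fraction of shifts whose output meets a required target:

```python
import numpy as np

# Hypothetical shift outputs (tonnes) from a fully-mechanized face
outputs = np.array([4200, 3900, 4500, 3100, 4800, 4100, 3700, 4400, 3950, 4600])
target = 3500.0  # assumed required shift output

# Treat each shift as a Bernoulli trial: success if the output meets the target
reliability = np.mean(outputs >= target)
print(reliability)  # 0.9: nine of ten shifts met the demand
```

The paper goes further by fitting distributions to the output statistics, but the core idea is the same: output variability stands in for unavailable fault records.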

  7. Implementation and Analysis of Probabilistic Methods for Gate-Level Circuit Reliability Estimation

    WANG Zhen; JIANG Jianhui; YANG Guang


    The development of VLSI technology has resulted in a dramatic improvement in the performance of integrated circuits. However, it brings new reliability challenges: integrated circuits have become more susceptible to soft errors. It is therefore imperative to study the reliability of circuits under soft errors. This paper implements three probabilistic methods (two-pass, error propagation probability, and probabilistic transfer matrix) for estimating gate-level circuit reliability on a PC. The functions and performance of these methods are compared in experiments using ISCAS85 and 74-series circuits.
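Of the three methods, the probabilistic transfer matrix (PTM) is the easiest to sketch. The toy example below uses an assumed per-gate error probability and a chain of two inverters rather than an ISCAS85 benchmark; serial gates compose by matrix multiplication of their PTMs:

```python
import numpy as np

eps = 0.05  # assumed per-gate soft-error (output flip) probability
# PTM of one inverter: rows index the input {0,1}, columns the output {0,1}
inv_ptm = np.array([[eps, 1 - eps],
                    [1 - eps, eps]])

# Serial composition of gates is matrix multiplication of their PTMs
chain = inv_ptm @ inv_ptm  # two inverters in series

# The ideal transfer matrix of the chain is the identity; with uniform
# inputs, circuit reliability is the average probability of correct output
reliability = 0.5 * np.trace(chain)
print(reliability)  # (1-eps)^2 + eps^2 ≈ 0.905
```

For real benchmarks the PTM grows exponentially with circuit width, which is why the paper compares it against the cheaper two-pass and error-propagation-probability methods.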

  8. Korean round-robin result for new international program to assess the reliability of emerging nondestructive techniques

    Kim, Kyung Cho; Kim, Jin Gyum; Kang, Sung Sik; Jhung, Myung Jo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)


    The Korea Institute of Nuclear Safety, as a representative organization of Korea, participated in February 2012 in an international Program to Assess the Reliability of Emerging Nondestructive Techniques initiated by the U.S. Nuclear Regulatory Commission. The goal of the Program to Assess the Reliability of Emerging Nondestructive Techniques is to investigate the performance of emerging and prospective novel nondestructive techniques in finding flaws in nickel-alloy welds and base materials. In this article, Korean round-robin test results were evaluated with respect to the test blocks and various nondestructive examination techniques. The test blocks were prepared to simulate large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds in nuclear power plants. Lessons learned from the Korean round-robin test are also summarized and discussed.

  9. Korean Round-Robin Tests Result for New International Program to Assess the Reliability of Emerging Nondestructive Techniques

    Kyung Cho Kim


    Full Text Available The Korea Institute of Nuclear Safety, as a representative organization of Korea, participated in February 2012 in an international Program to Assess the Reliability of Emerging Nondestructive Techniques initiated by the U.S. Nuclear Regulatory Commission. The goal of the Program to Assess the Reliability of Emerging Nondestructive Techniques is to investigate the performance of emerging and prospective novel nondestructive techniques in finding flaws in nickel-alloy welds and base materials. In this article, Korean round-robin test results were evaluated with respect to the test blocks and various nondestructive examination techniques. The test blocks were prepared to simulate large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds in nuclear power plants. Lessons learned from the Korean round-robin test are also summarized and discussed.

  10. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.


    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure in order to increase the design life, eliminate or reduce the likelihood of failures, and reduce safety risk. The mechanical failures of the crankshaft are due to high bending and torsional stress concentrations arising from high-cycle rotating bending and torsional stresses. The Markov Chain was used to model two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model generates data closely matching the field data with a minimal percentage of error, and for practical applications the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.
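The two-state machinery can be sketched briefly. The transition probabilities below are illustrative assumptions, not the paper's fitted values; the long-run fraction of load cycles spent in each damage state is the stationary distribution of the chain:

```python
import numpy as np

# Two damage states for the crankshaft: 0 = bending, 1 = torsion.
# Transition probabilities are assumed for illustration; bending is kept
# more likely to persist because it is reported as the severer mode.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# The stationary distribution pi solves pi @ P = pi, i.e. it is the left
# eigenvector of P for eigenvalue 1, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(pi)  # long-run fraction of cycles in each state: [4/7, 3/7]
```

Feeding these state fractions into per-state damage or failure probabilities is what lets the chain predict mixed-mode reliability without simulating every cycle.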

  11. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Nikabdullah, N. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia and Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Singh, S. S. K.; Alebrahim, R.; Azizi, M. A. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); K, Elwaleed A. [Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Noorani, M. S. M. [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia (Malaysia)


    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure in order to increase the design life, eliminate or reduce the likelihood of failures, and reduce safety risk. The mechanical failures of the crankshaft are due to high bending and torsional stress concentrations arising from high-cycle rotating bending and torsional stresses. The Markov Chain was used to model two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model generates data closely matching the field data with a minimal percentage of error, and for practical applications the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.

  12. [Influences of hostage posting on estimation of trustworthiness: the effects of voluntary posting and reliable results].

    Nakayachi, Kazuya; Watabe, Motoki


    This research examined the effects of providing a monitoring and self-sanctioning system, called "hostage posting" in economics, on the improvement of trustworthiness. We conducted two questionnaire-type experiments to compare the trust-improving effects among three conditions: (a) voluntary provision of a monitoring and self-sanction system by the manager, (b) imposed provision, and (c) achievement of satisfactory management without any such provision. A total of 561 undergraduate students participated in the experiments. Results revealed that perceived integrity and competence improved to almost the same level in conditions (a) and (c), whereas they did not improve in condition (b). Consistent with our previous research, these results showed that voluntary hostage posting improved trustworthiness as much as good performance did. The estimated necessity of the system, however, did not differ across conditions. Implications for management practice and directions for future research are discussed.

  13. Resazurin Live Cell Assay: Setup and Fine-Tuning for Reliable Cytotoxicity Results.

    Rodríguez-Corrales, José Á; Josan, Jatinder S


    In vitro cytotoxicity tests allow for fast and inexpensive screening of drug efficacy prior to in vivo studies. The resazurin assay (commercialized as Alamar Blue®) has been extensively utilized for this purpose in 2D and 3D cell cultures and in high-throughput screening. However, improper or absent assay validation can generate unreliable results and limit reproducibility. Herein, we report a detailed protocol for the optimization of the resazurin assay to determine relevant analytical (limits of detection, quantification, and linear range) and biological (growth kinetics) parameters, and thus provide accurate cytotoxicity results. Fine-tuning of the resazurin assay allows accurate and fast quantification of cytotoxicity for drug discovery. Unlike more complicated methods (e.g., mass spectrometry), this assay utilizes fluorescence spectroscopy and thus provides a less costly alternative for observing changes in the reductase proteome of the cells.

  14. Interobserver reliability in musculoskeletal ultrasonography: results from a "Teach the Teachers" rheumatologist course

    Naredo, E.; Møller, I.; Moragues, C.


    The shoulder, wrist/hand, ankle/foot, or knee of 24 patients with rheumatic diseases were evaluated by 23 musculoskeletal ultrasound experts from different European countries, randomly assigned to six groups. The participants did not reach consensus on scanning method or diagnostic criteria before…, tendon lesions, bursitis, and power Doppler signal. Afterwards they compared the ultrasound findings and re-examined the patients together while discussing their results. RESULTS: Overall agreements were 91% for joint effusion/synovitis and tendon lesions, 87% for cortical abnormalities, 84…% for tenosynovitis, 83.5% for bursitis, and 83% for power Doppler signal; kappa values were good for the wrist/hand and knee (0.61 and 0.60) and fair for the shoulder and ankle/foot (0.50 and 0.54). The principal differences in scanning method and diagnostic criteria between experts were related to dynamic…

  15. Comprehensive Reliability Allocation Method for CNC Lathes Based on Cubic Transformed Functions of Failure Mode and Effects Analysis

    YANG Zhou; ZHU Yunpeng; REN Hongrui; ZHANG Yimin


    Reliability allocation for computer numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. To solve the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. First, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established to overcome these limitations. Properties of the new transformed function are discussed with respect to failure severity and failure occurrence, and designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and its spindle system are used as examples to verify the new allocation method. Seven criteria are used to compare the results of the new method with those of traditional methods. The allocation results indicate that the new method is more flexible than traditional methods: by employing the new cubic transformed function, the method covers a wider range of CNC reliability allocation problems without losing the advantages of traditional methods.
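The flavor of a transformed-FMEA allocation can be sketched as follows. The subsystem names, scores, and the exact form of the cubic transform below are illustrative assumptions, not the paper's definitions:

```python
# Hypothetical FMEA scores (severity S, occurrence O on 1-10 scales)
subsystems = {"spindle": (8, 4), "feed": (6, 5), "turret": (4, 3)}

# Cubic transformed score; the paper defines its own transform with a
# tunable amplitude -- this only shows the shape of the idea
def score(s, o):
    return (s * o) ** 3

# Allocate the allowable system failure rate inversely to the transformed
# score, so the most critical subsystems get the tightest targets
inv = {name: 1.0 / score(s, o) for name, (s, o) in subsystems.items()}
total = sum(inv.values())
lambda_sys = 1.0e-4  # assumed allowable system failure rate, 1/h

allocation = {name: lambda_sys * w / total for name, w in inv.items()}
print(allocation)  # turret gets the largest share, spindle the smallest
```

The cubic power is what spreads the allocation across moderate-score subsystems instead of letting a single extreme score dominate, which is the limitation of the exponential transform the paper discusses.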

  16. The struggle to find reliable results in exome sequencing data: filtering out Mendelian errors

    Patel, Zubin H.; Kottyan, Leah C.; Lazaro, Sara; Williams, Marc S.; Ledbetter, David H.; Tromp, Gerard; Rupert, Andrew; Kohram, Mojtaba; Wagner, Michael; Husami, Ammar; Qian, Yaping; Valencia, C. Alexander; Zhang, Kejian; Hostetter, Margaret K.; Harley, John B.; Kaufman, Kenneth M.


    Next Generation Sequencing studies generate a large quantity of genetic data in a relatively cost- and time-efficient manner and provide an unprecedented opportunity to identify candidate causative variants that lead to disease phenotypes. A challenge to these studies is the generation of sequencing artifacts by current technologies. To identify and characterize the properties that distinguish false positive variants from true variants, we sequenced a child and both parents (one trio) using DNA isolated from three sources (blood, buccal cells, and saliva). The trio strategy allowed us to identify variants in the proband that could not have been inherited from the parents (Mendelian errors) and would most likely indicate sequencing artifacts. Quality control measurements were examined, and three measurements were found to identify the greatest number of Mendelian errors: read depth, genotype quality score, and alternate allele ratio. Filtering the variants on these measurements, applied independently, removed ~95% of the Mendelian errors while retaining 80% of the called variants. After filtering, the concordance between identical samples isolated from different sources was 99.99%, as compared to 87% before filtering. This high concordance suggests that different sources of DNA can be used in trio studies without affecting the ability to identify causative polymorphisms. To facilitate analysis of next generation sequencing data, we developed the Cincinnati Analytical Suite for Sequencing Informatics (CASSI) to store sequencing files and metadata (e.g., relatedness information), handle file versioning, data filtering, and variant annotation, and identify candidate causative polymorphisms that follow de novo, rare recessive homozygous, or compound heterozygous inheritance models. We conclude the data cleaning process improves the signal to noise ratio in terms of variants and facilitates the identification of candidate disease causative
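A minimal sketch of this style of QC filtering might look like the following; the record layout and threshold values are hypothetical, since the study derives its own cutoffs from the Mendelian-error analysis:

```python
# Hypothetical per-variant records carrying the three QC fields named above
variants = [
    {"id": "v1", "depth": 42, "gq": 99, "alt_ratio": 0.48},  # clean het call
    {"id": "v2", "depth": 6,  "gq": 30, "alt_ratio": 0.95},  # shallow, low GQ
    {"id": "v3", "depth": 55, "gq": 85, "alt_ratio": 0.12},  # skewed ratio
]

# Assumed thresholds for illustration only
MIN_DEPTH, MIN_GQ = 10, 50
HET_RATIO = (0.30, 0.70)  # plausible band for a heterozygous call

def passes_qc(v):
    return (v["depth"] >= MIN_DEPTH
            and v["gq"] >= MIN_GQ
            and HET_RATIO[0] <= v["alt_ratio"] <= HET_RATIO[1])

kept = [v["id"] for v in variants if passes_qc(v)]
print(kept)  # ['v1']
```

The study's key move is tuning such thresholds against trio Mendelian errors, so the error rate of the filter can be measured rather than guessed.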

  17. The struggle to find reliable results in exome sequencing data: filtering out Mendelian errors.

    Patel, Zubin H; Kottyan, Leah C; Lazaro, Sara; Williams, Marc S; Ledbetter, David H; Tromp, Gerard; Rupert, Andrew; Kohram, Mojtaba; Wagner, Michael; Husami, Ammar; Qian, Yaping; Valencia, C Alexander; Zhang, Kejian; Hostetter, Margaret K; Harley, John B; Kaufman, Kenneth M


    Next Generation Sequencing studies generate a large quantity of genetic data in a relatively cost- and time-efficient manner and provide an unprecedented opportunity to identify candidate causative variants that lead to disease phenotypes. A challenge to these studies is the generation of sequencing artifacts by current technologies. To identify and characterize the properties that distinguish false positive variants from true variants, we sequenced a child and both parents (one trio) using DNA isolated from three sources (blood, buccal cells, and saliva). The trio strategy allowed us to identify variants in the proband that could not have been inherited from the parents (Mendelian errors) and would most likely indicate sequencing artifacts. Quality control measurements were examined, and three measurements were found to identify the greatest number of Mendelian errors: read depth, genotype quality score, and alternate allele ratio. Filtering the variants on these measurements, applied independently, removed ~95% of the Mendelian errors while retaining 80% of the called variants. After filtering, the concordance between identical samples isolated from different sources was 99.99%, as compared to 87% before filtering. This high concordance suggests that different sources of DNA can be used in trio studies without affecting the ability to identify causative polymorphisms. To facilitate analysis of next generation sequencing data, we developed the Cincinnati Analytical Suite for Sequencing Informatics (CASSI) to store sequencing files and metadata (e.g., relatedness information), handle file versioning, data filtering, and variant annotation, and identify candidate causative polymorphisms that follow de novo, rare recessive homozygous, or compound heterozygous inheritance models. We conclude the data cleaning process improves the signal to noise ratio in terms of variants and facilitates the identification of candidate disease causative

  18. The struggle to find reliable results in exome sequencing data: Filtering out Mendelian errors

    Zubin Hasmukh Patel


    Full Text Available Next Generation Sequencing studies generate a large quantity of genetic data in a relatively cost- and time-efficient manner and provide an unprecedented opportunity to identify candidate causative variants that lead to disease phenotypes. A challenge to these studies is the generation of sequencing artifacts by current technologies. To identify and characterize the properties that distinguish false positive variants from true variants, we sequenced a child and both parents (trio) using DNA isolated from three sources (blood, buccal cells, and saliva). The trio strategy allowed us to identify variants in the proband that could not have been inherited from the parents (Mendelian errors) and would most likely indicate sequencing artifacts. Quality control measurements were examined, and three measurements were found to identify the greatest number of Mendelian errors: read depth, genotype quality score, and alternate allele ratio. Filtering the variants on these measurements, applied independently, removed ~95% of the Mendelian errors while retaining 80% of the called variants. After filtering, the concordance between identical samples isolated from different sources was 99.99%, as compared to 87% before filtering. This high concordance suggests that different sources of DNA can be used in trio studies without affecting the ability to identify causative polymorphisms. To facilitate analysis of next generation sequencing data, we developed the Cincinnati Analytical Suite for Sequencing Informatics (CASSI) to store sequencing files and metadata (e.g., relatedness information), handle file versioning, data filtering, and variant annotation, and identify candidate causative polymorphisms that follow de novo, rare recessive homozygous, or compound heterozygous inheritance models. We conclude the data cleaning process improves the signal to noise ratio in terms of variants and facilitates the identification of candidate disease causative

  19. Reliability improvements on Thales RM2 rotary Stirling coolers: analysis and methodology

    Cauquil, J. M.; Seguineau, C.; Martin, J.-Y.; Benschop, T.


    Cooled IR detectors are used in a wide range of applications, and most of the time the cryocooler is one of the components determining the lifetime of the system. Cooler reliability is thus one of its most important parameters, and it has to increase to answer market needs. To do this, data identifying the weakest elements determining cooler reliability have to be collected; yet data collected in the field are hardly usable due to a lack of information. A method for identifying reliability improvements therefore had to be set up that can be used even without field returns. This paper describes the method followed by Thales Cryogénie SAS to reach such a result. First, a database was built from extensive examination of RM2 failures occurring in accelerated ageing. Failure modes were then identified and corrective actions carried out. Besides this, the functions of the cooler were ranked with regard to their potential to increase its efficiency, and specific changes were introduced in the functions most likely to impact efficiency. The link between efficiency and reliability is described in this paper. The work on the two axes, weak spots for cooler reliability and efficiency, permitted us to increase the MTTF of the RM2 cooler drastically. The large improvements in RM2 reliability are proven by both field returns and reliability monitoring; these figures are discussed in the paper.

  20. Aviation Fuel System Reliability and Fail-Safety Analysis. Promising Alternative Ways for Improving the Fuel System Reliability

    I. S. Shumilov


    Full Text Available The paper deals with design requirements for an aviation fuel system (AFS): basic design requirements, reliability, and design precautions to avoid AFS failure. It compares the reliability and fail-safety of the AFS and the aircraft hydraulic system (AHS), considers promising alternative ways to raise the reliability of fuel systems, and elaborates recommendations to improve the reliability of pipeline system components and pipeline systems in general, based on the selection of design solutions. It is extremely advisable to design the AFS and AHS in accordance with Aviation Regulations АП25 and the Accident Prevention Guidelines of ICAO (International Civil Aviation Organization), which will reduce the risk of emergency situations and in some cases even avoid heavy disasters. AFS and AHS designs should be based on uniform principles to ensure the highest reliability and safety. Currently, however, this principle is not sufficiently observed, and the AFS loses in reliability and fail-safety as compared with the AHS. For the examined failures (single failures and their combinations), the guidelines for ensuring AFS efficiency should be the same as those adopted in Regulations АП25 for the AHS. This will significantly increase the reliability and fail-safety of fuel systems and aircraft flights in general, despite a slight increase in AFS mass. The proposed improvements, through redundancy of fuel system components, will greatly raise the reliability of the fuel system of a passenger aircraft, which will then withstand up to two failures without serious consequences for the flight; its reliability and fail-safety will be similar to those of the AHS, although the above measures lead to a slightly increased total mass of the fuel system. It is advisable to set a second pump on the engine in parallel with the first one, to run in case the first fails for some reason. The second pump, like the first pump, can be driven from the

  1. Analysis of the Kinematic Accuracy Reliability of a 3-DOF Parallel Robot Manipulator

    Guohua Cui


    Full Text Available Kinematic accuracy reliability is an important performance index in the evaluation of mechanism quality. Using a 3-DOF 3-PUU parallel robot manipulator as the research object, the position and orientation error model was derived by mapping the relation between the input and output of the mechanism. Three error sensitivity indexes that evaluate the kinematic accuracy of the parallel robot manipulator were obtained through singular value decomposition of the error translation matrix. Considering the influence of controllable and uncontrollable factors on the kinematic accuracy, a mathematical model of reliability based on random probability was employed, and a measurement and calculation method for evaluating the mechanism's kinematic reliability level was provided. By analysing the mechanism's errors and reliability, the law of the error sensitivity surface with respect to the location and structure parameters was obtained. The kinematic reliability of the parallel robot manipulator was statistically computed using the Monte Carlo simulation method. The reliability analysis of kinematic accuracy provides a theoretical basis for design optimization and error compensation.

  2. Markov Chain Modelling of Reliability Analysis and Prediction under Mixed Mode Loading

    SINGH Salvinder; ABDULLAH Shahrum; NIK MOHAMED Nik Abdullah; MOHD NOORANI Mohd Salmi


    The reliability assessment of an automobile crankshaft provides important understanding for the design life of the component, in order to eliminate or reduce the likelihood of failure and safety risks. Crankshaft failures are catastrophic failures that lead to severe failure of the engine block and its connecting subcomponents. The reliability of an automotive crankshaft under mixed mode loading is studied using the Markov Chain Model. The Markov Chain is modelled with a two-state condition to represent the bending and torsion loads that occur on the crankshaft. The automotive crankshaft is a good case study of a component under mixed mode loading because of its rotating bending and torsional stresses. An estimate of the Weibull shape parameter is used to obtain the probability density function, cumulative distribution function, hazard and reliability rate functions, the bathtub curve, and the mean time to failure. The various properties of the shape parameter are used to model the failure characteristics, as shown through the bathtub curve. Likewise, an understanding of the patterns exhibited by the hazard rate can be used to improve the design and increase the life cycle based on the reliability and dependability of the component. The proposed reliability assessment provides an accurate, efficient, fast, and cost-effective analysis in contrast to costly and lengthy experimental techniques.
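The Weibull quantities named above follow directly from the shape and scale parameters. A small sketch with illustrative parameter values (not the crankshaft's fitted values):

```python
import math

# Illustrative Weibull parameters, not fitted from the paper's data
k = 1.8      # shape parameter (k > 1: wear-out region of the bathtub curve)
lam = 1.0e5  # scale parameter, cycles

def reliability(t):
    """R(t) = exp(-(t/lam)^k)"""
    return math.exp(-((t / lam) ** k))

def hazard(t):
    """h(t) = (k/lam) * (t/lam)^(k-1); increasing here since k > 1"""
    return (k / lam) * (t / lam) ** (k - 1)

def mttf():
    """Mean time to failure: lam * Gamma(1 + 1/k)"""
    return lam * math.gamma(1 + 1 / k)

print(reliability(5.0e4), hazard(5.0e4), mttf())
```

Sweeping the shape parameter k below, at, and above 1 reproduces the decreasing, constant, and increasing hazard segments that together form the bathtub curve.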

  3. Reliability Analysis of Distributed Grid-connected Photovoltaic System Monitoring Network

    Fu Zhixin


    Full Text Available Large numbers of distributed grid-connected photovoltaic (PV) systems have brought new challenges to the dispatching of the power network. Real-time monitoring of PV systems can help improve the network's ability to accept and control distributed PV systems, and thus mitigate the impact on the power network of the uncertainty of their power output. To study the reliability of a distributed PV monitoring network, it is of great significance to find a method for building a highly reliable monitoring system and for analyzing the weak links and key nodes of its monitoring performance. First, a reliability model of the PV monitoring system was constructed based on WSN technology. Then, in view of the dynamic characteristics of the network's reliability, fault tree analysis was used to identify the possible causes of network failure and the logical relationships between them. Finally, the reliability of the monitoring network was analyzed to identify the weak links and key nodes. This paper provides guidance for building a stable and reliable monitoring network for a distributed PV system.
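For independent basic events, a fault-tree evaluation of this kind reduces to combining AND/OR gate probabilities. The tree structure and event probabilities below are illustrative assumptions, not the paper's model:

```python
# Assumed fault-tree structure: the monitoring link to a PV node is lost
# if the sensor node itself fails OR both redundant communication links fail.
p_node = 0.01              # assumed basic-event probabilities
p_link_a = p_link_b = 0.05

p_links = p_link_a * p_link_b              # AND gate, independent events
p_top = 1 - (1 - p_node) * (1 - p_links)   # OR gate via complements
print(p_top)  # ≈ 0.0125
```

Comparing each basic event's contribution to the top-event probability is how weak links and key nodes fall out of the analysis: here the node itself dominates, so hardening it buys far more than adding a third link.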

  4. Reduced Expanding Load Method for Simulation-Based Structural System Reliability Analysis

    远方; 宋丽娜; 方江生


    The current situation and difficulties of structural system reliability analysis are reviewed. On the basis of the Monte Carlo method and computer simulation, a new analysis method, the reduced expanding load method (RELM), is presented, which can be used to solve structural reliability problems effectively and conveniently. In this method, the uncertainties of loads, structural material properties, and dimensions can be fully considered. If the statistical parameters of the stochastic variables are known, the probability of failure can be estimated rather accurately by this method. In contrast with traditional approaches, the RELM method gives a much better understanding of structural failure frequency, and its reliability index β is more meaningful. A specific example is given to illustrate this new idea.

  5. Reliable enumeration of malaria parasites in thick blood films using digital image analysis

    Frean John A


    Abstract Background Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve the performance of parasite density quantitation. Methods Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. Results A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image), signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Conclusion Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that

  6. Assessment of the Reliability of Fractionator Column of the Kaduna Refinery using Failure Modes Effects and Criticality Analysis (FMECA)

    Ibrahim A


    The reliability of process equipment is the probability that an item will perform a required function under stated condition(s). It is an important issue in any process industry, and failure to assess the reliability of most process equipment has led to huge financial losses. As a result, this research aims at assessing the reliability of the Fractionator column of the Kaduna Refining and Petrochemicals (KRPC) Fluid Catalytic Cracking Unit (FCCU), using failure mode, effects and criticality analysis (FMECA). Failure mode and effects analysis (FMEA) was first used to identify the failure modes, mechanisms, causes, effects and severity of the fractionator column through its fourteen (14) sub-units (fractionator primary condenser, bottoms product cooler, debutanizer oil condenser, main fractionator, main fractionator oil drum, main fractionator reflux drum, heavy naphtha exchanger, heavy cycle oil exchanger, bottoms exchanger, BFW heater, steam generator, stripper reboiler, debutanizer reboiler, and top reflux pumps). Both quantitative and qualitative criticality analyses (CA) were used to determine the effectiveness and reliability of the unit (Fractionator column). For the qualitative analysis, the items' risk priority numbers (RPN) were computed, and it was found that six (6) of the sub-units (feed/main fractionator bottoms exchanger, main fractionator reflux drum, main fractionator bottoms pumps, feed/heavy naphtha exchanger, main fractionator, and main fractionator bottoms/BFW heater) had RPN > 300, with the feed/main fractionator bottoms exchanger having the highest RPN of 460. For the quantitative analysis, the items' criticality numbers (Cr) were computed, and it was found that most of the sub-units had Cr > 0.002. In addition, the results of the criticality matrix showed that fifteen (15) of the twenty-nine (29) failure modes identified were above or closely below the criticality line. Therefore, the effectiveness and reliability of the unit is low. As such, sub
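
The qualitative criticality ranking described above rests on the standard risk priority number, RPN = severity × occurrence × detection. A minimal sketch with hypothetical ratings, not the paper's worksheet data:

```python
# Hypothetical FMEA worksheet rows: (failure mode, severity, occurrence, detection),
# each rated 1-10 as in conventional FMECA practice.
modes = [
    ("tube bundle fouling",  7, 8, 6),
    ("shell-side corrosion", 9, 5, 7),
    ("reflux pump seal leak", 6, 6, 5),
]

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

# Rank failure modes by RPN, highest first, and flag those above a
# commonly used action threshold of 300 (as in the abstract).
ranked = sorted(((name, rpn(s, o, d)) for name, s, o, d in modes),
                key=lambda t: t[1], reverse=True)
critical = [name for name, r in ranked if r > 300]
```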

  7. The reliability and validity of video analysis for the assessment of the clinical signs of concussion in Australian football.

    Makdissi, Michael; Davis, Gavin


    The objective of this study was to determine the reliability and validity of identifying clinical signs of concussion using video analysis in Australian football. Prospective cohort study. All impacts and collisions potentially resulting in a concussion were identified during 2012 and 2013 Australian Football League seasons. Consensus definitions were developed for clinical signs associated with concussion. For intra- and inter-rater reliability analysis, two experienced clinicians independently assessed 102 randomly selected videos on two occasions. Sensitivity, specificity, positive and negative predictive values were calculated based on the diagnosis provided by team medical staff. 212 incidents resulting in possible concussion were identified in 414 Australian Football League games. The intra-rater reliability of the video-based identification of signs associated with concussion was good to excellent. Inter-rater reliability was good to excellent for impact seizure, slow to get up, motor incoordination, ragdoll appearance (2 of 4 analyses), clutching at head and facial injury. Inter-rater reliability for loss of responsiveness and blank and vacant look was only fair and did not reach statistical significance. The feature with the highest sensitivity was slow to get up (87%), but this sign had a low specificity (19%). Other video signs had a high specificity but low sensitivity. Blank and vacant look (100%) and motor incoordination (81%) had the highest positive predictive value. Video analysis may be a useful adjunct to the side-line assessment of a possible concussion. Video analysis however should not replace the need for a thorough multimodal clinical assessment. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
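
The diagnostic figures quoted (e.g., 87% sensitivity but 19% specificity for "slow to get up") come from standard 2×2 confusion-matrix arithmetic. A small sketch with illustrative counts chosen to reproduce those two percentages; the study's actual cell counts are not given here:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening metrics from 2x2 confusion-matrix cells."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts only (not the study's data):
sens, spec, ppv, npv = diagnostic_metrics(tp=87, fp=81, fn=13, tn=19)
```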

  8. Guidelines for reliability analysis of digital systems in PSA context. Phase 1 status report

    Authen, S.; Larsson, J. (Risk Pilot AB, Stockholm (Sweden)); Bjoerkman, K.; Holmberg, J.-E. (VTT, Helsingfors (Finland))


    Digital protection and control systems are appearing as upgrades in older nuclear power plants (NPPs) and are commonplace in new NPPs. To assess the risk of NPP operation and to determine the risk impact of digital system upgrades on NPPs, quantitative reliability models are needed for digital systems. Due to the many unique attributes of these systems, challenges exist in systems analysis, modeling and data collection. Currently there is no consensus on reliability analysis approaches. Traditional methods clearly have limitations, but more dynamic approaches are still at the trial stage and can be difficult to apply in full-scale probabilistic safety assessments (PSA). Few PSAs worldwide include reliability models of digital I&C systems. A comparison of Nordic experiences and a literature review of the main international references have been performed in this pre-study project. The study shows a wide range of approaches, and also indicates that no state of the art currently exists. The study shows areas where the different PSAs agree and gives the basis for development of a common taxonomy for reliability analysis of digital systems. It is still an open matter whether software reliability needs to be explicitly modelled in the PSA. The most important issue concerning software reliability is a proper description of the impact that software-based systems have on the dependence between the safety functions and the structure of accident sequences. In general, the conventional fault tree approach seems to be sufficient for modelling reactor-protection-system-type functions. The following focus areas have been identified for further activities: 1. A common taxonomy of hardware and software failure modes of digital components for common use. 2. Guidelines regarding the level of detail in system analysis and the screening of components, failure modes and dependencies. 3. An approach for modelling of CCF between components (including software). (Author)

  9. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    Fagundo, Arturo


    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, the Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability of the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher-level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical errors due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real-world system is used to illustrate the use of this technique in a more practical context.
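
As background to the Markov modeling discussed above, a minimal continuous-time Markov chain reliability computation can be sketched for a two-component parallel system without repair. The state space and rates are illustrative, not the thesis's hierarchical decomposition, and the forward-Euler integration stands in for the more careful integration the thesis analyzes.

```python
import math

# States of a two-identical-component parallel system, failure rate lam each,
# no repair: 0 = both up, 1 = one up, 2 = system failed (absorbing).

def reliability_numeric(lam, t, steps=20000):
    """Integrate the Chapman-Kolmogorov equations by forward Euler."""
    p = [1.0, 0.0, 0.0]              # start with both components up
    dt = t / steps
    for _ in range(steps):
        d0 = -2 * lam * p[0]          # leave state 0 when either component fails
        d1 = 2 * lam * p[0] - lam * p[1]
        d2 = lam * p[1]               # absorption into the failed state
        p = [p[0] + dt * d0, p[1] + dt * d1, p[2] + dt * d2]
    return 1.0 - p[2]                 # system survives while not absorbed

def reliability_exact(lam, t):
    """Closed form for the same system, for comparison."""
    return 2 * math.exp(-lam * t) - math.exp(-2 * lam * t)
```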

  10. Reliability analysis for the facility data acquisition interface system upgrade at TA-55

    Turner, W.J.; Pope, N.G.; Brown, R.E.


    Because replacement parts for the existing facility data acquisition interface system at TA-55 have become scarce and are no longer being manufactured, reliability studies were conducted to assess various possible replacement systems. A new control system, based on Allen-Bradley Programmable Logic Controllers (PLCs), was found to have a likely reliability 10 times that of the present system if the existing Continuous Air Monitors (CAMs) were used. Replacement of the old CAMs with new CAMs will result in even greater reliability as these are gradually phased in. The new PLC-based system would provide hot-standby processors, redundant communications paths, and redundant power supplies, and would be expandable and easily maintained, as well as much more reliable. TA-55 is the Plutonium Processing Facility, which processes and recovers Pu-239 from scrap materials.

  11. Human reliability analysis of errors of commission: a review of methods and applications

    Reer, B


    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) the CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way of identifying relatively important scenarios with EOC opportunities; (2) EOC identification guidance like CESA, which is strongly based on the procedural guidance and on important measures of systems or components affected by inappropriate actions, should nevertheless pay some attention to EOCs associated with familiar but non-procedural actions and to EOCs leading to failures of manually initiated safety functions; (3) orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  12. Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report

    Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn, Dick


    The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decision makers with the information necessary to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes the nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience, specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next-generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques.
The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts

  13. Secondary Analysis for Results Tracking Database

    US Agency for International Development — The Secondary Analysis and Results Tracking (SART) activity provides support for the development of two databases to manage secondary and third-party data, data...

  14. A continuous-time Bayesian network reliability modeling and analysis framework

    Boudali, H.; Dugan, J.B.


    We present a continuous-time Bayesian network (CTBN) framework for dynamic systems reliability modeling and analysis. Dynamic systems exhibit complex behaviors and interactions between their components, where not only the combination of failure events matters, but so does the sequence in which they occur.

  15. Reliability of ^1^H NMR analysis for assessment of lipid oxidation at frying temperatures

    The reliability of a method using ^1^H NMR analysis for assessment of oil oxidation at a frying temperature was examined. During heating and frying at 180 °C, changes of soybean oil signals in the ^1^H NMR spectrum including olefinic (5.16-5.30 ppm), bisallylic (2.70-2.88 ppm), and allylic (1.94-2.1...

  17. Tampa Scale of Kinesiophobia for Heart Turkish Version Study: cross-cultural adaptation, exploratory factor analysis, and reliability

    Acar S


    Serap Acar,1 Sema Savci,1 Pembe Keskinoğlu,2 Bahri Akdeniz,3 Ebru Özpelit,3 Buse Özcan Kahraman,1 Didem Karadibak,1 Can Sevinc4 1School of Physical Therapy and Rehabilitation, 2Department of Biostatistics, Faculty of Medicine, 3Department of Cardiology, Faculty of Medicine, 4Department of Chest Disease, Faculty of Medicine, Dokuz Eylul University, İzmir, Turkey Purpose: Individuals with cardiac problems avoid physical activity and exercise because they expect to feel shortness of breath, dizziness, or chest pain. Assessing kinesiophobia related to heart problems is important in terms of cardiac rehabilitation. The Tampa Scale of Kinesiophobia Swedish Version for the Heart (TSK-SV Heart) is reliable and has been validated for cardiac diseases in the Swedish population. The aim of this study was to investigate the reliability, parallel-form validity, and exploratory factor analysis of the TSK for the Heart Turkish Version (TSK Heart Turkish Version) for evaluating kinesiophobia in patients with heart failure and pulmonary arterial hypertension. Methods: This cross-sectional study involved translation, back translation, and cross-cultural adaptation (localization). Forty-three pulmonary arterial hypertension and 32 heart failure patients were evaluated using the TSK Heart Turkish Version. The 17-item scale, originally composed for the Swedish population, has four factors: perceived danger for heart problem, avoidance of exercise, fear of injury, and dysfunctional self. Cronbach's alpha (internal consistency) and exploratory factor analysis were used to assess the questionnaire's reliability. Results of the patients in the 6-minute walk test, International Physical Activity Questionnaire, and Nottingham Health Profile were analyzed by Pearson's correlation analysis with the TSK Heart Turkish Version to indicate the convergent validity. Results: Cronbach's alpha for the TSK Heart Turkish Version was 0.75, indicating acceptable internal

  18. Reliability of an Automated High-Resolution Manometry Analysis Program across Expert Users, Novice Users, and Speech-Language Pathologists

    Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M.


    Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…

  19. Human reliability analysis of the Tehran research reactor using the SPAR-H method

    Barati Ramin


    The purpose of this paper is to cover human reliability analysis of the Tehran research reactor using an appropriate method for the representation of human failure probabilities. In the present work, the Technique for Human Error Rate Prediction and the Standardized Plant Analysis Risk-Human Reliability (SPAR-H) methods, applied extensively to nuclear power plants, have been utilized to quantify different categories of human errors. Human reliability analysis is, indeed, an integral and significant part of probabilistic safety analysis studies; without it, probabilistic safety analysis would not be a systematic and complete representation of actual plant risks. In addition, possible human errors in research reactors constitute a significant part of the associated risk of such installations, and including them in a probabilistic safety analysis for such facilities is a complicated issue. SPAR-H can be used to address these concerns; it is a well-documented and systematic human reliability analysis system with tables of human performance choices prepared in consultation with experts in the domain. In this method, performance shaping factors are selected via tables, human action dependencies are accounted for, and the method is well designed for the intended use. In this study, in consultation with reactor operators, human errors are identified and adequate performance shaping factors are assigned to produce proper human failure probabilities. Our importance analysis has revealed that the human actions involved in the possibility of an external object falling onto the reactor core are the most significant human errors concerning the Tehran research reactor, to be considered in reactor emergency operating procedures and operator training programs aimed at improving reactor safety.
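
SPAR-H quantifies a human error probability (HEP) by scaling a nominal HEP with performance shaping factor (PSF) multipliers, with an adjustment that keeps the result a valid probability when several negative PSFs apply. A hedged sketch of that arithmetic; the values are illustrative, not from the Tehran study:

```python
# SPAR-H-style quantification sketch. NHEP is the nominal human error
# probability (SPAR-H uses 0.001 for diagnosis and 0.01 for action tasks);
# PSF multipliers come from the method's tables.

def spar_h_hep(nhep, psf_multipliers):
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    # SPAR-H adjustment, applied when three or more negative (>1) PSFs are
    # assigned, keeps the product a valid probability.
    if len([m for m in psf_multipliers if m > 1.0]) >= 3:
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(1.0, nhep * composite)

# Example: a diagnosis task under high stress (x2) and poor ergonomics (x10).
hep = spar_h_hep(0.001, [2.0, 10.0])
```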

  20. Saddlepoint approximation based structural reliability analysis with non-normal random variables


    The saddlepoint approximation (SA) can directly estimate the probability distribution of a linear performance function in non-normal variables space. Based on the properties of SA, three SA-based methods are developed for structural system reliability analysis. The first method is SA-based reliability bounds theory (RBT), in which SA is employed to estimate the failure probability and equivalent normal reliability index for each failure mode first, and then RBT is employed to obtain the upper and lower bounds of the system failure probability. The second method is SA-based Nataf approximation, in which SA is used to estimate the probability density function (PDF) and cumulative distribution function (CDF) for the approximately linearized performance function of each failure mode. After the PDF of each failure mode and the correlation coefficients among the approximately linearized performance functions are estimated, the Nataf distribution is employed to approximate the joint PDF of the multiple structural system performance functions, and then the system failure probability can be estimated directly by numerical simulation using the joint PDF. The third method is SA-based line sampling (LS). The standardization transformation is needed first to eliminate the dimensions of the variables in this case. The LS method can then express the system failure probability as an arithmetic average of a set of failure probabilities of the linear performance functions, and the probabilities of the linear performance functions can be estimated by the SA in the non-normal variables space. By comparing basic concepts, implementations and results of illustrations, the following conclusions can be drawn: (1) the first method can only obtain the bounds of the system failure probability and is only acceptable for linear limit-state functions; (2) the second method can give an estimate of the system failure probability, and its error mostly results from the approximation of the Nataf distribution for the
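
Reliability bounds theory, as used in the first method, starts from the classical first-order system bounds: given per-mode failure probabilities p_i, the system failure probability lies between max(p_i) (fully dependent modes) and min(1, Σ p_i) (mutually exclusive modes). A minimal sketch with assumed per-mode probabilities:

```python
# Assumed per-failure-mode probabilities (in practice these would come from
# the saddlepoint approximation of each mode's performance function).
p_modes = [1e-4, 5e-4, 2e-4]

lower = max(p_modes)            # bound if failure modes are fully dependent
upper = min(1.0, sum(p_modes))  # bound if failure modes are mutually exclusive
```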

  1. An efficient hybrid reliability analysis method with random and interval variables

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping


    Random and interval variables often coexist. Interval variables make reliability analysis much more computationally intensive. This work develops a new hybrid reliability analysis method so that the probability analysis (PA) loop and interval analysis (IA) loop are decomposed into two separate loops. An efficient PA algorithm is employed, and a new efficient IA method is developed. The new IA method consists of two stages. The first stage is for monotonic limit-state functions. If the limit-state function is not monotonic, the second stage is triggered. In the second stage, the limit-state function is sequentially approximated with a second order form, and the gradient projection method is applied to solve the extreme responses of the limit-state function with respect to the interval variables. The efficiency and accuracy of the proposed method are demonstrated by three examples.

  2. Reliability Analysis of Piezoelectric Truss Structures Under Joint Action of Electric and Mechanical Loading

    YANG Duo-he; AN Wei-guang; ZHU Rong-rong; MIAO Han


    Based on the finite element method (FEM) for the dynamic analysis of piezoelectric truss structures, the expressions of the safety margins of strength fracture and damage electric field in the structure element are given, considering the electromechanical coupling effect under the joint action of electric and mechanical loads. By introducing the stochastic FEM, the reliability of piezoelectric truss structures is analyzed by solving for partial derivatives while computing the dynamic response of the structural system with the mode-superposition method. The influence of the electromechanical coupling effect on the reliability index is then analyzed through an example.

  3. Fuzzy Fatigue Reliability Analysis of Offshore Platforms in Ice-Infested Waters

    方华灿; 段梦兰; 贾星兰; 谢彬


    The calculation of fatigue stress ranges due to random waves and ice loads on offshore structures is discussed, and the corresponding accumulative fatigue damages of the structural members are evaluated. To evaluate the fatigue damage to the structures more accurately, the Miner rule is modified considering the fuzziness of the concerned parameters, and a new model for fuzzy fatigue reliability analysis of offshore structural members is developed. Furthermore, an assessment method for predicting the dynamics of the fuzzy fatigue reliability of structural members is provided.
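
The starting point for the fuzzy modification is the crisp Miner rule, which accumulates damage as D = Σ n_i/N_i and declares fatigue failure at D ≥ 1. A sketch with assumed cycle counts; the paper's fuzzification of the failure threshold is not attempted here:

```python
# Assumed load spectrum: cycles applied at each stress range, and the S-N
# curve life at those ranges (both hypothetical).
cycles_applied = [1.2e5, 4.0e4, 8.0e3]
cycles_to_fail = [1.0e6, 2.0e5, 2.5e4]

# Miner's linear damage accumulation and the crisp failure criterion.
damage = sum(n / N for n, N in zip(cycles_applied, cycles_to_fail))
failed = damage >= 1.0
```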

  4. Tensile reliability analysis for gravity dam foundation surface based on FEM and response surface method

    Tong-chun LI; Li, Dan-Dan; Wang, Zhi-Qiang


    In this paper, the limit state equation for the tensile reliability of the foundation base of a gravity dam is established. The possible crack length is set as the action effect and the allowable crack length is set as the resistance in this limit state. Nonlinear FEM is applied to obtain the crack length of the foundation base of the gravity dam, and a linear response surface method based on the orthogonal test design method is used to calculate the reliability, which offered a reasonable and simple analysis method t...

  5. Reliability assessment of different plate theories for elastic wave propagation analysis in functionally graded plates.

    Mehrkash, Milad; Azhari, Mojtaba; Mirdamadi, Hamid Reza


    The importance of elastic wave propagation problem in plates arises from the application of ultrasonic elastic waves in non-destructive evaluation of plate-like structures. However, precise study and analysis of acoustic guided waves especially in non-homogeneous waveguides such as functionally graded plates are so complicated that exact elastodynamic methods are rarely employed in practical applications. Thus, the simple approximate plate theories have attracted much interest for the calculation of wave fields in FGM plates. Therefore, in the current research, the classical plate theory (CPT), first-order shear deformation theory (FSDT) and third-order shear deformation theory (TSDT) are used to obtain the transient responses of flexural waves in FGM plates subjected to transverse impulsive loadings. Moreover, comparing the results with those based on a well recognized hybrid numerical method (HNM), we examine the accuracy of the plate theories for several plates of various thicknesses under excitations of different frequencies. The material properties of the plate are assumed to vary across the plate thickness according to a simple power-law distribution in terms of volume fractions of constituents. In all analyses, spatial Fourier transform together with modal analysis are applied to compute displacement responses of the plates. A comparison of the results demonstrates the reliability ranges of the approximate plate theories for elastic wave propagation analysis in FGM plates. Furthermore, based on various examples, it is shown that whenever the plate theories are used within the appropriate ranges of plate thickness and frequency content, solution process in wave number-time domain based on modal analysis approach is not only sufficient but also efficient for finding the transient waveforms in FGM plates.

  6. Dynamic Reliability Analysis Method of Degraded Mechanical Components Based on Process Probability Density Function of Stress

    Peng Gao


    It is necessary to develop dynamic reliability models when considering the strength degradation of mechanical components. The instant probability density function (IPDF) of stress and the process probability density function (PPDF) of stress, which are obtained via different statistical methods, are defined, respectively. In practical engineering, the probability density function (PDF) used for mechanical components is mostly the PPDF, such as the PDF acquired via the rain flow counting method. For convenience of application, the IPDF is always approximated by the PPDF when using the existing dynamic reliability models. However, this may cause errors in the reliability calculation due to the approximation of the IPDF by the PPDF. Therefore, dynamic reliability models directly based on the PPDF of stress are developed in this paper. Furthermore, the proposed models can be used for reliability assessment in the case of a small number of stress process samples by employing fuzzy set theory. In addition, the mechanical components in the solar arrays of satellites are chosen as representative examples to illustrate the proposed models. The results show that errors are caused by the approximation of the IPDF by the PPDF and that the proposed models are accurate in the reliability computation.
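
The underlying computation is stress-strength interference with a degrading strength: reliability over n load cycles is the probability that every cycle's stress stays below the (declining) strength. A Monte Carlo sketch with assumed distributions and a linear degradation rate; this is illustrative, not the paper's IPDF/PPDF formulation:

```python
import random

def dynamic_reliability(n_cycles, trials=20000, seed=1):
    """Monte Carlo stress-strength interference with linear strength degradation.

    Assumed: initial strength ~ N(500, 25), per-cycle stress ~ N(350, 30),
    and 0.5 strength units lost per cycle. All values are hypothetical.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        strength = rng.gauss(500.0, 25.0)
        ok = True
        for _cycle in range(n_cycles):
            strength -= 0.5                       # assumed degradation per cycle
            if rng.gauss(350.0, 30.0) > strength:  # stress exceeds strength
                ok = False
                break
        survived += ok
    return survived / trials
```

The estimate is decreasing in the number of cycles, which is the qualitative behavior the degradation models above capture.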

  7. Inter-Rater Reliability of Preprocessing EEG Data: Impact of Subjective Artifact Removal on Associative Memory Task ERP Results

    Steven D. Shirk


    The processing of EEG data routinely involves subjective removal of artifacts during a preprocessing stage. Preprocessing inter-rater reliability (IRR), and how differences in preprocessing may affect outcomes of primary event-related potential (ERP) analyses, has not been previously assessed. Three raters independently preprocessed EEG data of 16 cognitively healthy adult participants (ages 18–39 years) who performed a memory task. Using intraclass correlations (ICCs), IRR was assessed for Early-frontal, Late-frontal, and Parietal Old/new memory effect contrasts across eight regions of interest (ROIs). IRR was good to excellent for all ROIs; 22 of 26 ICCs were above 0.80. Raters were highly consistent in preprocessing across ROIs, although the frontal pole ROI (ICC range 0.60–0.90) showed less consistency. Old/new parietal effects had the highest ICCs with the lowest variability. Rater preprocessing differences did not alter the primary ERP results. IRR for EEG preprocessing was good to excellent, and subjective rater removal of EEG artifacts did not alter the primary memory-task ERP results. The findings provide preliminary support for the robustness of cognitive/memory task-related ERP results against significant inter-rater preprocessing variability, and suggest the reliability of EEG for assessing cognitive-neurophysiological processes when multiple preprocessors are involved.
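
The ICCs reported above can be illustrated with the simplest variant, the one-way ICC(1,1), computed from between-target and within-target mean squares. The ratings table below is hypothetical, and published EEG-IRR work typically uses two-way ICC models, so this is a sketch of the general idea only:

```python
# Hypothetical ratings: 4 targets (rows) rated by 3 raters (columns).
ratings = [
    [4.0, 4.5, 4.2],
    [2.0, 2.2, 2.1],
    [3.0, 3.4, 3.1],
    [5.0, 4.8, 5.1],
]

def icc1(table):
    """One-way ICC(1,1): (MSB - MSW) / (MSB + (k-1) * MSW)."""
    n, k = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (n * k)
    row_means = [sum(row) / k for row in table]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(table, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

icc = icc1(ratings)
```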

  8. A Study on Management Techniques of Power Telecommunication System by Reliability Analysis

    Lee, B.K.; Lee, B.S.; Woy, Y.H.; Oh, M.T.; Shin, M.T.; Kwan, O.G. [Korea Electric Power Corp. (KEPCO), Taejon (Korea, Republic of). Research Center; Kim, K.H.; Kim, Y.H.; Lee, W.T.; Park, Y.H.; Lee, J.J.; Park, H.S.; Choi, M.C.; Kim, J. [Korea Electrotechnology Research Inst., Changwon (Korea, Republic of)


    Power telecommunication networks are expanding rapidly with the growth of the electric power supply and its facilities. Requirements for power-facility and office automation, together with the growing importance of communication services, have made the networks complex and difficult to operate, so effective operation and management are urgently needed to keep pace with changes in the power telecommunication network. The object of this study is therefore to establish a total reliability analysis system based on dependability, maintainability, cost effectiveness and replenishment, in order to maintain reasonable reliability, support economical maintenance and enable sound planning of facility investment. The study also develops management and administration systems and schemes for total reliability improvement. (author). 44 refs., figs.

  9. Reliability Analysis of Component Software in Wireless Sensor Networks Based on Transformation of Testing Data

    Chunyan Hou


    Full Text Available We develop an approach to component software reliability analysis that combines the benefits of both time-domain and structure-based approaches, overcoming the deficiency of existing NHPP techniques, which fall short of addressing repair and internal system structure simultaneously. Our solution adopts a transformation of the testing data to bridge the two methods and is expected to improve reliability prediction. The paradigm accommodates component-based software testing processes that do not meet the assumptions of NHPP models, and accounts for software structure by modelling the testing process. From the testing model it builds the mapping from the testing profile to the operational profile, which enables the testing data to be transformed into the reliability dataset required by NHPP models. Finally, an example is evaluated to validate the approach and show its effectiveness.
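
    The NHPP prediction step that such a transformed dataset would feed can be sketched with the classic Goel-Okumoto mean value function; the model choice and parameter values here are illustrative, not the paper's:

    ```python
    import math

    def go_mean_failures(t, a, b):
        """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b t))."""
        return a * (1.0 - math.exp(-b * t))

    def conditional_reliability(x, t, a, b):
        """P(no failure in (t, t+x]) = exp(-(m(t+x) - m(t))) for an NHPP."""
        return math.exp(-(go_mean_failures(t + x, a, b) - go_mean_failures(t, a, b)))

    # a = expected total faults, b = fault-detection rate (illustrative values)
    a, b = 120.0, 0.05
    print(conditional_reliability(10.0, 100.0, a, b))
    ```

    As testing time t grows, fewer faults remain, so the conditional reliability over the next interval rises toward 1.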

  10. Reliability analysis for determining performance of barrage based on gates operation

    Adiningrum, C.; Hadihardaja, I. K.


    Some rivers on flat-slope topography, such as the Cilemahabang and Ciherang rivers in the Cilemahabang watershed, Bekasi regency, West Java, are susceptible to flooding. Inundation mostly occurs near the barrages in the middle and downstream reaches of the Cilemahabang watershed, namely the Cilemahabang and Caringin barrages. Barrages, or gated weirs, are difficult to operate because the gates must be kept and operated properly under any circumstances. Therefore, a reliability analysis of gate operation is necessary to determine the performance of each barrage with respect to the number of gates opened and the gate-opening heights. The First Order Second Moment (FOSM) method was used to determine performance through the reliability index (β) and the probability of failure (risk). For the Cilemahabang Barrage, with the load (L) representing the peak discharge derived from various rainfalls (P), the results are: one gate with opening height h = 1 m for Preal, two gates (h = 1 m and h = 1.5 m) for P50, and three gates (each with h = 2.5 m) for P100. For the Caringin Barrage, the results are a minimum of three gates opened (each with h = 2.5 m) for Preal, five gates (each with h = 2.5 m) for P50, and six gates (each with h = 2.5 m) for P100. It can be concluded that a greater load (L) needs a greater resistance (R) to counterbalance it. Resistance can be added by increasing the number of gates opened and the gate-opening height; opening more gates lowers the water level upstream of the barrage and reduces the risk of overflow.
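
    For independent, approximately Normal load and resistance, the FOSM index used in the study reduces to β = (μ_R − μ_L) / √(σ_R² + σ_L²), with failure probability Φ(−β). A minimal sketch with made-up discharge numbers (not the study's data):

    ```python
    import math

    def fosm(mu_r, sd_r, mu_l, sd_l):
        """First Order Second Moment: reliability index and failure probability
        for the margin M = R - L with independent resistance R and load L."""
        beta = (mu_r - mu_l) / math.sqrt(sd_r ** 2 + sd_l ** 2)
        pf = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta)
        return beta, pf

    # illustrative numbers: gate discharge capacity vs flood-peak load
    beta, pf = fosm(mu_r=450.0, sd_r=45.0, mu_l=300.0, sd_l=60.0)
    print(beta, pf)
    ```

    Opening more gates raises μ_R, which raises β and drives the risk down, mirroring the paper's conclusion.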

  11. Statistical Degradation Models for Reliability Analysis in Non-Destructive Testing

    Chetvertakova, E. S.; Chimitova, E. V.


    In this paper, we consider the application of statistical degradation models to reliability analysis in non-destructive testing. Such models make it possible to estimate the reliability function (the dependence of non-failure probability on time) for a fixed critical level, using information from the degradation paths of tested items. The most widely used are the gamma and Wiener degradation models, in which the gamma or normal distribution, respectively, is assumed for the degradation increments. Using computer simulation, we have analysed the accuracy of the reliability estimates obtained for these models. The number of increments can be enlarged either by increasing the sample size (the number of tested items) or by measuring degradation more frequently. It is shown that the sample size has a greater influence on the accuracy of the reliability estimates than the measuring frequency. Moreover, another important factor influencing the accuracy of reliability estimation is the duration over which the degradation process is observed.
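
    For the Wiener model the reliability function has a closed form via the inverse-Gaussian first-passage law. A sketch with illustrative drift, diffusion, and critical level (not the paper's data):

    ```python
    import math

    def phi(z):
        """Standard Normal CDF."""
        return 0.5 * math.erfc(-z / math.sqrt(2))

    def wiener_reliability(t, mu, sigma, D):
        """Reliability under a Wiener degradation path X(t) = mu*t + sigma*B(t):
        R(t) = P(first passage of critical level D occurs after t),
        i.e. one minus the inverse-Gaussian first-passage CDF."""
        s = sigma * math.sqrt(t)
        return (phi((D - mu * t) / s)
                - math.exp(2 * mu * D / sigma ** 2) * phi(-(D + mu * t) / s))

    # drift, diffusion and critical level are illustrative
    print(wiener_reliability(t=50.0, mu=1.0, sigma=2.0, D=80.0))
    ```

    In practice mu, sigma and D would be estimated from the measured degradation increments, which is where sample size and observation duration enter.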

  12. Probabilistic durability assessment of concrete structures in marine environments: Reliability and sensitivity analysis

    Yu, Bo; Ning, Chao-lie; Li, Bing


    A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
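
    The chloride-ingress part of such a framework can be sketched with the usual Fick's-second-law error-function profile and a crude Monte Carlo over cover depth and diffusion coefficient. This stands in for, and is much simpler than, the paper's Nataf/FORM machinery, and all distributions and values below are illustrative:

    ```python
    import math, random

    random.seed(1)

    def chloride(x, t, Cs, D):
        """Fick's 2nd-law solution: C(x,t) = Cs * (1 - erf(x / (2*sqrt(D*t))))."""
        return Cs * (1.0 - math.erf(x / (2.0 * math.sqrt(D * t))))

    def pf_corrosion(t_years, n=20000):
        """Monte Carlo P(chloride at rebar depth exceeds the critical level) at time t.
        The distributions below are illustrative, not those of the paper."""
        fails = 0
        for _ in range(n):
            cover = random.gauss(50.0, 5.0)                  # concrete cover, mm
            D = random.lognormvariate(math.log(30.0), 0.3)   # diffusion coeff., mm^2/year
            Cs, Ccr = 0.6, 0.15                              # surface / critical chloride, % binder
            if chloride(cover, t_years, Cs, D) > Ccr:
                fails += 1
        return fails / n

    print(pf_corrosion(25.0))
    ```

    The failure probability grows with exposure time, which is the time-dependent reliability behaviour the paper analyses with FORM.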

  13. Reliability analysis for the 220 kV Libyan high voltage communication system

    Saleh, O.S.A.; AlAthram, A.Y. [General Electric Company of Libya (Libyan Arab Jamahiriya). Development Dept.]


    Electric utilities are expanding their networks to include fiber-optic communications, which offer high capacity with reliable performance at low cost. Fiber-optic networks offer a feasible technical solution for leasing excess capacity; they can be readily deployed under a wide range of network configurations and can be upgraded rapidly. This study evaluated the reliability index for the communication network of Libya's 220 kV high voltage subsystem operated by the General Electric Company of Libya (GECOL). Schematic diagrams of the communication networks were presented for both the power line carrier and fiber-optic networks, and a reliability analysis of the two networks was performed on the existing communication equipment. The reliability values revealed that the fiber-optic system has several advantages: a large bandwidth for high-quality data transmission; immunity to electromagnetic interference; low attenuation, which allows for extended cable transmission; the ability to be used in dangerous environments; a higher degree of security; and a high capacity through existing conduits due to its light weight and small diameter. However, it was noted that although fiber-optic communications may be more reliable and provide the clearest signal, the powerline communication (PLC) system has more redundancy, particularly for outdoor components: the PLC has more power lines to carry the signals, while the fiber-optic communications depend only on the earthing wire of the high-voltage transmission line. 4 refs., 8 tabs., 6 figs.

  14. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    桂劲松; 刘红; 康海贵


    As water depth increases, the structural safety and reliability of a system become more important and more challenging, so structural reliability methods must be applied in ocean engineering design, for example offshore platform design. If the performance function is known in a structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is generally used, because its reasoning is clear and its programming simple. However, the traditional response surface method fits a quadratic-polynomial response surface whose accuracy is limited, because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on a whole-domain response surface is proposed for situations where the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface for the whole domain is constructed first, and the structural reliability is then calculated by a genetic algorithm. Because all the sample points for training the network come from the whole domain, the true limit state surface can be fitted over the whole domain. Calculated examples and comparative analysis show that the proposed method is much better than the traditional quadratic-polynomial response surface method: the amount of finite element analysis is greatly reduced, the accuracy of calculation is improved, and the true limit state surface is fitted very well over the whole domain. The method proposed in this paper is therefore suitable for engineering application.
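
    The contrast the paper draws can be seen in miniature: fit a quadratic response surface to a few evaluations of an "expensive" performance function, then estimate the failure probability cheaply on the surrogate. The performance function, design points, and random variable below are all made up for illustration:

    ```python
    import math, random

    random.seed(2)

    def true_g(x):
        """'Expensive' performance function (stand-in for a finite element model)."""
        return 5.0 - x - 0.2 * math.sin(3.0 * x)

    def fit_quadratic(xs, ys):
        """Least-squares quadratic fit y = c0 + c1*x + c2*x^2 via normal equations."""
        S = [sum(x ** k for x in xs) for k in range(5)]
        T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
        A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
        def det3(M):
            return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                  - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                  + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
        d = det3(A)
        coeffs = []
        for j in range(3):                       # Cramer's rule, column j -> T
            M = [row[:] for row in A]
            for i in range(3):
                M[i][j] = T[i]
            coeffs.append(det3(M) / d)
        return coeffs

    # sample the expensive model at a few design points, then fit the surface
    xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
    c = fit_quadratic(xs, [true_g(x) for x in xs])

    def surrogate(x):
        return c[0] + c[1] * x + c[2] * x * x

    # failure probability P(g(X) < 0) with X ~ N(3, 1), estimated on the surrogate
    n = 50000
    pf = sum(surrogate(random.gauss(3.0, 1.0)) < 0 for _ in range(n)) / n
    print(pf)
    ```

    Six model evaluations replace fifty thousand; the paper's point is that a whole-domain surrogate keeps this saving while fitting the limit state far from the checking point as well.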

  15. Implications of adjusting a gravity network with mean or independent observations: analysis of precision and reliability

    Pedro L. Faggion


    Full Text Available Adjustment strategies associated with the methodology applied in the implantation of a high-precision gravity network in Paraná are presented. The network was implanted with stations at 21 sites in the State of Paraná and one in the State of São Paulo. To reduce the risk of losing points of the gravity network, the stations were established on points of the GPS High Precision Network of Paraná, which have a relatively homogeneous geographical distribution. For each of the gravity lines belonging to the loops of the network, three to six observations were obtained. In the first adjustment strategy investigated, the mean value of the observations obtained for each gravity line was taken as the observation; in the second, the observations were treated as independent. The comparison of these strategies revealed that the precision criterion alone is not enough to identify the optimal solution of a gravity network: an additional criterion is needed for analysing the adjusted solution. The reliability criterion for geodetic networks, which separates into internal and external reliability, was therefore used. Internal reliability was used to verify the rigour with which the network reacts in detecting and quantifying gross errors present in the observations, and external reliability to quantify the influence of undetected errors on the adjusted parameters. The aspects that differentiate the solutions obtained when the precision and reliability criteria are combined in the quality analysis of a gravity network are presented.
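
    Internal reliability in this sense is usually quantified through redundancy numbers and minimal detectable blunders. A sketch for a toy three-line gravity loop with unit weights; the observation standard deviation and non-centrality parameter are illustrative:

    ```python
    import math

    # Toy gravity network: station A is fixed, B and C are unknown; observed
    # gravity differences l1 = g_B - g_A, l2 = g_C - g_B, l3 = g_C - g_A.
    # Design-matrix rows map each observation to the unknowns (g_B, g_C).
    A = [[1.0, 0.0],
         [-1.0, 1.0],
         [0.0, 1.0]]

    def redundancy_numbers(A):
        """r_i = 1 - h_ii with hat matrix H = A (A^T A)^{-1} A^T (unit weights)."""
        n00 = sum(row[0] * row[0] for row in A)
        n01 = sum(row[0] * row[1] for row in A)
        n11 = sum(row[1] * row[1] for row in A)
        det = n00 * n11 - n01 * n01
        ninv = [[n11 / det, -n01 / det], [-n01 / det, n00 / det]]
        rs = []
        for row in A:
            t0 = ninv[0][0] * row[0] + ninv[0][1] * row[1]
            t1 = ninv[1][0] * row[0] + ninv[1][1] * row[1]
            rs.append(1.0 - (row[0] * t0 + row[1] * t1))   # r_i = 1 - h_ii
        return rs

    r = redundancy_numbers(A)
    sigma, delta0 = 0.010, 4.13        # obs std dev (mGal) and non-centrality (assumed)
    mdb = [delta0 * sigma / math.sqrt(ri) for ri in r]   # minimal detectable blunders
    print([round(ri, 3) for ri in r], [round(m, 4) for m in mdb])
    ```

    The redundancy numbers sum to the total redundancy (here one superfluous observation); lines with low r_i hide gross errors, which is exactly what the internal-reliability check exposes.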

  16. Reliability of multiresolution deconvolution for improving depth resolution in SIMS analysis

    Boulakroune, M.'Hamed


    This paper deals with the effectiveness and reliability of a multiresolution deconvolution algorithm for recovering Secondary Ion Mass Spectrometry (SIMS) profiles altered by the measurement. The new algorithm is characterized as a regularized wavelet transform: it combines ideas from Tikhonov-Miller regularization, wavelet analysis and deconvolution algorithms in order to benefit from the advantages of each. The SIMS profiles were obtained by analysing two boron structures in a silicon matrix using a Cameca IMS 6f instrument at oblique incidence. The first structure is large, consisting of two distant wide boxes; the second is a thin structure containing ten delta-layers, to which deconvolution by zone was applied. The new multiresolution algorithm is shown to give the best results. In particular, local application of the regularization parameter to the blurred and estimated solutions at each resolution level yields smoothed signals without creating artifacts related to the noise content of the profile. This leads to a significant improvement in the depth resolution and in the peaks' maxima.
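
    The regularization idea, stripped of the wavelet multiresolution part, can be sketched as plain Tikhonov-regularized deconvolution in the frequency domain; the delta-layer profile and instrument response kernel below are illustrative:

    ```python
    import cmath, math

    def tikhonov_deconvolve(y, h, lam):
        """Frequency-domain Tikhonov deconvolution:
        X(f) = conj(H(f)) * Y(f) / (|H(f)|^2 + lam).  DFT via direct O(n^2) sums."""
        n = len(y)
        def dft(v):
            return [sum(v[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
                    for k in range(n)]
        def idft(V):
            return [sum(V[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
                    for t in range(n)]
        Y = dft(y)
        H = dft(h + [0.0] * (n - len(h)))      # zero-pad the kernel
        X = [H[k].conjugate() * Y[k] / (abs(H[k]) ** 2 + lam) for k in range(n)]
        return idft(X)

    # a sharp delta-layer profile blurred by a broad instrument response
    x_true = [0.0] * 8 + [1.0] + [0.0] * 7
    h = [0.2, 0.6, 0.2]                        # normalized response kernel
    blurred = [sum(h[j] * x_true[(t - j) % 16] for j in range(3)) for t in range(16)]
    restored = tikhonov_deconvolve(blurred, h, lam=1e-3)
    print(round(max(restored), 2))
    ```

    The regularization parameter lam trades noise amplification against sharpness; applying it locally, per resolution level, is the refinement the paper's algorithm adds.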

  17. Using probabilistic analysis to assess the reliability of predicted SRB aft-skirt stresses

    Richardson, James A.


    Probabilistic failure analysis is a tool to predict the reliability of a part or system. Probabilistic techniques were used to predict the critical stresses which occur in the solid rocket booster aft skirt during main engine buildup, immediately prior to lift-off. More than any other hold-down post (HDP) load component, the Z loads are sensitive to variations in strains and calibration constants, and predicted aft-skirt stresses are strongly affected by HDP load variations. Therefore, the instrumented HDPs are not effective load transducers for Z loads and, when used with aft-skirt stress indicator equations, yield estimates with large uncertainty. Monte Carlo simulation proved to be a straightforward way of studying the overlapping effects of multiple parameters on predicted equipment performance. An advantage of probabilistic analysis is that the degree of uncertainty of each parameter is stated explicitly by its probability distribution. It was noted, however, that the choice of parameter distribution had a large effect on the simulation results; many times these distributions must be assumed, and the engineer designing the part should be responsible for choosing them.
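
    The distribution-sensitivity point can be reproduced with a toy Monte Carlo: two load-factor distributions with identical mean and standard deviation give visibly different extreme-percentile stresses. All values are illustrative, not SRB data:

    ```python
    import math, random

    random.seed(3)

    def simulate_stress(dist, n=50000):
        """Peak stress = nominal * load_factor; the load factor's distribution is
        the modelling choice under study (illustrative values)."""
        nominal = 100.0  # MPa
        out = []
        for _ in range(n):
            if dist == "normal":
                f = random.gauss(1.0, 0.1)
            else:  # uniform with the same mean (1.0) and standard deviation (0.1)
                half = 0.1 * math.sqrt(3.0)
                f = random.uniform(1.0 - half, 1.0 + half)
            out.append(nominal * f)
        out.sort()
        return out[int(0.999 * n)]   # 99.9th-percentile stress

    print(simulate_stress("normal"), simulate_stress("uniform"))
    ```

    The mean and variance match, yet the Normal model's unbounded tail yields a substantially higher extreme stress, which is why the distribution choice must be an explicit engineering decision.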

  18. U-Series Dating of Fossil Bones: Results from Chinese Sites and Discussions on Its Reliability



    Calculations according to some open-system models point out that while a statistically significant discrepancy between the results of two U-series methods, 230Th/234U and 227Th/220Th (or 231Pa/235U), attests to a relatively recent and important uranium migration, concordant dates cannot guarantee closed-system behavior of the sample. The results for 20 fossil bones from 10 Chinese sites, 19 of which were determined by two U-series methods, are given. Judging from independent age controls, 8 of the 11 concordant age sets are unacceptable. The results in this paper suggest that uranium may cycle into or out of fossil bones, that such geochemical events may take place at any time, and that no known preservation condition may securely protect bones from being affected. So, for the sites we have studied, the U-series dating of fossil bones is of limited reliability.

  19. Finite Element Reliability Analysis of Chloride Ingress into Reinforced Concrete Structures

    Frier, Christian; Sørensen, John Dalsgaard


    For many reinforced concrete structures corrosion of the reinforcement is an important problem since it can result in maintenance and repair actions. Further, a reduction of the load-bearing capacity can occur. In the present paper the Finite Element Reliability Method (FERM) is employed...

  20. Investigation on Thermal Contact Conductance Based on Data Analysis Method of Reliability

    WANG Zongren; YANG Jun; YANG Mingyuan; ZHANG Weifang


    A reliability-based method is proposed for the investigation of thermal contact conductance (TCC) in this study. A new quantity is introduced, namely the reliability thermal contact conductance (RTCC), defined as the TCC value that meets the reliability design requirement of the structural materials under consideration. An experimental apparatus with a compensation heater for testing the TCC is introduced, and a practical engineering example is used to demonstrate the applicability of the proposed approach. Using a statistical regression model along with experimental data obtained from interfaces of the structural materials GH4169 and K417 used in aero-engines, the estimated values and confidence levels of the TCC and RTCC are studied and compared. The results show that the measured TCC values increase with interface pressure and that the proposed RTCC model matches the test results better at high interface pressure.
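
    The regression-with-confidence-level idea behind an RTCC can be sketched as an ordinary least-squares fit of TCC against interface pressure, with a one-sided lower prediction bound serving as the "reliability" value. The data points and the Normal-approximation z-factor are illustrative, not the GH4169/K417 measurements:

    ```python
    import math

    # made-up TCC measurements vs interface pressure
    pressure = [0.5, 1.0, 2.0, 3.0, 4.0]        # MPa
    tcc = [1.1, 1.9, 3.8, 5.7, 7.4]             # kW/(m^2 K)

    n = len(pressure)
    xm = sum(pressure) / n
    ym = sum(tcc) / n
    sxx = sum((x - xm) ** 2 for x in pressure)
    slope = sum((x - xm) * (y - ym) for x, y in zip(pressure, tcc)) / sxx
    intercept = ym - slope * xm
    resid_var = sum((y - (intercept + slope * x)) ** 2
                    for x, y in zip(pressure, tcc)) / (n - 2)

    def rtcc(p, z=1.645):
        """Lower ~95% prediction bound on TCC at pressure p (Normal approximation)."""
        se = math.sqrt(resid_var * (1 + 1 / n + (p - xm) ** 2 / sxx))
        return intercept + slope * p - z * se

    print(round(rtcc(2.5), 2))
    ```

    A design that uses rtcc(p) instead of the fitted mean builds the measurement scatter into the thermal margin, which is the purpose the paper's RTCC serves.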