Sample records for reliability analysis workstation

  1. Kinematic analysis of post office employees' workstations.

    Draicchio, Francesco; Silvetti, Alessio; Forzano, Federico; Iavicoli, Sergio; Ranavolo, Alberto


    This study analyzed a post office clerk's tasks, comparing two workstation models: in one the clerk faced the client; in the other the clerk sat at 45 degrees to the counter. We analyzed the most frequent tasks and those presenting the most critical points, breaking them down into subtasks: 1) payment of a postal order; 2) accepting a registered letter. We used an optoelectronic system for kinematic analysis and calculated the range of motion of the trunk and arms in the three spatial planes. The 45° position required less torsion of the trunk and head when using the printer, placed to the left of the employee. A larger worktop improved the workstation, leaving more room for equipment and allowing the worker to sit frontally to the monitor. However, this solution involved a shorter distance between the worker and the client, with longer extension of the shoulder and elbow and less trunk flexion. These findings suggested a modification of the layout that shortens the distance between the worker and client.

  2. Expanding capabilities of the debris analysis workstation

    Spencer, David B.; Sorge, Marlon E.; Mains, Deanna L.; Shubert, Ann J.; Gerhart, Charlotte M.; Yates, Ken W.; Leake, Michael


    Determining the hazards from debris-generating events is a design and safety consideration for a number of space systems, both currently operating and planned. To meet these and other requirements, the United States Air Force (USAF) Phillips Laboratory (PL) Space Debris Research Program has developed a simulation software package called the Debris Analysis Workstation (DAW). This software provides an analysis capability for assessing a wide variety of debris hazards. DAW integrates several component debris analysis models and data visualization tools into a single analysis platform that meets the needs of Department of Defense space debris analysis, and is both user friendly and modular. This allows studies to be performed expeditiously by analysts who are not debris experts. The current version of DAW includes models for spacecraft breakup, debris orbital lifetime, collision hazard risk assessment, and collision dispersion, as well as a satellite catalog database manager, a drag-inclusive propagator, a graphical user interface, and data visualization routines. Together they provide capabilities to conduct several types of analyses, ranging from range safety assessments to satellite constellation risk assessment. Work is progressing to add new capabilities with the incorporation of additional models and improved designs. The existing tools are in their initial integrated form, but the 'glue' that will ultimately bring them together into an integrated system is an object-oriented language layer scheduled to be added soon. Other candidate component models under consideration for incorporation include additional orbital propagators, error estimation routines, other dispersion models, and other breakup models. At present, DAW resides on a Sun workstation, although future versions could be tailored for other platforms, depending on the need.

  3. JPL multipolarization workstation - Hardware, software and examples of data analysis

    Burnette, Fred; Norikane, Lynne


    A low-cost stand-alone interactive image processing workstation has been developed for operations on multipolarization JPL aircraft SAR data, as well as data from future spaceborne imaging radars. A recently developed data compression technique is used to reduce the data volume to 10 Mbytes for a typical data set, so that interactive analysis may be accomplished in a timely and efficient manner on a supermicrocomputer. In addition to presenting a hardware description of the workstation, attention is given to the software that has been developed. Three illustrative examples of data analysis are presented.

  4. Zoning and workstation analysis in interventional cardiology; Zonage et etude de poste en cardiologie interventionnelle

    Degrange, J.P. [RP-Consult, 42 rue Pouchet, 75017 Paris (France)]


    As interventional cardiology can induce high doses not only for patients but also for the personnel, the delimitation of regulated areas (or zoning) and workstation analysis (dosimetry) are very important in terms of radiation protection. This paper briefly recalls the methods and tools for the different steps of zoning and workstation analysis. It outlines the peculiarities of interventional cardiology, presents zoning methods and tools adapted to it, and then discusses the same issues for workstation analysis. It also outlines specific problems which can be met, and their possible adapted solutions.

  5. Implementation and evaluation of an interactive user interface for a clinical image analysis workstation

    Ratib, Osman M.; Huang, H. K.


    Recent developments in digital imaging and Picture Archiving and Communication Systems (PACS) allow physicians and radiologists to assess radiographic images directly in digital form through imaging workstations. The development of medical workstations was primarily oriented toward the development of a convenient tool for rapid display of images. In this project our goal was to design and evaluate a personal desktop workstation that will provide a large number of clinically useful image analysis tools. The hardware used is a standard Macintosh II interfaced to our existing PACS network through an Ethernet interface using standard TCP/IP communication protocols. Special emphasis was placed on the design of the user interface to allow clinicians with minimal or no computer manipulation skills to use complex analysis tools.

  6. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    Ratib, Osman M.; Huang, H. K.


    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  7. Power electronics reliability analysis.

    Smith, Mark A.; Atcitty, Stanley


    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
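    The fault-tree approach described above can be sketched in a few lines: component failure probabilities are combined through AND/OR gates to give a system-level failure probability. The gate structure and numbers below are illustrative assumptions, not the report's actual model.

```python
# Minimal fault-tree sketch: derive system reliability from component
# reliabilities. Component names and failure probabilities are invented.

def or_gate(*fail_probs):
    """Top event occurs if ANY input event occurs (independent events)."""
    p_none = 1.0
    for p in fail_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*fail_probs):
    """Top event occurs only if ALL input events occur."""
    p = 1.0
    for q in fail_probs:
        p *= q
    return p

# Hypothetical converter: fails if the controller fails, or if both
# redundant power modules fail.
p_controller = 0.02
p_module_a = 0.05
p_module_b = 0.05

p_system_fail = or_gate(p_controller, and_gate(p_module_a, p_module_b))
system_reliability = 1.0 - p_system_fail
print(f"System reliability: {system_reliability:.5f}")
```

    A field-data-driven analysis would instead estimate each basic-event probability from maintenance records before combining them the same way.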

  8. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    Ratib, Osman M.; Huang, H. K.


    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP-IP protocol, and stored locally on magnetic disk. The use of high resolution screens (1024×768 pixels × 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images and scintigraphic images.

  9. VMware workstation

    van Vugt, Sander


    This book is a practical, step-by-step guide to creating and managing virtual machines using VMware Workstation. VMware Workstation: No Experience Necessary is for developers as well as system administrators who want to efficiently set up a test environment. You should have basic networking knowledge, and prior experience with virtual machines and VMware Player would be beneficial.

  10. Multidisciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  11. System Reliability Analysis: Foundations.


    Performance formulas for systems subject to preventive maintenance are given. System reliability in this case is the probability h(p) that the source s can communicate with the terminal t. For undirected networks, the basic reference is A. Satyanarayana and Kevin Wood (1982). For directed networks, the basic reference is Avinash

  12. ATLAS reliability analysis

    Bartsch, R.R.


    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
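    The shot-dependent failure model described above can be illustrated with a small Weibull calculation: each component's survival probability falls with the number of shots, and the product across components gives a system reliability to check against the 95% requirement. The component list and shape/scale parameters below are invented for illustration; they are not the ATLAS fits.

```python
import math

# Weibull survival sketch with shot count as the life variable.
# Shape (beta) and characteristic life (eta, in shots) are assumptions.

def weibull_reliability(n_shots, beta, eta):
    """Probability a component survives n_shots (Weibull CDF complement)."""
    return math.exp(-((n_shots / eta) ** beta))

# Hypothetical bank components: (beta, eta). beta > 1 means the failure
# probability per shot increases with accumulated shots.
components = {
    "railgap":    (2.0, 500.0),
    "capacitor":  (1.5, 2000.0),
    "insulation": (3.0, 1000.0),
}

def system_reliability(n_shots):
    r = 1.0
    for beta, eta in components.values():
        r *= weibull_reliability(n_shots, beta, eta)
    return r

# Largest shot count that still meets the 95% requirement -- one way to
# derive a maintenance interval from life-test fits.
n = 1
while system_reliability(n + 1) >= 0.95:
    n += 1
print(f"Shots before system reliability drops below 95%: {n}")
```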

  13. Design and analysis of wudu’ (ablution) workstation for elderly in Malaysia

    Aman, A.; Dawal, S. Z. M.; Rahman, N. I. A.


    The wudu’ (ablution) workstation is a facility used by Muslims of all categories. At present, there are a number of design guidelines for praying facilities, but specifications for the wudu’ (ablution) area are still lacking, especially for the elderly. Thus, it is timely to develop an ergonomic wudu’ workstation that enables the elderly to perform ablution independently and confidently. This study was conducted to design an ergonomic ablution unit for elderly Muslims in Malaysia. An ablution workstation was designed based on elderly anthropometric dimensions and was then analysed in CATIA V5R21 for posture investigation using RULA. The results of the study identified significant anthropometric dimensions in designing a wudu’ (ablution) workstation for elderly people. This study can be considered a preliminary study for the development of an ergonomic ablution design for the elderly. This effort will become one of the significant social contributions to our elderly population in developing our nation holistically.

  14. Airline Operation Center Workstation

    Department of Transportation — The Airline Operation Center Workstation (AOC Workstation) represents equipment available to users of the National Airspace system, outside of the FAA, that enables...

  15. Analysis on the influence of supply method on a workstation with the help of dynamic simulation

    Gavriluță Alin


    Considering the need for flexibility in any manufacturing process, the choice of supply method for an assembly workstation can be a decision that influences its performance. Using dynamic simulation, this article compares the effect on a workstation's cycle time of three different supply methods: supply from stock, supply in the “Strike Zone”, and synchronous supply. This study is part of an extended work that aims to compare, through 3D layout design and dynamic simulation, the effects of different supply methods on an assembly line's performance.

  16. Next-Generation Telemetry Workstation


    A next-generation telemetry workstation has been developed to replace the one currently used to test and control Range Safety systems. Improving upon the performance of the original system, the new telemetry workstation uses dual-channel telemetry boards for better synchronization of the two uplink telemetry streams. The new workstation also includes an Interrange Instrumentation Group/Global Positioning System (IRIG/GPS) time code receiver board for independent, local time stamping of return-link data. The next-generation system will also record and play back return-link data for postlaunch analysis.

  17. Test-retest reliability and concurrent validity of a web-based questionnaire measuring workstation and individual correlates of work postures during computer work

    IJmker, S.; Mikkers, J.; Blatter, B.M.; Beek, A.J. van der; Mechelen, W. van; Bongers, P.M.


    Introduction: "Ergonomic" questionnaires are widely used in epidemiological field studies to study the association between workstation characteristics, work posture and musculoskeletal disorders among office workers. Findings have been inconsistent regarding the putative adverse effect of work

  19. Reliability Analysis of Wind Turbines

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard


    In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS) the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example the design is determined according...

  20. Reliability analysis in intelligent machines

    Mcinroy, John E.; Saridis, George N.


    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  1. Hybrid reliability model for fatigue reliability analysis of steel bridges

    曹珊珊; 雷俊卿


    A kind of hybrid reliability model is presented to solve the fatigue reliability problems of steel bridges. The cumulative damage model is one kind of model used in fatigue reliability analysis; its parameter characteristics can be described as probabilistic and interval. The two-stage hybrid reliability model is given with a theoretical foundation and a solving algorithm for hybrid reliability problems. The theoretical foundation is established by the consistency relationships of the interval reliability model and the probability reliability model with normally distributed variables. The solving process combines the definition of the interval reliability index with the probabilistic algorithm. With consideration of the parameter characteristics of the S-N curve, the cumulative damage model with hybrid variables is given based on the standards from different countries. Lastly, a case of steel structure in the Neville Island Bridge is analyzed to verify the applicability of the hybrid reliability model in fatigue reliability analysis based on the AASHTO standard.

  2. Sensitivity Analysis of Component Reliability



    In a system, every component has its unique position within the system and its unique failure characteristics. When a component's reliability is changed, its effect on system reliability is not equal to that of other components. Component reliability sensitivity is a measure of the effect on system reliability when a component's reliability is changed. In this paper, the definition and relative matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. All of this will help us to analyse or improve system reliability.
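    The sensitivity measure defined above can be estimated numerically: perturb one component's reliability and observe the change in system reliability (the classical Birnbaum importance). The series-parallel structure and reliabilities below are an invented example, not taken from the paper.

```python
# Numerical reliability sensitivity sketch for a hypothetical system:
# component 0 in series with a parallel pair (components 1 and 2).

def system_reliability(r):
    """Series connection of r[0] with the parallel pair (r[1], r[2])."""
    return r[0] * (1.0 - (1.0 - r[1]) * (1.0 - r[2]))

def sensitivity(r, i, h=1e-6):
    """Central finite-difference estimate of dR_sys/dR_i."""
    hi = list(r); hi[i] += h
    lo = list(r); lo[i] -= h
    return (system_reliability(hi) - system_reliability(lo)) / (2 * h)

r = [0.95, 0.90, 0.90]
for i in range(3):
    print(f"component {i}: sensitivity = {sensitivity(r, i):.4f}")
```

    As expected, the series component dominates: improving it moves system reliability far more than improving either redundant component.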

  3. Reliability Analysis of Sensor Networks

    JIN Yan; YANG Xiao-zong; WANG Ling


    To integrate the capacity of sensing, communication, computing, and actuating, one of the compelling technological advances of recent years has been the appearance of the distributed wireless sensor network (DSN) for information-gathering tasks. In order to save energy, multi-hop routing between the sensor nodes and the sink node is necessary because of limited resources. In addition, unpredictable conditional factors make the sensor nodes unreliable. In this paper, the reliability of routing designed for sensor networks and some dependability issues of DSN, such as MTTF (mean time to failure) and the probability of connectivity between the sensor nodes and the sink node, are analyzed. Unfortunately, an accurate result cannot be obtained for an arbitrary network topology, which is a #P-hard problem, and so a reliability analysis of restricted clustering-based topologies is given. The method proposed in this paper shows a constructive idea about how to place energy-constrained sensor nodes in the network efficiently from the perspective of reliability.
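    The connectivity probability mentioned above can be computed exactly for a tiny topology by enumerating all edge states, which also shows why the general problem is #P-hard: the state space doubles with every edge. The five-edge bridge network and its edge reliabilities below are illustrative assumptions.

```python
import itertools

# Exact two-terminal reliability by brute-force state enumeration.
# Feasible only for tiny graphs; topology and reliabilities are invented.
edges = {("s", "a"): 0.9, ("s", "b"): 0.9, ("a", "t"): 0.9,
         ("b", "t"): 0.9, ("a", "b"): 0.8}

def connected(up_edges, src="s", dst="t"):
    """Search over the surviving (undirected) edges."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for u, v in up_edges:
            nxt = v if u == node else u if v == node else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

edge_list = list(edges)
reliability = 0.0
# Sum P(state) over all 2^5 up/down states in which s reaches t.
for state in itertools.product([True, False], repeat=len(edge_list)):
    prob = 1.0
    up = []
    for e, alive in zip(edge_list, state):
        prob *= edges[e] if alive else 1.0 - edges[e]
        if alive:
            up.append(e)
    if connected(up):
        reliability += prob
print(f"P(s communicates with t) = {reliability:.5f}")
```

    Conditioning on the bridge edge (a, b) gives the same answer analytically, which is a handy cross-check for small networks.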

  4. Challenges in Developing Clinical Workstation

    Narayanan, Venkatesh; Vedula, Venumadhav


    Over the years, medical imaging has become very common and data intensive. New technology is needed to help visualize and analyze these large, complex data sets, especially in an acute care situation where time is of the essence. It is also very important to present the data in an efficient and simple manner to aid the clinical decision making processes. There is a need for a clinical workstation that handles data from different modalities and performs the necessary post-processing operations on the data in order to enhance image quality and improve the reliability of diagnosis. This paper briefly explains the clinical workstation, emphasizing the requirements and challenges in design and architecture for the development of such systems.

  5. Desk-based workers' perspectives on using sit-stand workstations: a qualitative analysis of the Stand@Work study.

    Chau, Josephine Y; Daley, Michelle; Srinivasan, Anu; Dunn, Scott; Bauman, Adrian E; van der Ploeg, Hidde P


    Prolonged sitting time has been identified as a health risk factor. Sit-stand workstations allow desk workers to alternate between sitting and standing throughout the working day, but not much is known about their acceptability and feasibility. Hence, the aim of this study was to qualitatively evaluate the acceptability, feasibility and perceptions of using sit-stand workstations in a group of desk-based office workers. This article describes the qualitative evaluation of the randomized controlled cross-over Stand@Work pilot trial. Participants were adult employees recruited from a non-government health agency in Sydney, Australia. The intervention involved using an Ergotron Workfit S sit-stand workstation for four weeks. After the four week intervention, participants shared their perceptions and experiences of using the sit-stand workstation in focus group interviews with 4-5 participants. Topics covered in the focus groups included patterns of workstation use, barriers and facilitators to standing while working, effects on work performance, physical impacts, and feasibility in the office. Focus group field notes and transcripts were analysed in an iterative process during and after the data collection period to identify the main concepts and themes. During nine 45-min focus groups, a total of 42 participants were interviewed. Participants were largely intrinsically motivated to try the sit-stand workstation, mostly because of curiosity to try something new, interest in potential health benefits, and the relevance to the participant's own and organisation's work. Most participants used the sit-stand workstation and three common usage patterns were identified: task-based routine, time-based routine, and no particular routine. Common barriers to sit-stand workstation use were working in an open plan office, and issues with sit-stand workstation design. Common facilitators of sit-stand workstation use were a supportive work environment conducive to standing

  6. Creep-rupture reliability analysis

    Peralta-Duran, A.; Wirsching, P. H.


    A probabilistic approach to the correlation and extrapolation of creep-rupture data is presented. Time-temperature parameters (TTPs) are used to correlate the data, and an analytical expression for the master curve is developed. The expression provides a simple model for the statistical distribution of strength and fits neatly into a probabilistic design format. The analysis focuses on the Larson-Miller and on the Manson-Haferd parameters, but it can be applied to any of the TTPs. A method is developed for evaluating material-dependent constants for TTPs. It is shown that optimized constants can provide a significant improvement in the correlation of the data, thereby reducing modelling error. Attempts were made to quantify the performance of the proposed method in predicting long-term behavior. Uncertainty in predicting long-term behavior from short-term tests was derived for several sets of data. Examples are presented which illustrate the theory and demonstrate the application of state-of-the-art reliability methods to the design of components under creep.
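    The TTP correlation idea can be sketched with the Larson-Miller parameter, P = T(C + log10 t_r): a short high-temperature test and a long service life at a lower temperature lie on the same master curve when they share the same P at a given stress. C = 20 is the conventional default constant; the abstract's point is that an optimized, material-specific constant correlates better. All temperatures and times below are invented.

```python
import math

# Larson-Miller extrapolation sketch with the conventional C = 20.
C = 20.0

def lmp(temp_kelvin, hours):
    """Larson-Miller parameter: P = T * (C + log10 t_r)."""
    return temp_kelvin * (C + math.log10(hours))

def rupture_hours(temp_kelvin, p):
    """Invert the LMP at a new temperature for the same stress level."""
    return 10.0 ** (p / temp_kelvin - C)

# Accelerated test: rupture after 100 h at 900 K.
p = lmp(900.0, 100.0)

# Predicted rupture life at a 750 K service temperature, same stress.
print(f"Predicted rupture life at 750 K: {rupture_hours(750.0, p):.0f} h")
```

    The probabilistic version treats the master curve's scatter statistically rather than extrapolating a single deterministic curve as done here.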

  7. On Bayesian System Reliability Analysis

    Soerensen Ringi, M.


    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs.

  8. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…
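    Cronbach's alpha, the statistic this reliability analysis relies on, can be computed directly from item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The 5-respondent, 4-item score matrix below is made up purely for illustration.

```python
# Cronbach's alpha from scratch on invented survey data
# (5 respondents x 4 items, scored 1-5).
scores = [
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 4, 4],
    [3, 3, 2, 3],
    [4, 5, 5, 4],
]

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(scores[0])                      # number of items in the domain
items = list(zip(*scores))              # per-item score columns
totals = [sum(row) for row in scores]   # per-respondent total score

alpha = (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))
print(f"Cronbach's alpha = {alpha:.3f}")
```

    In practice each Habitude "domain" would be scored separately, yielding one alpha per domain.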

  10. Combination of structural reliability and interval analysis

    Zhiping Qiu; Di Yang; Isaac Elishakoff


    In engineering applications, probabilistic reliability theory appears to be presently the most important method; however, in many cases precise probabilistic reliability theory cannot be considered an adequate and credible model of the real state of actual affairs. In this paper, we developed a hybrid of probabilistic and non-probabilistic reliability theory, which describes the structural uncertain parameters as interval variables when statistical data are found insufficient. By using interval analysis, a new method for calculating the interval of the structural reliability as well as the reliability index is introduced, and the traditional probabilistic theory is incorporated with the interval analysis. Moreover, the new method preserves the useful part of the traditional probabilistic reliability theory, but removes the restriction of its strict requirement on data acquisition. An example is presented to demonstrate the feasibility and validity of the proposed theory.
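    A minimal sketch of the hybrid idea: keep the classical reliability index beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) for normally distributed resistance R and load S, but let the means be known only as intervals; interval arithmetic then yields an interval for beta instead of a single value. All numbers below are illustrative, not from the paper.

```python
import math

# Interval reliability index sketch: normal R (resistance) and S (load),
# with means known only as intervals [lower, upper]. Values are invented.
mu_R = (480.0, 520.0)   # interval for the resistance mean
mu_S = (290.0, 310.0)   # interval for the load mean
sigma_R, sigma_S = 30.0, 25.0

denom = math.sqrt(sigma_R ** 2 + sigma_S ** 2)
beta_lower = (mu_R[0] - mu_S[1]) / denom   # worst case: low R, high S
beta_upper = (mu_R[1] - mu_S[0]) / denom   # best case: high R, low S
print(f"reliability index interval: [{beta_lower:.3f}, {beta_upper:.3f}]")
```

    When the statistical data improve, the intervals collapse toward point values and the interval for beta collapses back to the classical index.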

  11. Integrated Methodology for Software Reliability Analysis

    Marian Pompiliu CRISTESCU


    The techniques most used to ensure safety and reliability of systems are applied together as a whole, and in most cases the software components are usually overlooked or too little analyzed. The present paper describes the applicability of fault tree analysis to software systems, an analysis defined as Software Fault Tree Analysis (SFTA); the fault trees are evaluated using binary decision diagrams, all of these being integrated and used with the help of a Java reliability library.

  12. Reliability Sensitivity Analysis for Location Scale Family

    洪东跑; 张海瑞


    Many products operate under various complex environmental conditions. To describe the dynamic influence of environmental factors on their reliability, a method of reliability sensitivity analysis is proposed. In this method, the location parameter is assumed to be a function of relevant environment variables, while the scale parameter is assumed to be an unknown positive constant. The location parameter function is then constructed using the radial basis function method. Using the varied-environment test data, the log-likelihood function is transformed to a generalized linear expression by describing the indicator as a Poisson variable. With the generalized linear model, the maximum likelihood estimations of the model coefficients are obtained, and with the reliability model, the reliability sensitivity is obtained. An instance analysis shows that the method is feasible for analyzing the dynamic variation of reliability with environmental factors and is straightforward for engineering application.

  13. Space Mission Human Reliability Analysis (HRA) Project

    National Aeronautics and Space Administration — The purpose of this project is to extend current ground-based Human Reliability Analysis (HRA) techniques to a long-duration, space-based tool to more effectively...

  14. Production Facility System Reliability Analysis Report

    Dale, Crystal Buchanan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This document describes the reliability, maintainability, and availability (RMA) modeling of the Los Alamos National Laboratory (LANL) design for the Closed Loop Helium Cooling System (CLHCS) planned for the NorthStar accelerator-based 99Mo production facility. The current analysis incorporates a conceptual helium recovery system, beam diagnostics, and prototype control system into the reliability analysis. The results from the 1000 hr blower test are addressed.

  15. Computational Controls Workstation: Algorithms and hardware

    Venugopal, R.; Kumar, M.


    The Computational Controls Workstation provides an integrated environment for the modeling, simulation, and analysis of Space Station dynamics and control. Using highly efficient computational algorithms combined with a fast parallel processing architecture, the workstation makes real-time simulation of flexible body models of the Space Station possible. A consistent, user-friendly interface and state-of-the-art post-processing options are combined with powerful analysis tools and model databases to provide users with a complete environment for Space Station dynamics and control analysis. The software tools available include a solid modeler, graphical data entry tool, O(n) algorithm-based multi-flexible body simulation, and 2D/3D post-processors. This paper describes the architecture of the workstation while a companion paper describes performance and user perspectives.

  16. Structural reliability analysis and reliability-based design optimization: Recent advances

    Qiu, ZhiPing; Huang, Ren; Wang, XiaoJun; Qi, WuChao


    We review recent research activities on structural reliability analysis, reliability-based design optimization (RBDO) and applications in complex engineering structural design. Several novel uncertainty propagation methods and reliability models, which are the basis of the reliability assessment, are given. In addition, recent developments on reliability evaluation and sensitivity analysis are highlighted as well as implementation strategies for RBDO.

  17. FAST Workstation Project Overview


    Space: An Impediment to Evolvability," in Proceedings of the National Conference on Artificial Intelligence, 1986. [Szekely 87] P.A. Szekely...There is also the possibility that this effort could spark a much wider effort in the country to effectively utilize computers in commerce and...information needed to decide about combining, it would be desirable for the Workstation to notice when the effects of doing so raise issues with

  18. Multi-Disciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  19. Reliability Analysis of DOOF for Weibull Distribution

    陈文华; 崔杰; 樊小燕; 卢献彪; 相平


    A hierarchical Bayesian method for estimating the failure probability under DOOF, taking the quasi-Beta distribution as the prior distribution, is proposed in this paper. The weighted least squares estimate method was used to obtain the formula for computing reliability distribution parameters and estimating the reliability characteristic values under DOOF. Taking one type of aerospace electrical connector as an example, the correctness of the above method was verified through statistical analysis of accelerated life test data for the connector.
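As a much-simplified sketch of the Bayesian ingredient (an ordinary conjugate Beta prior stands in for the paper's hierarchical quasi-Beta scheme, and the counts are invented), a posterior for a failure probability updates as follows:

```python
def beta_posterior(alpha, beta, failures, trials):
    """Conjugate update: Beta(alpha, beta) prior on failure probability p plus
    binomial data gives Beta(alpha + failures, beta + trials - failures)."""
    return alpha + failures, beta + trials - failures

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform Beta(1, 1) prior; observe 2 failures in 50 trials (invented numbers).
a, b = beta_posterior(1.0, 1.0, failures=2, trials=50)
p_hat = posterior_mean(a, b)          # posterior mean failure probability
reliability_est = 1.0 - p_hat
```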

  20. Reliability analysis of flood defence systems

    Steenbergen, H.M.G.M.; Lassing, B.L.; Vrouwenvelder, A.C.W.M.; Waarts, P.H.


    In recent years an advanced program for the reliability analysis of flood defence systems has been under development. This paper describes the global data requirements for the application and the setup of the models. The analysis generates the probability of system failure and the contribution of ea

  1. Reliability Analysis of High Rockfill Dam Stability

    Ping Yi


    Full Text Available A program 3DSTAB combining slope stability analysis and reliability analysis is developed and validated. In this program, the limit equilibrium method is utilized to calculate safety factors of critical slip surfaces. The first-order reliability method is used to compute reliability indexes corresponding to critical probabilistic surfaces. When derivatives of the performance function are calculated by finite difference method, the previous iteration’s critical slip surface is saved and used. This sequential approximation strategy notably improves efficiency. Using this program, the stability reliability analyses of concrete faced rockfill dams and earth core rockfill dams with different heights and different slope ratios are performed. The results show that both safety factors and reliability indexes decrease as the dam’s slope increases at a constant height and as the dam’s height increases at a constant slope. They decrease dramatically as the dam height increases from 100 m to 200 m while they decrease slowly once the dam height exceeds 250 m, which deserves attention. Additionally, both safety factors and reliability indexes of the upstream slope of earth core rockfill dams are higher than that of the downstream slope. Thus, the downstream slope stability is the key failure mode for earth core rockfill dams.
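For a linear limit state g = R − S with independent normal resistance and load, the first-order reliability index mentioned above has a closed form; the numbers below are illustrative, not the dam analyses of the paper:

```python
import math
from statistics import NormalDist

def form_beta_linear(mu_R, sig_R, mu_S, sig_S):
    """Hasofer-Lind reliability index for g = R - S, R and S independent normal."""
    return (mu_R - mu_S) / math.hypot(sig_R, sig_S)

beta = form_beta_linear(mu_R=1.5, sig_R=0.15, mu_S=1.0, sig_S=0.10)
pf = NormalDist().cdf(-beta)   # failure probability implied by the index
```

For nonlinear slip-surface performance functions, as in the paper, the index must instead be found iteratively, which is why the finite-difference derivatives and the sequential approximation strategy matter for efficiency.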



    performance of any structural system be eva ... by the Joint crete slabs, bending, shear, deflection, reliability, design codes. ement such as ... could be sensitive to this distribution. Table 1: ..... Ang, A. H-S and Tang, W. H. Probability Concepts in.

  3. Culture Representation in Human Reliability Analysis

    David Gertman; Julie Marble; Steven Novack


    Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.

  4. Workstation software framework

    Andolfato, L.; Karban, R.


    The Workstation Software Framework (WSF) is a state machine model driven development toolkit designed to generate event driven applications based on ESO VLT software. State machine models are used to generate executables. The toolkit provides versatile code generation options and it supports Mealy, Moore and hierarchical state machines. Generated code is readable and maintainable since it combines well known design patterns such as the State and the Template patterns. WSF promotes a development process that is based on model reusability through the creation of a catalog of state machine patterns.

  5. Reliability Analysis of a Steel Frame

    M. Sýkora


    Full Text Available A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee for Structural Safety (JCSS) are used for basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time-invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time-variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-profile IPE 330 designed according to Eurocodes seems to be adequate. It appears that the time-invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time-variant analysis.
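Turkstra's rule evaluates the reliability once per variable action, with that action at its lifetime maximum and the others at point-in-time values, and takes the governing (lowest) index. A minimal sketch with independent normal variables and invented load statistics:

```python
import math

def beta_linear(resist, loads):
    """Reliability index for g = R - sum(loads); all variables independent
    normal, given as (mean, standard deviation) pairs."""
    mean = resist[0] - sum(m for m, s in loads)
    var = resist[1] ** 2 + sum(s ** 2 for m, s in loads)
    return mean / math.sqrt(var)

R = (10.0, 1.0)                                 # resistance, illustrative
snow_max, snow_pit = (3.0, 0.6), (1.0, 0.3)     # 50-yr maximum vs point-in-time
wind_max, wind_pit = (2.5, 0.5), (0.8, 0.25)

# One Turkstra case per leading action; the governing index is the minimum.
cases = [[snow_max, wind_pit], [wind_max, snow_pit]]
beta = min(beta_linear(R, c) for c in cases)
```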

  6. Event/Time/Availability/Reliability-Analysis Program

    Viterna, L. A.; Hoffman, D. J.; Carr, Thomas


    ETARA is an interactive, menu-driven program that performs simulations for the analysis of reliability, availability, and maintainability. It was written to evaluate the performance of the electrical power system of Space Station Freedom, but the methodology and software can be applied to any system that can be represented by a block diagram. The program is written in IBM APL.
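The block-diagram evaluation at the heart of such a tool can be sketched in a few lines (in Python rather than APL; the MTBF/MTTR figures and the example topology are invented):

```python
def availability(mtbf, mttr):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf / (mtbf + mttr)

def series(avails):
    """All blocks must work: multiply availabilities."""
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(avails):
    """At least one block must work: complement of all failing together."""
    q = 1.0
    for a in avails:
        q *= (1.0 - a)
    return 1.0 - q

# Two redundant power converters feeding one distribution bus.
conv = availability(mtbf=5000.0, mttr=10.0)
bus = availability(mtbf=20000.0, mttr=5.0)
system = series([parallel([conv, conv]), bus])
```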

  7. Reliability analysis of DOOF for Weibull distribution

    陈文华; 崔杰; 樊晓燕; 卢献彪; 相平


    Hierarchical Bayesian method for estimating the failure probability Pi under DOOF by taking the quasi-Beta distribution B(pi-1 , 1,1, b ) as the prior distribution is proposed in this paper. The weighted Least Squares Estimate method was used to obtain the formula for computing reliability distribution parameters and estimating the reliability characteristic values under DOOF. Taking one type of aerospace electrical connectoras an example, the correctness of the above method through statistical analysis of electrical connector acceler-ated life test data was verified.


    LI Hong-shuang; LÜ Zhen-zhou; YUE Zhu-feng


    Support vector machine (SVM) was introduced to analyze the reliability of the implicit performance function, which is difficult to implement by the classical methods such as the first order reliability method (FORM) and the Monte Carlo simulation (MCS). As a classification method where the underlying structural risk minimization inference rule is employed, SVM possesses excellent learning capacity with a small amount of information and good capability of generalization over the complete data. Hence, two approaches, i.e., SVM-based FORM and SVM-based MCS, were presented for the structural reliability analysis of the implicit limit state function. Compared to the conventional response surface method (RSM) and the artificial neural network (ANN), which are widely used to replace the implicit state function for alleviating the computation cost, the more important advantages of SVM are that it can approximate the implicit function with higher precision and better generalization under the small amount of information and avoid the "curse of dimensionality". The SVM-based reliability approaches can approximate the actual performance function over the complete sampling data with the decreased number of the implicit performance function analysis (usually finite element analysis), and the computational precision can satisfy the engineering requirement, which are demonstrated by illustrations.
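The role of the SVM surrogate is to stand in for an expensive implicit g (typically a finite element run) inside plain Monte Carlo simulation. The sketch below keeps the MCS part and uses an explicit toy limit state in place of the surrogate; the limit state and sample size are hypothetical:

```python
import random

def g(x1, x2):
    """Toy explicit limit state (failure when g < 0); in the SVM-based approach a
    trained classifier would replace calls to the expensive implicit function."""
    return 3.0 - x1 - x2

def mcs_failure_prob(n=100_000, seed=1):
    """Crude Monte Carlo estimate of P(g < 0) for standard normal inputs."""
    random.seed(seed)
    fails = sum(1 for _ in range(n)
                if g(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) < 0.0)
    return fails / n

pf = mcs_failure_prob()
```

For this toy g the exact answer is 1 − Φ(3/√2) ≈ 0.017; the estimate should land close to it.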

  9. Commodity clusters: Performance comparison between PCs and workstations

    Carter, R.; Laroco, J.; Armstrong, R.


    Workstation clusters were originally developed as a way to leverage the better cost basis of UNIX workstations to perform computations previously handled only by relatively more expensive supercomputers. Commodity workstation clusters take this evolutionary process one step further by replacing equivalent proprietary workstation functionality with less expensive PC technology. As PC technology encroaches on proprietary UNIX workstation vendor markets, these vendors will see a declining share of the overall market. As technology advances continue, the ability to upgrade a workstation's performance plays a large role in cost analysis. For example, a major upgrade to a typical UNIX workstation means replacing the whole machine. As major revisions to the UNIX vendor's product line come out, brand new systems are introduced. IBM compatibles, however, are modular by design, and nothing needs to be replaced except the components that are truly improved. The DAISy cluster, for example, is about to undergo a major upgrade from 90MHz Pentiums to 200MHz Pentium Pros. All of the memory -- the system's largest expense -- and disks, power supply, etc., can be reused. As a result, commodity workstation clusters ought to gain an increasingly large share of the distributed computing market.

  10. Human reliability analysis of control room operators

    Santos, Isaac J.A.L.; Carvalho, Paulo Victor R.; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)


    Human reliability is the probability that a person correctly performs some system-required action in a required time period and performs no extraneous action that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. Significant progress has been made in the HRA field during the last years, mainly in the nuclear area. Some first-generation HRA methods were developed, such as THERP (Technique for Human Error Rate Prediction). Now, an array of so-called second-generation methods is emerging as alternatives, for instance ATHEANA (A Technique for Human Event Analysis). The ergonomics approach has as its tool the ergonomic work analysis. It focuses on the study of the operator's activities in physical and mental form, considering at the same time the observed characteristics of the operator and the elements of the work environment as they are presented to and perceived by the operators. The aim of this paper is to propose a methodology to analyze the human reliability of the operators of an industrial plant control room, using a framework that includes the approaches used by ATHEANA and THERP and the ergonomic work analysis. (author)

  11. Reliability Analysis of Elasto-Plastic Structures


    Failure of this type of system is defined either as formation of a mechanism or by failure of a prescribed number of elements. In the first case failure is independent of the order in which the elements fail, but this is not so by the second definition. The reliability analysis consists of two parts...... are described and the two definitions of failure can be used by the first formulation, but only the failure definition based on formation of a mechanism by the second formulation. The second part of the reliability analysis is an estimate of the failure probability for the structure on the basis...... are obtained if the failure mechanisms are used. Lower bounds can be calculated on the basis of series systems where the elements are the non-failed elements in a non-failed structure (see Augusti & Baratta [3])....

  12. Bridging Resilience Engineering and Human Reliability Analysis

    Ronald L. Boring


    There has been strong interest in the new and emerging field called resilience engineering. This field has been quick to align itself with many existing safety disciplines, but it has also distanced itself from the field of human reliability analysis. To date, the discussion has been somewhat one-sided, with much discussion about the new insights afforded by resilience engineering. This paper presents an attempt to address resilience engineering from the perspective of human reliability analysis (HRA). It is argued that HRA shares much in common with resilience engineering and that, in fact, it can help strengthen nascent ideas in resilience engineering. This paper seeks to clarify and ultimately refute the arguments that have served to divide HRA and resilience engineering.

  13. Reliability analysis of wastewater treatment plants.

    Oliveira, Sílvia C; Von Sperling, Marcos


    This article presents a reliability analysis of 166 full-scale wastewater treatment plants operating in Brazil. Six different processes have been investigated, comprising septic tank+anaerobic filter, facultative pond, anaerobic pond+facultative pond, activated sludge, upflow anaerobic sludge blanket (UASB) reactors alone and UASB reactors followed by post-treatment. A methodology developed by Niku et al. [1979. Performance of activated sludge process and reliability-based design. J. Water Pollut. Control Assoc., 51(12), 2841-2857] is used for determining the coefficients of reliability (COR), in terms of the compliance of effluent biochemical oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), total nitrogen (TN), total phosphorus (TP) and fecal or thermotolerant coliforms (FC) with discharge standards. The design concentrations necessary to meet the prevailing discharge standards and the expected compliance percentages have been calculated from the COR obtained. The results showed that few plants, under the observed operating conditions, would be able to present reliable performances considering the compliance with the analyzed standards. The article also discusses the importance of understanding the lognormal behavior of the data in setting up discharge standards, in interpreting monitoring results and compliance with the legislation.
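The coefficient of reliability of Niku et al. follows from the lognormal assumption: given the coefficient of variation of the effluent data and a target compliance probability, it scales the discharge standard down to the design mean. A sketch of that calculation (the CV and the standard below are illustrative, not values from the 166 plants):

```python
import math
from statistics import NormalDist

def cor(cv, reliability_pct):
    """Coefficient of reliability for a lognormally distributed effluent quality
    (after Niku et al., 1979). cv: coefficient of variation of the effluent data;
    reliability_pct: desired probability of complying with the standard."""
    z = NormalDist().inv_cdf(reliability_pct / 100.0)
    v = math.log(1.0 + cv * cv)
    return math.sqrt(1.0 + cv * cv) * math.exp(-z * math.sqrt(v))

# Design (mean) BOD concentration needed to meet a 30 mg/L standard
# 95% of the time, with CV = 0.6:
design_mean = cor(0.6, 95.0) * 30.0
```

A stricter compliance target (larger z) gives a smaller COR and hence a lower required design mean.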

  14. Representative Sampling for reliable data analysis

    Petersen, Lars; Esbensen, Kim Harry


    regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...... analysis (“data” do not exist in isolation of their provenance). The Total Sampling Error (TSE) is by far the dominating contribution to all analytical endeavours, often 100+ times larger than the Total Analytical Error (TAE).We present a summarizing set of only seven Sampling Unit Operations (SUOs...

  15. The quantitative failure of human reliability analysis

    Bennett, C.T.


    This philosophical treatise argues the merits of Human Reliability Analysis (HRA) in the context of the nuclear power industry. In fact, the author argues that historic and current HRA has failed to inform policy makers who make decisions based on the risk that humans contribute to systems performance. He argues for an HRA based on Bayesian (fact-based) inferential statistics, which advocates a systems analysis process that employs cogent heuristics when using opinion, and tempers itself with a rational debate over the weight given subjective and empirical probabilities.

  16. Reliability of photographic posture analysis of adolescents.

    Hazar, Zeynep; Karabicak, Gul Oznur; Tiftikci, Ugur


    [Purpose] Postural problems of adolescents need to be evaluated accurately because they may lead to greater problems in the musculoskeletal system as they develop. Although photographic posture analysis has been frequently used, more simple and accessible methods are still needed. The purpose of this study was to investigate the inter- and intra-rater reliability of photographic posture analysis using MB-ruler software. [Subjects and Methods] Subjects were 30 adolescents (15 girls and 15 boys, mean age: 16.4±0.4 years, mean height 166.3±6.7 cm, mean weight 63.8±15.1 kg) and photographs of their habitual standing posture were taken in the sagittal plane. For the evaluation of postural angles, reflective markers were placed on anatomical landmarks. For angular measurements, MB-ruler (Markus Bader- MB Software Solutions, triangular screen ruler) was used. Photographic evaluations were performed by two observers with a repetition after a week. Test-retest and inter-rater reliability evaluations were calculated using intra-class correlation coefficients (ICC). [Results] Inter-rater (ICC>0.972) and test-retest (ICC>0.774) reliability were found to be in the range of acceptable to excellent. [Conclusion] Reference angles for postural evaluation were found to be reliable and repeatable. The present method was found to be an easy and non-invasive method and it may be utilized by researchers who are in search of an alternative method for photographic postural assessments.

  17. The Impact of Ergonomically Designed Workstations on Shoulder EMG Activity during Carpet Weaving

    Majid Motamedzade


    Full Text Available Background: The present study aimed to evaluate the biomechanical exposure of the trapezius muscle in female weavers over a prolonged period in workstation A (suggested by previous studies) and workstation B (proposed by the present study). Methods: Electromyography data were collected from nine females during four hours for each ergonomically designed workstation at the Ergonomics Laboratory, Hamadan, Iran. The design criteria for the ergonomically designed workstations were: 1) weaving height (20 and 3 cm above elbow height for workstations A and B, respectively), and 2) seat type (10° and 0° forward-sloping seat for workstations A and B, respectively). Results: The amplitude probability distribution function (APDF) analysis showed that the left and right upper trapezius muscle activity was almost similar at each workstation. Trapezius muscle activity in workstation A was significantly greater than in workstation B (P<0.001). Conclusion: In general, use of workstation B leads to significantly reduced muscle activity levels in the upper trapezius as compared to workstation A in weavers. Despite the positive impact of workstation B in reducing trapezius muscle activity, it seems that constrained postures of the upper arm during weaving may be associated with musculoskeletal symptoms.

  18. Computer Workstation: Pointer/Mouse

    ... when evaluating your computer workstation. Pointer Placement; Pointer Size, Shape, and Settings; Pointer/Mouse Quick Tips ...

  19. Representative Sampling for reliable data analysis

    Petersen, Lars; Esbensen, Kim Harry


    The Theory of Sampling (TOS) provides a description of all errors involved in sampling of heterogeneous materials as well as all necessary tools for their evaluation, elimination and/or minimization. This tutorial elaborates on—and illustrates—selected central aspects of TOS. The theoretical...... regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...

  20. Reliability Analysis of Adhesive Bonded Scarf Joints

    Kimiaeifar, Amin; Toft, Henrik Stensgaard; Lund, Erik;


    A probabilistic model for the reliability analysis of adhesive bonded scarfed lap joints subjected to static loading is developed. It is representative for the main laminate in a wind turbine blade subjected to flapwise bending. The structural analysis is based on a three dimensional (3D) finite...... the FEA model, and a sensitivity analysis on the influence of various geometrical parameters and material properties on the maximum stress is conducted. Because the yield behavior of many polymeric structural adhesives is dependent on both deviatoric and hydrostatic stress components, different ratios...... of the compressive to tensile adhesive yield stresses in the failure criterion are considered. It is shown that the chosen failure criterion, the scarf angle and the load are significant for the assessment of the probability of failure....


    Bowerman, P. N.


    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for
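The k-out-of-n folding that RELAV performs can be sketched with the cumulative binomial for an equal-probability group (Python here, probabilities invented): the group's success probability is computed first, then the folded "component" is combined in series with the rest of the system.

```python
from math import comb

def k_out_of_n(k, n, p):
    """Probability that at least k of n identical items (each with success
    probability p) work: the cumulative binomial tail."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Fold a 2-out-of-3 sensor group into a single component, then place it in
# series with a processor of success probability 0.99.
sensors = k_out_of_n(2, 3, 0.95)
system = sensors * 0.99
```

For groups with unequal probabilities, RELAV switches to the Barlow & Heidtmann algorithm cited above; the folding idea is the same.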

  2. Integrated Reliability and Risk Analysis System (IRRAS)

    Russell, K. D.; McKay, M. K.; Sattison, M. B.; Skinner, N. L.; Wood, S. T. [EG and G Idaho, Inc., Idaho Falls, ID (United States)]; Rasmuson, D. M. [Nuclear Regulatory Commission, Washington, DC (United States)]


    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance.

  3. Advancing Usability Evaluation through Human Reliability Analysis

    Ronald L. Boring; David I. Gertman


    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
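The mapping from heuristics to a usability error probability can be sketched with a SPAR-H-style adjustment, where the nominal HEP is scaled by the composite of the performance-shaping-factor multipliers while staying bounded below 1; the nominal value and multipliers below are invented for illustration, not calibrated values:

```python
def usability_error_prob(nominal_hep, psf_multipliers):
    """Adjusted error probability: the nominal HEP scaled by the composite PSF
    multiplier, using the SPAR-H-style adjustment that keeps the result < 1."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)

# Hypothetical heuristic violations mapped to multipliers, e.g. poor visibility
# of system status (x2) and unclear labeling (x5):
uep = usability_error_prob(0.01, [2.0, 5.0, 1.0])
```

With all multipliers at 1 the adjustment is the identity, so a clean interface leaves the nominal HEP unchanged.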

  4. Reliability Analysis of Tubular Joints in Offshore Structures

    Thoft-Christensen, Palle; Sørensen, John Dalsgaard


    Reliability analysis of single tubular joints and offshore platforms with tubular joints is presented. The failure modes considered are yielding, punching, buckling and fatigue failure. Element reliability as well as systems reliability approaches are used and illustrated by several examples....... Finally, optimal design of tubular joints with reliability constraints is discussed and illustrated by an example....

  5. Software Architecture Reliability Analysis using Failure Scenarios

    Tekinerdogan, B.; Sözer, Hasan; Aksit, Mehmet

    With the increasing size and complexity of software in embedded systems, software has now become a primary threat to reliability. Several mature conventional reliability engineering techniques exist in the literature, but traditionally these have primarily addressed failures in hardware components.


    Ronald L. Boring; David I. Gertman; Katya Le Blanc


    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  7. Human Reliability Analysis for Small Modular Reactors

    Ronald L. Boring; David I. Gertman


    Because no human reliability analysis (HRA) method was specifically developed for small modular reactors (SMRs), the application of any current HRA method to SMRs represents tradeoffs. A first-generation HRA method like THERP provides clearly defined activity types, but these activity types do not map to the human-system interface or concept of operations confronting SMR operators. A second-generation HRA method like ATHEANA is flexible enough to be used for SMR applications, but there is currently insufficient guidance for the analyst, requiring considerably more first-of-a-kind analyses and extensive SMR expertise in order to complete a quality HRA. Although no current HRA method is optimized to SMRs, it is possible to use existing HRA methods to identify errors, incorporate them as human failure events in the probabilistic risk assessment (PRA), and quantify them. In this paper, we provided preliminary guidance to assist the human reliability analyst and reviewer in understanding how to apply current HRA methods to the domain of SMRs. While it is possible to perform a satisfactory HRA using existing HRA methods, ultimately it is desirable to formally incorporate SMR considerations into the methods. This may require the development of new HRA methods. More practicably, existing methods need to be adapted to incorporate SMRs. Such adaptations may take the form of guidance on the complex mapping between conventional light water reactors and small modular reactors. While many behaviors and activities are shared between current plants and SMRs, the methods must adapt if they are to perform a valid and accurate analysis of plant personnel performance in SMRs.

  8. Compartmented mode workstation (CMW) comparisons

    Tolliver, J.S.


    As the Compartmented Mode Workstation (CMW) market has matured, several vendors have released new versions of their CMW operating systems. These include a new version from SecureWare (CMW+ Version 2.4) and Sun's CMW 1.1 (also known as Trusted Solaris 1.1). DEC is now shipping MLS+ 3.0 for DEC Alpha platforms. Relatively new entries in the market include Loral B1/CMW for IBM RS/6000 platforms and a SecureWare-based CMW for HP platforms (HP-UX 10.09). With all these choices it is time for a comparative analysis of the features offered by the various vendors. The authors have three of the above five CMW systems plus HP-UX BLS 9.09, a multilevel secure operating system (OS) targeted at the B1 level but not a CMW. Each is unique in sometimes obvious, sometimes subtle ways, a situation that requires knowing and keeping straight a variety of commands to do the same thing on each system. Some vendors offer extensive GUI tools for system administration; others require command-line entry for certain system administration tasks. The authors examine the differences in system installation, system administration, and system operation among the systems. They look at trusted networking among the various systems and at differences in the network databases and label encodings files. They also examine the user interface on each system, from logging in to logging out.

  9. [Qualitative analysis: theory, steps and reliability].

    Minayo, Maria Cecília de Souza


    This essay seeks to conduct an in-depth analysis of qualitative research, based on benchmark authors and the author's own experience. The hypothesis is that for an analysis to be considered reliable, it needs to be based on the structuring terms of qualitative research, namely the verbs 'comprehend' and 'interpret' and the nouns 'experience', 'common sense' and 'social action'. The ten steps begin with the construction of the scientific object by its inclusion on the national and international agenda, followed by the development of tools that make the theoretical concepts tangible, and by field work that involves the researcher empathetically with the participants through a variety of techniques and approaches, making it possible to build relationships, observations and a narrative with perspective. Finally, the author deals with the analysis proper, showing how the object, which has already been studied in all the previous steps, should become a second-order construct, in which the logic of the actors in their diversity, and not merely their speech, predominates. The final report must be a theoretical, contextual, concise and clear narrative.

  10. Task Decomposition in Human Reliability Analysis

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory


    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, approaches driven more traditionally by human factors tend to look for opportunities for human error first, in a task analysis, and then identify which of those errors is risk significant. The intersection of these top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  11. The Modern Integrated Anaesthesia Workstation

    Patil, Vijaya P; Shetmahajan, Madhavi G; Divatia, Jigeeshu V


    Over the years, the conventional anaesthesia machine has evolved into an advanced carestation. The new machines use advanced electronics, software and technology to offer extensive capabilities for ventilation, monitoring, inhaled agent delivery, low-flow anaesthesia and closed-loop anaesthesia. They offer integrated monitoring and recording facilities and seamless integration with anaesthesia information systems. It is possible to deliver tidal volumes accurately and eliminate several hazards associated with the low-pressure system and oxygen flush. Appropriate use can result in enhanced safety and ergonomics of anaesthetic delivery and monitoring. However, these workstations have brought in a new set of limitations and potential drawbacks, and there are differences in technology and operational principles amongst them. Users must understand the principles of operation of these workstations and have a thorough knowledge of the operating manual of the individual machine. PMID:24249877

  13. Reliability analysis of an associated system

    陈长杰; 魏一鸣; 蔡嗣经


    Based on the engineering reliability of large complex systems and the distinct characteristics of soft systems, new concepts and theory concerning medium elements and the associated system are developed, and a reliability logic model of the associated system is provided. Drawing on a field investigation of the trial operation, the engineering reliability of the paste fill system in No.2 mine of Jinchuan Non-ferrous Metallic Corporation is then analyzed using the theory of the associated system.

  14. Sensitivity Analysis for the System Reliability Function


    reliabilities. The unique feature of the approach is that sample data collected on K independent replications using a specified component reliability ... Carlo method. The polynomial time algorithm of Agrawal and Satyanarayana for the exact reliability computation for series-parallel systems exemplifies ... consideration. As an example for the s-t connectedness problem, let ... denote edge-disjoint minimal s-t paths of G and let ... denote edge-disjoint

  15. Workstation studies and radiation protection; Etudes de postes et radioprotection

    Lahaye, T. [Direction des relations du travail, 75 - Paris (France); Donadille, L.; Rehel, J.L.; Paquet, F. [Institut de Radioprotection et de Surete Nucleaire, 92 - Fontenay-aux-Roses (France); Beneli, C. [Paris-5 Univ., 75 (France); Cordoliani, Y.S. [Societe Francaise de Radioprotection, 92 - Fontenay-aux-Roses (France); Vrigneaud, J.M. [Assistance Publique - Hopitaux de Paris, 75 (France); Gauron, C. [Institut National de Recherche et de Securite, 75 - Paris (France); Petrequin, A.; Frison, D. [Association des Medecins du Travail des Salaries du Nucleaire (France); Jeannin, B. [Electricite de France (EDF), 75 - Paris (France); Charles, D. [Polinorsud (France); Carballeda, G. [cabinet Indigo Ergonomie, 33 - Merignac (France); Crouail, P. [Centre d' Etude sur l' Evaluation de la Protection dans le Domaine Nucleaire, 92 - Fontenay-aux-Roses (France); Valot, C. [IMASSA, 91 - Bretigny-sur-Orge (France)


    This one-day meeting on workstation studies for worker follow-up was organised by the research and health section. Intended for company doctors, persons competent in radiation protection, and safety engineers, it presented examples of methodologies and applications in the medical, industrial and research domains, contributing to a better understanding and application of regulatory measures. The analysis of the workstation should allow a reduction of exposures and risks and lead to an optimized medical follow-up. The agenda of the day covered the following subjects: evolution of the regulations on the demarcation of regulated zones where worker protection measures are strengthened; presentation of the I.R.S.N. guide for carrying out a workstation study; implementation of a workstation study in the case of radiology; workstation studies in the research area; whether operational dosimetry should be mandatory in radiodiagnostic services; feedback from a person competent in radiation protection (P.C.R.) in a hospital environment; radiation protection and the elaboration of a good-practice guide in the medical field; the activities file in nuclear power plants as a risk-evaluation tool for prevention, with a methodological presentation and examples; the study of an isolated workstation; feedback from a service provider; the contribution of ergonomics to the characterization of determinants in ionizing radiation exposure situations; workstation studies for internal contamination in fuel cycle facilities and the use of their results in medical follow-up; R.E.L.I.R. and the necessity of workstation studies; and the consideration of the human factor. (N.C.)

  16. A Novel Two-Terminal Reliability Analysis for MANET

    Xibin Zhao; Zhiyang You; Hai Wan


    Mobile ad hoc network (MANET) is a dynamic wireless communication network. Because of its dynamic and infrastructureless characteristics, a MANET is vulnerable in terms of reliability. This paper presents a novel reliability analysis for MANET. The node mobility effect and the node reliability, based on a real MANET platform, are modeled and analyzed. An effective Monte Carlo method for reliability analysis is proposed, and a detailed evaluation is performed on the experimental results.
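
    The abstract does not detail the Monte Carlo procedure itself. As a rough illustration of how two-terminal network reliability is typically estimated by simulation, the following sketch samples node and link failures and checks s-t connectivity; all names and parameter values are hypothetical, not taken from the paper.

    ```python
    import random

    def two_terminal_reliability(nodes, edges, s, t, p_node, p_edge,
                                 trials=20000, seed=1):
        """Monte Carlo estimate of the probability that s and t remain
        connected when each node survives with probability p_node and each
        edge with probability p_edge (terminals assumed always up)."""
        rng = random.Random(seed)
        successes = 0
        for _ in range(trials):
            up_nodes = {n for n in nodes if n in (s, t) or rng.random() < p_node}
            up_edges = [(u, v) for (u, v) in edges
                        if u in up_nodes and v in up_nodes and rng.random() < p_edge]
            # search the surviving subgraph for a path from s to t
            adj = {n: [] for n in up_nodes}
            for u, v in up_edges:
                adj[u].append(v)
                adj[v].append(u)
            frontier, seen = [s], {s}
            while frontier:
                n = frontier.pop()
                if n == t:
                    successes += 1
                    break
                for m in adj[n]:
                    if m not in seen:
                        seen.add(m)
                        frontier.append(m)
        return successes / trials
    ```

    With perfectly reliable components the estimate is exactly 1 for a connected graph; lowering p_node or p_edge drives it toward the analytic two-terminal reliability as the number of trials grows.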

  18. Solving reliability analysis problems in the polar space

    Ghasem Ezzati; Musa Mammadov; Siddhivinayak Kulkarni


    An optimization model that is widely used in engineering problems is Reliability-Based Design Optimization (RBDO). Input data of the RBDO are non-deterministic and constraints are probabilistic. The RBDO aims at minimizing cost while ensuring that reliability is at least an accepted level. Reliability analysis is an important step in two-level RBDO approaches. Although many methods have been introduced for the reliability analysis loop of the RBDO, there are still many drawbacks in their efficie...

  19. Reliability Analysis and Optimal Design of Monolithic Vertical Wall Breakwaters

    Sørensen, John Dalsgaard; Burcharth, Hans F.; Christiani, E.


    Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of the most important failure modes, sliding failure, failure of the foundation and overturning failure, are described. Relevant design variables are identified...

  20. Reliability in Cross-National Content Analysis.

    Peter, Jochen; Lauf, Edmund


    Investigates how coder characteristics such as language skills, political knowledge, coding experience, and coding certainty affected inter-coder and coder-training reliability. Shows that language skills influenced both reliability types. Suggests that cross-national researchers should pay more attention to cross-national assessments of…

  1. Software architecture reliability analysis using failure scenarios

    Tekinerdogan, Bedir; Sozer, Hasan; Aksit, Mehmet


    With the increasing size and complexity of software in embedded systems, software has now become a primary threat to reliability. Several mature conventional reliability engineering techniques exist in the literature, but traditionally these have primarily addressed failures in hardware components...

  2. Software reliability experiments data analysis and investigation

    Walker, J. Leslie; Caglayan, Alper K.


    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.

  3. Reliability Analysis of Slope Stability by Central Point Method

    Li, Chunge; WU Congliang


    Given the uncertainty and variability of slope stability analysis parameters, this paper proceeds from the perspective of probability theory and statistics, based on reliability theory. Through the central point method of reliability analysis, a performance function for the reliability of slope stability analysis is established. The central point method and conventional limit equilibrium methods are then compared through a calculation example. The approach's numerical ...
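
    As a sketch of the central point (first-order second-moment) idea described above: the reliability index is the mean value of the performance function divided by its first-order standard deviation, both evaluated at the mean point. The performance function and parameter values below are hypothetical, chosen only to make the computation concrete.

    ```python
    import math

    def central_point_beta(g, means, stds, h=1e-6):
        """Central point (first-order second-moment) reliability index:
        beta = g(mu) / sigma_g, with sigma_g obtained from a first-order
        Taylor expansion of g about the mean point."""
        mu_g = g(means)
        var_g = 0.0
        for i, (m, s) in enumerate(zip(means, stds)):
            x_hi = list(means); x_hi[i] = m + h
            x_lo = list(means); x_lo[i] = m - h
            dg_dxi = (g(x_hi) - g(x_lo)) / (2 * h)  # central difference
            var_g += (dg_dxi * s) ** 2
        return mu_g / math.sqrt(var_g)

    # Hypothetical safety margin g = R - S (resistance minus load effect)
    g = lambda x: x[0] - x[1]
    beta = central_point_beta(g, means=[30.0, 20.0], stds=[3.0, 4.0])
    # beta = 10 / sqrt(3^2 + 4^2) = 2.0
    ```

    For a linear performance function the central difference is exact, so the index here reduces to the familiar (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2).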

  4. Individual Differences in Human Reliability Analysis

    Jeffrey C. Joe; Ronald L. Boring


    While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.


    Popescu V.S.


    Power distribution systems are basic parts of power systems, and the reliability of these systems is at present a key issue for power engineering development that requires special attention. Operation of distribution systems is accompanied by a number of random factors that produce a large number of unplanned interruptions. Research has shown that the predominant factors with a significant influence on the reliability of distribution systems are: weather conditions (39.7%), defects in equipment (25%) and unknown random factors (20.1%). The article studies the influence of this random behavior and presents reliability estimates for predominantly rural electrical distribution systems.

  6. Ergonomic Evaluations of Microgravity Workstations

    Whitmore, Mihriban; Berman, Andrea H.; Byerly, Diane


    Various gloveboxes (GBXs) have been used aboard the Shuttle and ISS. Though the overall technical specifications are similar, each GBX's crew interface is unique. JSC conducted a series of ergonomic evaluations of the various glovebox designs to identify human factors requirements for new designs and to provide operator commonality across different designs. We conducted two 0-g evaluations aboard the Shuttle to evaluate the material sciences GBX and the General Purpose Workstation (GPWS), and a KC-135 evaluation to compare combinations of arm-hole interfaces and foot restraints (flexible arm holes were better than rigid ports for repetitive fine manipulation tasks). Posture analysis revealed that the smallest and tallest subjects assumed similar postures at all four configurations, suggesting that problematic postures are not necessarily a function of the operator's height but a function of the task characteristics. There was concern that the subjects were using the restrictive nature of the GBX's cuffs as an upper-body restraint to achieve such high forces, which might lead to neck/shoulder discomfort. EMG data revealed more consistent muscle performance at the GBX; the variability in the EMG profiles observed at the GPWS was attributed to the subjects' attempts to provide more stabilization for themselves in the loose, flexible gauntlets. Tests revealed that the GBX should be designed for a 95th-percentile American male to accommodate a neutral working posture. In addition, the foot restraint with knee support appeared beneficial for GBX operations. Crew comments were to provide two foot-restraint mechanical modes, loose and lock-down, to accommodate a wide range of tasks without egressing the restraint system. Thus far, we have developed preliminary design guidelines for GBXs and foot restraints.

  7. Reliability Analysis on English Writing Test of SHSEE in Shanghai

    黄玉麒; 黄芳


    As a subjective test, the validity of a writing test is acceptable. What about its reliability? The writing test occupies a special position in the senior high school entrance examination (SHSEE for short), so it is important to ensure its reliability. Through an analysis of recent years' English writing items in the SHSEE, the author offers suggestions on how to guarantee the reliability of writing tests.

  8. Analysis on Some of Software Reliability Models


    The software reliability and maintainability evaluation tool SRMET 3.0, developed by the Software Evaluation and Test Center of China Aerospace Mechanical Corporation, is introduced in detail in this paper. SRMET 3.0 is supported by seven software reliability models and four software maintainability models. Numerical characteristics of all these models are studied in depth, and corresponding numerical algorithms for each model are also given.

  9. System reliability analysis for kinematic performance of planar mechanisms

    ZHANG YiMin; HUANG XianZhen; ZHANG XuFang; HE XiangDong; WEN BangChun


    Based on the reliability and mechanism kinematic accuracy theories, we propose a general methodology for system reliability analysis of kinematic performance of planar mechanisms. The loop closure equations are used to estimate the kinematic performance errors of planar mechanisms. Reliability and system reliability theories are introduced to develop the limit state functions (LSF) for failure of kinematic performance qualities. The statistical fourth moment method and the Edgeworth series technique are used on system reliability analysis for kinematic performance of planar mechanisms, which relax the restrictions of probability distribution of design variables. Finally, the practicality, efficiency and accuracy of the proposed method are demonstrated by numerical examples.

  10. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Ronald Laurids Boring


    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  11. Efficient Parallel Engineering Computing on Linux Workstations

    Lou, John Z.


    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
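
    The module described is written in C; as a language-neutral sketch of the same fork/join pattern (split the work into chunks, compute them concurrently, combine the results), here is a minimal Python equivalent using worker processes in place of LWPs. All names and the workload are hypothetical.

    ```python
    from multiprocessing import Pool
    import os

    def simulate_chunk(bounds):
        """Stand-in for one slice of an engineering computation:
        sum of squares over the half-open range [lo, hi)."""
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    def parallel_sum_squares(n, workers=None):
        """Fork/join: partition [0, n) into one chunk per worker,
        map the chunks onto a process pool, and reduce the results."""
        workers = workers or os.cpu_count()
        step = -(-n // workers)  # ceiling division
        chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
        with Pool(workers) as pool:
            return sum(pool.map(simulate_chunk, chunks))

    if __name__ == "__main__":
        assert parallel_sum_squares(1000, workers=4) == sum(i * i for i in range(1000))
    ```

    On platforms that spawn rather than fork worker processes, the `__main__` guard is required so that child processes do not re-execute the pool creation; near-linear speed-up requires chunks large enough to amortize the inter-process overhead.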

  12. Analysis on testing and operational reliability of software

    ZHAO Jing; LIU Hong-wei; CUI Gang; WANG Hui-qiang


    Software reliability was estimated based on NHPP software reliability growth models. Testing reliability and operational reliability may be essentially different. On the basis of analyzing similarities and differences of the testing phase and the operational phase, using the concept of operational reliability and the testing reliability, different forms of the comparison between the operational failure ratio and the predicted testing failure ratio were conducted, and the mathematical discussion and analysis were performed in detail. Finally, software optimal release was studied using software failure data. The results show that two kinds of conclusions can be derived by applying this method, one conclusion is to continue testing to meet the required reliability level of users, and the other is that testing stops when the required operational reliability is met, thus the testing cost can be reduced.
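
    The abstract does not name the specific NHPP growth models used. As an illustration only, the widely used Goel-Okumoto NHPP model has mean value function m(t) = a(1 - e^(-bt)), from which the expected residual faults and the failure intensity follow directly; the parameter values below are hypothetical.

    ```python
    import math

    def goel_okumoto_mean(t, a, b):
        """Expected cumulative number of failures by time t under the
        Goel-Okumoto NHPP model: m(t) = a * (1 - exp(-b*t))."""
        return a * (1.0 - math.exp(-b * t))

    def failure_intensity(t, a, b):
        """Instantaneous failure rate: lambda(t) = a * b * exp(-b*t)."""
        return a * b * math.exp(-b * t)

    # Hypothetical fitted parameters: a = 100 expected total faults,
    # b = 0.05 detections per fault per test-hour
    a, b = 100.0, 0.05
    m_40 = goel_okumoto_mean(40, a, b)   # expected failures found by t = 40
    remaining = a - m_40                 # expected residual faults
    ```

    Comparing lambda(t) at the end of testing with a required operational failure intensity gives one simple optimal-release criterion of the kind the abstract discusses: stop testing once the predicted intensity drops below the level users require.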

  13. Reliability estimation in a multilevel confirmatory factor analysis framework.

    Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J


    Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
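
    For readers unfamiliar with the estimators being compared, a conventional single-level Cronbach's alpha can be computed as below. The paper's point is precisely that such single-level estimates are misleading unless reliability is identical at each level; its own code is in Mplus and R, so this Python sketch with hypothetical data is only a reference implementation of the baseline estimator.

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha for a list of item-score columns of equal length:
        alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
        k = len(items)
        n = len(items[0])

        def var(xs):  # unbiased sample variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        totals = [sum(col[i] for col in items) for i in range(n)]
        item_var = sum(var(col) for col in items)
        return k / (k - 1) * (1 - item_var / var(totals))

    # Hypothetical 3-item scale scored by 5 respondents
    items = [[2, 4, 3, 5, 4],
             [3, 4, 2, 5, 3],
             [2, 5, 3, 4, 4]]
    alpha = cronbach_alpha(items)  # ~0.871 for these data
    ```

    In a multilevel design the same formula would be applied separately to the within-cluster and between-cluster covariance structures, which is what the 2-level alpha and omega discussed above accomplish.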

  14. Mechanical reliability analysis of tubes intended for hydrocarbons

    Nahal, Mourad; Khelif, Rabia [Badji Mokhtar University, Annaba (Algeria)


    Reliability analysis constitutes an essential phase in any study concerning reliability. Many industrialists evaluate and improve the reliability of their products during the development cycle - from design to startup (design, manufacture, and exploitation) - to develop their knowledge on cost/reliability ratio and to control sources of failure. In this study, we obtain results for hardness, tensile, and hydrostatic tests carried out on steel tubes for transporting hydrocarbons followed by statistical analysis. Results obtained allow us to conduct a reliability study based on resistance request. Thus, index of reliability is calculated and the importance of the variables related to the tube is presented. Reliability-based assessment of residual stress effects is applied to underground pipelines under a roadway, with and without active corrosion. Residual stress has been found to greatly increase probability of failure, especially in the early stages of pipe lifetime.
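
    The "resistance-request" (stress-strength) assessment mentioned above is commonly based on a normal interference model: failure occurs when demand S exceeds capacity R. A minimal sketch with hypothetical capacity and demand values follows; the numbers are not from the study.

    ```python
    import math

    def stress_strength_reliability(mu_r, sigma_r, mu_s, sigma_s):
        """Reliability for independent normal resistance R and demand S:
        beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2),
        reliability = Phi(beta), probability of failure = Phi(-beta)."""
        beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
        phi = 0.5 * (1 + math.erf(beta / math.sqrt(2)))  # standard normal CDF
        return beta, phi

    # Hypothetical tube: capacity 400 +/- 25 MPa, hoop-stress demand 300 +/- 30 MPa
    beta, rel = stress_strength_reliability(400, 25, 300, 30)
    ```

    A tensile residual stress effectively shifts mu_S upward, which lowers beta and raises the probability of failure, consistent with the abstract's finding about residual stress in early pipe lifetime.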

  15. Analysis of Reliability of CET Band4



    CET Band 4 has been carried out for more than a decade. It has become so large-scaled, so popular and so influential that many testing experts and foreign language teachers are willing to do research on it. In this paper, I will mainly analyse its reliability from the perspective of the writing test and the speaking test.

  16. Bypassing BDD Construction for Reliability Analysis

    Williams, Poul Frederick; Nikolskaia, Macha; Rauzy, Antoine


    In this note, we propose a Boolean Expression Diagram (BED)-based algorithm to compute the minimal p-cuts of Boolean reliability models such as fault trees. BEDs make it possible to bypass the Binary Decision Diagram (BDD) construction, which is the main cost of fault tree assessment.
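
    The BED algorithm itself is not reproduced in the abstract. For context, here is a sketch of the classical top-down (MOCUS-style) expansion that computes minimal cut sets directly, the computation that BED and BDD techniques aim to perform more efficiently; the tree and event names are hypothetical.

    ```python
    def minimal_cut_sets(gates, top):
        """Top-down expansion of a fault tree into minimal cut sets.
        `gates` maps a gate name to ('AND'|'OR', [inputs]); any name not in
        `gates` is a basic event."""
        cut_sets = [frozenset([top])]
        changed = True
        while changed:
            changed = False
            next_sets = []
            for cs in cut_sets:
                gate = next((g for g in cs if g in gates), None)
                if gate is None:          # only basic events left
                    next_sets.append(cs)
                    continue
                changed = True
                op, inputs = gates[gate]
                rest = cs - {gate}
                if op == 'AND':           # AND: all inputs join the cut set
                    next_sets.append(rest | frozenset(inputs))
                else:                     # OR: one new cut set per input
                    next_sets.extend(rest | frozenset([i]) for i in inputs)
            cut_sets = next_sets
        # minimize: drop any cut set that strictly contains another
        return {cs for cs in cut_sets
                if not any(other < cs for other in cut_sets)}

    # Hypothetical tree: TOP = G1 AND e3, with G1 = e1 OR e2
    gates = {'TOP': ('AND', ['G1', 'e3']), 'G1': ('OR', ['e1', 'e2'])}
    mcs = minimal_cut_sets(gates, 'TOP')
    # → {frozenset({'e1', 'e3'}), frozenset({'e2', 'e3'})}
    ```

    The number of intermediate sets can grow exponentially in this naive expansion, which is exactly why decision-diagram representations such as BDDs and BEDs are attractive.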

  17. Reliability Analysis of an Offshore Structure

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Rackwitz, R.


    A jacket-type offshore structure from the North Sea is considered. The time-variant reliability is estimated for failure defined as brittle fracture and crack through the tubular member walls. The stochastic modelling is described. The hot spot stress spectral moments as function of the stochastic...

  18. Java Mission Evaluation Workstation System

    Pettinger, Ross; Watlington, Tim; Ryley, Richard; Harbour, Jeff


    The Java Mission Evaluation Workstation System (JMEWS) is a collection of applications designed to retrieve, display, and analyze both real-time and recorded telemetry data. This software is currently being used by both the Space Shuttle Program (SSP) and the International Space Station (ISS) program. JMEWS was written in the Java programming language to satisfy the requirement of platform independence. An object-oriented design was used to satisfy additional requirements and to make the software easily extendable. By virtue of its platform independence, JMEWS can be used on the UNIX workstations in the Mission Control Center (MCC) and on office computers. JMEWS includes an interactive editor that allows users to easily develop displays that meet their specific needs. The displays can be developed and modified while viewing data. By simply selecting a data source, the user can view real-time, recorded, or test data.

  19. Reliability sensitivity-based correlation coefficient calculation in structural reliability analysis

    Yang, Zhou; Zhang, Yimin; Zhang, Xufang; Huang, Xianzhen


    The correlation coefficients of the random variables of mechanical structures are generally chosen from experience or even ignored, which cannot actually reflect the effects of parameter uncertainties on reliability. To discuss the selection of correlation coefficients from the reliability-based sensitivity point of view, the theoretical principle of the problem is established based on the results of the reliability sensitivity, and a criterion of correlation among random variables is given. The values of the correlation coefficients are obtained according to the proposed principle and the reliability sensitivity problem is discussed. Numerical studies have shown the following results: (1) If the sensitivity value of a correlation coefficient ρ is less than a small threshold (on the order of 0.00001), the correlation can be ignored, which simplifies the procedure without introducing additional error. (2) If the difference between ρs, the coefficient most sensitive to the reliability, and ρR, the coefficient giving the smallest reliability, is less than 0.001, ρs is suggested for modeling the dependency of random variables. This ensures the robust quality of the system without loss of the safety requirement. (3) In the case of |Eabs| > 0.001 and also |Erel| > 0.001, ρR should be employed to quantify the correlation among random variables in order to ensure the accuracy of the reliability analysis. Application of the proposed approach can provide a practical routine for mechanical design and manufacture to study the reliability and reliability-based sensitivity of basic design variables in mechanical reliability analysis and design.

  20. Information workstations in clinical pathology.

    Spackman, K A


    Multitasking operating systems and expanding networks now permit smooth access to remote computers, peripherals, data, and information resources. Graphic user interfaces and productivity-enhancing software packages reduce the need for training and memorization of commands. New models of desktop computers based on "data-centered" software architecture can enhance workstation usefulness even more. Pathologists need to consider how these tools might improve access to and management of information and knowledge.

  1. Development of PSA workstation KIRAP

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong


    The Advanced Research Group of the Korea Atomic Energy Research Institute has been developing the Probabilistic Safety Assessment (PSA) workstation KIRAP since 1992. This report describes the recent development activities for KIRAP. The first is to develop and improve the methodologies for PSA quantification: incorporation of a fault tree modularization technique, improvement of the cut set generation method, development of rule-based recovery, and development of methodology to solve fault trees that have logical loops and to handle fault trees that have several initiators. These methodologies are incorporated in the PSA quantification software KIRAP-CUT. The second is to convert the PSA modeling software, which had been used in the DOS environment since 1987, to Windows. The converted programs are the fault tree editor KWTREE, the event tree editor CONPAS, and the data manager KWDBMAN for event data and common cause failure (CCF) data. The development of the PSA workstation makes PSA modeling, quantification, and automation easier and faster. (author). 8 refs.

  2. Reliability analysis of ceramic matrix composite laminates

    Thomas, David J.; Wetherhold, Robert C.


    At a macroscopic level, a composite lamina may be considered as a homogeneous orthotropic solid whose directional strengths are random variables. Incorporation of these random variable strengths into failure models, either interactive or non-interactive, allows for the evaluation of the lamina reliability under a given stress state. Using a non-interactive criterion for demonstration purposes, laminate reliabilities are calculated assuming previously established load sharing rules for the redistribution of load as laminae fail. The matrix cracking predicted by ACK theory is modeled to allow a loss of stiffness in the fiber direction. The subsequent failure in the fiber direction is controlled by a modified bundle theory. Results using this modified bundle model are compared with previous models, which did not permit separate consideration of matrix cracking, as well as with results obtained from experimental data.

  3. DFTCalc: Reliability centered maintenance via fault tree analysis (tool paper)

    Guck, Dennis; Spel, Jip; Stoelinga, Mariëlle Ida Antoinette; Butler, Michael; Conchon, Sylvain; Zaïdi, Fatiha


    Reliability, availability, maintenance and safety (RAMS) analysis is essential in the evaluation of safety critical systems like nuclear power plants and the railway infrastructure. A widely used methodology within RAMS analysis is fault trees, which represent failure propagations throughout a system.



    Gaguk Margono


    The purpose of this paper is to compare the unidimensional and multidimensional reliability of an instrument measuring students' satisfaction as internal customers. Multidimensional reliability measurement is rarely used in research. Multidimensional reliability is estimated using Confirmatory Factor Analysis (CFA) within a Structural Equation Model (SEM). The measurements and calculations are described in this article using the students' satisfaction instrument. A survey method with simple random sampling was used, and the instrument was tried out on 173 students. It is concluded that the measuring instrument of students' satisfaction as internal customers has higher accuracy with a multidimensional reliability coefficient than with a unidimensional reliability coefficient. Future research should apply other multidimensional reliability formulas, including those based on SEM.

  6. Reliability analysis of PLC safety equipment

    Yu, J.; Kim, J. Y. [Chungnam Nat. Univ., Daejeon (Korea, Republic of)


    This work covers FMEA for nuclear safety grade PLCs, failure rate prediction for nuclear safety grade PLCs, sensitivity analysis of component failure rates of nuclear safety grade PLCs, and unavailability analysis support for nuclear safety systems.

  7. Earth slope reliability analysis under seismic loadings using neural network

    PENG Huai-sheng; DENG Jian; GU De-sheng


    A new method was proposed to address the earth slope reliability problem under seismic loadings. The algorithm integrates the concepts of artificial neural networks, the first order second moment reliability method and the deterministic stability analysis method for earth slopes. The performance function and its derivatives in slope stability analysis under seismic loadings were approximated by a trained multi-layer feed-forward neural network with differentiable transfer functions. The statistical moments calculated from the performance function values and the corresponding gradients obtained from the neural network were then used in the first order second moment method to calculate the reliability index in slope safety analysis. Two earth slope examples were presented to illustrate the applicability of the proposed approach. The new method is effective in slope reliability analysis and has potential application to other reliability problems of complicated engineering structures with a considerably large number of random variables.
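
    The first order second moment step can be sketched in a few lines. A toy analytic limit state stands in for the trained neural network, and gradients come from finite differences; all names and numbers below are illustrative assumptions, not the paper's model.

    ```python
    # Mean-value FOSM sketch: beta = g(mu) / sqrt(sum((dg/dxi * sigma_i)^2)),
    # with gradients by finite differences. Illustrative only.
    import math

    def fosm_beta(g, means, stds, h=1e-6):
        g0 = g(means)
        var = 0.0
        for i, (m, s) in enumerate(zip(means, stds)):
            x = list(means)
            x[i] = m + h
            dgdx = (g(x) - g0) / h      # forward finite difference
            var += (dgdx * s) ** 2
        return g0 / math.sqrt(var)

    # Toy limit state: resisting capacity minus amplified seismic demand.
    g = lambda x: x[0] - 1.5 * x[1]     # x0: resistance, x1: load (assumed)
    beta = fosm_beta(g, means=[10.0, 4.0], stds=[1.0, 0.8])
    print(round(beta, 3))
    ```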

  8. New trends in radiology workstation design

    Moise, Adrian; Atkins, M. Stella


    In radiology workstation design, the race for adding more features is now morphing into an iterative, user-centric design with a focus on ergonomics and usability. The extent of a radiology workstation's feature list used to be one of the most significant factors in a Picture Archiving and Communication System (PACS) vendor's ability to sell it. That is no longer the case: feature lists are now very much the same among the major players in the PACS market. How these features work together is what distinguishes different radiology workstations. Integration (with the PACS/Radiology Information System (RIS) systems, with the 3D tool, reporting tool etc.), usability (user specific preferences, advanced display protocols, smart activation of tools etc.) and efficiency (the output a radiologist can generate with the workstation) are now core factors for selecting a workstation. This paper discusses these new trends in radiology workstation design. We demonstrate the importance of the interaction between the PACS vendor (software engineers) and the customer (radiologists) during the radiology workstation design. We focus on iterative aspects of workstation development, such as the presentation of early prototypes to as many representative users as possible during the software development cycle, and present the results of a survey of 8 radiologists on designing a radiology workstation.

  9. Design and Analysis for Reliability of Wireless Sensor Network

    Yongxian Song


    Reliability is an important performance indicator of wireless sensor networks; for application fields with high reliability demands, it is particularly important to ensure the reliability of the network. There are many research findings on wireless sensor network reliability, but they mainly improve network reliability through network topology, reliable protocols, and application-layer fault correction; comprehensive consideration of reliability from both hardware and software aspects is much rarer. This paper adopts bionic hardware to implement bionic reconfiguration of wireless sensor network nodes, so that the nodes are able to change their structure and behavior autonomously and dynamically when part of the hardware fails, realizing bionic self-healing. Secondly, a Markov state diagram and probability analysis methods are adopted to solve the functional reliability model, establish the relationship between reliability and the characteristic parameters of sink nodes, and analyze the sink node reliability model, so as to determine reasonable model parameters and ensure the reliability of sink nodes.
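
    The simplest instance of the Markov analysis mentioned here is a two-state (up/down) chain for a sink node, whose steady-state availability is mu/(lambda+mu). The rates below are invented for illustration; the paper's model is richer than this sketch.

    ```python
    # Illustrative two-state Markov availability model for a sink node.
    def steady_state_availability(failure_rate, repair_rate):
        """Up/down Markov chain: A = mu / (lambda + mu)."""
        return repair_rate / (failure_rate + repair_rate)

    lam, mu = 0.01, 0.5      # per-hour failure and self-healing rates (assumed)
    print(round(steady_state_availability(lam, mu), 4))  # 0.9804
    ```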

  10. Reliability-Analysis of Offshore Structures using Directional Loads

    Sørensen, John Dalsgaard; Bloch, Allan; Sterndorff, M. J.


    Reliability analyses of offshore structures such as steel jacket platforms are usually performed using stochastic models for the wave loads based on the omnidirectional wave height. However, reliability analyses with respect to structural failure modes such as total collapse of a structure...... heights from the central part of the North Sea. It is described how the stochastic model for the directional wave heights can be used in a reliability analysis where total collapse of offshore steel jacket platforms is considered....

  11. Statistical analysis on reliability and serviceability of caterpillar tractor

    WANG Jinwu; LIU Jiafu; XU Zhongxiang


    To further the understanding of the reliability and serviceability of tractors and to furnish a scientific and technical basis for their promotion and application, experiments and statistical analyses of the reliability (MTBF) and serviceability (MTTR) of the Dongfanghong-1002 and Dongfanghong-802 tractors were conducted. The results showed that the mean times between failures of these two tractors were 182.62 h and 160.2 h, respectively, and that the weakest assembly of both was the engine.
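
    The two field statistics used here reduce to simple ratios. A minimal sketch, with invented log totals (only the 182.62 h MTBF echoes the abstract's figure):

    ```python
    # MTBF and MTTR from operating/repair totals; data below are hypothetical.
    def mtbf(total_operating_hours, n_failures):
        return total_operating_hours / n_failures

    def mttr(total_repair_hours, n_repairs):
        return total_repair_hours / n_repairs

    hours, failures = 3652.4, 20             # assumed field totals
    print(round(mtbf(hours, failures), 2))   # 182.62
    print(round(mttr(50.0, 20), 2))          # 2.5
    ```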

  12. Modified Bayesian Kriging for Noisy Response Problems for Reliability Analysis


    A surrogate model is used for Monte Carlo simulation (MCS) prediction in the reliability analysis for the sampling-based reliability-based design optimization (RBDO) method.

  13. Reliability analysis of large, complex systems using ASSIST

    Johnson, Sally C.


    The SURE reliability analysis program is discussed as well as the ASSIST model generation program. It is found that semi-Markov modeling using model reduction strategies with the ASSIST program can be used to accurately solve problems at least as complex as other reliability analysis tools can solve. Moreover, semi-Markov analysis provides the flexibility needed for modeling realistic fault-tolerant systems.

  14. Evaluating some Reliability Analysis Methodologies in Seismic Design

    A. E. Ghoulbzouri


    Problem statement: Accounting for the uncertainties present in the geometric and material data of reinforced concrete buildings is performed in this study within the context of performance-based seismic engineering design. Approach: The reliability of the expected performance state is assessed using various methodologies based on finite element nonlinear static pushover analysis and a specialized reliability software package. The reliability approaches considered included full coupling with an external finite element code and response-surface-based methods, in conjunction with either the first order reliability method or the importance sampling method. Various types of probability distribution functions modeling parameter uncertainties were introduced. Results: The probability of failure according to the reliability analysis method used and the selected probability distribution was obtained. Convergence analysis of the importance sampling method was performed. The required duration of analysis as a function of the reliability method used was evaluated. Conclusion/Recommendations: Reliability results were found to be sensitive to the reliability analysis method and to the selected probability distribution. Durations of analysis for the coupled methods were higher than those for the response-surface-based methods, although the time needed to derive the response surfaces should be included. For the reinforced concrete building considered in this study, significant variations were found among all the considered reliability methodologies. The fully coupled importance sampling method is recommended, but the first order reliability method applied to a response surface model can be used with good accuracy. Finally, the probability distributions should be carefully identified, since giving only the mean and the standard deviation was found to be insufficient.
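
    One of the compared methods, importance sampling, can be sketched on a toy one-dimensional problem: estimating a small tail probability of a standard normal by sampling from a normal shifted toward the failure region. The setup (threshold, shift, sample count) is illustrative, not the paper's model.

    ```python
    # Crude importance-sampling sketch for a small failure probability.
    import math, random

    random.seed(0)

    def phi(x, mu=0.0):
        """Normal(mu, 1) density."""
        return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

    def is_estimate(threshold, shift, n=100_000):
        """Estimate P(X > threshold), X ~ N(0,1), sampling from N(shift,1)."""
        acc = 0.0
        for _ in range(n):
            y = random.gauss(shift, 1.0)
            if y > threshold:
                acc += phi(y) / phi(y, shift)   # likelihood ratio weight
        return acc / n

    pf = is_estimate(threshold=3.0, shift=3.0)
    print(pf)   # close to 1 - Phi(3) ~ 1.35e-3
    ```

    Centering the proposal on the threshold concentrates samples where failures occur, which is why far fewer samples are needed than with crude Monte Carlo.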

  15. Reliability Distribution of Numerical Control Lathe Based on Correlation Analysis

    Xiaoyan Qi; Guixiang Shen; Yingzhi Zhang; Shuguang Sun; Bingkun Chen


    Combining reliability allocation with correlation analysis, a new method is proposed for reliability allocation that considers the structural correlation and failure correlation of subsystems. First, the subsystems are ranked using TOPSIS, which covers the usual reliability allocation considerations; a Copula connecting function is then introduced to set up an allocation model based on structural correlation, failure correlation, and target correlation, and the reliability target region of each subsystem is obtained with Matlab. In this method, not only the traditional allocation considerations but also correlation influences are taken into account, supplementing the available information and optimizing the allocation.

  16. Reliability and safety analysis of redundant vehicle management computer system

    Shi Jian; Meng Yixuan; Wang Shaoping; Bian Mengmeng; Yan Dungong


    Redundant techniques are widely adopted in vehicle management computers (VMC) to ensure that the VMC has high reliability and safety. At the same time, they give the VMC special characteristics, e.g., failure correlation, event simultaneity, and failure self-recovery. Accordingly, the reliability and safety analysis of the redundant VMC system (RVMCS) becomes more difficult. Aimed at the difficulties in RVMCS reliability modeling, this paper adopts generalized stochastic Petri nets to establish the reliability and safety models of the RVMCS. The paper then analyzes RVMCS operating states and potential threats to the flight control system. It is verified by simulation that the reliability of the VMC is not the product of hardware reliability and software reliability, and that interactions between hardware and software faults can markedly reduce the real reliability of the VMC. Furthermore, failure-undetected states and false-alarm states inevitably exist in the RVMCS due to the limited fault monitoring coverage and the false alarm probability of the fault monitoring devices (FMD). An RVMCS operating in some failure-undetected states poses fatal threats to the safety of the flight control system, while operating in some false-alarm states markedly reduces the utility of the RVMCS. The results of this paper can guide reliable VMC and efficient FMD designs, and the methods adopted can also be used to analyze the reliability of other intelligent systems.

  17. Seismic reliability analysis of large electric power systems

    何军; 李杰


    Based on De Morgan's laws and Boolean simplification, a recursive decomposition method is introduced in this paper to identify the main exclusive safe paths and failed paths of a network. The reliability or the reliability bound of a network can be conveniently expressed as the sum of the joint probabilities of these paths. Under the multivariate normal distribution assumption, a conditioned reliability index method is developed to evaluate the joint probabilities of the various exclusive safe and failed paths and, finally, the seismic reliability or reliability bound of an electric power system. Examples given in the paper show that the method is very simple and provides accurate results in seismic reliability analysis.
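
    The idea of expressing network reliability as a sum of probabilities of mutually exclusive states can be illustrated by brute-force enumeration on a five-link bridge network. This is only a toy stand-in for the paper's recursive decomposition; the topology and link reliabilities are invented.

    ```python
    # Network reliability as a sum over mutually exclusive component states.
    from itertools import product

    # links: s-a, s-b, a-t, b-t, a-b (bridge); reliabilities assumed
    p = [0.9, 0.9, 0.9, 0.9, 0.9]

    def connected(up):
        """Is terminal t reachable from source s given link up/down states?"""
        sa, sb, at, bt, ab = up
        a = sa                        # node a reachable directly
        b = sb or (a and ab)          # node b reachable, possibly via bridge
        a = a or (b and ab)           # node a reachable via bridge from b
        return bool((a and at) or (b and bt))

    R = 0.0
    for state in product([0, 1], repeat=5):
        pr = 1.0
        for up_i, p_i in zip(state, p):
            pr *= p_i if up_i else (1.0 - p_i)
        if connected(state):
            R += pr                   # exclusive-state probabilities sum to R
    print(round(R, 6))
    ```

    The 32 states are mutually exclusive and exhaustive, so summing the probabilities of the connected ones gives the exact two-terminal reliability; the recursive decomposition in the paper achieves the same result far more efficiently by grouping states into disjoint paths.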

  18. Simulation Approach to Mission Risk and Reliability Analysis Project

    National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

  19. Reliability analysis of ship structure system with multi-defects


    This paper analyzes the influence of multiple defects, including initial distortions, welding residual stresses, cracks and local dents, on the ultimate strength of the plate element, and derives expressions for the reliability calculation and sensitivity analysis of the plate element. A reliability analysis is made for the system of plate elements with multiple defects. The failure mechanism, the failure paths and the approach to calculating the global reliability index are also worked out. After plate elements with multiple defects fail, the formula for the reverse node forces that affect the residual structure is deduced, as are the sensitivity expressions for the system reliability index. This ensures the accuracy and rationality of the reliability analysis, and makes it convenient to find the weak plate elements that affect the reliability of the structural system. Finally, to validate the proposed approach, we take the numerical example of a ship cabin and compare the reliability and sensitivity analysis of the structural system with multiple defects against those of the structural system with no defects. The approach has implications for structural design and for rational maintenance and renewal strategies.

  20. Requalification of offshore structures. Reliability analysis of platform

    Bloch, A.; Dalsgaard Soerensen, J. [Aalborg Univ. (Denmark)


    A preliminary reliability analysis has been performed for an example platform. In order to model the structural response such that reliability indices can be calculated, approximate quadratic response surfaces have been determined for the cross-sectional forces. Based on a deterministic, code-based analysis, the elements and joints expected to be the most critical are selected and response surfaces are established for the cross-sectional forces in them. A stochastic model is established for the uncertain variables. The reliability analysis shows that with this stochastic model the smallest reliability indices for elements are about 3.9. The reliability index for collapse (pushover) is estimated at 6.7, and the reliability index for fatigue failure, using a crude model, is estimated at 3.2 for the expected most critical detail, corresponding to the accumulated damage during the design lifetime of the platform. These reliability indices are considered reasonable compared with values recommended by e.g. ISO. The most important stochastic variables are found to be the wave height and the drag coefficient (including the model uncertainty related to the estimation of wave forces on the platform). (au)
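
    The reported reliability indices map to failure probabilities through the standard relation Pf = Phi(-beta), with Phi the standard normal CDF. A quick sketch using the indices quoted above:

    ```python
    # Converting a reliability index beta to a failure probability.
    import math

    def pf_from_beta(beta):
        """Pf = Phi(-beta), via the complementary error function."""
        return 0.5 * math.erfc(beta / math.sqrt(2.0))

    for beta in (3.2, 3.9, 6.7):      # fatigue, element, pushover indices
        print(beta, f"{pf_from_beta(beta):.2e}")
    ```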

  1. Maritime shipping as a high reliability industry: A qualitative analysis

    Mannarelli, T.; Roberts, K.; Bea, R.


    The maritime oil shipping industry has great public demands for safe and reliable organizational performance. Researchers have identified a set of organizations and industries that operate at extremely high levels of reliability, and have labelled them High Reliability Organizations (HRO). Following the Exxon Valdez oil spill disaster of 1989, public demands for HRO-level operations were placed on the oil industry. It will be demonstrated that, despite enormous improvements in safety and reliability, maritime shipping is not operating as an HRO industry. An analysis of the organizational, environmental, and cultural history of the oil industry will help to provide justification and explanation. The oil industry will be contrasted with other HRO industries and the differences will inform the shortfalls maritime shipping experiences with regard to maximizing reliability. Finally, possible solutions for the achievement of HRO status will be offered.

  2. Reliability Analysis of OMEGA Network and Its Variants

    Suman Lata


    The performance of a computer system depends directly on the time required to perform a basic operation and the number of these basic operations that can be performed concurrently. High performance computing systems can be designed using parallel processing, which is achieved by using more than one processor or computer working together and communicating to solve a given problem. MINs provide a better way for communication between different processors or memory modules, with less complexity, fast communication, good fault tolerance, high reliability and low cost. The reliability of a system is the probability that it will successfully perform its intended operations for a given time under stated operating conditions. From the reliability analysis it has been observed that the addition of one stage to an Omega network provides higher terminal reliability than the addition of two stages to the corresponding network.
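
    A back-of-envelope version of the terminal-reliability comparison: an N x N Omega network has a unique path through log2(N) switches, while an extra-stage variant offers two alternative paths. Treating the two paths as disjoint is a simplification for illustration only, not the paper's model, and the switch reliability is invented.

    ```python
    # Terminal reliability: unique path vs. two (assumed disjoint) paths.
    import math

    def tr_omega(n, r):
        """Unique path through log2(n) switches in series."""
        return r ** int(math.log2(n))

    def tr_extra_stage(n, r):
        """Two alternative paths of log2(n)+1 switches, assumed disjoint."""
        path = r ** (int(math.log2(n)) + 1)
        return 1.0 - (1.0 - path) ** 2

    n, r = 16, 0.95                  # 16x16 network, switch reliability 0.95
    print(round(tr_omega(n, r), 4), round(tr_extra_stage(n, r), 4))
    ```

    Even under this crude model, the redundancy of the extra stage outweighs the longer path, consistent with the abstract's observation.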

  3. Discrete event simulation versus conventional system reliability analysis approaches

    Kozine, Igor


    Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models...... and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author’s experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches...

  4. Comparison Of Digital Workstations And Conventional Reading For Evaluation Of User Interfaces In Digital Radiology

    McNeill, Kevin M.; Seeley, George W.; Maloney, Kris; Fajardo, Laurie; Kozik, Mark


    The User Interface Study Group at the University of Arizona is investigating the interaction of radiologists with digital workstations. Using the Arizona Viewing Console we have conducted an experiment to compare a digital workstation with a particular conventional reading process used for cases from a local Health Maintenance Organization. A model consisting of three distinct phases of activity was developed to describe the conventional reading process. From this model, software was developed for the Arizona Viewing Console to approximate the process. Radiologists were then videotaped reading similar sets of cases at each workstation, and the tapes were analyzed for the frequency of hand movements and the time required for each phase of the process. This study provides a comparison between conventional reading and a digital workstation. This paper describes the reading process, the model and its approximation on the digital workstation, as well as the analysis of the videotapes.

  5. A prototype integrated medical workstation environment

    E.M. van Mulligen (Erik); T. Timmers (Teun); F. van den Heuvel (F.); J.H. van Bemmel (Jan)


    In this paper the requirements, design, and implementation of a prototype integrated medical workstation environment are outlined. The aim of the workstation is to provide user-friendly, task-oriented support for clinicians, based on existing software and data. The prototy

  6. The Electronic Library Workstation--Today.

    Nolte, James


    Describes the components--hardware, software and applications, CD-ROM and online reference resources, and telecommunications links--of an electronic library workstation in use at Clarkson University (Potsdam, New York). Data manipulation, a hypothetical research scenario, and recommended workstation capabilities are also discussed. (MES)

  7. Seismic reliability analysis of urban water distribution network

    Li Jie; Wei Shulin; Liu Wei


    An approach to analyze the seismic reliability of water distribution networks by combining a hydraulic analysis with a first-order reliability method (FORM) is proposed in this paper. The hydraulic analysis method for normal conditions is modified to accommodate the special conditions necessary to perform a seismic hydraulic analysis. In order to calculate the leakage area and leaking flow of the pipelines in the hydraulic analysis, a new leakage model established from the seismic response analysis of buried pipelines is presented. To validate the proposed approach, a network with 17 nodes and 24 pipelines is investigated in detail. The approach is also applied to an actual project consisting of 463 nodes and 767 pipelines. The results show that the proposed approach achieves satisfactory results in analyzing the seismic reliability of large-scale water distribution networks.

  8. A Passive System Reliability Analysis for a Station Blackout

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David; Sofu, Tanju; Grelle, Austin


    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  9. Reliability Analysis of Dynamic Stability in Waves

    Søborg, Anders Veldt


    exhibit sufficient characteristics with respect to slope at zero heel (GM value), maximum leverarm, positive range of stability and area below the leverarm curve. The rule-based requirements to calm water leverarm curves are entirely based on experience obtained from vessels in operation and recorded......-4 per ship year such brute force Monte-Carlo simulations are not always feasible due to the required computational resources. Previous studies of dynamic stability of ships in waves typically focused on the capsizing event. In this study the objective is to establish a procedure that can identify...... the distribution of the exceedance probability may be established by an estimation of the out-crossing rate of the "safe set" defined by the utility function. This out-crossing rate will be established using the so-called Madsen's Formula. A bi-product of this analysis is a set of short wave time series...

  10. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    Berzonskis, Arvydas; Sørensen, John Dalsgaard


    in the volume of the casted ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed....... of operation and maintenance. The manufacturing of casted drivetrain components, like the main shaft of the wind turbine, commonly result in many smaller defects through the volume of the component with sizes that depend on the manufacturing method. This paper considers the effect of the initial defect present...

  11. VMWare Workstation 6.5


    VMware Workstation 6.5 has recently been officially released. The biggest highlights of this version are support for 3D acceleration in virtual environments and the newly added Unity mode. The former gives virtual environments better performance and presentation, while the latter provides a seamless application mode between the virtual machine and the host. Usability has also improved considerably: there is now a better graphical configuration interface under Linux, and NAT network performance, file sharing, and USB device performance have all been greatly improved.

  12. Analysis on Operation Reliability of Generating Units in 2009



    This paper presents data on operational reliability indices, and the corresponding analyses, for China's conventional power generating units in 2009. The units included in the statistical analysis are thermal generating units of 100 MW or above, hydro generating units of 40 MW or above, and all nuclear generating units. The reliability indices include utilization hours, times and durations of scheduled outages, times and durations of unscheduled outages, equivalent forced outage rate and equivalent availability factor.
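
    The two headline indices can be computed from outage totals using their standard definitions. The formulas below are the conventional ones; the outage numbers are invented for illustration and are not from the report.

    ```python
    # Standard definitions of EAF and EFOR, with hypothetical outage totals.
    def equivalent_availability_factor(period_h, outage_h, equiv_derated_h):
        """EAF (%) = available energy fraction over the period."""
        return (period_h - outage_h - equiv_derated_h) / period_h * 100.0

    def equivalent_forced_outage_rate(forced_outage_h, equiv_forced_derated_h,
                                      service_h):
        """EFOR (%) = forced-outage share of demanded time."""
        return (forced_outage_h + equiv_forced_derated_h) / (
            service_h + forced_outage_h) * 100.0

    print(round(equivalent_availability_factor(8760, 500, 100), 2))
    print(round(equivalent_forced_outage_rate(120, 30, 7000), 2))
    ```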

  13. Evolving technologies for Space Station Freedom computer-based workstations

    Jensen, Dean G.; Rudisill, Marianne


    Viewgraphs on evolving technologies for Space Station Freedom computer-based workstations are presented. The human-computer computer software environment modules are described. The following topics are addressed: command and control workstation concept; cupola workstation concept; Japanese experiment module RMS workstation concept; remote devices controlled from workstations; orbital maneuvering vehicle free flyer; remote manipulator system; Japanese experiment module exposed facility; Japanese experiment module small fine arm; flight telerobotic servicer; human-computer interaction; and workstation/robotics related activities.

  14. Reliability analysis and initial requirements for FC systems and stacks

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
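
    As a first-order sketch of the 5 x 5 example configuration (a series chain of 5 groups, each with 5 parallel stacks), one can compute the system reliability under a pure-redundancy reading where a group survives if any of its stacks does. The paper's qualitative model also has partially failed states and state-dependent failure rates, which this sketch ignores; the stack reliability value is invented.

    ```python
    # Series-of-parallel reliability, assuming independent stacks and pure
    # redundancy within each group (a simplification of the paper's model).
    def series_of_parallel(r_stack, n_parallel=5, n_series=5):
        r_group = 1.0 - (1.0 - r_stack) ** n_parallel  # any stack keeps group up
        return r_group ** n_series                      # all groups must survive

    print(round(series_of_parallel(0.9), 6))
    ```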

  15. Coverage Modeling and Reliability Analysis Using Multi-state Function


    Fault tree analysis is an effective method for predicting the reliability of a system. It gives a pictorial representation and a logical framework for analyzing reliability, and has long been used for the quantitative and qualitative analysis of the failure modes of critical systems. In this paper, we propose a new general coverage model (GCM) based on hardware independent faults. Using this model, an effective software tool can be constructed to detect, locate and recover faults in a faulty system. The model can also be applied to identify the key components whose failure can cause system failure, using failure mode effect analysis (FMEA).

  16. Reliability analysis of flood defence systems in the Netherlands

    Lassing, B.L.; Vrouwenvelder, A.C.W.M.; Waarts, P.H.


    In recent years an advanced program for reliability analysis of dike systems has been under development in the Netherlands. This paper describes the global data requirements for application and the set-up of the models in the Netherlands. The analysis generates an estimate of the probability of sys

  17. Recent advances in computational structural reliability analysis methods

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.


    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
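
For a single failure mode with a linear limit state g = R - S and independent normal resistance R and load S, the failure probability follows directly from the reliability index. A minimal sketch with hypothetical moments:

```python
from math import sqrt, erf

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def failure_probability(mu_r, sd_r, mu_s, sd_s):
    """Reliability index and failure probability for g = R - S with
    independent normal R and S: beta = (mu_R - mu_S)/sqrt(sd_R^2 + sd_S^2),
    P_f = Phi(-beta)."""
    beta = (mu_r - mu_s) / sqrt(sd_r**2 + sd_s**2)
    return beta, normal_cdf(-beta)

# Hypothetical resistance and load statistics.
beta, pf = failure_probability(mu_r=400.0, sd_r=40.0, mu_s=250.0, sd_s=30.0)
```

This exact solution is the baseline against which the advanced methods surveyed in the paper (multiple modes, system effects) are needed.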

  18. On reliability analysis of multi-categorical forecasts

    J. Bröcker


    Reliability analysis of probabilistic forecasts, in particular through the rank histogram or Talagrand diagram, is revisited. Two shortcomings are pointed out: firstly, a uniform rank histogram is but a necessary condition for reliability; secondly, if the forecast is assumed to be reliable, an indication is needed of how far a histogram is expected to deviate from uniformity merely due to randomness. Concerning the first shortcoming, it is suggested that forecasts be grouped or stratified along suitable criteria, and that reliability be analyzed individually for each forecast stratum. A reliable forecast should have uniform histograms for all individual forecast strata, not only for all forecasts as a whole. As to the second shortcoming, instead of the observed frequencies, the probability of the observed frequency is plotted, providing an indication of the likelihood of the result under the hypothesis that the forecast is reliable. Furthermore, a goodness-of-fit statistic is discussed which is essentially the reliability term of the Ignorance score. The discussed tools are applied to medium-range forecasts of 2 m temperature anomalies at several locations and lead times. The forecasts are stratified along the expected ranked probability score. Those forecasts which feature a high expected score turn out to be particularly unreliable.
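
A rank histogram of the kind discussed here is built by counting, for each forecast case, how many ensemble members fall below the verifying observation. The sketch below uses synthetic data in which the forecast is reliable by construction (observation and members drawn from the same distribution), so the histogram should be close to uniform:

```python
import random

def rank_histogram(ensembles, observations):
    """For each (ensemble, observation) pair, record the rank of the
    observation among the members; ranks run from 0 to n_members."""
    n = len(ensembles[0])
    counts = [0] * (n + 1)
    for members, obs in zip(ensembles, observations):
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    return counts

# Synthetic reliable forecast: 9-member ensembles, 5000 cases.
random.seed(0)
ens = [[random.gauss(0, 1) for _ in range(9)] for _ in range(5000)]
obs = [random.gauss(0, 1) for _ in range(5000)]
counts = rank_histogram(ens, obs)
```

As the paper notes, even here the counts deviate from exactly 500 per bin purely by chance, which is what the plotted probability of the observed frequency is meant to quantify.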




    The introduction of pervasive and mobile devices has led to immense growth of real-time distributed processing. In such a context, the reliability of the computing environment is very important. Reliability is the probability that the devices, links, processes, programs and files work efficiently for the specified period of time and under the specified conditions. Distributed systems are available as conventional ring networks, clusters and agent-based systems; the reliability of such systems is the focus of this work. These networks are heterogeneous and scalable in nature. Several factors must be considered for reliability estimation. These include application-related factors such as algorithms, data-set sizes, memory usage patterns, input-output and communication patterns, task granularity and load-balancing; hardware-related factors such as processor architecture, memory hierarchy, input-output configuration and network; and software-related factors such as operating systems, compilers, communication protocols, libraries and preprocessor performance. Performance estimation is an important aspect of estimating the reliability of a system, and reliability analysis is approached using probability.

  20. The development of a reliable amateur boxing performance analysis template.

    Thomson, Edward; Lamb, Kevin; Nicholas, Ceri


    The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and assess its reliability using analysts of varying experience of the sport and performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%), regardless of whether exact agreement or agreement within the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.

  1. Reliability analysis of cluster-based ad-hoc networks

    Cook, Jason L. [Quality Engineering and System Assurance, Armament Research Development Engineering Center, Picatinny Arsenal, NJ (United States); Ramirez-Marquez, Jose Emmanuel [School of Systems and Enterprises, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States)], E-mail:


    The mobile ad-hoc wireless network (MAWN) is a new and emerging network scheme that is being employed in a variety of applications. The MAWN varies from traditional networks because it is a self-forming and dynamic network. The MAWN is free of infrastructure and, as such, only the mobile nodes comprise the network. Pairs of nodes communicate either directly or through other nodes. To do so, each node acts, in turn, as a source, destination, and relay of messages. The virtue of a MAWN is the flexibility this provides; however, this unique feature also poses a challenge for reliability analysis. The variability and volatility of the MAWN configuration make typical reliability methods (e.g. reliability block diagrams) inappropriate because no single structure or configuration represents all manifestations of a MAWN. For this reason, new methods are being developed to analyze the reliability of this new networking technology. Newly published methods adapt to this feature by treating the configuration probabilistically or by inclusion of embedded mobility models. This paper joins both methods together and expands upon these works by modifying the problem formulation to address the reliability analysis of a cluster-based MAWN. The cluster-based MAWN is deployed in applications with constraints on networking resources such as bandwidth and energy. This paper presents the problem's formulation, a discussion of applicable reliability metrics for the MAWN, and an illustration of a Monte Carlo simulation method through the analysis of several example networks.
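
The Monte Carlo approach described can be sketched as repeated sampling of link states followed by a connectivity check. This toy example treats only independent link failures (no mobility model or clustering) on a hypothetical 4-node network:

```python
import random

def connected(n_nodes, up_links, s, t):
    """Graph search over the surviving links: is t reachable from s?"""
    adj = {i: [] for i in range(n_nodes)}
    for a, b in up_links:
        adj[a].append(b)
        adj[b].append(a)
    seen, frontier = {s}, [s]
    while frontier:
        node = frontier.pop()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return t in seen

def two_terminal_reliability(n_nodes, links, p_up, s, t, trials=20_000, seed=1):
    """Estimate P(s and t can communicate) when each link is up
    independently with probability p_up -- a stand-in for the
    probabilistic configuration treatment described in the abstract."""
    rng = random.Random(seed)
    ok = sum(
        connected(n_nodes, [l for l in links if rng.random() < p_up], s, t)
        for _ in range(trials)
    )
    return ok / trials

# Hypothetical 4-node topology with 5 links.
links = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
r = two_terminal_reliability(4, links, p_up=0.9, s=0, t=3)
```

Extending the sampler so that link-up probabilities depend on node positions is essentially how an embedded mobility model enters the analysis.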

  2. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Jin Zhu


    This paper investigates reliability analysis of wireless sensor networks whose topology is switching among possible connections governed by a Markov chain. We give the quantitative relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying the Lyapunov method, sufficient conditions for network reliability are proposed for such topology-switching networks with constant or varying data acquisition rates. With the conditions satisfied, the quantity of data transported over a wireless network node will not exceed the node's capacity, so that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, which may find application in the fields of network design and topology control.
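
The long-run fraction of time the network spends in each topology is given by the stationary distribution of the governing Markov chain, which a simple power iteration recovers. The two-state transition matrix below is hypothetical (state 0 = fully connected, state 1 = degraded):

```python
def stationary_distribution(P, iters=200):
    """Power iteration for the stationary distribution of a finite
    Markov chain with row-stochastic transition matrix P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical switching behaviour between two topologies.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary_distribution(P)
```

Here the chain settles to spending 5/6 of the time in the fully connected topology; weighting per-topology data throughput by such stationary probabilities is one simple way to connect topology switching to average network reliability.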

  3. Reliability of the Emergency Severity Index: Meta-analysis

    Amir Mirhaghi


    Objectives: Although triage systems based on the Emergency Severity Index (ESI) have many advantages in terms of simplicity and clarity, previous research has questioned their reliability in practice. Therefore, the aim of this meta-analysis was to determine the reliability of ESI triage scales. Methods: This meta-analysis was performed in March 2014. Electronic research databases were searched and articles conforming to the Guidelines for Reporting Reliability and Agreement Studies were selected. Two researchers independently examined selected abstracts. Data were extracted in the following categories: version of scale (latest/older), participants (adult/paediatric), raters (nurse, physician or expert), method of reliability (intra/inter-rater), reliability statistics (weighted/unweighted kappa) and the origin and publication year of the study. The effect size was obtained by the Z-transformation of reliability coefficients. Data were pooled with random-effects models and a meta-regression was performed based on the method of moments estimator. Results: A total of 19 studies from six countries were included in the analysis. The pooled coefficient for the ESI triage scales was substantial at 0.791 (95% confidence interval: 0.787-0.795). Agreement was higher with the latest and adult versions of the scale and among expert raters, compared to agreement with older and paediatric versions of the scales and with other groups of raters, respectively. Conclusion: ESI triage scales showed an acceptable level of overall reliability. However, ESI scales require more development in order to see full agreement from all rater groups. Further studies concentrating on other aspects of reliability assessment are needed.
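
The Z-transformation pooling step can be sketched as follows. This uses fixed-effect inverse-variance weights (w = n - 3) rather than the random-effects model of the paper, and the coefficients and sample sizes are hypothetical:

```python
import math

def pool_reliability(coeffs, ns):
    """Pool reliability coefficients via Fisher's z-transformation,
    weighting each study by n - 3 (inverse variance of z), then
    back-transform the weighted mean to the coefficient scale."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in coeffs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)  # inverse of the z-transformation

# Hypothetical kappa estimates from three studies with their sample sizes.
pooled = pool_reliability([0.75, 0.82, 0.79], [100, 250, 150])
```

A random-effects version would add a between-study variance component (estimated, e.g., by the method of moments) to each weight's denominator.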

  4. Statistical models and methods for reliability and survival analysis

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo


    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  5. Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

    Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William


    This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk- and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods needed to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision making environment sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
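
A standard building block in Bayesian reliability data analysis of this kind is the conjugate beta-binomial update for a demand-based failure probability. A minimal sketch with a Jeffreys prior and hypothetical data (not taken from the document):

```python
def beta_posterior(alpha0, beta0, failures, demands):
    """Conjugate update: a Beta(alpha0, beta0) prior on the per-demand
    failure probability combined with binomial data (failures out of
    demands) gives a Beta posterior."""
    return alpha0 + failures, beta0 + demands - failures

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Jeffreys prior Beta(0.5, 0.5); hypothetical data: 2 failures in 100 demands.
a, b = beta_posterior(0.5, 0.5, failures=2, demands=100)
post_mean = beta_mean(a, b)
```

Because the posterior is again a beta distribution, updates from new operating experience are just further additions to the two parameters.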

  6. Notes on numerical reliability of several statistical analysis programs

    Landwehr, J.M.; Tasker, Gary D.


    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.

  7. Distribution System Reliability Analysis for Smart Grid Applications

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect of modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repair and loss. To address these reliability concerns, power utilities and interested parties have spent an extensive amount of time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers, where most electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in conducting this thesis is the IEEE 34-node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of the installation of Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement in its reliability. The software used in this work is DISREL, an intelligent power distribution software package developed by General Reliability Co.
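
The SAIDI and SAIFI indices mentioned above are simple ratios over a feeder's interruption history. A minimal sketch with hypothetical interruption events (CAIDI is included as the derived ratio; EUE would additionally require the interrupted load):

```python
def reliability_indices(interruptions, total_customers):
    """SAIFI and SAIDI from a list of (customers_affected, duration_hours)
    interruption events over the study period; CAIDI = SAIDI / SAIFI."""
    saifi = sum(c for c, _ in interruptions) / total_customers
    saidi = sum(c * d for c, d in interruptions) / total_customers
    caidi = saidi / saifi if saifi else 0.0
    return saifi, saidi, caidi

# Hypothetical year on a 1,000-customer feeder: three interruptions.
events = [(400, 2.0), (150, 0.5), (1000, 1.0)]
saifi, saidi, caidi = reliability_indices(events, 1000)
```

Placing an automatic switch that halves the customers affected by an event reduces both indices, which is exactly the kind of before/after comparison the thesis performs.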

  8. Reliability analysis of retaining walls with multiple failure modes

    张道兵; 孙志彬; 朱川曲


    To reduce the errors in retaining wall reliability analysis that arise from the formulation of the performance function, the estimation of parameters and the choice of algorithm, two new reliability and stability models of anti-slipping and anti-overturning failure, based on the upper-bound theory of limit analysis, were first established, and the two failure modes were treated as a series system with multiple correlated failure modes. Then, the statistical characteristics of the parameters of the retaining wall structure were inferred by the maximal entropy principle. Finally, the structural reliabilities of the single failure modes and of the multiple failure modes were calculated by the Monte Carlo method in MATLAB, and the results were compared and analyzed for sensitivity. The method has high precision, is easy to program and quick to calculate, and is not limited to nonlinear functions or non-normal random variables. The results of this method, which combines limit analysis theory, the maximal entropy principle and the Monte Carlo method in analyzing the reliability of retaining wall structures, are more scientific, accurate and reliable than those of the traditional method.
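
The Monte Carlo step for a series system with two failure modes can be sketched as below. For simplicity the two safety margins are sampled as independent normals with hypothetical moments, whereas the paper treats the modes as correlated:

```python
import random

def system_failure_probability(trials=100_000, seed=42):
    """Crude Monte Carlo for a series system with two failure modes
    (sliding and overturning): the system fails if either safety
    margin goes negative. Margins are illustrative normals."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        g_slide = rng.gauss(3.0, 1.0)  # illustrative margin, beta ~ 3.0
        g_turn = rng.gauss(2.5, 1.0)   # illustrative margin, beta ~ 2.5
        if g_slide < 0 or g_turn < 0:
            failures += 1
    return failures / trials

pf = system_failure_probability()
```

With correlated modes, the two margins would be drawn from a joint distribution instead; the failure test itself is unchanged, which is what makes Monte Carlo attractive here.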

  9. The function and effect analysis of the clinic doctor workstation

    衣晓燕; 王晓英


    Objective: To shorten patients' waiting time and achieve electronic, information-based management of diagnosis and treatment. Methods: Relying on the hospital information system, advanced computer network management was applied to outpatient visits. Results: The application of the clinic doctor workstation improved work efficiency, raised the hospital's informatization and digitization to a new level, and provided complete information for doctors and patients. Conclusion: This approach is suitable for adoption in hospitals with an existing information infrastructure and greatly improves the hospital's overall level of service and management.

  10. Reliability Analysis of a Green Roof Under Different Storm Scenarios

    William, R. K.; Stillwell, A. S.


    Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof compared to a conventional roof under different storm scenarios. We used the 2D distributed surface water-groundwater coupled model MIKE SHE to model the impact that a real green roof might have on runoff in different storm events, then employed a multiple regression analysis to generate an algebraic demand model, which was input into the Matlab-based reliability analysis model FERUM to calculate the probability of failure. The use of reliability analysis as part of green infrastructure design code can provide insights into green roof weaknesses and areas for improvement. It also supports the design of code that is more resilient than current standards and is easily testable for failure. Finally, understanding the reliability of a single green roof module under different scenarios can support holistic testing of system reliability.
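
A common parametric form for such fragility curves is a lognormal CDF in the demand measure. A minimal sketch with a hypothetical median capacity and dispersion for a green roof, evaluated at two storm depths:

```python
from math import log, sqrt, erf

def fragility(demand, median, beta):
    """Lognormal fragility: probability of failure at a given demand,
    parameterized by median capacity and logarithmic dispersion beta."""
    z = (log(demand) - log(median)) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF of z

# Hypothetical parameters: median capacity 50 mm of rainfall, dispersion 0.4.
p30 = fragility(30.0, 50.0, 0.4)  # modest storm
p80 = fragility(80.0, 50.0, 0.4)  # severe storm
```

Fitting the median and dispersion to the FERUM failure probabilities at several storm intensities would reproduce a curve of this shape.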

  11. Semigroup Method for a Mathematical Model in Reliability Analysis

    Geni Gupur; LI Xue-zhi


    The system, which consists of a reliable machine, an unreliable machine and a storage buffer with infinitely many workpieces, has been studied. The existence of a unique positive time-dependent solution of the model corresponding to the system has been obtained by using the C0-semigroup theory of linear operators in functional analysis.

  12. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard


    . A complex timber structure with a large number of failure modes is modelled with only a few dominant failure modes. First, a component based robustness analysis is performed based on the reliability indices of the remaining elements after the removal of selected critical elements. The robustness...

  13. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard;


    This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness and a simplified mechanical system modelling of a timber truss system. A comp...

  14. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Abdel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.


    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and explores new opportunities in the use of surrogate models, extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  15. Test-retest reliability of trunk accelerometric gait analysis

    Henriksen, Marius; Lund, Hans; Moe-Nilssen, R


    The purpose of this study was to determine the test-retest reliability of a trunk accelerometric gait analysis in healthy subjects. Accelerations were measured during walking using a triaxial accelerometer mounted on the lumbar spine of the subjects. Six men and 14 women (mean age 35.2; range 18...

  16. A Next Generation BioPhotonics Workstation

    Glückstad, Jesper; Palima, Darwin; Tauro, Sandeep


    We are developing a Next Generation BioPhotonics Workstation to be applied in research on regulated microbial cell growth including their underlying physiological mechanisms, in vivo characterization of cell constituents and manufacturing of nanostructures and meta-materials....

  17. Colour hard-copy from workstation screens

    Clayton, C. A.

    It is possible to produce a colour print on the DEC LJ250 inkjet printer of either the entire screen or a portion of the screen from VAXstations, DECstations, SUN workstations and the IKON image display. This document describes how to achieve this with each of the above workstations. The IKONPAINT software which is used to produce colour hard-copy from the IKON screen on the inkjet printer is fully documented in SUN/71 and is not described here.

  18. Next Generation BioPhotonics Workstation

    Glückstad, Jesper


    We are developing a Next Generation BioPhotonics Workstation to be applied in research on regulated microbial cell growth including their underlying physiological mechanisms, in vivo characterization of cell constituents and manufacturing of nanostructures and meta-materials.

  19. Conversion of the Aeronautics Interactive Workstation

    Riveras, Nykkita L.


    This summer I am working in the Educational Programs Office. My task is to convert the Aeronautics Interactive Workstation from a Macintosh (Mac) platform to a Personal Computer (PC) platform. The Aeronautics Interactive Workstation is a workstation in the Aerospace Educational Laboratory (AEL), which is one of the three components of the Science, Engineering, Mathematics, and Aerospace Academy (SEMAA). The AEL is a state-of-the-art, electronically enhanced, computerized classroom that puts cutting-edge technology at the fingertips of participating students. It provides a unique learning experience regarding aerospace technology that features activities equipped with aerospace hardware and software that model real-world challenges. The Aeronautics Interactive Workstation, in particular, offers a variety of activities pertaining to the history of aeronautics. When the Aeronautics Interactive Workstation was first implemented into the AEL it was designed with Macromedia Director 4 for a Mac. Today it is being converted to Macromedia Director MX 2004 for a PC. Macromedia Director is the proven multimedia tool for building rich content and applications for CDs, DVDs, kiosks, and the Internet. It handles the widest variety of media and offers powerful features for building rich content that delivers real results, integrating interactive audio, video, bitmaps, vectors, text, fonts, and more. Macromedia Director currently offers two programming/scripting languages: Lingo, which is Director's own programming/scripting language, and JavaScript. In the workstation, Lingo is used in the programming/scripting since it was the only language in use when the workstation was created. Since the workstation was created with an older version of Macromedia Director it hosted significantly different programming/scripting protocols. In order to successfully accomplish my task, the final product required correction of Xtra and programming/scripting errors. 
I also had to convert the Mac platform

  20. Human Reliability Analysis for Digital Human-Machine Interfaces

    Ronald L. Boring


    This paper addresses the fact that existing human reliability analysis (HRA) methods do not provide guidance on digital human-machine interfaces (HMIs). Digital HMIs are becoming ubiquitous in nuclear power operations, whether through control room modernization or new-build control rooms. Legacy analog technologies like instrumentation and control (I&C) systems are costly to support, and vendors no longer develop or support analog technology, which is considered technologically obsolete. Yet, despite the inevitability of digital HMI, no current HRA method provides guidance on how to treat human reliability considerations for digital technologies.

  1. Modelling application for cognitive reliability and error analysis method

    Fabio De Felice


    The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the automatic factory model has led to partially automated configurations of production systems. In this scenario, the centrality and responsibility of the role entrusted to human operators are heightened, because the role requires problem-solving and decision-making ability. Thus, the human operator is the core of a cognitive process that leads to decisions, influencing the safety of the whole system as a function of his or her reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  2. Classification using least squares support vector machine for reliability analysis

    Zhi-wei GUO; Guang-chen BAI


    In order to improve the efficiency of the support vector machine (SVM) for classification when dealing with a large number of samples, the least squares support vector machine (LSSVM) classification method is introduced into reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic programming problem to a group of linear equations. The numerical results indicate that the reliability method based on the LSSVM classifier has higher accuracy and requires less computational cost than the SVM method.
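
The core idea, replacing the SVM's quadratic program with one linear system, can be sketched for a linear kernel as follows (Suykens-style LS-SVM formulation; the toy data are hypothetical):

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """LS-SVM classifier training: solve the single linear system
    [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]
    with Omega_ij = y_i y_j K(x_i, x_j), here a linear kernel."""
    n = len(y)
    K = X @ X.T                       # linear kernel matrix
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)     # one linear solve, no QP
    return sol[0], sol[1:]            # bias b, multipliers alpha

def lssvm_predict(X_train, y, b, alpha, X_new):
    """Decision function sign(sum_i alpha_i y_i K(x_i, x) + b)."""
    return np.sign(X_new @ X_train.T @ (alpha * y) + b)

# Toy separable data standing in for safe (+1) / failed (-1) samples.
X = np.array([[1.0, 1.0], [1.2, 0.8], [-1.0, -1.0], [-0.8, -1.2]])
y = np.array([1.0, 1.0, -1.0, -1.0])
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, y, b, alpha, X)
```

In a reliability setting the classifier separates safe from failed samples of the limit state, so that a cheap surrogate classification replaces expensive limit-state evaluations during Monte Carlo sampling.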

  3. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Swain, A.D.


    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

  4. Strength Reliability Analysis of Turbine Blade Using Surrogate Models

    Wei Duan


    There are many stochastic parameters that have an effect on the reliability of steam turbine blade performance in practical operation. In order to improve the reliability of blade design, it is necessary to take these stochastic parameters into account. In this study, a variable cross-section twisted blade is investigated, and geometrical parameters, material parameters and load parameters are considered as random variables. A reliability analysis method combining a Finite Element Method (FEM), a surrogate model and Monte Carlo Simulation (MCS) is applied to the blade reliability analysis. Based on the blade finite element parametrical model and the experimental design, two kinds of surrogate models, Polynomial Response Surface (PRS) and Artificial Neural Network (ANN), are applied to construct approximate analytical expressions between the blade responses (maximum stress and deflection) and the random input variables, which act as a surrogate for the finite element solver to drastically reduce the number of simulations required. The surrogate is then used for most of the samples needed in the Monte Carlo method, and the statistical parameters and cumulative distribution functions of the maximum stress and deflection are obtained by Monte Carlo simulation. Finally, a probabilistic sensitivity analysis, which combines the magnitude of the gradient and the width of the scatter range of the random input variables, is applied to evaluate how much the maximum stress and deflection of the blade are influenced by the random nature of the input parameters.
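
The surrogate-plus-MCS workflow can be sketched in three steps: run a small design of experiments on the expensive solver, fit a polynomial response surface, then run Monte Carlo on the cheap surrogate. Here an analytic function stands in for the FEM solver and all distributions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the expensive FEM run: "maximum stress" as a function of
# load L and thickness t (illustrative only, not a blade model).
def fe_response(L, t):
    return L / t**2

# 1) Design of experiments: a handful of "FEM" runs.
L_doe = rng.uniform(80, 120, 30)
t_doe = rng.uniform(8, 12, 30)
y_doe = fe_response(L_doe, t_doe)

# 2) Quadratic polynomial response surface fitted to the DOE points.
def features(L, t):
    return np.column_stack([np.ones_like(L), L, t, L * t, L**2, t**2])

coef, *_ = np.linalg.lstsq(features(L_doe, t_doe), y_doe, rcond=None)

# 3) Monte Carlo on the cheap surrogate instead of the FEM solver.
L_mc = rng.normal(100, 5, 200_000)
t_mc = rng.normal(10, 0.3, 200_000)
stress = features(L_mc, t_mc) @ coef
pf = np.mean(stress > 1.25)  # failure: stress above a hypothetical allowable
```

Thirty solver runs support two hundred thousand Monte Carlo samples, which is the cost reduction the abstract describes; an ANN surrogate simply replaces step 2.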

  5. Generating function approach to reliability analysis of structural systems


    The generating function approach is an important tool for performance assessment in multi-state systems. Here, the generating function approach is introduced and developed for strength reliability analysis of structural systems. Static reliability models of statically determinate and indeterminate systems, as well as fatigue reliability models, are built by constructing special generating functions, which describe the probability distributions of strength (resistance), stress (load) and fatigue life, and by defining composite operators of generating functions and the performance structure functions thereof. When composite operators are executed, computational costs can be substantially reduced by collecting like terms. The results of theoretical analysis and numerical simulation show that the generating function approach can be widely used for probability modeling of large complex systems with hierarchical structures, owing to its unified form, compact expression, computer program realizability and high universality. Because the new method considers twin loads giving rise to component failure dependency, it can provide a theoretical reference and act as a powerful tool for static and dynamic reliability analysis of civil engineering structures and mechanical equipment systems with multi-mode damage coupling.
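
For discrete strength and stress distributions, a composite operator of this kind reduces to a double sum over the two probability mass functions, collecting like terms as it goes. A minimal sketch with hypothetical strength and stress levels:

```python
def prob_strength_exceeds_stress(strength_pmf, stress_pmf):
    """P(strength > stress) for independent discrete strength and stress,
    each represented as a {value: probability} generating-function-like
    mapping; the double loop plays the role of the composite operator."""
    total = 0.0
    for s_val, s_p in strength_pmf.items():
        for l_val, l_p in stress_pmf.items():
            if s_val > l_val:
                total += s_p * l_p
    return total

# Hypothetical discretized resistance and load distributions.
strength = {9: 0.2, 10: 0.5, 11: 0.3}
stress = {8: 0.6, 10: 0.3, 12: 0.1}
r = prob_strength_exceeds_stress(strength, stress)
```

Representing distributions this way is what lets hierarchical systems be composed level by level while the number of distinct terms stays manageable.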

  6. Identifying Sources of Difference in Reliability in Content Analysis

    Elizabeth Murphy


    This paper reports on a case study which identifies and illustrates sources of difference in agreement, in relation to reliability, in a context of quantitative content analysis of a transcript of an online asynchronous discussion (OAD). Transcripts of 10 students in a month-long online asynchronous discussion were coded by two coders using an instrument with two categories, five processes, and 19 indicators of Problem Formulation and Resolution (PFR). Sources of difference were identified in relation to coders, tasks, and students. Reliability values were calculated at the levels of categories, processes, and indicators. At the most detailed level of coding, on the basis of the indicator, findings revealed that the overall level of reliability between coders was .591 when measured with Cohen's kappa. The difference between tasks at the same level ranged from .349 to .664, and the difference between participants ranged from .390 to .907. Implications for training and research are discussed.
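
    Cohen's kappa, the agreement statistic used above, can be computed directly from two coders' label sequences. The labels below are invented for illustration, not the study's data.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    # Observed agreement: fraction of items coded identically.
    po = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    c1, c2 = Counter(coder1), Counter(coder2)
    pe = sum(c1[k] * c2[k] for k in c1) / n**2
    return (po - pe) / (1 - pe)

coder1 = ["A", "A", "B", "B", "A", "B"]
coder2 = ["A", "B", "B", "B", "A", "B"]
print(round(cohens_kappa(coder1, coder2), 3))  # → 0.667
```

    Here 5 of 6 items agree (po = 0.833) against a chance expectation of 0.5, giving kappa = 2/3, which would fall in the "substantial agreement" band of the usual benchmarks.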

  7. Reliability Analysis of Free Jet Scour Below Dams

    Chuanqi Li


    Current formulas for calculating scour depth below a free overfall are mostly deterministic in nature and do not adequately consider the uncertainties of the various scouring parameters. A reliability-based assessment of scour, taking into account the uncertainties of the parameters and coefficients involved, should therefore be performed. This paper studies the reliability of a dam foundation under the threat of scour. A model for calculating the reliability of scour and estimating the probability of failure of the dam foundation subjected to scour is presented. The Maximum Entropy Method is applied to construct the probability density function (PDF) of the performance function subject to the moment constraints. Monte Carlo simulation (MCS) is applied for uncertainty analysis. An example is considered, the reliability of its scour is computed, and the influence of various random variables on the probability of failure is analyzed.
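
    A Monte Carlo failure-probability estimate follows the pattern above. The normal distributions and the performance function g = resistance − scour are placeholder assumptions chosen so the result can be checked analytically; they are not the paper's scour formula.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)
n = 200_000

# Placeholder random variables (not the paper's scour model):
resistance = rng.normal(10.0, 1.0, n)   # allowable scour depth
scour      = rng.normal(7.0, 1.0, n)    # computed scour depth

g = resistance - scour                  # performance function: failure when g < 0
pf_mc = np.mean(g < 0.0)

# Analytic check for this linear-normal case: Pf = Phi(-beta), beta = 3/sqrt(2).
beta = 3.0 / sqrt(2.0)
pf_exact = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))
print(f"MC Pf = {pf_mc:.4f}, exact Pf = {pf_exact:.4f}")
```

    With 200,000 samples the Monte Carlo estimate agrees with the closed-form answer to about three decimal places; a real scour model would simply replace the line defining g.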

  8. Modeling and Analysis of Component Faults and Reliability

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter;


    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  9. Reliability analysis of two unit parallel repairable industrial system

    Mohit Kumar Kakkar


    The aim of this work is to present a reliability and profit analysis of a two dissimilar parallel unit system, under the assumption that the operative unit cannot fail after post-repair inspection and replacement and that there is only one repair facility. Failure and repair times of each unit are assumed to be uncorrelated. Using the regenerative point technique, various reliability characteristics are obtained which are useful to system designers and industrial managers. The graphical behaviour of the mean time to system failure (MTSF) and the profit function has also been studied. Some important reliability characteristics of this two non-identical unit standby system model with repair, inspection and post-repair are obtained using the regenerative point technique.
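
    For a simplified variant of this problem with two identical units (the paper treats dissimilar units via the regenerative point technique), the MTSF of a parallel pair with failure rate lam and repair rate mu follows from the Markov first-passage equations; the rates below are assumed values.

```python
import numpy as np

def mtsf_parallel(lam, mu):
    """MTSF of a 2-unit parallel system with identical units and one repair facility.

    States: 2 = both up, 1 = one up, 0 = system failed (absorbing).
    With m_i = expected time to absorption from state i:
        m2 = 1/(2*lam) + m1
        m1 = 1/(lam + mu) + (mu/(lam + mu)) * m2
    """
    A = np.array([[1.0, -1.0],
                  [-mu / (lam + mu), 1.0]])
    b = np.array([1.0 / (2.0 * lam), 1.0 / (lam + mu)])
    m2, m1 = np.linalg.solve(A, b)
    return m2

lam, mu = 0.01, 0.5   # assumed failure and repair rates (per hour)
print(mtsf_parallel(lam, mu))   # matches the closed form (3*lam + mu)/(2*lam**2)
```

    Solving the two linear equations reproduces the textbook closed form, and the same setup scales to dissimilar units by giving each state its own rates.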

  10. Analysis of the Reliability of the "Alternator- Alternator Belt" System

    Ivan Mavrin


    Before starting and also during the exploitation of various systems, it is very important to know how the system and its parts will behave during operation with regard to breakdowns, i.e. failures. It is possible to predict the service behaviour of a system by determining the functions of reliability, as well as the frequency and intensity of failures. The paper considers the theoretical basics of the functions of reliability, frequency and intensity of failures for two main approaches: one uses 6 equal intervals and the other 13 unequal intervals, for a concrete case taken from practice. The reliability of the "alternator - alternator belt" system installed in buses has been analysed according to empirical data on failures. The empirical data on failures provide empirical functions of reliability and of the frequency and intensity of failures, which are presented in tables and graphically. The first analysis, performed by dividing the mean time between failures into 6 equal time intervals, gives forms of the empirical failure frequency and intensity functions that approximately correspond to the typical functions. By dividing the failure phase into 13 unequal intervals with two failures in each interval, these functions indicate explicit transitions from the early-failure interval into the random-failure interval, i.e. into the ageing interval. The functions thus obtained are more accurate and represent a better solution for the given case. In order to estimate the reliability of such systems with greater accuracy, a greater number of failures needs to be analysed.
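
    Empirical reliability and failure-intensity functions of the kind described can be estimated directly from a list of times between failures divided into equal intervals. The failure times below are made up for illustration, not the alternator-belt data.

```python
import numpy as np

# Hypothetical times between failures (hours); not the bus alternator-belt data.
tbf = np.array([120, 340, 75, 510, 220, 90, 410, 160, 280, 60,
                190, 330, 450, 140, 240, 70, 380, 110, 300, 200])

n_intervals = 6
edges = np.linspace(0.0, tbf.max(), n_intervals + 1)
counts, _ = np.histogram(tbf, bins=edges)

width = edges[1] - edges[0]
n = len(tbf)
# Units still surviving at the start of each interval.
survivors = n - np.cumsum(counts) + counts

f_emp = counts / (n * width)            # empirical failure frequency (density)
lam_emp = counts / (survivors * width)  # empirical failure intensity (hazard)
R_emp = 1.0 - np.cumsum(counts) / n     # empirical reliability at interval ends

for i in range(n_intervals):
    print(f"[{edges[i]:5.0f}, {edges[i+1]:5.0f})  f={f_emp[i]:.5f}  "
          f"lambda={lam_emp[i]:.5f}  R(end)={R_emp[i]:.2f}")
```

    The frequency function is normalised by the full sample, the intensity by the shrinking at-risk population, which is why the intensity grows in the later intervals even when the raw counts fall.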

  11. Reliability and maintainability analysis of electrical system of drum shearers

    SEYED Hadi Hoseinie; MOHAMMAD Ataei; REZA Khalokakaie; UDAY Kumar


    The reliability and maintainability of the electrical system of the drum shearer at the Parvade.l Coal Mine in central Iran were analyzed. The maintenance and failure data were collected during 19 months of shearer operation. According to trend and serial correlation tests, the data were independent and identically distributed (iid), and therefore statistical techniques were used for modeling. The data analysis shows that the time between failures (TBF) and time to repair (TTR) data follow the lognormal and three-parameter Weibull distributions, respectively. Reliability-based preventive maintenance time intervals for the electrical system of the drum shearer were calculated from the reliability plot. The reliability-based maintenance intervals for the 90%, 80%, 70% and 50% reliability levels are 9.91, 17.96, 27.56 and 56.1 h, respectively. The calculations also show that the time to repair (TTR) of this system varies in the range 0.17-4 h, with a mean time to repair (MTTR) of 1.002 h. There is an 80% chance that a repair of the electrical system of the shearer at the Parvade.l mine will be accomplished within 1.45 h.
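
    A reliability-based preventive maintenance interval is simply the time at which the fitted reliability function drops to the target level. For a three-parameter Weibull model R(t) = exp(-((t - gamma)/eta)^beta) this time has a closed form; the shape, scale and location values below are hypothetical, since the abstract does not report the fitted parameters.

```python
from math import log, exp

def pm_interval(reliability, beta, eta, gamma=0.0):
    """Time t at which R(t) = reliability for a 3-parameter Weibull:
    R(t) = exp(-((t - gamma)/eta)**beta)  =>  t = gamma + eta*(-ln R)**(1/beta)."""
    return gamma + eta * (-log(reliability)) ** (1.0 / beta)

# Hypothetical fitted parameters (not the drum-shearer values):
beta, eta, gamma = 1.3, 60.0, 2.0
for r in (0.90, 0.80, 0.70, 0.50):
    print(f"R = {r:.2f}: maintain every {pm_interval(r, beta, eta, gamma):6.2f} h")
```

    Lower target reliability levels yield longer intervals, which is exactly the pattern of the 9.91/17.96/27.56/56.1 h sequence reported above.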

  12. Reliability analysis method for slope stability based on sample weight

    Zhi-gang YANG


    The single safety factor criterion for slope stability evaluation, derived from the rigid limit equilibrium method or the finite element method (FEM), may not capture some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of the random variables, the minimal sample size of every random variable is extracted according to a small-sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is considered to be its contribution to the random variable. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and different sample combinations are taken as the input for slope stability analysis. According to the one-to-one mapping between the input sample combination and the output safety coefficient, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.

  13. Semantic Web for Reliable Citation Analysis in Scholarly Publishing

    Ruben Tous


    Analysis of the impact of scholarly artifacts is constrained by current unreliable practices in cross-referencing, citation discovering, and citation indexing and analysis, which have not kept pace with the technological advances occurring in areas such as knowledge management and security. Because citation analysis has become the primary component in scholarly impact factor calculation, and considering the relevance of this metric within both the scholarly publishing value chain and (especially important) the professional curriculum evaluation of scholarly professionals, we argue that current practices need to be revised. This paper describes a reference architecture that aims to provide openness and reliability to the citation-tracking lifecycle. The solution relies on the use of digitally signed semantic metadata in the different stages of the scholarly publishing workflow, in such a manner that authors, publishers, repositories, and citation-analysis systems have access to independent, reliable evidence that is resistant to forgery, impersonation, and repudiation. As far as we know, this is the first paper to combine Semantic Web technologies and public-key cryptography to achieve reliable citation analysis in scholarly publishing.

  14. Reliability test and failure analysis of high power LED packages*

    Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng


    A new type of application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared; they are found to differ considerably, and rapid reliability assessment standards are urgently needed for the LED industry. An 85 °C/85% RH test at 700 mA was used for 1000 h on our LED modules alongside those of three other vendors: our modules showed no visible degradation in optical performance, while the modules of two other vendors showed significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT and optical microscopy are used on the LED packages. Failure mechanisms such as delaminations and cracks are detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for failure analysis and for the reliability design of LED packaging. An example shows that one module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing.

  15. Human Modeling Evaluations in Microgravity Workstation and Restraint Development

    Whitmore, Mihriban; Chmielewski, Cynthia; Wheaton, Aneice; Hancock, Lorraine; Beierle, Jason; Bond, Robert L. (Technical Monitor)


    The International Space Station (ISS) will provide long-term missions which will enable the astronauts to live and work, as well as conduct research, in a microgravity environment. The dominant factor in space affecting the crew is "weightlessness", which creates a challenge for establishing workstation microgravity design requirements. The crewmembers will work at various workstations such as the Human Research Facility (HRF), the Microgravity Sciences Glovebox (MSG) and the Life Sciences Glovebox (LSG). Since the crew will spend a considerable amount of time at these workstations, it is critical that ergonomic design requirements be an integral part of the design and development effort. To achieve this goal, the Space Human Factors Laboratory in the Johnson Space Center Flight Crew Support Division has been tasked to conduct integrated evaluations of workstations and associated crew restraints. A two-phase approach was used: 1) ground and microgravity evaluations of the physical dimensions and layout of the workstation components, and 2) human modeling analyses of the user interface. Computer-based human modeling evaluations were an important part of the approach throughout the design and development process. Human modeling during the conceptual design phase included crew reach and accessibility of individual equipment, as well as crew restraint needs. During later design phases, human modeling has been used in conjunction with ground reviews and microgravity evaluations of the mock-ups in order to verify the human factors requirements. (Specific examples will be discussed.) This two-phase approach was the most efficient method to determine ergonomic design characteristics for workstations and restraints. The real-time evaluations provided hands-on implementation in a microgravity environment; on the other hand, only a limited number of participants could be tested. The human modeling evaluations provided a more detailed analysis of the setup.
The issues identified

  16. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.


    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solution associated with the MPP provides measures related to the safety probability. This study focuses on two commonly used approximate probability integration methods, i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
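
    The MPP search can be sketched with the classic Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration in standard normal space. The limit state below is a simple assumed example with a known analytic answer, not one from the paper.

```python
import numpy as np

def hlrf(g, grad_g, u0, tol=1e-10, max_iter=100):
    """Find the most probable point (MPP) of g(u) = 0 in standard normal space.
    Returns (beta, u*), where beta = ||u*|| is the reliability index."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        # HL-RF update: project onto the linearized limit-state surface.
        u_new = ((gr @ u - gv) / (gr @ gr)) * gr
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# Assumed linear limit state g(U) = 3 - U1 - U2 (failure when g < 0):
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])

beta, u_star = hlrf(g, grad_g, [0.0, 0.0])
print(beta)   # analytic value is 3/sqrt(2) ≈ 2.1213
```

    For a linear limit state the iteration lands on the exact MPP in one step; for nonlinear g the same loop converges iteratively, which is the optimization search the abstract refers to.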

  17. Fatigue Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune


    In this paper, a fatigue reliability analysis of a Mono-tower platform is presented. The failure mode, fatigue failure in the butt welds, is investigated with two different models: one with the fatigue strength expressed through SN relations, the other with the fatigue strength expressed thro... ... of the natural period, damping ratio, current, stress spectrum and parameters describing the fatigue strength. Further, soil damping is shown to be significant for the Mono-tower.

  18. Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm

    Raj Kumar


    In this paper, we illustrate the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood-based inferential procedures, both classical and Bayesian. The quasi-Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov Chain Monte Carlo (MCMC) simulation method in OpenBUGS (established software for Bayesian analysis using Markov Chain Monte Carlo methods). R functions are developed to study the statistical properties, model validation and comparison tools of the model, and the output analysis of the MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated, and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.
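
    The maximum likelihood step can be sketched without OpenBUGS: the code below fits a Gumbel (maximum) model to synthetic data by the standard fixed-point iteration for the scale parameter. The data, true parameters and starting value are illustrative, and the Bayesian MCMC part of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.gumbel(loc=10.0, scale=2.0, size=5000)   # synthetic "reliability data"

def gumbel_mle(x, tol=1e-10, max_iter=500):
    """MLE for the Gumbel (maximum) distribution: fixed-point iteration for the
    scale, then the closed form for the location."""
    s = x.std() * np.sqrt(6.0) / np.pi          # moment estimate as starting point
    for _ in range(max_iter):
        w = np.exp(-x / s)
        s_new = x.mean() - (x * w).sum() / w.sum()
        if abs(s_new - s) < tol:
            s = s_new
            break
        s = s_new
    loc = -s * np.log(np.exp(-x / s).mean())
    return loc, s

loc_hat, scale_hat = gumbel_mle(x)
print(f"loc ≈ {loc_hat:.3f}, scale ≈ {scale_hat:.3f}")   # true values: 10, 2
```

    With 5000 observations the estimates land close to the generating parameters; a Bayesian treatment would place priors on (loc, scale) and sample the posterior instead of maximizing this likelihood.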

  19. Reliability analysis method applied in slope stability: slope prediction and forecast on stability analysis

    Wenjuan ZHANG; Li CHEN; Ning QU; Hai'an LIANG


    Landslides are a kind of geologic hazard that happens all over the world, bringing huge losses of human life and property; it is therefore very important to research them. This study focused on the combination of single and regional landslide analysis, and of traditional slope stability analysis methods with reliability analysis methods. Methods for slope prediction and forecasting and for reliability analysis were also discussed.

  20. Reliability analysis based on the losses from failures.

    Todinov, M T


    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed.
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
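
    The linear-combination statement above amounts to a one-line computation once the conditional mode probabilities and per-mode losses are known; the failure modes, probabilities and costs here are invented for illustration.

```python
# Expected loss given failure, as a linear combination over mutually
# exclusive failure modes (hypothetical numbers).
modes = {
    "seal leak":      {"p_init": 0.5, "loss": 2_000.0},
    "bearing wear":   {"p_init": 0.3, "loss": 12_000.0},
    "shaft fracture": {"p_init": 0.2, "loss": 50_000.0},
}

# The conditional probabilities of mutually exclusive modes must sum to 1.
assert abs(sum(m["p_init"] for m in modes.values()) - 1.0) < 1e-12

expected_loss_given_failure = sum(m["p_init"] * m["loss"] for m in modes.values())
print(expected_loss_given_failure)   # 0.5*2000 + 0.3*12000 + 0.2*50000
```

    The example also shows why equal-loss assumptions mislead: the rare fracture mode dominates the expected loss even though it initiates only 20% of failures.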

  1. A Sensitivity Analysis on Component Reliability from Fatigue Life Computations


    AD-A247 430; MTL TR 92-5. A Sensitivity Analysis on Component Reliability from Fatigue Life Computations. Donald M. Neal, William T. Matthews, Mark G. Vangel, and Trevor Rudalevige. Distribution: Defense Technical Information Center, Cameron Station, Building 5, 5010 Duke Street, Alexandria, VA 22304-6145.


    彭世济; 卢明银; 张达贤


    The Chinese national document "The Economical Appraisal Methods for Construction Projects" stipulates that dynamic analysis should dominate project economic appraisal methods. This paper sets up a dynamic investment forecast model for the Yuanbaoshan Surface Coal Mine. Based on this model, the investment reliability has been analysed using simulation and analytic methods, and the probability that the designed internal rate of return can reach 8.4%, from an economic point of view, has also been studied.

  3. Reliability analysis for new technology-based transmitters

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Charpentier, Dominique [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France)


    The reliability analysis of new technology-based transmitters has to deal with specific issues: various interactions between both material elements and functions, undefined behaviours under faulty conditions, several transmitted data, and little reliability feedback. To handle these particularities, a '3-step' model is proposed, based on goal tree-success tree (GTST) approaches to represent both the functional and material aspects, and includes the faults and failures as a third part for supporting reliability analyses. The behavioural aspects are provided by relationship matrices, also denoted master logic diagrams (MLD), with stochastic values which represent direct relationships between system elements. Relationship analyses are then proposed to assess the effect of any fault or failure on any material element or function. Taking these relationships into account, the probabilities of malfunction and failure modes are evaluated according to time. Furthermore, uncertainty analyses tend to show that even if the input data and system behaviour are not well known, these previous results can be obtained in a relatively precise way. An illustration is provided by a case study on an infrared gas transmitter. These properties make the proposed model and corresponding reliability analyses especially suitable for intelligent transmitters (or 'smart sensors').

  4. Analysis and Reliability Performance Comparison of Different Facial Image Features

    J. Madhavan


    This study performs a reliability analysis of different facial features, using weighted retrieval accuracy on facial databases of increasing size. Many methods in the existing literature are analyzed on fixed-size facial databases, and little work has been carried out to study performance in terms of reliability, or how a method performs as the database grows. In this study, several feature extraction methods are analyzed using the regular performance measures, and the measures are also modified to fit real-time requirements by giving weightings to closer matches. Four facial feature extraction methods are evaluated: DWT with PCA, LWT with PCA, HMM with SVD, and Gabor wavelet with HMM. The reliability of these methods is analyzed and reported; among them, Gabor wavelet with HMM gives higher reliability than the other three. Experiments are carried out to evaluate the proposed approach on the Olivetti Research Laboratory (ORL) face database.

  5. High-performance signal characterization workstation

    Frampton, Keith R.


    Essex has been involved in quadratic processing research and the design of processors that compute these algorithms for the past 14 years. We are developing a more efficient processor (Labyrinth-II(TM)) that has higher dynamic range (greater than 100 dB) and enhanced throughput (approximately 70 times faster). Labyrinth-II(TM) is a unique half-rack integration of non-developmental units that provides the compute power to solve complex signal processing tasks with significantly reduced latency. The architecture is a flexible combination of high-speed laser optics and digital technologies that is readily configured by the customer to perform a variety of functions. One or two signals can be input to the processor for linear or quadratic processing. The new processor is much simpler, more compact, and more flexible than its predecessors. This paper presents a description of this new workstation accelerator. The functions generated by this processor are the ambiguity function, Wigner-Ville function, and cyclic spectrum. Other functions that can be represented by two signal inputs can also be generated by this accelerator. Some applications include high resolution spectral analysis, radar waveform processing, signal detection and characterization, geolocation using time and frequency differences of arrival, and direction finding using angle of arrival.

  6. Integrated telemedicine workstation for intercontinental grand rounds

    Willis, Charles E.; Leckie, Robert G.; Brink, Linda; Goeringer, Fred


    The Telemedicine Spacebridge to Moscow was a series of intercontinental sessions sponsored jointly by NASA and the Moscow Academy of Medicine. To improve the quality of medical images presented, the MDIS Project developed a workstation for acquisition, storage, and interactive display of radiology and pathology images. The workstation was based on a Macintosh IIfx platform with a laser digitizer for radiographs and video capture capability for microscope images. Images were transmitted via the Russian Lyoutch satellite, which had only a single video channel available and no high-speed data channels. Two workstations were configured: one for use at the Uniformed Services University of the Health Sciences in Bethesda, MD, and the other for use at the Hospital of the Interior in Moscow, Russia. The two workstations were used many times during 16 sessions. As clinicians used the systems, we modified the original configuration to improve interactive use. This project demonstrated that numerous acquisition and output devices could be brought together in a single interactive workstation. The video images were satisfactory for remote consultation in a grand rounds format.

  7. Computer Aided Software Engineering workstation evaluation

    Kotcher, D.A.; Parish, R.B.; Sisson, A.M.; Wenzel, W.A.; Wiancko, B.E.


    This report presents an evaluation of interconnected high-performance workstations. The evaluation specifically addresses the benefits to personnel engaged in Computer Aided Software Engineering (CASE) for the design and development of computer software aided by computer workstations. To narrow the scope of the CASE evaluation to a reasonable size, the class of workstations considered was limited to units having the following minimum capabilities: speed to issue 2 to 3 million instructions per second (MIPS), 4 megabytes (MB) of central memory, 140 MB of local disk storage, a monitor with 1024 by 960 graphics resolution, and Ethernet compatibility. In addition, software requirements included a virtual memory implementation of the UNIX operating system, the de facto standard networking protocols Transmission Control Protocol and Internet Protocol (TCP/IP), and the Network File System (NFS). Support of selected third-party software, such as the TEMPLATE graphics software, and robust tools for software development were also required. These criteria are justified by the use of workstations for maintenance and support of large mainframe-based FORTRAN computer programs. The evaluation concluded that workstations are excellent tools for CASE. 1 ref., 1 fig., 6 tabs.




    RHIC has been successfully operated for five years as a collider for different species, ranging from heavy ions, including gold and copper, to polarized protons. We present a critical analysis of reliability data for RHIC that not only identifies the principal factors limiting availability but also evaluates critical choices made at design time and assesses their impact on present machine performance. RHIC availability data are typical when compared with similar high-energy colliders. The critical analysis of operations data is the basis for studies and plans to improve RHIC machine availability beyond the 50-60% typical of high-energy colliders.

  9. Using functional analysis diagrams to improve product reliability and cost

    Ioannis Michalakoudis


    Failure mode and effects analysis and value engineering are well-established methods in the manufacturing industry, commonly applied to optimize product reliability and cost, respectively. Both processes, however, require cross-functional teams to identify and evaluate the product/process functions and are resource-intensive, hence their application is mostly limited to large organizations. In this article, we present a methodology involving the concurrent execution of failure mode and effects analysis and value engineering, assisted by a set of hierarchical functional analysis diagram models, along with the outcomes of a pilot application in a UK-based manufacturing small and medium enterprise. Analysis of the results indicates that this new approach could significantly enhance the resource efficiency and effectiveness of both processes.

  10. Mutation Analysis Approach to Develop Reliable Object-Oriented Software

    Monalisa Sarma


    In general, modern programs are large and complex, and it is essential that they be highly reliable in applications. To support the development of highly reliable software, the Java programming language provides a rich set of exceptions and exception handling mechanisms, which are intended to help developers build robust programs. Given a program with exception handling constructs, effective testing must detect whether all possible exceptions are raised and caught or not. However, complex exception handling constructs make it tedious to trace which exceptions are handled where, and which exceptions are passed on. In this paper, we address this problem and propose a mutation analysis approach to developing reliable object-oriented programs. We apply a number of mutation operators to create a large set of mutant programs with different types of faults. We then generate test cases and test data to uncover exception-related faults. The test suite so obtained is applied to the mutant programs, and the mutation score is measured to verify the effectiveness of the tests. We have applied our approach to a number of case studies to substantiate the efficacy of the proposed mutation analysis technique.

  11. Strength Reliability Analysis of Stiffened Cylindrical Shells Considering Failure Correlation

    Xu Bai; Liping Sun; Wei Qin; Yongkun Lv


    The stiffened cylindrical shell is commonly used for the pressure hulls of submersibles and the legs of offshore platforms. There are various failure modes because of uncertainty in the structural size and material properties, uncertainty in the calculation model, and machining errors. Correlations among failure modes must be considered in the structural reliability analysis of stiffened cylindrical shells, but the traditional method cannot account for them effectively. The aim of this study is to present a reliability analysis method for stiffened cylindrical shells that considers the correlations among failure modes. Firstly, the formula for the joint failure probability of two correlated failure modes is derived from the 2D joint probability density function. Secondly, the full probability formula of the series structural system is given with consideration of the correlations among failure modes. Finally, the accuracy of the system reliability calculation is verified through Monte Carlo simulation. The analysis shows that the failure probability of stiffened cylindrical shells can be obtained by combining the failure probabilities of the individual modes.
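
    A quick way to see how correlation changes the joint failure probability of two modes is a Monte Carlo sketch with correlated standard normals; the reliability indices and correlation below are assumed values, and the paper evaluates the corresponding quantity analytically from the 2D joint density.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

beta1, beta2, rho = 2.0, 2.5, 0.6     # assumed mode reliability indices and correlation

# Sample correlated standard normals via the Cholesky factor of the correlation matrix.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
u = rng.standard_normal((1_000_000, 2)) @ L.T

phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
p1, p2 = phi(-beta1), phi(-beta2)                   # marginal failure probabilities

# Joint probability that both correlated modes fail.
p_joint = np.mean((u[:, 0] < -beta1) & (u[:, 1] < -beta2))
print(f"p1={p1:.4f}  p2={p2:.4f}  joint={p_joint:.5f}  independent={p1*p2:.5f}")
```

    With positive correlation the joint failure probability sits well above the independence product p1*p2, which is exactly why ignoring mode correlation biases a series-system reliability estimate.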

  12. Reliability Analysis of Penetration Systems Using Nondeterministic Methods



    Device penetration into media such as metal and soil is an application of some engineering interest. Often, these devices contain internal components and it is of paramount importance that all significant components survive the severe environment that accompanies the penetration event. In addition, the system must be robust to perturbations in its operating environment, some of which exhibit behavior which can only be quantified to within some level of uncertainty. In the analysis discussed herein, methods to address the reliability of internal components for a specific application system are discussed. The shock response spectrum (SRS) is utilized in conjunction with the Advanced Mean Value (AMV) and Response Surface methods to make probabilistic statements regarding the predicted reliability of internal components. Monte Carlo simulation methods are also explored.

  13. Analytical reliability analysis of soil-water characteristic curve

    Johari A.


    The Soil Water Characteristic Curve (SWCC), also known as the soil water-retention curve, is an important part of any constitutive relationship for unsaturated soils. Deterministic assessment of the SWCC has received considerable attention in the past few years. However, the uncertainties in the parameters that affect the SWCC make the problem probabilistic rather than deterministic in nature. In this research, a Gene Expression Programming (GEP)-based SWCC model is employed to assess the reliability of the SWCC. For this purpose, the Jointly Distributed Random Variables (JDRV) method is used as an analytical method for reliability analysis. All input parameters of the model, namely the initial void ratio, initial water content, and silt and clay contents, are treated as stochastic and modelled using truncated normal probability density functions. The results are compared with those of Monte Carlo (MC) simulation. It is shown that the initial water content is the most influential parameter in the SWCC.
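    The Monte Carlo side of such a comparison can be sketched with truncated normal inputs pushed through a stand-in response function; the linear "model" below is hypothetical, not the paper's GEP-based SWCC model, and the bounds and threshold are illustrative:

```python
# Monte Carlo sketch with truncated-normal inputs (illustrative SWCC model).
import numpy as np

rng = np.random.default_rng(1)

def truncated_normal(mean, std, low, high, size):
    """Rejection-sample a normal distribution truncated to [low, high]."""
    out = np.empty(0)
    while out.size < size:
        x = rng.normal(mean, std, size)
        out = np.concatenate([out, x[(x >= low) & (x <= high)]])
    return out[:size]

n = 100_000
e0   = truncated_normal(0.80, 0.05, 0.60, 1.00, n)   # initial void ratio
w0   = truncated_normal(0.20, 0.03, 0.10, 0.30, n)   # initial water content
clay = truncated_normal(30.0, 5.0, 15.0, 45.0, n)    # clay content (%)

# Hypothetical response: volumetric water content at a fixed suction.
theta = 0.5 * w0 + 0.002 * clay + 0.05 * e0

# "Reliability" here is an exceedance probability over a threshold.
p_exceed = (theta > 0.20).mean()
print(p_exceed)
```

An analytical method such as JDRV would derive this probability from the joint distribution directly; the MC estimate serves as the benchmark it is compared against.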

  14. Optimization Based Efficiencies in First Order Reliability Analysis

    Peck, Jeffrey A.; Mahadevan, Sankaran


    This paper develops a method for updating the gradient vector of the limit-state function in reliability analysis using Broyden's rank-one updating technique. In problems that use a commercial code as a black box, gradient calculations are usually done by finite differences, which becomes very expensive for large system models. The proposed method replaces the finite-difference gradient calculations in a standard first-order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
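    The core of the replacement can be sketched in a few lines: one finite-difference gradient up front, then Broyden's rank-one secant update between iterates. The limit-state function g below is illustrative, not one of the paper's example problems:

```python
# Broyden rank-one secant update of a limit-state gradient, avoiding a
# fresh finite-difference evaluation at the new design point (sketch).
import numpy as np

def g(x):
    """Illustrative limit-state function; g(x) = 0 defines the failure surface."""
    return 18.0 - x[0]**2 - 2.0 * x[1]

x0 = np.array([1.0, 1.0])   # current FORM iterate
x1 = np.array([1.5, 1.2])   # next FORM iterate

# Initial gradient by finite differences (the expensive black-box step).
h = 1e-6
grad0 = np.array([(g(x0 + h * e) - g(x0)) / h for e in np.eye(2)])

# Broyden update: grad1 = grad0 + ((dg - grad0.dx) / ||dx||^2) * dx,
# which enforces the secant condition grad1 . dx = dg exactly.
dx = x1 - x0
dg = g(x1) - g(x0)
grad1 = grad0 + ((dg - grad0 @ dx) / (dx @ dx)) * dx

exact1 = np.array([-2.0 * x1[0], -2.0])   # analytic gradient, for comparison
print(grad1, exact1)
```

Each FORM iteration thus costs one function evaluation instead of one per design variable, which is the source of BFORM's savings.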

  15. Issues in benchmarking human reliability analysis methods : a literature review.

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)


    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.


    Hong-Zhong Huang


    Engineering design under uncertainty has gained considerable attention in recent years. A great many new design optimization methodologies and reliability analysis approaches have been put forth with the aim of accommodating various uncertainties. Uncertainties in practical engineering applications are commonly classified into two categories, i.e., aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty arises from unpredictable variation in the performance and processes of systems; it is irreducible even when more data or knowledge is added. Epistemic uncertainty, on the other hand, stems from lack of knowledge of the system due to limited data, measurement limitations, or simplified approximations in modeling system behavior, and it can be reduced by obtaining more data or knowledge. More specifically, aleatory uncertainty is naturally represented by a statistical distribution whose parameters can be characterized when sufficient data are available. If, however, the data are limited and cannot be quantified in a statistical sense, the uncertainty must be treated as epistemic and alternative representations are required. Of the several optional treatments for epistemic uncertainty, possibility theory and evidence theory have proved to be the most computationally efficient and stable for reliability analysis and engineering design optimization. This study first attempts to provide a better understanding of uncertainty in engineering design by giving a comprehensive overview of its classifications, theories, and design considerations. A review is then conducted of general topics such as the foundations and applications of possibility theory and evidence theory. This overview includes the most recent results from theoretical research, computational developments, and performance improvement of possibility theory and evidence theory, with an emphasis on revealing the capability and characteristics of quantifying uncertainty from different perspectives.

  17. Reliability and risk analysis data base development: an historical perspective

    Fragola, Joseph R


    Collection of empirical data and data base development for use in the prediction of the probability of future events has a long history. Dating back at least to the 17th century, safe passage events and mortality events were collected and analyzed to uncover prospective underlying classes and associated class attributes. Tabulations of these developed classes and associated attributes formed the underwriting basis for the fledgling insurance industry. Much earlier, master masons and architects used design rules of thumb to capture the experience of the ages and thereby produce structures of incredible longevity and reliability (Antona, E., Fragola, J. and Galvagni, R. Risk based decision analysis in design. Fourth SRA Europe Conference Proceedings, Rome, Italy, 18-20 October 1993). These rules served so well in producing robust designs that it was not until almost the 19th century that the analysis (Charlton, T.M., A History Of Theory Of Structures In The 19th Century, Cambridge University Press, Cambridge, UK, 1982) of masonry voussoir arches, begun by Galileo some two centuries earlier (Galilei, G. Discorsi e dimostrazioni mathematiche intorno a due nuove science, (Discourses and mathematical demonstrations concerning two new sciences, Leiden, The Netherlands, 1638), was placed on a sound scientific basis. Still, with the introduction of new materials (such as wrought iron and steel) and the lack of theoretical knowledge and computational facilities, approximate methods of structural design abounded well into the second half of the 20th century. To this day structural designers account for material variations and gaps in theoretical knowledge by employing factors of safety (Benvenuto, E., An Introduction to the History of Structural Mechanics, Part II: Vaulted Structures and Elastic Systems, Springer-Verlag, NY, 1991) or codes of practice (ASME Boiler and Pressure Vessel Code, ASME, New York) originally developed in the 19th century (Antona, E., Fragola, J. 
and


    Dustin Lawrence


    The purpose of this study was to inform decision makers at the state and local levels, as well as property owners, about the amount of water that can be supplied by rainwater harvesting systems in Texas, so that it may be included in future planning. The reliability of a rainwater tank is important because users want to know that the water source can be depended on. Performance analyses were conducted on rainwater harvesting tanks for three Texas cities under different rainfall conditions and multiple scenarios to demonstrate the importance of optimizing rainwater tank design. Reliability curves were produced that reflect the percentage of days in a year on which water can be supplied by a tank. Operational thresholds were reached in all scenarios and mark the point at which reliability increases by only 2% or less with an increase in tank size. A payback period analysis was conducted on tank sizes to estimate the time needed to recoup the cost of installing a rainwater harvesting system.
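    The reliability metric described, the fraction of days in a year on which the tank fully meets demand, can be sketched with a daily mass balance; the rainfall series, roof area, runoff coefficient, and demand below are synthetic assumptions, not the study's Texas data:

```python
# Daily mass-balance sketch of rainwater-tank reliability (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
rain_mm = rng.gamma(shape=0.3, scale=12.0, size=365)   # synthetic daily rainfall

def reliability(tank_size_l, rain_mm, roof_m2=100.0, runoff=0.85, demand_l=200.0):
    """Fraction of days on which the tank fully meets the daily demand."""
    storage, met = 0.0, 0
    for r in rain_mm:
        # 1 mm of rain on 1 m^2 of roof yields 1 litre of runoff.
        storage = min(storage + r * roof_m2 * runoff, tank_size_l)
        if storage >= demand_l:
            storage -= demand_l
            met += 1
    return met / len(rain_mm)

# Sweeping tank size traces out a reliability curve; gains flatten out at
# some operational threshold, as in the study.
for size in (2_000, 5_000, 10_000, 20_000):
    print(size, round(reliability(size, rain_mm), 3))
```

Running the sweep against multiple rainfall years would yield the reliability curves described, and the flattening point identifies the operational threshold beyond which larger tanks add little.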

  19. A Bayesian Framework for Reliability Analysis of Spacecraft Deployments

    Evans, John W.; Gallo, Luis; Kaminsky, Mark


    Deployable subsystems are essential to the mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which standardized data sources are not applicable for estimating reliability and assessing risks. In this study, a two-stage sequential Bayesian framework for reliability estimation of spacecraft deployment was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov chain Monte Carlo sampling. Selected distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction of the reliability of the complex Sunshield deployment, with credibility limits, within this two-stage Bayesian framework.
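    The two-stage prior-to-posterior flow can be illustrated with the conjugate Beta-Binomial form (the study itself uses Markov chain Monte Carlo sampling; the deployment counts below are hypothetical, not NASA heritage or JWST test data):

```python
# Two-stage sequential Bayesian update of a deployment success probability,
# using the conjugate Beta-Binomial form (illustrative counts only).

# Stage 0: non-informative prior Beta(1, 1), i.e. uniform on [0, 1].
a, b = 1.0, 1.0

# Stage 1: update with hypothetical heritage data (s1 successes of n1).
n1, s1 = 400, 392
a, b = a + s1, b + (n1 - s1)

# Stage 2: update again with hypothetical ground-test deployments.
n2, s2 = 20, 20
a, b = a + s2, b + (n2 - s2)

# Posterior Beta(a, b) summary statistics for the deployment reliability.
mean = a / (a + b)
var = a * b / ((a + b)**2 * (a + b + 1))
print(f"posterior reliability: {mean:.4f} +/- {var**0.5:.4f}")
```

The full Beta posterior also yields credibility limits directly from its quantiles, which is the role the MCMC-sampled posterior plays in the study.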

  20. A Research Roadmap for Computation-Based Human Reliability Analysis

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.

  1. Reliability and risk analysis using artificial neural networks

    Robinson, D.G. [Sandia National Labs., Albuquerque, NM (United States)


    This paper discusses preliminary research at Sandia National Laboratories into the application of artificial neural networks for reliability and risk analysis. The goal of this effort is to develop a reliability based methodology that captures the complex relationship between uncertainty in material properties and manufacturing processes and the resulting uncertainty in life prediction estimates. The inputs to the neural network model are probability density functions describing system characteristics and the output is a statistical description of system performance. The most recent application of this methodology involves the comparison of various low-residue, lead-free soldering processes with the desire to minimize the associated waste streams with no reduction in product reliability. Model inputs include statistical descriptions of various material properties such as the coefficients of thermal expansion of solder and substrate. Consideration is also given to stochastic variation in the operational environment to which the electronic components might be exposed. Model output includes a probabilistic characterization of the fatigue life of the surface mounted component.

  2. Fifty Years of THERP and Human Reliability Analysis

    Ronald L. Boring


    In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the best known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament to its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain’s pioneering work.
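    THERP-style quantification combines basic human error probabilities along the branches of an HRA event tree, with recovery factors credited against each failure path; the sketch below uses illustrative values, not entries from the NUREG/CR-1278 tables:

```python
# THERP-style HRA event tree sketch: probability of a human failure event
# (HFE) from two sequential subtasks with a recovery branch.
# All probabilities are illustrative, not NUREG/CR-1278 table entries.

p_a = 0.003        # basic human error probability (BHEP), subtask A
p_b = 0.01         # BHEP, subtask B, given subtask A succeeded
p_recovery = 0.9   # probability that a checker catches either error

# Failure paths through the tree:
#   A fails and the error is not recovered, OR
#   A succeeds, B fails, and the error is not recovered.
p_hfe = p_a * (1 - p_recovery) + (1 - p_a) * p_b * (1 - p_recovery)
print(f"HFE probability: {p_hfe:.6f}")
```

Dependence between subtasks, performance shaping factors, and pre-/post-initiator distinctions would adjust the branch probabilities before this combination step.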

  3. Reliability and Robustness Analysis of the Masinga Dam under Uncertainty

    Hayden Postle-Floyd


    Kenya’s water abstraction must meet the projected growth in municipal and irrigation demand by the end of 2030 in order to achieve the country’s industrial and economic development plan. The Masinga dam, on the Tana River, is the key to meeting this goal, satisfying the growing demands whilst also continuing to provide hydroelectric power generation. This study quantitatively assesses the reliability and robustness of the Masinga dam system under uncertain future supply and demand using probabilistic climate and population projections, and examines how long-term planning may improve the longevity of the dam. River flow and demand projections are used alongside each other as inputs to the dam system simulation model, linked to an optimisation engine to maximise water availability. Water availability after demand satisfaction is assessed for future years, and the projected reliability of the system is calculated for selected years. The analysis shows that maximising power generation on a short-term, year-by-year basis achieves 80%, 50% and 1% reliability by 2020, 2025 and 2030 onwards, respectively. Longer term optimal planning, however, increases system reliability to up to 95% in 2020, 80% in 2025, and more than 40% in 2030 onwards. In addition, increasing the capacity of the reservoir by around 25% can significantly improve the robustness of the system for all future time periods. This study provides a platform for analysing the implications of different planning and management of the Masinga dam, and suggests that careful consideration should be given to growing municipal needs and irrigation schemes in both the immediate area and the wider Tana River basin.

  4. Human Performance Modeling for Dynamic Human Reliability Analysis

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory


    Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  5. Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Enevoldsen, I.; Sørensen, John Dalsgaard;

    In this paper a reliability analysis of a Mono-tower platform is presented. The failure modes considered are yielding in the tube cross-sections and fatigue failure in the butt welds. The fatigue failure mode is investigated with a fatigue model, where the fatigue strength is expressed through SN… that the fatigue limit state is a significant failure mode for the Mono-tower platform. Further, it is shown for the fatigue failure mode that the largest contributions to the overall uncertainty are due to the damping ratio, the inertia coefficient, the stress concentration factor, the model uncertainties…

  6. Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Enevoldsen, I.; Sørensen, John Dalsgaard;


    In this paper, a reliability analysis of a Mono-tower platform is presented. The failure modes considered are yielding in the tube cross-sections and fatigue failure in the butt welds. The fatigue failure mode is investigated with a fatigue model, where the fatigue strength is expressed through SN… that the fatigue limit state is a significant failure mode for the Mono-tower platform. Further, it is shown for the fatigue failure mode that the largest contributions to the overall uncertainty are due to the damping ratio, the inertia coefficient, the stress concentration factor, the model uncertainties…

  7. Fault Diagnosis and Reliability Analysis Using Fuzzy Logic Method

    Miao Zhinong; Xu Yang; Zhao Xiangyu


    A new fuzzy logic fault diagnosis method is proposed. In this method, fuzzy equations are employed to estimate the component state of a system based on the measured system performance and on the relationship between component state and system performance, which is called the "performance-parameter" knowledge base and is constructed by experts. Compared with traditional fault diagnosis methods, this fuzzy logic method can use human intuitive knowledge and does not need a precise mapping between system performance and component state. Simulation proves its effectiveness in fault diagnosis. Reliability analysis is then performed based on the fuzzy logic method.


    G. W. Parry; J.A Forester; V.N. Dang; S. M. L. Hendrickson; M. Presley; E. Lois; J. Xing


    This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System) that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA) that is based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis that is documented in a crew response tree (CRT), and the development of the associated time-line to identify the critical tasks, i.e. those whose failure results in a human failure event (HFE), and an approach to quantification that is based on explanations of why the HFE might occur.

  9. Integration of human reliability analysis into the high consequence process

    Houghton, F.K.; Morzinski, J.


    When performing a hazards analysis (HA) for a high consequence process, human error often plays a significant role. In order to integrate human error into the hazards analysis, a human reliability analysis (HRA) is performed. Human reliability is the probability that a person will correctly perform a system-required activity in a required time period and will perform no extraneous activity that will affect the correct performance. Even though human error is a very complex subject that can only approximately be addressed in risk assessment, an attempt must be made to estimate the effect of human errors. The HRA provides data that can be incorporated in the hazard analysis event. This paper will discuss the integration of HRA into a HA for the disassembly of a high explosive component. The process was designed to use a retaining fixture to hold the high explosive in place during a rotation of the component. This tool was designed as a redundant safety feature to help prevent a drop of the explosive. This paper will use the retaining fixture to demonstrate the phases of the HRA methodology. The first phase is to perform a task analysis. The second phase is the identification of the potential human functions, both cognitive and psychomotor, performed by the worker. In the last phase, the human errors are quantified. In reality, the HRA process is an iterative one in which the stages overlap and information gathered in one stage may be used to refine a previous stage. The rationale for the decision to use or not use the retaining fixture, and the role the HRA played in that decision, will be discussed.

  10. Next Generation BioPhotonics Workstation

    Bañas, Andrew Rafael; Palima, Darwin; Tauro, Sandeep

    We will outline the specs of our Biophotonics Workstation that can generate up to 100 reconfigurable laser-traps making 3D real-time optical manipulation of advanced structures, cells or tiny particles possible with the use of joysticks or gaming devices. Optically actuated nanoneedles may be fun...

  11. Architecture for a PACS primary diagnosis workstation

    Shastri, Kaushal; Moran, Byron


    A major factor in determining the overall utility of a medical Picture Archiving and Communication System (PACS) is the functionality of the diagnostic workstation. Meyer-Ebrecht and Wendler [1] have proposed a modular picture computer architecture with high throughput, and Perry [2] has defined performance requirements for radiology workstations. In order to be clinically useful, a primary diagnosis workstation must not only provide the functions of current viewing systems (e.g. mechanical alternators [3,4]), such as acceptable image quality, simultaneous viewing of multiple images, and rapid switching of image banks, but must also provide a diagnostic advantage over the current systems. This includes window-level functions on any image, simultaneous display of multi-modality images, rapid image manipulation, image processing, dynamic image display (cine), electronic image archival, hardcopy generation, image acquisition, network support, and an easy user interface. Implementation of such a workstation requires an underlying hardware architecture which provides high-speed image transfer channels, local storage facilities, and image processing functions. This paper describes the hardware architecture of the Siemens Diagnostic Reporting Console (DRC), which meets these requirements.

  12. Tailoring a Human Reliability Analysis to Your Industry Needs

    DeMott, D. L.


    Companies at risk of accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk and reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed.

  13. Fatigue Reliability Analysis of Wind Turbine Cast Components

    Hesam Mirzaei Rafsanjani


    The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life, and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, the largest reliability, to be used for decision-making if additional cost considerations are added. In this paper, a statistical approach is presented based on statistical hypothesis testing and analysis of covariance (ANCOVA) which can be applied to compare different groups (manufacturers, suppliers, test facilities, etc.) and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for cast test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress) for fatigue assessment are estimated based on the statistical analyses and by introduction of physical, model and statistical uncertainties used for the illustration of reliability assessment.
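    Estimating an SN curve from fatigue data reduces to a linear regression in log-log space, log10(N) = log10(K) - m*log10(S) in the Basquin form; the data below are synthetic, not the paper's cast-component tests, and the uncertainty treatment is simplified to regression scatter:

```python
# Least-squares fit of a Basquin-type SN curve on synthetic fatigue data.
import numpy as np

rng = np.random.default_rng(3)
S = np.array([80.0, 100.0, 120.0, 150.0, 180.0])   # stress ranges (MPa)

# Synthetic cycles-to-failure with lognormal scatter around a true curve.
true_m, true_logK = 5.0, 16.0
logN = true_logK - true_m * np.log10(S) + rng.normal(0.0, 0.1, S.size)

# Linear regression in log-log space: logN = logK - m * log10(S).
A = np.vstack([np.ones_like(S), -np.log10(S)]).T
(logK_hat, m_hat), *_ = np.linalg.lstsq(A, logN, rcond=None)

print(f"estimated m = {m_hat:.2f}, log10(K) = {logK_hat:.2f}")
```

Comparing such fits across groups (foundries, test facilities) with ANCOVA amounts to testing whether the groups share a slope and differ only in intercept, which is the decision question posed in the paper.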

  14. Inclusion of fatigue effects in human reliability analysis

    Griffith, Candice D. [Vanderbilt University, Nashville, TN (United States); Mahadevan, Sankaran [Vanderbilt University, Nashville, TN (United States)


    The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. This creates difficulties in including fatigue effects in probabilistic risk assessments (PRA) of complex engineering systems that seek to include human reliability analysis (HRA). Thus the objectives of this paper are to discuss (1) the importance of the effects of fatigue on performance, (2) the difficulties associated with defining and measuring fatigue, (3) the current status of the inclusion of fatigue in HRA methods, and (4) the future directions and challenges for the inclusion of fatigue, specifically sleep deprivation, in HRA. Highlights: We highlight the need for fatigue and sleep deprivation effects on performance to be included in human reliability analysis (HRA) methods; current methods do not explicitly include sleep deprivation effects. We discuss the difficulties in defining and measuring fatigue. We review sleep deprivation research, and discuss the limitations and future needs of the current HRA methods.

  15. Introduction of a virtual workstation into radiology medical student education.

    Strickland, Colin D; Lowry, Peter A; Petersen, Brian D; Jesse, Mary K


    OBJECTIVE. This article describes the creation of a virtual workstation for use by medical students and implementation of that workstation in the reading room. CONCLUSION. A radiology virtual workstation for medical students was created using OsiriX imaging software to authentically simulate the experience of interacting with cases selected to cover important musculoskeletal imaging diagnoses. A workstation that allows the manipulation and interpretation of complete anonymized DICOM images may enhance the educational experience of medical students.

  16. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Ronald L. Boring


    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  17. Transient Reliability Analysis Capability Developed for CARES/Life

    Nemeth, Noel N.


    The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle-material (e.g., ceramic, intermetallic, and graphite) structures in a wide variety of 21st century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth failure conditions, CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications applied loads are rarely that simple but vary with time in more complex ways, such as engine startup, shutdown, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology developed is generalized to account for material property variation (on strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, thermal shock. In addition, the capability has
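    The fast-fracture mode mentioned above rests on two-parameter Weibull strength statistics. A minimal sketch of that building block follows; the parameter values are hypothetical, chosen only for illustration, and are not taken from CARES/Life.

```python
import math

def weibull_pf(stress_mpa, scale_mpa, modulus):
    """Two-parameter Weibull probability of fast-fracture failure
    for a uniformly stressed brittle component."""
    return 1.0 - math.exp(-((stress_mpa / scale_mpa) ** modulus))

# Hypothetical alumina-like parameters: characteristic strength 350 MPa,
# Weibull modulus 10.
pf_low  = weibull_pf(200.0, 350.0, 10.0)  # service stress well below scale
pf_high = weibull_pf(340.0, 350.0, 10.0)  # service stress near scale
```

    The time-dependent modes described in the abstract (slow crack growth, cyclic fatigue) layer the load and material history on top of this same Weibull form.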

  18. Microcomputers and Workstations in Libraries: Trends and Opportunities.

    Welsch, Erwin K.


    Summarizes opinions of scholars in various disciplines on workstation history, definition, and functions. Networks and configurations for library workstations, including hardware and software recommendations, are described. The impact of workstations on the workplace, resulting in task, process, and institutional transformation, is also considered.…

  19. Productivity enhancement and reliability through AutoAnalysis

    Garetto, Anthony; Rademacher, Thomas; Schulz, Kristian


    The decreasing size and increasing complexity of photomask features, driven by the push to ever smaller technology nodes, places more and more challenges on the mask house, particularly in terms of yield management and cost reduction. Especially challenging for mask shops is the inspection, repair and review cycle, which requires more time and skill from operators due to the higher number of masks required per technology node and larger nuisance defect counts. While the measurement throughput of the AIMS™ platform has been improved in order to keep pace with these trends, the analysis of aerial images has seen little advancement and remains largely a manual process. This manual analysis of aerial images is time consuming, dependent on the skill level of the operator and significantly contributes to the overall mask manufacturing process flow. AutoAnalysis, the first application available for the FAVOR® platform, offers a solution to these problems by providing fully automated analysis of AIMS™ aerial images. Direct communication with the AIMS™ system allows automated data transfer and analysis parallel to the measurements. User defined report templates allow the relevant data to be output in a manner that can be tailored to various internal needs and support the requests of customers. Productivity is significantly improved due to the fast analysis, operator time is saved and made available for other tasks, and reliability is no longer a concern as the most defective region is always and consistently captured. In this paper the concept and approach of AutoAnalysis will be presented as well as an update to the status of the project. The benefits arising from the use of AutoAnalysis will be discussed in more detail and a study will be performed in order to demonstrate.

  20. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    Clayson, Peter E; Miller, Gregory A


    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue.
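    The core quantity such a toolbox estimates, a generalizability coefficient built from variance components, can be sketched for the simplest one-facet (persons × trials) design. The following is an illustrative Python analogue with simulated data (the ERA Toolbox itself is Matlab, and its designs are richer); the trial facet is folded into the residual here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_trials = 50, 20
true_person = rng.normal(0.0, 2.0, size=(n_persons, 1))   # "universe scores"
noise = rng.normal(0.0, 3.0, size=(n_persons, n_trials))  # trial-level error
scores = true_person + noise

# Variance components from the persons x trials ANOVA mean squares.
grand = scores.mean()
person_means = scores.mean(axis=1)
ms_persons = n_trials * ((person_means - grand) ** 2).sum() / (n_persons - 1)
ms_error = ((scores - person_means[:, None]) ** 2).sum() / (n_persons * (n_trials - 1))

var_p = max((ms_persons - ms_error) / n_trials, 0.0)  # person component
var_e = ms_error                                      # residual component

def g_coefficient(var_p, var_e, k):
    """Generalizability coefficient for a k-trial average score."""
    return var_p / (var_p + var_e / k)

g1 = g_coefficient(var_p, var_e, 1)          # single-trial score
g20 = g_coefficient(var_p, var_e, n_trials)  # 20-trial average
```

    The comparison of `g1` and `g20` shows the contribution of the number of trials retained for averaging, which is exactly the kind of question the toolbox is built to answer.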

  1. Reliability analysis and updating of deteriorating systems with subset simulation

    Schneider, Ronald; Thöns, Sebastian; Straub, Daniel


    Bayesian updating of the system deterioration model. The updated system reliability is then obtained through coupling the updated deterioration model with a probabilistic structural model. The underlying high-dimensional structural reliability problems are solved using subset simulation, which...

  2. Suitability Analysis of Continuous-Use Reliability Growth Projection Models


    exists for all types, shapes, and sizes. The primary focus of this study is a comparison of reliability growth projection models designed for...requirements to use reliability growth models, recent studies have noted trends in reliability failures throughout the DoD. In [14] Dr. Michael a strict exponential distribution was used to stay within their assumptions. In reality, however, reliability growth models often must be used

  3. New Mathematical Derivations Applicable to Safety and Reliability Analysis

    Cooper, J.A.; Ferson, S.


    Boolean logic expressions are often derived in safety and reliability analysis. Since the values of the operands are rarely exact, accounting for uncertainty with the tightest justifiable bounds is important. Accurate determination of result bounds is difficult when the inputs have constraints. One example of a constraint is that an uncertain variable that appears multiple times in a Boolean expression must always have the same value, although the value cannot be exactly specified. A solution for this repeated variable problem is demonstrated for two Boolean classes. The classes, termed functions with unate variables (including, but not limited to unate functions), and exclusive-or functions, frequently appear in Boolean equations for uncertain outcomes portrayed by logic trees (event trees and fault trees).
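    For the unate case, the repeated-variable problem has a concrete flavor: naive interval arithmetic treats each occurrence of a variable as an independent quantity and over-widens the bounds, whereas for a function monotone (unate) in each argument the tight bounds are attained at corner points of the interval box. A small sketch; the two-event expression and the probability intervals below are invented for illustration, not taken from the paper.

```python
from itertools import product

def naive_interval_or(p, q):
    """Interval arithmetic on P(A or B) = p + (1-p)*q, treating the two
    occurrences of p as if they were independent quantities (over-wide)."""
    (plo, phi), (qlo, qhi) = p, q
    # (1-p) is evaluated as its own interval [1-phi, 1-plo], ignoring
    # that it must take the same value of p as the first term.
    lo = plo + (1 - phi) * qlo
    hi = phi + (1 - plo) * qhi
    return lo, hi

def tight_bounds_unate(f, intervals):
    """Tight bounds for a function monotone in each argument:
    extremes are attained at some corner of the interval box."""
    values = [f(*corner) for corner in product(*intervals)]
    return min(values), max(values)

f = lambda p, q: p + (1 - p) * q        # P(A or B), A and B independent
p_int, q_int = (0.2, 0.4), (0.1, 0.3)

naive = naive_interval_or(p_int, q_int)        # (0.26, 0.64)
tight = tight_bounds_unate(f, [p_int, q_int])  # (0.28, 0.58)
```

    The tight bounds sit strictly inside the naive ones, which is the gain the abstract describes from handling the repeated variable correctly.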

  4. Applicability of simplified human reliability analysis methods for severe accidents

    Boring, R.; St Germain, S. [Idaho National Lab., Idaho Falls, Idaho (United States); Banaseanu, G.; Chatri, H.; Akl, Y. [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)


    Most contemporary human reliability analysis (HRA) methods were created to analyse design-basis accidents at nuclear power plants. As part of a comprehensive expansion of risk assessments at many plants internationally, HRAs will begin considering severe accident scenarios. Severe accidents, while extremely rare, constitute high consequence events that significantly challenge successful operations and recovery. Challenges during severe accidents include degraded and hazardous operating conditions at the plant, the shift in control from the main control room to the technical support center, the unavailability of plant instrumentation, and the need to use different types of operating procedures. Such shifts in operations may also test key assumptions in existing HRA methods. This paper discusses key differences between design basis and severe accidents, reviews efforts to date to create customized HRA methods suitable for severe accidents, and recommends practices for adapting existing HRA methods that are already being used for HRAs at the plants. (author)

  5. Time-dependent reliability analysis and condition assessment of structures

    Ellingwood, B.R. [Johns Hopkins Univ., Baltimore, MD (United States)


    Structures generally play a passive role in assurance of safety in nuclear plant operation, but are important if the plant is to withstand the effect of extreme environmental or abnormal events. Relative to mechanical and electrical components, structural systems and components would be difficult and costly to replace. While the performance of steel or reinforced concrete structures in service generally has been very good, their strengths may deteriorate during an extended service life as a result of changes brought on by an aggressive environment, excessive loading, or accidental loading. Quantitative tools for condition assessment of aging structures can be developed using time-dependent structural reliability analysis methods. Such methods provide a framework for addressing the uncertainties attendant to aging in the decision process.
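    A minimal Monte Carlo sketch of such a time-dependent analysis follows: resistance degrades linearly in service while discrete load events arrive as a Poisson process. All distributions and rates below are hypothetical, chosen only to illustrate the framework, and are not drawn from any real plant data.

```python
import numpy as np

def failure_prob(service_years, n_sim=20_000):
    """Monte Carlo estimate of cumulative failure probability over a
    service life, with linearly degrading resistance and Poisson-arriving
    loads. All parameter values are illustrative."""
    rng = np.random.default_rng(42)
    failures = 0
    for _ in range(n_sim):
        r0 = rng.normal(100.0, 10.0)                 # initial resistance
        n_loads = rng.poisson(0.5 * service_years)   # ~0.5 events/year
        if n_loads == 0:
            continue
        times = rng.uniform(0.0, service_years, size=n_loads)
        loads = rng.lognormal(mean=3.5, sigma=0.3, size=n_loads)
        resistance = r0 * (1.0 - 0.005 * times)      # 0.5%/year degradation
        if np.any(loads > resistance):
            failures += 1
    return failures / n_sim

pf20 = failure_prob(20.0)   # failure probability over 20 years
pf40 = failure_prob(40.0)   # over 40 years: degradation + more load events
```

    The growth of the failure probability with service life is the quantity that drives condition-assessment decisions for aging structures.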

  6. Reliability analysis for the quench detection in the LHC machine

    Denz, R; Vergara-Fernández, A


    The Large Hadron Collider (LHC) will incorporate a large amount of superconducting elements that require protection in case of a quench. Key elements in the quench protection system are the electronic quench detectors. Their reliability will have an important impact on the down time as well as on the operational cost of the collider. The expected rates of both false and missed quenches have been computed for several redundant detection schemes. The developed model takes account of the maintainability of the system to optimise the frequency of foreseen checks, and evaluate their influence on the performance of different detection topologies. Given the uncertainty of the failure rate of the components combined with the LHC tunnel environment, the study has been completed with a sensitivity analysis of the results. The chosen detection scheme and the maintainability strategy for each detector family are given.
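    The trade-off between false and missed quenches across redundant detection schemes can be illustrated with simple k-out-of-n voting arithmetic. The per-channel probabilities below are invented for illustration and are not the LHC values.

```python
from math import comb

def k_out_of_n(p, k, n):
    """Probability that at least k of n independent channels are in a
    given state, each channel being in that state with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_miss, p_spurious = 1e-3, 1e-4   # hypothetical per-channel probabilities

# 1-out-of-2: any channel may trip, so a quench is missed only if BOTH
# channels miss it, but a single spurious signal already causes a false trip.
miss_1oo2 = p_miss ** 2
false_1oo2 = k_out_of_n(p_spurious, 1, 2)

# 2-out-of-3 voting: at least two channels must agree before tripping.
miss_2oo3 = k_out_of_n(p_miss, 2, 3)       # two or more channels miss
false_2oo3 = k_out_of_n(p_spurious, 2, 3)  # two or more spurious trips
```

    The numbers show the expected pattern: 2oo3 voting suppresses false trips by orders of magnitude at the cost of a slightly higher missed-quench probability, which is the kind of trade the paper's topology comparison quantifies.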

  7. A reliability analysis of the revised competitiveness index.

    Harris, Paul B; Houston, John M


    This study examined the reliability of the Revised Competitiveness Index by investigating the test-retest reliability, inter-item reliability, and factor structure of the measure based on a sample of 280 undergraduates (200 women, 80 men) ranging in age from 18 to 28 years (M = 20.1, SD = 2.1). The findings indicate that the Revised Competitiveness Index has high test-retest reliability, high inter-item reliability, and a stable factor structure. The results support the assertion that the Revised Competitiveness Index assesses competitiveness as a stable trait rather than a dynamic state.

  8. Analysis and Comparison of Workstations of the Mine-used Beam-tube Monitoring System and Their Development Trend

    房文杰; 李长录


    The paper introduces the role and functions of the workstation of a mine-used beam-tube monitoring system. Workstations of the gas-chromatograph type and the combined type are analyzed and compared in terms of hardware control and auxiliary software, leading to the conclusion that the gas-chromatograph-type workstation offers more functions, a higher degree of integration, and greater ease of use. Development trends for beam-tube monitoring system workstations are also outlined.

  9. Failure Analysis towards Reliable Performance of Aero-Engines

    T. Jayakumar


    Full Text Available Aero-engines are critical components whose reliable performance decides the primary safety of an aircraft/helicopter. This is met by a rigorous maintenance schedule with periodic inspection/nondestructive testing of various engine components. In spite of these measures, failures of aero-engines do occur rather frequently in comparison to failures of other components. Systematic failure analysis helps one to identify the root cause of the failure, thus enabling remedial measures to prevent recurrence of such failures. Turbine blades made of nickel or cobalt-based alloys are used in aero-engines. These blades are subjected to complex loading conditions at elevated temperatures. The main causes of failure of blades are attributed to creep, thermal fatigue and hot corrosion. Premature failure of blades in the combustion zone was reported in one of the aero-engines. The engine had both the compressor and the free-turbine on a common shaft. Detailed failure analysis revealed the presence of creep voids in the blades that failed. Failure of turbine blades was also detected in another aero-engine operating in a coastal environment. In this failure, the protective coating on the blades was cracked at many locations. Grain boundary spikes were observed at these locations. The primary cause of this failure was hot corrosion followed by creep damage.

  10. Multi-Unit Considerations for Human Reliability Analysis

    St. Germain, S.; Boring, R.; Banaseanu, G.; Akl, Y.; Chatri, H.


    This paper uses the insights from the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) methodology to help identify human actions currently modeled in the single unit PSA that may need to be modified to account for additional challenges imposed by a multi-unit accident as well as identify possible new human actions that might be modeled to more accurately characterize multi-unit risk. In identifying these potential human action impacts, the use of the SPAR-H strategy to include both errors in diagnosis and errors in action is considered as well as identifying characteristics of a multi-unit accident scenario that may impact the selection of the performance shaping factors (PSFs) used in SPAR-H. The lessons learned from the Fukushima Daiichi reactor accident will be addressed to further help identify areas where improved modeling may be required. While these multi-unit impacts may require modifications to a Level 1 PSA model, it is expected to have much more importance for Level 2 modeling. There is little currently written specifically about multi-unit HRA issues. A review of related published research will be presented. While this paper cannot answer all issues related to multi-unit HRA, it will hopefully serve as a starting point to generate discussion and spark additional ideas towards the proper treatment of HRA in a multi-unit PSA.

  11. Fuzzy Reliability Analysis of the Shaft of a Steam Turbine


    Field surveying shows that the failure of the steam turbine's coupling is due to fatigue caused by compound stress. Fuzzy mathematics was applied to obtain the membership function of the fatigue strength rule. A formula for the fuzzy reliability of the coupling was derived and a theory of the coupling's fuzzy reliability was set up. The calculation method of the fuzzy reliability is explained by an illustrative example.

  12. Reliability of videotaped observational gait analysis in patients with orthopedic impairments

    Brunnekreef, J.J.; Uden, C. van; Moorsel, S. van; Kooloos, J.G.M.


    BACKGROUND: In clinical practice, visual gait observation is often used to determine gait disorders and to evaluate treatment. Several reliability studies on observational gait analysis have been described in the literature and generally showed moderate reliability. However, patients with orthopedic

  13. Wind energy Computerized Maintenance Management System (CMMS) : data collection recommendations for reliability analysis.

    Peters, Valerie A.; Ogilvie, Alistair; Veers, Paul S.


    This report addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. The report provides a list of the data needed to support reliability and availability analysis, and gives specific recommendations for a Computerized Maintenance Management System (CMMS) to support automated analysis. This data collection recommendations report was written by Sandia National Laboratories to address the general data requirements for reliability analysis of fielded wind turbines. This report is intended to help the reader develop a basic understanding of what data are needed from a Computerized Maintenance Management System (CMMS) and other data systems, for reliability analysis. The report provides: (1) a list of the data needed to support reliability and availability analysis; and (2) specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment and a wider variety of analysis and reporting needs.

  14. Reliable Classification of Geologic Surfaces Using Texture Analysis

    Foil, G.; Howarth, D.; Abbey, W. J.; Bekker, D. L.; Castano, R.; Thompson, D. R.; Wagstaff, K.


    Communication delays and bandwidth constraints are major obstacles for remote exploration spacecraft. Due to such restrictions, spacecraft could make use of onboard science data analysis to maximize scientific gain, through capabilities such as the generation of bandwidth-efficient representative maps of scenes, autonomous instrument targeting to exploit targets of opportunity between communications, and downlink prioritization to ensure fast delivery of tactically-important data. Of particular importance to remote exploration is the precision of such methods and their ability to reliably reproduce consistent results in novel environments. Spacecraft resources are highly oversubscribed, so any onboard data analysis must provide a high degree of confidence in its assessment. The TextureCam project is constructing a "smart camera" that can analyze surface images to autonomously identify scientifically interesting targets and direct narrow field-of-view instruments. The TextureCam instrument incorporates onboard scene interpretation and mapping to assist these autonomous science activities. Computer vision algorithms map scenes such as those encountered during rover traverses. The approach, based on a machine learning strategy, trains a statistical model to recognize different geologic surface types and then classifies every pixel in a new scene according to these categories. We describe three methods for increasing the precision of the TextureCam instrument. The first uses ancillary data to segment challenging scenes into smaller regions having homogeneous properties. These subproblems are individually easier to solve, preventing uncertainty in one region from contaminating those that can be confidently classified. The second involves a Bayesian approach that maximizes the likelihood of correct classifications by abstaining from ambiguous ones. We evaluate these two techniques on a set of images acquired during field expeditions in the Mojave Desert. Finally, the

  15. Reliability Analysis and Modeling of ZigBee Networks

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adapted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. However, a division technique is applied to mesh networks because their complexity is higher than that of the other topologies. A mesh network using the division technique is classified into several non-reducible series systems and edge parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that reliability increases for mesh networks when the number of edges in parallel systems increases, while reliability drops quickly when the number of edges and the number of nodes increase for all three networks. Greater resource usage is another factor that decreases reliability. However, lower network reliability will occur due to
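    The series and parallel reliability-block-diagram formulas that the analysis relies on are compact. A sketch follows; the topologies and block reliabilities are illustrative stand-ins, not values from the paper.

```python
from math import prod

def series(reliabilities):
    """A series system works only if every block works."""
    return prod(reliabilities)

def parallel(reliabilities):
    """A parallel system fails only if every block fails."""
    return 1.0 - prod(1.0 - r for r in reliabilities)

# Star topology fragment: coordinator and end device in series.
r_star = series([0.95, 0.99])

# Mesh fragment decomposed (in the spirit of the paper's division
# approach) into a series chain with one redundant two-edge segment.
r_mesh = series([0.99, parallel([0.95, 0.95])])
```

    The mesh fragment beats the plain series chain because the redundant parallel segment masks single-edge failures, matching the paper's observation that reliability rises with the number of edges in parallel systems.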

  16. Effectiveness and reliability analysis of emergency measures for flood prevention

    Lendering, K.T.; Jonkman, S.N.; Kok, M.


    During flood events emergency measures are used to prevent breaches in flood defences. However, there is still limited insight in their reliability and effectiveness. The objective of this paper is to develop a method to determine the reliability and effectiveness of emergency measures for flood

  18. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

    Bell, B.J.; Swain, A.D.


    This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study.

  19. Wind turbine reliability : a database and analysis approach.

    Linsday, James (ARES Corporation); Briand, Daniel; Hill, Roger Ray; Stinebaugh, Jennifer A.; Benjamin, Allan S. (ARES Corporation)


    The US wind industry has experienced remarkable growth since the turn of the century. At the same time, the physical size and electrical generation capabilities of wind turbines have also experienced remarkable growth. As the market continues to expand, and as wind generation continues to gain a significant share of the generation portfolio, the reliability of wind turbine technology becomes increasingly important. This report addresses how operations and maintenance costs are related to unreliability - that is, the failures experienced by systems and components. Reliability tools are demonstrated, data needed to understand and catalog failure events is described, and practical wind turbine reliability models are illustrated, including preliminary results. This report also presents a continuing process of how to proceed with controlling industry requirements, needs, and expectations related to Reliability, Availability, Maintainability, and Safety. A simply stated goal of this process is to better understand and to improve the operable reliability of wind turbine installations.

  20. Advanced response surface method for mechanical reliability analysis

    LÜ Zhen-zhou; ZHAO Jie; YUE Zhu-feng


    Based on the classical response surface method (RSM), a novel RSM using improved experimental points (EPs) is presented for reliability analysis. Two novel points are included in the presented method. One is the use of linear interpolation, from which the total EPs for determining the RS are selected to be closer to the actual failure surface; the other is the application of sequential linear interpolation to control the distance between the surrounding EPs and the center EP, by which the presented method can ensure that the RS fits the actual failure surface in the region of maximum likelihood as the center EPs converge to the actual most probable point (MPP). Since the fitting precision of the RS to the actual failure surface in the vicinity of the MPP, which has a significant contribution to the probability of the failure surface being exceeded, is increased by the presented method, the precision of the failure probability calculated by the RS is increased as well. Numerical examples illustrate the accuracy and efficiency of the presented method.
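    The payoff of a response surface is that the failure probability can be estimated by Monte Carlo on a cheap surrogate instead of the expensive limit-state function. The sketch below uses a deliberately simple limit state and a fixed grid of experimental points rather than the paper's interpolation-based EP selection; the limit state and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def g(x1, x2):
    """Hypothetical limit state in standard normal space: failure when g < 0."""
    return 3.0 - x1 - x2

# Experimental points on a small grid around the origin, a stand-in for
# the paper's interpolation-based EP selection.
pts = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1],
                [1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = g(pts[:, 0], pts[:, 1])

# Quadratic response surface basis: 1, x1, x2, x1^2, x2^2, x1*x2.
def basis(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(pts), y, rcond=None)

# Monte Carlo on the cheap surrogate instead of the "expensive" g.
x = rng.standard_normal((200_000, 2))
g_hat = basis(x) @ coef
pf_rs = float(np.mean(g_hat < 0.0))
# Exact reference for this limit state: P(x1 + x2 > 3) = Phi(-3/sqrt(2)) ~ 0.017
```

    For this linear limit state the quadratic RS reproduces g exactly, so the surrogate estimate matches plain Monte Carlo; the paper's contribution is keeping the fit accurate near the MPP when g is genuinely nonlinear.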

  1. Extending Failure Modes and Effects Analysis Approach for Reliability Analysis at the Software Architecture Design Level

    Sozer, Hasan; Tekinerdogan, Bedir; Aksit, Mehmet; Lemos, de Rogerio; Gacek, Cristina


    Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.

  2. Spinal appearance questionnaire: factor analysis, scoring, reliability, and validity testing.

    Carreon, Leah Y; Sanders, James O; Polly, David W; Sucato, Daniel J; Parent, Stefan; Roy-Beaudry, Marjolaine; Hopkins, Jeffrey; McClung, Anna; Bratcher, Kelly R; Diamond, Beverly E


    Cross-sectional. Although the Spinal Appearance Questionnaire (SAQ) has been administered to a large sample of patients with adolescent idiopathic scoliosis (AIS) treated surgically, its psychometric properties have not been fully evaluated. This study presents the factor analysis and scoring of the SAQ and evaluates its psychometric properties. The SAQ and the Scoliosis Research Society-22 (SRS-22) were administered to AIS patients who were being observed, braced or scheduled for surgery. Standard demographic data and radiographic measures including Lenke type and curve magnitude were also collected. Of the 1802 patients, 83% were female, with a mean age of 14.8 years and mean initial Cobb angle of 55.8° (range, 0°-123°). Of the 32 items of the SAQ, 15 loaded on two factors with consistent and significant correlations across all Lenke types: an Appearance factor (items 1-10) and an Expectations factor (items 12-15). Responses are summed, giving a range of 5 to 50 for the Appearance domain and 5 to 20 for the Expectations domain. Cronbach's α was 0.88 for both domains and the Total score, with test-retest reliability of 0.81 for Appearance and 0.91 for Expectations. Correlations with major curve magnitude were higher for the SAQ Appearance and SAQ Total scores than for the SRS Appearance and SRS Total scores. The SAQ and SRS-22 scores were statistically significantly different in patients who were scheduled for surgery compared with those who were observed or braced. The SAQ is a valid measure of self-image in patients with AIS, with greater correlation to curve magnitude than the SRS Appearance and Total scores. It also discriminates between patients who require surgery and those who do not.
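    The Cronbach's α reported above follows the standard variance-decomposition formula, α = k/(k-1) · (1 - Σ item variances / total-score variance). A sketch with toy data; the Likert responses below are invented, not SAQ data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy 1-5 Likert responses: 6 respondents on 4 hypothetical items.
scores = [[4, 4, 5, 4],
          [2, 3, 2, 2],
          [5, 5, 4, 5],
          [3, 3, 3, 4],
          [1, 2, 1, 1],
          [4, 3, 4, 4]]
alpha = cronbach_alpha(scores)
```

    High inter-item agreement drives the total-score variance well above the sum of item variances, which is why consistent items yield an α near 1.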

  3. Reliability Modeling and Analysis of SCI Topological Network

    Hongzhe Xu


    Full Text Available The problem of reliability modeling of Scalable Coherent Interface (SCI) rings and topological networks is studied. The reliability models of three SCI rings are developed and the factors which influence the reliability of SCI rings are studied. By calculating the shortest path matrix and the path quantity matrix of different types of SCI network topology, the communication characteristics of SCI networks are obtained. For the node-damage and edge-damage situations, the survivability of the SCI topological network is studied.
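    The shortest-path and path-quantity matrices mentioned can be sketched with Floyd-Warshall and adjacency-matrix powers. The small unidirectional ring below stands in for an SCI ringlet; the size is illustrative.

```python
import numpy as np

def ring_adjacency(n):
    """Adjacency matrix of a unidirectional n-node ring (SCI ringlets
    are unidirectional; a bidirectional ring would add the transpose)."""
    a = np.zeros((n, n), dtype=int)
    for i in range(n):
        a[i, (i + 1) % n] = 1
    return a

def shortest_path_matrix(adj):
    """All-pairs hop counts via the Floyd-Warshall recurrence."""
    n = adj.shape[0]
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

adj = ring_adjacency(4)
dist = shortest_path_matrix(adj)          # hop counts between all node pairs
walks3 = np.linalg.matrix_power(adj, 3)   # counts of length-3 walks
```

    On the 4-node unidirectional ring, reaching the "upstream" neighbor takes three hops while the reverse direction takes one, which is exactly the asymmetry that makes the shortest-path matrix informative for SCI topology comparison.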

  4. System Reliability Analysis of Redundant Condition Monitoring Systems

    YI Pengxing; HU Youming; YANG Shuzi; WU Bo; CUI Feng


    The development and application of new reliability models and methods are presented to analyze the system reliability of complex condition monitoring systems. The methods include a method analyzing failure modes of a type of redundant condition monitoring systems (RCMS) by invoking failure tree model, Markov modeling techniques for analyzing system reliability of RCMS, and methods for estimating Markov model parameters. Furthermore, a computing case is investigated and many conclusions upon this case are summarized. Results show that the method proposed here is practical and valuable for designing condition monitoring systems and their maintenance.
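    The simplest instance of the Markov modeling technique described is a two-state up/down model, where steady-state availability follows from the balance equations. A sketch; the failure and repair rates below are hypothetical, not values from the paper.

```python
import numpy as np

def steady_state_availability(lam, mu):
    """Steady-state availability of a two-state Markov model:
    state 0 = up, state 1 = down; failure rate lam, repair rate mu.
    Solves pi @ Q = 0 together with sum(pi) = 1."""
    q = np.array([[-lam, lam],
                  [mu, -mu]])
    # Replace one redundant balance equation with the normalization row.
    a = np.vstack([q.T[:-1], np.ones(2)])
    b = np.array([0.0, 1.0])
    pi = np.linalg.solve(a, b)
    return pi[0]

avail = steady_state_availability(lam=0.01, mu=0.5)  # per-hour rates
# Closed form for this model: mu / (lam + mu)
```

    A redundant condition monitoring system adds more states (e.g., degraded-but-covered), but the same generator-matrix construction and solve carry over directly.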

  5. Accessible microscopy workstation for students and scientists with mobility impairments.

    Duerstock, Bradley S


    An integrated accessible microscopy workstation was designed and developed to allow persons with mobility impairments to control all aspects of light microscopy with minimal human assistance. This system, named AccessScope, is capable of performing brightfield and fluorescence microscopy, image analysis, and tissue morphometry requisite for undergraduate science courses to graduate-level research. An accessible microscope is necessary for students and scientists with mobility impairments to be able to use a microscope independently to better understand microscopical imaging concepts and cell biology. This knowledge is not always apparent by simply viewing a catalog of histological images. The ability to operate a microscope independently eliminates the need to hire an assistant or rely on a classmate and permits one to take practical laboratory examinations by oneself. Independent microscope handling is also crucial for graduate students and scientists with disabilities to perform scientific research. By making a personal computer as the user interface for controlling AccessScope functions, different upper limb mobility impairments could be accommodated by using various computer input devices and assistive technology software. Participants with a range of upper limb mobility impairments evaluated the prototype microscopy workstation. They were able to control all microscopy functions including loading different slides without assistance.

  6. Telerobotics Workstation (TRWS) for Deep Space Habitats

    Mittman, David S.; Howe, Alan S.; Tores, Recaredo J.; Rochlis, Jennifer L.; Hambuchen, Kimberly A.; Demel, Matthew; Chapman, Christopher C.


On medium- to long-duration human spaceflight missions, latency in communications from Earth could reduce efficiency or hinder local operations, control, and monitoring of the various mission vehicles and other elements. Regardless of the degree of autonomy of any one particular element, a means of monitoring and controlling the elements in real time based on mission needs would increase efficiency and response times for their operation. Since human crews would be present locally, a local means for monitoring and controlling all the various mission elements is needed, particularly for robotic elements where response to interesting scientific features in the environment might need near-instantaneous manipulation and control. One of the elements proposed for medium- and long-duration human spaceflight missions, the Deep Space Habitat (DSH), is intended to be used as a remote residence and working volume for human crews. The proposed solution for local monitoring and control would be to provide a workstation within the DSH where local crews can operate local vehicles and robotic elements with little to no latency. The Telerobotics Workstation (TRWS) is a multi-display computer workstation mounted in a dedicated location within the DSH that can be adjusted for a variety of configurations as required. From an Intra-Vehicular Activity (IVA) location, the TRWS uses the Robot Application Programming Interface Delegate (RAPID) control environment through the local network to remotely monitor and control vehicles and robotic assets located outside the pressurized volume in the immediate vicinity or at low-latency distances from the habitat. The multiple display area of the TRWS allows the crew to have numerous windows open with live video feeds, control windows, and data browsers, as well as local monitoring and control of the DSH and associated systems.

  7. Application of Reliability Analysis for Optimal Design of Monolithic Vertical Wall Breakwaters

    Burcharth, H. F.; Sørensen, John Dalsgaard; Christiani, E.


Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of some of the most important failure modes are described. The failure modes considered are sliding, and slip surface failure of a rubble mound and a clay foundation. Relevant design … variables are identified and a reliability-based design optimization procedure is formulated. Results from an illustrative example are given. …

  8. Reliability analysis of wind turbines exposed to dynamic loads

    Sørensen, John Dalsgaard


Wind turbines are exposed to highly dynamic loads that cause fatigue and extreme load effects which are subject to significant uncertainties. Further, reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive compared to other energy sources… Therefore the turbine components should be designed to have sufficient reliability with respect to both extreme and fatigue loads and also not be too costly (and safe). This paper presents models for uncertainty modeling and reliability assessment of especially the structural components such as tower, blades… the reliability of the structural components. Illustrative examples are presented considering uncertainty modeling and reliability assessment for structural wind turbine components exposed to extreme loads and fatigue, respectively…

  9. Operation of Reliability Analysis Center (FY85-87)


environmental conditions at the time of the reported failure as well as the exact nature of the failure. … The diskette format (FMDR-21A) contains … based upon the reliability and maintainability standards and tasks delineated in NAC R&M-STD-ROO010 (Reliability Program Requirements Selection). These … characteristics, environmental conditions at the time of the reported failure, and the exact nature of the failure, which has been categorized as follows

  10. Workstation Designs for a Cis-Lunar Deep Space Habitat

    Howe, A. Scott


Using the International Standard Payload Rack (ISPR) system, a suite of workstations required for deep space missions has been proposed to fill out habitation functions in an International Space Station (ISS) derived Cis-lunar Deep Space Habitat. This paper introduces the functional layout of the Cis-lunar habitat design, and describes conceptual designs for modular deployable work surfaces, the General Maintenance Workstation (GMWS), the In-Space Manufacturing Workstation (ISMW), the Intra-Vehicular Activity Telerobotics Work Station (IVA-TRWS), and the Galley/Wardroom.

  11. reliability reliability


The design variables for the design of the sla… The design… The presence of uncertainty in the analysis and de… of engineering… however, for certain complex elements, the methods… Standard BS EN 1990, CEN, European Committee for…


    Zhao Jingyi; Zhuoru; Wang Yiqun


According to the demand for high reliability of the primary cylinder of a hydraulic press, a reliability model of the primary cylinder is built after its reliability analysis. The stress of the primary cylinder is analyzed with the finite element software MARC, and the structural reliability of the cylinder is predicted based on a stress-strength model, providing a reference for the design.
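The stress-strength interference model mentioned above has a closed form when stress and strength are modeled as independent normal variables; the cylinder numbers below are hypothetical illustrations, not the paper's FEA results:

```python
from math import erf, sqrt

def stress_strength_reliability(mu_R, sd_R, mu_S, sd_S):
    """P(strength R > stress S) for independent normal R and S."""
    beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)   # reliability index
    # Standard normal CDF evaluated at beta, via the error function.
    return 0.5 * (1.0 + erf(beta / sqrt(2.0)))

# Hypothetical cylinder values (MPa): strength 520 +/- 30, FEA stress 380 +/- 25.
R = stress_strength_reliability(520, 30, 380, 25)
```

The reliability index here plays the same role as in structural reliability codes: larger separation between strength and stress distributions, in units of their combined scatter, means higher predicted reliability.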


    C.L. Liu; Z.Z. Lü; Y.L. Xu


Reliability analysis methods based on the linear damage accumulation law (LDAL) and the load-life interference model are studied in this paper. According to the equal probability rule, the equivalent loads are derived, and a reliability analysis method based on the load-life interference model and a recurrence formula is constructed. In conjunction with a finite element analysis (FEA) program, the reliability of an aero engine turbine disk under low cycle fatigue (LCF) conditions has been analyzed. The results show that the turbine disk is safe and that the above reliability analysis methods are feasible.

  14. Reliability Analysis for the Fatigue Limit State of the ASTRID Offshore Platform

    Vrouwenvelder, A.C.W.M.; Gostelie, E.M.


    A reliability analysis with respect to fatigue failure was performed for a concrete gravity platform designed for the Troll field. The reliability analysis was incorporated in the practical design-loop to gain more insight into the complex fatigue problem. In the analysis several parameters relating

  15. Methods for communication-network reliability analysis - Probabilistic graph reduction

    Shooman, Andrew M.; Kershenbaum, Aaron

The authors have designed and implemented a graph-reduction algorithm for computing the k-terminal reliability of an arbitrary network with possibly unreliable nodes. The two contributions of the present work are a version of the delta-y transformation for k-terminal reliability and an extension of Satyanarayana and Wood's polygon-to-chain transformations to handle graphs with imperfect vertices. The exact algorithm is faster than or equal to that of Satyanarayana and Wood, and to the simple algorithm without delta-y and polygon-to-chain transformations, for every problem considered. The exact algorithm runs in linear time on series-parallel graphs and is faster than the above-stated algorithms on large problems for which they run in exponential time. The approximate algorithms reduce the computation time for the network reliability problem by two to three orders of magnitude for large problems, while providing reasonably accurate answers in most cases.
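For context on what such reduction algorithms speed up: k-terminal reliability on a small graph can be computed exactly by brute-force enumeration of edge states. This is a baseline sketch (exponential in the number of edges, and nodes are treated as perfectly reliable here, unlike the paper's imperfect-vertex extension):

```python
from itertools import product

def _find(parent, x):
    # Path-halving union-find lookup.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def k_terminal_reliability(nodes, edges, terminals):
    """Exact 2^|E| state enumeration: probability that all terminal nodes
    end up in one connected component. edges is a list of (u, v, p_up)."""
    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        prob = 1.0
        parent = {v: v for v in nodes}
        for (u, v, p), up in zip(edges, states):
            prob *= p if up else (1.0 - p)
            if up:
                parent[_find(parent, u)] = _find(parent, v)
        if len({_find(parent, t) for t in terminals}) == 1:
            total += prob
    return total

# Two parallel a-b links, each available with probability 0.9:
r = k_terminal_reliability(["a", "b"], [("a", "b", 0.9), ("a", "b", 0.9)], ["a", "b"])
```

For this parallel pair the enumeration reproduces the series-parallel formula 1 - 0.1 * 0.1 = 0.99, which is exactly the kind of structure the polygon-to-chain and delta-y reductions collapse without enumerating states.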

  16. Reliability Analysis of Random Vibration Transmission Path Systems

    Wei Zhao


Vibration transmission path systems are generally composed of the vibration source, the vibration transfer path, and the vibration receiving structure. The transfer path is the medium of the vibration transmission. Moreover, the randomness of the transfer path greatly influences the transfer reliability. In this paper, based on matrix calculus, the generalized second moment technique, and stochastic finite element theory, an effective approach for the transfer reliability of vibration transfer path systems is provided. The transfer reliability of a vibration transfer path system with uncertain path parameters, including path mass and path stiffness, is analyzed theoretically and computed numerically, and the corresponding mathematical expressions are derived. This provides a theoretical foundation for the dynamic design of vibration systems in practical projects, so that most random path parameters can be considered when solving random problems for vibration transfer path systems, which can avoid system resonance failure.

  17. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Barberis Negra, Nicola; Bak-Jensen, Birgitte; Holmstrøm, O.


Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays … in a reliability model, and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. Results are used to analyse how each mentioned factor influences the assessment, and why and when they should be included in the model. …

  19. Technical information report: Plasma melter operation, reliability, and maintenance analysis

Hendrickson, D.W. [ed.]


This document provides a technical report on the operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A description is provided of a process that minimizes maintenance and downtime; it includes material and energy balances, equipment sizes and arrangement, startup/operation/maintenance/shutdown cycle descriptions, and the basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided, including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided, including a list of failure modes, responses, and consequences.

  20. Reliability modeling and analysis of smart power systems

    Karki, Rajesh; Verma, Ajit Kumar


The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes, creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important driver in mitigating these problems and requires considerable research acti

  1. Embedded mechatronic systems 1 analysis of failures, predictive reliability

    El Hami, Abdelkhalak


In operation, embedded mechatronic systems are stressed by loads of different causes: climate (temperature, humidity), vibration, electrical and electromagnetic. These stresses induce failure mechanisms in components, which should be identified and modeled for better control. AUDACE is a collaborative project of the cluster Mov'eo that addresses issues specific to the reliability of embedded mechatronic systems. AUDACE means analyzing the causes of failure of components of onboard mechatronic systems. The goal of the project is to optimize the design of mechatronic devices for reliability. The projec

  2. Architecture-Based Reliability Analysis of Web Services

    Rahmani, Cobra Mariam


    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  3. Windfarm generation assessment for reliability analysis of power systems

    Negra, N.B.; Holmstrøm, O.; Bak-Jensen, B.;


    Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays...

  4. Reliability analysis of common hazardous waste treatment processes

Waters, R.D. [Vanderbilt Univ., Nashville, TN (United States)]


    Five hazardous waste treatment processes are analyzed probabilistically using Monte Carlo simulation to elucidate the relationships between process safety factors and reliability levels. The treatment processes evaluated are packed tower aeration, reverse osmosis, activated sludge, upflow anaerobic sludge blanket, and activated carbon adsorption.
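A minimal sketch of the Monte Carlo idea described above, using a single hypothetical treatment stage with assumed influent and removal-efficiency distributions (not the paper's five process models): reliability is estimated as the fraction of simulated outcomes that meet a discharge limit.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical single treatment stage: effluent = influent * (1 - removal).
influent = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n)  # mg/L
removal = rng.beta(a=40, b=2, size=n)                           # ~95% mean removal
effluent = influent * (1.0 - removal)

limit = 10.0  # assumed discharge limit, mg/L
reliability = np.mean(effluent <= limit)          # P(effluent meets the limit)
safety_factor = limit / np.mean(effluent)         # mean-based safety factor
```

Repeating this with different assumed safety factors (e.g. by scaling the design loading) traces out the safety factor vs. reliability relationship that the paper elucidates for each process.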

  5. Fiber Access Networks: Reliability Analysis and Swedish Broadband Market

    Wosinska, Lena; Chen, Jiajia; Larsen, Claus Popp

Fiber access network architectures such as active optical networks (AONs) and passive optical networks (PONs) have been developed to support the growing bandwidth demand. Whereas particularly Swedish operators prefer AON, this may not be the case for operators in other countries. The choice depends on a combination of technical requirements, practical constraints, business models, and cost. Due to the increasing importance of reliable access to network services, connection availability is becoming one of the most crucial issues for access networks, which should be reflected in the network owner's architecture decision. In many cases protection against failures is realized by adding backup resources. However, there is a trade-off between the cost of protection and the level of service reliability, since improving reliability performance by duplication of network resources (and capital expenditures, CAPEX) may be too expensive. In this paper we present the evolution of fiber access networks and compare reliability performance in relation to investment and management cost for some representative cases. We consider both standard and novel architectures for deployment in both sparsely and densely populated areas. While some recent works have focused on PON protection schemes with reduced CAPEX, current and future effort should be put on minimizing the operational expenditures (OPEX) during the access network lifetime.

  6. Statistical Analysis of Human Reliability of Armored Equipment

    LIU Wei-ping; CAO Wei-guo; REN Jing


    Human errors of seven types of armored equipment, which occur during the course of field test, are statistically analyzed. The human error-to-armored equipment failure ratio is obtained. The causes of human errors are analyzed. The distribution law of human errors is acquired. The ratio of human errors and human reliability index are also calculated.

  7. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Bruce Weaver


Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
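Outside SPSS, the EM statistics themselves can be sketched in a few lines. The following Python routine (an illustration of the underlying algorithm, not the paper's macros) estimates the EM mean vector and covariance matrix from incomplete data under multivariate normality and a missing-at-random assumption:

```python
import numpy as np

def em_mean_cov(X, n_iter=100):
    """EM estimates of mean vector and covariance matrix for data with
    missing values coded as NaN, assuming multivariate normality and MAR."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    Xf = np.where(miss, np.nanmean(X, axis=0), X)   # initial mean fill-in
    mu, S = Xf.mean(axis=0), np.cov(Xf, rowvar=False, bias=True)
    for _ in range(n_iter):
        C = np.zeros((p, p))
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if not m.any():
                continue
            # E-step: conditional mean of the missing block given observed values.
            B = S[np.ix_(m, o)] @ np.linalg.inv(S[np.ix_(o, o)])
            Xf[i, m] = mu[m] + B @ (Xf[i, o] - mu[o])
            # Accumulate the conditional covariance of the missing block.
            C[np.ix_(m, m)] += S[np.ix_(m, m)] - B @ S[np.ix_(o, m)]
        # M-step: update parameters from the completed data.
        mu = Xf.mean(axis=0)
        d = Xf - mu
        S = (d.T @ d + C) / n
    return mu, S
```

The resulting covariance (or its correlation rescaling) is the kind of matrix Graham recommends feeding to a factor-analysis or reliability routine in place of listwise/pairwise-deleted input.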

  8. Post-deployment usability evaluation of a radiology workstation

    Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi A.; Oudkerk, Matthijs; Van Ooijen, Peter M. A.


    Objectives: To determine the number, nature and severity of usability issues radiologists encounter while using a commercially available radiology workstation in clinical practice, and to assess how well the results of a pre-deployment usability evaluation of this workstation generalize to clinical

  9. The biomechanical and physiological effect of two dynamic workstations

    Botter, J.; Burford, E.M.; Commissaris, D.; Könemann, R.; Mastrigt, S.H.V.; Ellegast, R.P.


    The aim of this research paper was to investigate the effect, both biomechanically and physiologically, of two dynamic workstations currently available on the commercial market. The dynamic workstations tested, namely the Treadmill Desk by LifeSpan and the LifeBalance Station by RightAngle, were com

  10. ANL statement of site strategy for computing workstations

Fenske, K.R. (ed.); Boxberger, L.M.; Amiot, L.W.; Bretscher, M.E.; Engert, D.E.; Moszur, F.M.; Mueller, C.J.; O'Brien, D.E.; Schlesselman, C.G.; Troyer, L.J.


This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85) and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstation acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the Laboratory. The major system components of this hierarchical strategy are: supercomputers, parallel computers, centralized general-purpose computers, distributed multipurpose minicomputers, and computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  11. Ergonomic assessment of selected workstations on a merchant ship.

    Krystosik-Gromadzińska, Agata


    This study describes some key ergonomic factors within the engine room, navigation bridge and other locations of a merchant ship. Ergonomic assessments were carried out on a crew of a merchant ship. The study examines the importance of factors such as noise, vibration, heat radiation (in machinery areas), psychological stress and ergonomics of the physical arrangement of the navigation bridge. It also addresses the effect of working in confined areas for a long duration and the need to process large amounts of data, decision-making and the influence of specific operating conditions in different areas of a ship. This study includes analysis of workstations, working methods and the burden of environmental factors as well as a discussion of specific marine environmental conditions such as confined working and leisure spaces, long-term family and sociocultural separation, frequent changes in climate and time zones, and temporary physical overload and long-term psychological burdens.

  12. C3 generic workstation: Performance metrics and applications

    Eddy, Douglas R.


    The large number of integrated dependent measures available on a command, control, and communications (C3) generic workstation under development are described. In this system, embedded communications tasks will manipulate workload to assess the effects of performance-enhancing drugs (sleep aids and decongestants), work/rest cycles, biocybernetics, and decision support systems on performance. Task performance accuracy and latency will be event coded for correlation with other measures of voice stress and physiological functioning. Sessions will be videotaped to score non-verbal communications. Physiological recordings include spectral analysis of EEG, ECG, vagal tone, and EOG. Subjective measurements include SWAT, fatigue, POMS and specialized self-report scales. The system will be used primarily to evaluate the effects on performance of drugs, work/rest cycles, and biocybernetic concepts. Performance assessment algorithms will also be developed, including those used with small teams. This system provides a tool for integrating and synchronizing behavioral and psychophysiological measures in a complex decision-making environment.

  13. Preventive Replacement Decisions for Dragline Components Using Reliability Analysis

    Nuray Demirel


Reliability-based maintenance policies allow qualitative and quantitative evaluation of system downtimes by revealing the main causes of breakdowns and discussing the preventive activities required against failures. Application of preventive maintenance is especially important for mining machinery, since production is highly affected by machinery breakdowns. Overburden stripping operations are one of the integral parts of surface coal mine production. Draglines are extensively utilized in overburden stripping operations, and they achieve earthmoving activities with bucket capacities up to 168 m3. The massive structure and operational severity of these machines increase the importance of performance awareness for individual working components. Research on draglines is rarely observed in the literature, and maintenance studies for these earthmovers have been generally ignored. On this basis, this paper offers a comprehensive reliability assessment for two draglines currently operating in the Tunçbilek coal mine and discusses preventive replacement for wear-out components of the draglines considering cost factors.

  14. Reliability Analysis and Standardization of Spacecraft Command Generation Processes

    Meshkat, Leila; Grenander, Sven; Evensen, Ken


- In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes.
- The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions.
- Applicable human reliability metrics for performing these atomic-level tasks are available.
- The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated.
- The PRA models are executed using data from human reliability data banks.
- The Periodic Table is related to the PRA models via Fault Links.

  15. High-resolution workstations for primary and secondary radiology readings

    Taira, Ricky K.; Simons, Margaret A.; Razavi, Mahmood; Kangarloo, Hooshang; Boechat, Maria I.; Hall, Theodore R.; Chuang, Keh-Shih; Huang, H. K.; Eldredge, Sandra L.


We have implemented two high-resolution workstations within our pediatric radiology PACS module: a two-monitor 2K x 2K station and a six-monitor 1K x 1K station. The 2K x 2K workstation is under evaluation for primary reading of pediatric radiographs from a computed radiography unit. System implementation and evaluation methods are described. Operational efficiency measures of both film and digital systems are reported. This study is our first attempt to integrate a primary viewing station into a busy clinical environment. The 1K x 1K workstation is available 24 hours a day, 7 days a week for fast reviews by referring physicians. Images from a computed radiography system are available at the workstation in about 8 minutes. A digital voice reporting system is being developed to communicate radiology reports from the 2K x 2K workstation to the 1K x 1K secondary review station.

  16. Analysis on Operation Reliability of Generating Units in 2005

    Zuo Xiaowen; Chu Xue


The weighted average equivalent availability factor of thermal power units in 2005 was 92.34%, an increase of 0.64 percentage points as compared to that in 2004. The average equivalent availability factor in 2005 was 92.22%, a decrease of 0.95 percentage points as compared to that in 2004. The nationwide operation reliability of generating units in 2005 was analyzed completely in this paper.

  17. Reliability Analysis for Tunnel Supports System by Using Finite Element Method

    E. Bukaçi


Reliability analysis is a method that can be used in almost any geotechnical engineering problem. Using this method requires knowledge of parameter uncertainties, which can be expressed by their standard deviation values. By performing reliability analysis of tunnel support design, a range of safety factors can be obtained, and from them the probability of failure can be calculated. The problem becomes more complex when the analysis is performed with numerical methods such as the Finite Element Method. This paper gives a solution for how reliability analysis can be performed in the design of tunnel supports, using the Point Estimate Method to calculate the reliability index. As a case study, one of the energy tunnels at the Fan hydropower plant in Rrëshen, Albania is chosen. As results, values of the factor of safety and the probability of failure are calculated. Some suggestions on using reliability analysis with numerical methods are also given.
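As a sketch of the kind of calculation described above (not the paper's finite element model), Rosenblueth's two-point estimate method turns a handful of deterministic model runs into an approximate reliability index. The toy factor-of-safety model and its input statistics below are hypothetical stand-ins for FEM evaluations:

```python
from itertools import product
from math import erf, sqrt

def pem_reliability_index(model, means, sds):
    """Rosenblueth's two-point estimate method for uncorrelated inputs:
    evaluate the model at the 2^n corner points mu_i +/- sigma_i,
    each with weight 1/2^n, to estimate mean and std of the output."""
    n = len(means)
    w = 1.0 / 2**n
    m1 = m2 = 0.0
    for signs in product([-1.0, 1.0], repeat=n):
        fs = model([m + s * sd for m, s, sd in zip(means, signs, sds)])
        m1 += w * fs
        m2 += w * fs * fs
    sd_fs = sqrt(max(m2 - m1 * m1, 0.0))
    beta = (m1 - 1.0) / sd_fs              # failure when factor of safety < 1
    pf = 0.5 * (1.0 - erf(beta / sqrt(2.0)))   # P(FS < 1) under normality
    return beta, pf

# Toy stand-in for an FEM run: FS proportional to strength over load.
fs_model = lambda x: x[0] / x[1]
beta, pf = pem_reliability_index(fs_model, means=[2.0, 1.0], sds=[0.2, 0.1])
```

With two random inputs this needs only four model runs, which is why the method pairs well with expensive numerical analyses.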

  18. Reliability importance analysis of Markovian systems at steady state using perturbation analysis

Do Van, Phuc; Barros, Anne; Berenguer, Christophe [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)]


Sensitivity analysis has been primarily defined for static systems, i.e. systems described by combinatorial reliability models (fault or event trees). Several structural and probabilistic measures have been proposed to assess component importance. For dynamic systems including inter-component and functional dependencies (cold spare, shared load, shared resources, etc.), described by Markov models or, more generally, by discrete event dynamic system models, the problem of sensitivity analysis remains widely open. In this paper, the perturbation method is used to estimate an importance factor, called the multi-directional sensitivity measure, in the framework of Markovian systems. Some numerical examples are introduced to show why this method offers a promising tool for steady-state sensitivity analysis of Markov processes in reliability studies.
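To give a numeric feel for steady-state sensitivity in a Markov reliability model, here is a direct-differentiation sketch (not the paper's perturbation estimator; the two-state machine and its rates are assumed): differentiating pi Q = 0 and sum(pi) = 1 with respect to a rate parameter yields a linear system for d(pi)/d(theta).

```python
import numpy as np

def steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def steady_state_sensitivity(Q, dQ):
    """d(pi)/d(theta) from differentiating pi Q = 0, sum(pi) = 1:
    (dpi) Q = -pi (dQ)  subject to  sum(dpi) = 0."""
    n = Q.shape[0]
    pi = steady_state(Q)
    A = np.vstack([Q.T, np.ones(n)])
    b = np.append(-(pi @ dQ), 0.0)
    return pi, np.linalg.lstsq(A, b, rcond=None)[0]

# Two-state machine: up -> down at rate lam, down -> up at rate mu.
lam, mu = 0.1, 1.0
Q = np.array([[-lam, lam], [mu, -mu]])
dQ_dmu = np.array([[0.0, 0.0], [1.0, -1.0]])   # derivative of Q w.r.t. mu
pi, dpi = steady_state_sensitivity(Q, dQ_dmu)
# Analytically: pi[0] = mu/(lam+mu) and d(pi[0])/d(mu) = lam/(lam+mu)^2.
```

The sign and magnitude of each component of dpi indicate how much improving (or degrading) that transition rate moves the steady-state availability, which is the basic use of such importance measures.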

  19. The Next Generation BioPhotonics Workstation

    Bañas, Andrew Rafael

…light has allowed far more interactive applications such as delivering tailored and localized optical landscapes for stimulating, photo-activating or performing micro-surgery on cells or tissues. In addition to applications possible with light's interaction on biological samples, light's ability … to manipulate matter, i.e. optical trapping, brings in a wider tool set in microbiological experiments. Fabricated microscopic tools, such as those constructed using two-photon polymerization and other recent nano- and microfabrication processes, in turn allow more complex interactions at the cellular level. … It is therefore important to study efficient beam shaping methods, their use in optical trapping and manipulation, and the design of "microtools" for specific microbiological applications. Such studies are performed in our BioPhotonics Workstation (BWS). Hence the further development of the BWS is also crucial…

  20. Digital workstation for Venus topographic mapping

    Poehler, Paul; Haag, Nils N.; Maupin, Jerry A.; Howington-Kraus, Annie E.; Wu, Sherman S.


A digital workstation was developed and is currently installed at the U.S. Geological Survey (USGS) in Flagstaff, Arizona to be used for Venus topographic mapping. The system is based on a mapping and geocoding image correlation (GIS MAGIC) system developed by Science Applications International Corporation (SAIC) for the creation of precisely geocoded imagery databases for both optical and synthetic aperture radar (SAR) imagery. A multitude of data from various sources has been processed, including conventional aerial photographs, airborne and orbital SAR, and SPOT imagery. This paper covers the GIS MAGIC development history, hardware/software features and capabilities. Also covered are the types of modifications required to accommodate Venus radar data and the results which can be achieved using the GIS MAGIC System.

  1. Reliability Analysis of Bearing Capacity of Large-Diameter Piles under Osterberg Test

    Lei Nie


This study gives the reliability analysis of the bearing capacity of large-diameter piles under the Osterberg test. The limit state equation of dimensionless random variables is utilized in the reliability analysis of the vertical bearing capacity of large-diameter piles based on Osterberg loading tests, and the reliability index and the resistance partial coefficient under the current specifications are calculated using a calibration method. The results show that the reliability index of large-diameter piles is correlated with the load effect ratio and is smaller than that of ordinary piles, and that a resistance partial coefficient of 1.53 is appropriate in the design of large-diameter piles.

  2. Analysis of System Reliability in Manufacturing Cell Based on Triangular Fuzzy Number

    ZHANG Caibo; HAN Botang; SUN Changsen; XU Chunjie


    Test data and field data are scarce during the design stage of a manufacturing cell system, which makes its reliability difficult to assess. In order to deal with the deficient data and the uncertainty arising from analysis and judgment, this paper discusses a method for studying the reliability of a manufacturing cell system through fuzzy fault tree analysis based on triangular fuzzy numbers. A calculation case indicates that the method is of great significance for ascertaining reliability indices and for establishing maintenance and upkeep strategies for manufacturing cell systems.
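
    In the spirit of the paper's approach, fault tree gate algebra can be applied bound-wise to triangular fuzzy probabilities given as (low, mode, high) triples. This bound-wise product/co-product is a common approximation for small probabilities; the event names and numbers are invented for illustration:

```python
# A triangular fuzzy probability is a (low, mode, high) triple.
def tfn_and(a, b):
    """AND gate: bound-wise product of two triangular fuzzy probabilities."""
    return tuple(x * y for x, y in zip(a, b))

def tfn_or(a, b):
    """OR gate: bound-wise 1 - (1 - a)(1 - b)."""
    return tuple(1.0 - (1.0 - x) * (1.0 - y) for x, y in zip(a, b))

# Two hypothetical basic events of a manufacturing cell fault tree.
p_tool_wear = (0.01, 0.02, 0.04)
p_fixture = (0.005, 0.01, 0.02)
p_top = tfn_or(p_tool_wear, p_fixture)  # cell fails if either event occurs
```

    The top-event result remains a triangular triple, so the spread of the expert estimates propagates to the system-level index.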

  3. A survey on reliability and safety analysis techniques of robot systems in nuclear power plants

    Eom, H.S.; Kim, J.H.; Lee, J.C.; Choi, Y.R.; Moon, S.S


    Reliability and safety analysis techniques were surveyed for the purpose of overall quality improvement of the reactor inspection system under development in our current project. The contents of this report are: 1) a survey of reliability and safety analysis techniques - the reviewed techniques are generally accepted in many industries, including the nuclear industry, and we selected a few that are suitable for our robot system: fault tree analysis, failure mode and effect analysis, reliability block diagrams, Markov models, the combinational method, and simulation; 2) a survey of the characteristics of robot systems that distinguish them from other systems and that are important to the analysis; 3) a survey of the nuclear environmental factors that affect the reliability and safety analysis of robot systems; 4) a collection of case studies of robot reliability and safety analyses performed in foreign countries. The results of this survey will be applied to improving the reliability and safety of our robot system and will also be used for the formal qualification and certification of our reactor inspection system.

  4. The Configuration of a Medical Imaging Data Analysis Workstation Based on CUDA

    王飞; 高嵩


    Objective: To implement CUDA (Compute Unified Device Architecture) on conventional personal computers in order to improve their computing power and enable them to take on the task of processing large volumes of medical image data. Methods: CUDA is a computing architecture developed by the NVIDIA Corporation that brings multithreaded parallel computation to many fields beyond image display and greatly increases the computing power of a personal computer. We installed an NVIDIA display card supporting CUDA in a computer running Windows XP, then downloaded and installed the CUDA driver, toolkit, SDK, Visual Studio, and the CUDA VS wizard in order, letting the CPU work together with the GPU and parallelizing the parts of the program that can be parallelized. These steps allow operations that would otherwise be repeated serially on the CPU to be executed once with many threads on the GPU. Results: CUDA can be installed on any computer with an NVIDIA display card that supports it, although the configuration is complex. After installation and testing, the large number of stream processors in the GPU can be used for medical imaging data analysis. Conclusions: A conventional personal computer can serve as a cost-effective parallel medical imaging data analysis workstation once CUDA is implemented on it.

  5. Acquisition and statistical analysis of reliability data for I and C parts in plant protection system

    Lim, T. J.; Byun, S. S.; Han, S. H.; Lee, H. J.; Lim, J. S.; Oh, S. J.; Park, K. Y.; Song, H. S. [Soongsil Univ., Seoul (Korea)


    This project was performed in order to construct I and C part reliability databases for detailed analysis of the plant protection system and to develop a methodology for analysing trip set point drifts. A reliability database for the I and C parts of the plant protection system is required to perform the detailed analysis. First, we developed an electronic part reliability prediction code based on MIL-HDBK-217F. We then collected generic reliability data for the I and C parts in the plant protection system, developed a statistical analysis procedure to process the data, and constructed the generic reliability database. We also collected plant-specific reliability data for the I and C parts in the plant protection systems of the YGN 3,4 and UCN 3,4 units; the plant-specific reliability database for I and C parts was developed by a Bayesian procedure. We also developed a statistical analysis procedure for set point drift and performed an analysis of drift effects on the trip set point. The reliability database for the PPS I and C parts provides the basis for the detailed analysis. The safety of the KSNP and succeeding NPPs can be demonstrated by reducing the uncertainty of the PSA, and economic and efficient operation of NPPs is possible by optimizing the test period to reduce the utility's burden. 14 refs., 215 figs., 137 tabs. (Author)
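
    The Bayesian step for plant-specific data can be sketched with the standard gamma-Poisson conjugate update; the prior parameters and evidence below are assumed, not the project's values:

```python
# Prior failure rate ~ Gamma(alpha, beta), with beta in hours; observing
# n failures over T component-hours gives posterior Gamma(alpha + n, beta + T).
def update_failure_rate(alpha, beta, n_failures, hours):
    a_post, b_post = alpha + n_failures, beta + hours
    return a_post, b_post, a_post / b_post  # posterior mean rate per hour

# Generic prior (e.g. distilled from handbook data) plus plant-specific evidence.
a_post, b_post, mean_rate = update_failure_rate(
    alpha=0.5, beta=1.0e5, n_failures=2, hours=4.0e5)
```

    The posterior mean shifts from the generic handbook rate toward the observed plant-specific rate as operating hours accumulate.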

  6. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    Taheriyoun, Masoud; Moradinejad, Saber


    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, reliability was studied at the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analysis used the minimal cut set method (based on numerical probabilities) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence; the mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
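
    A toy version of the two quantification methods named above: minimal cut sets with the rare-event approximation, cross-checked by Monte Carlo. The event names and probabilities are invented, not the Tehran plant data:

```python
import random

# Basic events and illustrative annual probabilities.
p = {"operator_error": 0.05, "pump_fail": 0.02,
     "sensor_fail": 0.01, "design_flaw": 0.001}
# Top event (effluent BOD violation) occurs if every event of some cut set occurs.
cut_sets = [{"operator_error", "pump_fail"}, {"sensor_fail"}, {"design_flaw"}]

def cutset_prob(cs):
    prob = 1.0
    for e in cs:
        prob *= p[e]
    return prob

# Rare-event approximation: P(top) ~ sum over minimal cut sets.
approx = sum(cutset_prob(cs) for cs in cut_sets)

# Monte Carlo check: sample basic-event states, test the cut sets.
random.seed(1)
N = 200_000
hits = 0
for _ in range(N):
    state = {e: random.random() < pe for e, pe in p.items()}
    hits += any(all(state[e] for e in cs) for cs in cut_sets)
mc = hits / N
```

    For rare events the two estimates agree closely; the cut-set sum slightly overcounts outcomes where several cut sets occur at once.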

  7. Non-probabilistic fuzzy reliability analysis of pile foundation stability by interval theory


    Randomness and fuzziness are among the attributes of the influential factors in stability assessment of pile foundations. According to these two characteristics, the triangular fuzzy number approach was introduced to determine the probability distribution functions of the mechanical parameters. The performance function for reliability analysis was then constructed based on a study of the bearing mechanism of pile foundations, and a way to calculate interval values of the performance function was developed using an improved interval-truncation approach and the operation rules of interval numbers. The non-probabilistic fuzzy reliability analysis method was then applied to assess a pile foundation, yielding a method for non-probabilistic fuzzy reliability analysis of pile foundation stability by interval theory. Finally, the probability distribution curve of the non-probabilistic fuzzy reliability indexes of a practical pile foundation was obtained. Its failure possibility is 0.91%, which shows that the pile foundation is stable and reliable.

  8. Structural Reliability Analysis for Implicit Performance with Legendre Orthogonal Neural Network Method

    Lirong Sha; Tongyu Wang


    In order to evaluate the failure probability of a complicated structure, the structural responses usually need to be estimated by numerical analysis methods such as the finite element method (FEM). The response surface method (RSM) can be used to reduce the computational effort required for reliability analysis when the performance functions are implicit. However, the conventional RSM is time-consuming or cumbersome if the number of random variables is large. This paper proposes a Legendre orthogonal neural network (LONN)-based RSM to estimate structural reliability. In this method, the relationship between the random variables and structural responses is established by a LONN model. The LONN model is then connected to a reliability analysis method, i.e. the first-order reliability method (FORM), to calculate the failure probability of the structure. Numerical examples show that the proposed approach is applicable to structural reliability analysis, including structures with implicit performance functions.

  9. Reliability and Sensitivity Analysis of Cast Iron Water Pipes for Agricultural Food Irrigation

    Yanling Ni


    Full Text Available This study aims to investigate the reliability and sensitivity of cast iron water pipes for agricultural food irrigation. The Monte Carlo simulation method is used for fracture assessment and reliability analysis of cast iron pipes for agricultural food irrigation. Fracture toughness is considered as the limit state function for corrosion-affected cast iron pipes. The influence of failure mode on the probability of pipe failure is then discussed. A sensitivity analysis is also carried out to show the effect of changing basic parameters on the reliability and lifetime of the pipe. The results show that the applied methodology can consider different random variables in estimating the lifetime of the pipe and can also provide scientific guidance for rehabilitation and maintenance plans for agricultural food irrigation. In addition, the results of the failure and reliability analysis in this study can be useful for the design of more reliable new pipeline systems for agricultural food irrigation.

  10. Reliability of the ATD Angle in Dermatoglyphic Analysis.

    Brunson, Emily K; Hohnan, Darryl J; Giovas, Christina M


    The "ATD" angle is a dermatoglyphic trait formed by drawing lines between the triradii below the first and last digits and the most proximal triradius on the hypothenar region of the palm. This trait has been widely used in dermatoglyphic studies, but several researchers have questioned its utility, specifically whether or not it can be measured reliably. The purpose of this research was to examine the measurement reliability of this trait. Finger and palm prints were taken using the carbon paper and tape method from the right and left hands of 100 individuals. Each "ATD" angle was read twice, at different times, by Reader A, using a goniometer and a magnifying glass, and three times by Reader B, using Adobe Photoshop. Intraclass correlation coefficients were estimated for the intra- and inter-reader measurements of the "ATD" angles. Reader A was able to quantify ATD angles on 149 out of 200 prints (74.5%), and Reader B on 179 out of 200 prints (89.5%). The readers agreed on whether an angle existed on a print 89.8% of the time for the right hand and 78.0% for the left. Intra-reader correlations were 0.97 or greater for both readers. Inter-reader correlations for "ATD" angles measured by both readers ranged from 0.92 to 0.96. These results suggest that the "ATD" angle can be measured reliably, and further imply that measurement using a software program may provide an advantage over other methods.
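
    The paper does not state its exact ICC formula; a common choice for this design is the two-way random, single-measure ICC(2,1), computed from the ANOVA mean squares. The angle readings below are hypothetical:

```python
import numpy as np

def icc_2_1(x):
    """Two-way random, single-measure ICC(2,1) for an (n subjects x k raters) array."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # between raters
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                            # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical repeated "ATD" angle readings (degrees) by two readers.
angles = [[41, 42], [45, 44], [38, 39], [50, 51], [43, 43]]
icc = icc_2_1(angles)
```

    Values near 1 indicate that between-subject differences dominate reader disagreement, which is what the reported 0.92-0.97 coefficients reflect.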

  11. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Barberis Negra, Nicola; Bak-Jensen, Birgitte; Holmstrøm, O.


    a significant role in this assessment, and different models have been created for it, but a representation that includes all of them has not yet been developed. This paper deals with this issue. First, a list of nine influencing factors is presented and discussed. Secondly, these factors are included...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. The results are used to analyse how each factor influences the assessment, and why and when each should be included in the model....

  12. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Negra, Nicola Barberis; Holmstrøm, Ole; Bak-Jensen, Birgitte


    a significant role in this assessment, and different models have been created for it, but a representation that includes all of them has not yet been developed. This paper deals with this issue. First, a list of nine influencing factors is presented and discussed. Secondly, these factors are included...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. The results are used to analyse how each factor influences the assessment, and why and when each should be included in the model....

  13. Reliability Analysis of Timber Structures through NDT Data Upgrading

    Sousa, Hélder; Sørensen, John Dalsgaard; Kirkegaard, Poul Henning

    The first part of this document presents, in chapter 2, a description of timber characteristics and commonly used NDT and MDT for timber elements. Stochastic models for timber properties and damage accumulation models are also referred to. According to timber's properties a framework is proposed...... for reliability calculation. In chapter 4, updating methods are conceptualized and defined. Special attention is drawn to Bayesian methods and their implementation. A topic on updating based on inspection of deterioration is also provided. State-of-the-art definitions and proposed measurement indices......

  14. A disjoint algorithm for seismic reliability analysis of lifeline networks


    The algorithm is based on constructing a disjoint set of the minimal paths in a network system. In this paper, cubic notation was used to describe the logic function of a network in a well-balanced state, and the sharp-product operation was then used to construct the disjoint minimal path set of the network. A computer program has been developed, and when combined with decomposition technology, the reliability of a general lifeline network can be effectively and automatically calculated.
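
    The sharp-product construction yields mutually disjoint path terms whose probabilities can simply be summed. For a small network the same system reliability can be cross-checked by inclusion-exclusion over the minimal paths, as in this sketch (the network and edge probabilities are illustrative):

```python
from itertools import combinations

# The system works if all edges of at least one minimal path work.
def network_reliability(paths, p_edge):
    """Inclusion-exclusion over minimal paths (independent edges)."""
    total = 0.0
    for r in range(1, len(paths) + 1):
        for combo in combinations(paths, r):
            union = set().union(*combo)   # edges needed by this combination
            term = 1.0
            for e in union:
                term *= p_edge[e]
            total += (-1) ** (r + 1) * term
    return total

# Two parallel two-edge routes from source to sink in a lifeline network.
p = {"a": 0.9, "b": 0.9, "c": 0.8, "d": 0.8}
rel = network_reliability([{"a", "b"}, {"c", "d"}], p)
```

    Inclusion-exclusion has 2^m terms for m minimal paths, which is why a disjoint sum-of-products construction like the paper's scales better.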

  15. Reliability and maintenance analysis of the CERN PS booster

    Staff, P S B


    Because the PS Booster Synchrotron is a complex accelerator with four superposed rings and substantial additional equipment for beam splitting and recombination, doubts were expressed at the time of project authorization as to its likely operational reliability. For 1975 and 1976, the average downtime was 3.2% (at least one ring off) or 1.5% (all four rings off). The items analysed are: operational record, design features, maintenance, spare parts policy, operating temperature, effects of thunderstorms, fault diagnostics, role of operations staff, and action by experts. (15 refs).

  16. Reliability analysis of the bulk cargo loading system including dependent components

    Blokus-Roszkowska, Agnieszka


    In the paper an innovative approach to the reliability analysis of multistate series-parallel systems assuming component dependency is presented. The reliability function of a multistate series system with components dependent according to the local load sharing rule is determined. Linking these results for series systems with results for parallel systems with independent components, we obtain the reliability function of a multistate series-parallel system assuming dependence of components' departures from the reliability state subsets within each series subsystem and independence between these subsystems. As a particular case, the reliability function of a multistate series-parallel system composed of dependent components having exponential reliability functions is determined. The theoretical results are applied to the reliability evaluation of a bulk cargo transportation system, whose main task is to load bulk cargo on board ships. The reliability function and other reliability characteristics of the loading system are determined for the case where its components have exponential reliability functions with interdependent departure rates from the subsets of their reliability states. Finally, the obtained results are compared with results for the bulk cargo transportation system composed of independent components.
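
    As a baseline against which the dependent-component results can be compared, the reliability of a series-parallel system with independent exponential components can be sketched as follows (the structure and rates are invented for illustration):

```python
import math

# Series-parallel system: subsystems in series, each subsystem a set of
# parallel components with exponential lifetimes (independent case).
def subsystem_rel(t, rates):
    """Parallel subsystem: fails only if all components have failed by t."""
    return 1.0 - math.prod(1.0 - math.exp(-lam * t) for lam in rates)

def system_rel(t, subsystems):
    """Series system: every subsystem must still work at t."""
    return math.prod(subsystem_rel(t, rates) for rates in subsystems)

# Two redundant loading lines feeding a single conveyor (rates per hour).
r = system_rel(t=100.0, subsystems=[[1e-3, 1e-3], [2e-4]])
```

    Under load sharing, the surviving component in a parallel pair fails faster than the independent model predicts, so this baseline overestimates reliability.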

  17. Using a Hybrid Cost-FMEA Analysis for Wind Turbine Reliability Analysis

    Nacef Tazi


    Full Text Available Failure mode and effects analysis (FMEA) has been proven to be an effective methodology for improving system design reliability. However, the standard approach reveals some weaknesses when applied to wind turbine systems. The conventional criticality assessment method has been criticized as having many limitations, such as the weighting of the severity and detection factors. In this paper, we aim to overcome these drawbacks and develop a hybrid cost-FMEA by integrating cost factors into the criticality assessment; these costs range from replacement costs to expected failure costs. A quantitative comparative study is then carried out to point out the average failure rate, main causes of failure, expected failure costs, and failure detection techniques. A dedicated reliability analysis of the gearbox and rotor blades is presented.
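
    The cost-based criticality idea can be sketched as ranking failure modes by expected failure cost instead of a severity-and-detection weighting. The failure modes, rates, and costs below are illustrative, not the paper's survey data:

```python
# Expected annual cost = failure rate x (replacement + downtime) cost.
modes = [
    {"name": "gearbox bearing", "rate_per_year": 0.10, "cost": 250_000},
    {"name": "blade crack",     "rate_per_year": 0.05, "cost": 180_000},
    {"name": "pitch motor",     "rate_per_year": 0.30, "cost": 20_000},
]
for m in modes:
    m["criticality"] = m["rate_per_year"] * m["cost"]

# Rank failure modes by expected cost, highest first.
ranked = sorted(modes, key=lambda m: m["criticality"], reverse=True)
```

    Note how a rare but expensive gearbox failure outranks a frequent but cheap pitch-motor failure, which a pure frequency ranking would miss.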

  18. ESCRIME: testing bench for advanced operator workstations in future plants

    Poujol, A.; Papin, B. [CEA Centre d'Etudes Nucleaires de Cadarache, 13 - Saint-Paul-lez-Durance (France)


    The problem of optimal task allocation between man and computer for the operation of nuclear power plants is of major concern for the design of future plants. As the increased level of automation modifies the tasks actually devoted to the operator in the control room, it is very important to anticipate these consequences at the plant design stage. The improvement of man-machine cooperation is expected to play a major role in minimizing the impact of human errors on plant safety. The CEA has launched a research program concerning the evolution of plant operation in order to optimize the efficiency of human/computer systems for better safety. The objective of this program is to evaluate different modalities of man-machine task sharing in a representative context. It relies strongly upon the development of a specific testing facility, the ESCRIME work bench, which is presented in this paper. It consists of an EDF 1300MWe PWR plant simulator connected to an operator workstation. The plant simulator models, at a significant level of detail, the instrumentation and control of the plant and the main connected circuits. The operator interface is based on the generalized use of interactive graphic displays and is intended to be consistent with the tasks to be performed by the operator. The functional architecture of the workstation is modular, so that different cooperation mechanisms can be implemented within the same framework. It is based on a thorough analysis and structuring of plant control tasks, in normal as well as accident situations. The software architecture design follows the distributed artificial intelligence approach: cognitive agents cooperate in order to operate the process. The paper presents the basic principles and the functional architecture of the test bed and describes the steps and the present status of the program. (author).

  19. Investigation for Ensuring the Reliability of the MELCOR Analysis Results

    Sung, Joonyoung; Maeng, Yunhwan; Lee, Jaeyoung [Handong Global Univ., Pohang (Korea, Republic of)


    Flow rate could also be a main factor to be proven, because it plays a role in maintaining thermal balance through heat transfer inside the fuel assembly. Some problems with the reliability of MELCOR results were posed in the 2nd technical report of the NSRC project. In order to confirm whether the MELCOR results are dependable, experimental data from phase 1 of the Sandia Fuel Project were used as a reference for comparison. In a Spent Fuel Pool (SFP) severe accident, especially in the cases of boil-off, partial loss of coolant, and complete loss of coolant, heat source and flow rate are the main points in analyzing the MELCOR results. The heat source is composed of decay heat and oxidation heat. Because the heat source can lead to a zirconium fire if heat accumulates in the spent fuel rods and the cladding temperature rises continuously until oxidation heat is generated, it is a main factor to be confirmed. This work was proposed to investigate the reliability of MELCOR results in order to confirm the physical phenomena occurring in an SFP severe accident. Most results showed that MELCOR output differed significantly with minute changes of a main parameter under identical conditions. It is therefore necessary that the oxidation coefficients be chosen so as to delineate the real phenomena as closely as possible.

  20. Reliability analysis on a shell and tube heat exchanger

    Lingeswara, S.; Omar, R.; Mohd Ghazi, T. I.


    A reliability study of a shell and tube heat exchanger was carried out using past history data from a carbon black manufacturing plant. Heat exchanger reliability studies are vital in all related industries, as inappropriate maintenance and operation of a heat exchanger will lead to major Process Safety Events (PSE) and loss of production. The overall heat exchanger coefficient/effectiveness (Uo) and Mean Time Between Failures (MTBF) were analyzed and calculated. The Aspen and downtime data were taken from a typical carbon black shell and tube heat exchanger manufacturing plant. From the calculated Uo, it was observed that Uo declined over time owing to severe fouling and heat exchanger limitations. This limitation also requires a further burn-out period, which leads to loss of production. The calculated MTBF is 649.35 hours, which is very low compared with the standard 6000 hours for good operation of a shell and tube heat exchanger. Guidelines on heat exchanger repair and on preventive and predictive maintenance were identified and highlighted for better heat exchanger inspection and repair in the future. Fouling of the heat exchanger and the associated production losses will continue if proper heat exchanger operation and repair using standard operating procedures are not followed.
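
    MTBF here is total running time divided by the number of failures; the failure count and hours below are assumed figures chosen to reproduce the reported 649.35 h, not the plant's raw records:

```python
# MTBF from an operating history: total uptime divided by number of failures.
def mtbf(operating_hours, n_failures):
    return operating_hours / n_failures

# Hypothetical history: 12 failures over ~7792 running hours.
value = mtbf(operating_hours=7792.2, n_failures=12)
```

    Comparing the result against a target such as the 6000 h benchmark cited above immediately flags the exchanger as a maintenance priority.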

  1. Development of a Pamphlet Targeting Computer Workstation Ergonomics

    Faraci, Jennifer S.


    With the increased use of computers throughout Goddard Space Flight Center, the Industrial Hygiene Office (IHO) has observed a growing trend in the number of health complaints attributed to poor computer workstation setup. A majority of the complaints have centered around musculoskeletal symptoms, including numbness, pain, and tingling in the upper extremities, shoulders, and neck. Eye strain and headaches have also been reported. In some cases, these symptoms can lead to chronic conditions such as repetitive strain injuries (RSIs). In an effort to prevent or minimize the frequency of these symptoms among the GSFC population, the IHO conducts individual ergonomic workstation evaluations and ergonomics training classes upon request. Because of the extensive number of computer workstations at GSFC, and the limited amount of manpower which the Industrial Hygiene staff could reasonably allocate to conduct workstation evaluations and employee training, a pamphlet was developed with a two-fold purpose: (1) to educate the GSFC population about the importance of ergonomically correct computer workstation setup and the potential effects of a poorly configured workstation; and (2) to enable employees to perform a general assessment of their own workstations and make any necessary modifications for proper setup.

  2. Methodology for reliability allocation based on fault tree analysis and dualistic contrast

    TONG Lili; CAO Xuewu


    Reliability allocation is a difficult multi-objective optimization problem. This paper presents a methodology for reliability allocation that can be applied to determine the reliability characteristics of reactor systems or subsystems. The dualistic contrast, known as one of the most powerful tools for optimization problems, is applied to the reliability allocation model of a typical system in this article, and fault tree analysis, deemed to be one of the effective methods of reliability analysis, is also adopted. Thus a failure rate allocation model based on fault tree analysis and dualistic contrast is achieved. An application to the emergency diesel generator in a nuclear power plant is given to illustrate the proposed method.

  3. Reliability analysis of gravity dams by the response surface method

    Humar, Nina; Kryžanowski, Andrej; Brilly, Mitja; Schnabl, Simon


    A dam failure is one of the most serious events in the dam industry. Since the mechanical behavior of dams is usually a complex phenomenon, existing classical mathematical models are generally insufficient to adequately predict dam failure and thus the safety of dams. Therefore, numerical reliability methods are often used to model such complex mechanical phenomena. The main purpose of the present paper is to present the response surface method as a powerful mathematical tool for studying and forecasting dam safety from a set of collected monitoring data. The derived mathematical model is applied to a case study, the Moste dam, which is the highest concrete gravity dam in Slovenia. Based on the derived model, the ambient/state variables are correlated with the dam deformation in order to obtain a forecasting tool able to define the critical thresholds for dam management.
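
    A minimal response-surface sketch in the spirit of the paper: fit a quadratic surrogate of dam displacement versus reservoir level from monitoring data, then locate where the surrogate crosses a critical threshold. All data and the threshold are synthetic:

```python
import numpy as np

# Synthetic monitoring data: reservoir level (m) vs. crest displacement (mm).
level = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])
disp = np.array([1.1, 1.6, 2.3, 3.2, 4.3, 5.6])

# Quadratic response surface (surrogate model) fitted by least squares.
surrogate = np.poly1d(np.polyfit(level, disp, deg=2))

# Scan the surrogate for the first level exceeding a critical displacement.
critical = 5.0                       # management threshold (mm), assumed
fine = np.linspace(10.0, 20.0, 1001)
exceed = fine[surrogate(fine) > critical]
first_exceed = float(exceed[0]) if exceed.size else None
```

    Once fitted, the cheap surrogate can be evaluated thousands of times for threshold searches or Monte Carlo sampling without re-running a structural model.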

  4. Reliability of three-dimensional gait analysis in cervical spondylotic myelopathy.

    McDermott, Ailish


    Gait impairment is one of the primary symptoms of cervical spondylotic myelopathy (CSM). Detailed assessment is possible using three-dimensional gait analysis (3DGA); however, the reliability of 3DGA for this population has not been established. The aim of this study was to evaluate the test-retest reliability of temporal-spatial, kinematic and kinetic parameters in a CSM population.


    R.K. Agnihotri


    Full Text Available The present paper deals with the reliability analysis of a boiler system used in the garment industry. The system consists of a single boiler unit, which plays an important role in the garment industry. Using the regenerative point technique with a Markov renewal process, various reliability characteristics of interest are obtained.

  6. Convergence among Data Sources, Response Bias, and Reliability and Validity of a Structured Job Analysis Questionnaire.

    Smith, Jack E.; Hakel, Milton D.


    Examined are questions pertinent to the use of the Position Analysis Questionnaire: Who can use the PAQ reliably and validly? Must one rely on trained job analysts? Can people having no direct contact with the job use the PAQ reliably and validly? Do response biases influence PAQ responses? (Author/KC)

  7. Risk and reliability analysis theory and applications : in honor of Prof. Armen Der Kiureghian


    This book presents a unique collection of contributions from some of the foremost scholars in the field of risk and reliability analysis. Combining the most advanced analysis techniques with practical applications, it is one of the most comprehensive and up-to-date books available on risk-based engineering. All the fundamental concepts needed to conduct risk and reliability assessments are covered in detail, providing readers with a sound understanding of the field and making the book a powerful tool for students and researchers alike. This book was prepared in honor of Professor Armen Der Kiureghian, one of the fathers of modern risk and reliability analysis.

  8. Electronic controls and displays for a Space Station workstation

    Busquets, A. M.; Parrish, R. V.; Hogge, T. W.


    A workstation to serve as a man/machine interface for one of the test beds used in the NASA-Space Station development effort is described which will also serve as a demonstrator of the advanced technologies anticipated for the Space Station workstation. Thin-film electroluminescent flat-panels may replace the presently used CRTs for image generation, and the development of multifunctional controls, advanced graphic generators, and videodisc technology are considered. A generalized window management algorithm will control the large volume of information required, and conventional office automation tools such as spread sheets and database managers will be applied to workstation image management.

  9. Reliability analysis of repairable systems using system dynamics modeling and simulation

    Srinivasa Rao, M.; Naikan, V. N. A.


    The study and analysis of repairable standby systems is an important topic in reliability. Analytical techniques become very complicated and unrealistic, especially for modern complex systems, and there have been attempts in the literature to evolve more realistic techniques using simulation approaches for the reliability analysis of systems. This paper proposes a hybrid approach called the Markov system dynamics (MSD) approach, which combines the Markov approach with system dynamics simulation for reliability analysis and for studying the dynamic behavior of systems. This approach has the advantages of both the Markov and system dynamics methodologies. The proposed framework is illustrated for a standby system with repair. The simulation results, when compared with those obtained by traditional Markov analysis, clearly validate the MSD approach as an alternative approach for reliability analysis.
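
    The Markov half of the MSD approach can be sketched for a unit with one cold standby and a single repair crew; the states are (both good, one good, both failed) and the rates below are illustrative assumptions:

```python
import numpy as np

lam, mu = 0.01, 0.2                      # failure and repair rates (per hour)

# CTMC generator: rows/cols = (both good, one good, both failed).
Q = np.array([[-lam,         lam,  0.0],
              [  mu, -(lam + mu),  lam],
              [ 0.0,          mu,  -mu]])

# Steady state: solve pi @ Q = 0 together with sum(pi) = 1 (least squares).
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]             # at least one unit operating
```

    The system dynamics layer of MSD would replace these constant rates with feedback-dependent ones, which is where simulation becomes necessary.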


    Yao Chengyu; Zhao Jingyi


    To overcome the design limitations of the traditional hydraulic control system for a synthetic rubber press, and faults such as a high fault rate, low reliability and high energy consumption, which always led to shutdowns of the post-treatment product line for synthetic rubber, a brand-new hydraulic system combining PC control with two-way cartridge valves was developed for the press, and its reliability is analyzed. A reliability model of the press's hydraulic system is established by analyzing the processing steps, and reliability simulation of each step and of the whole system is carried out with MATLAB, then verified through a reliability test. The fixed-time test has proved not only that the theoretical analysis is sound, but also that the system is reasonably designed and highly reliable, and can lower the required power supply and operational energy cost.

  11. Low Carbon-Oriented Optimal Reliability Design with Interval Product Failure Analysis and Grey Correlation Analysis

    Yixiong Feng


    Full Text Available The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. The intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low carbon-oriented product reliability optimal design model is proposed in this paper: (1) The related expert evaluation information was prepared in interval numbers; (2) An improved product failure analysis considering the uncertain carbon emissions of the subsystem was performed to obtain the subsystem weight taking the carbon emissions into consideration. The interval grey correlation analysis was conducted to obtain the subsystem weight taking the uncertain correlations inside the product into consideration. Using the above two kinds of subsystem weights and different caution indicators of the decision maker, a series of product reliability design schemes is available; (3) The interval-valued intuitionistic fuzzy sets (IVIFSs) were employed to select the optimal reliability and optimal design scheme based on three attributes, namely, low carbon, correlation and functions, and economic cost. The case study of a vertical CNC lathe proves the superiority and rationality of the proposed method.

  12. Reactor scram experience for shutdown system reliability analysis. [BWR; PWR

    Edison, G.E.; Pugliese, S.L.; Sacramo, R.F.


    Scram experience in a number of operating light water reactors has been reviewed. The date and reactor power of each scram were compiled from monthly operating reports and personal communications with operating plant personnel. The average scram frequency from "significant" power (defined as P_trip/P_max greater than approximately 20 percent) was determined as a function of operating life. This relationship was then used to estimate the total number of reactor trips from above approximately 20 percent of full power expected to occur during the life of a nuclear power plant. The shape of the scram frequency vs. operating life curve resembles a typical reliability bathtub curve (failure rate vs. time), but without a rising "wearout" phase, owing to the lack of operating data near the end of plant design life. In this case the failures are represented by "bugs" in the plant system design, construction, and operation which lead to scram. The number of scrams appears to level out at an average of around three per year; the standard deviations from the mean value indicate an uncertainty of about 50 percent. The total number of scrams from significant power that could be expected in a plant designed for a 40-year life would be about 130 if no wearout phase develops near the end of life.
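The arithmetic behind this estimate can be reproduced with a hypothetical bathtub-style rate curve: an early "debugging" transient decaying onto a constant base rate. The parameters below are chosen only so the 40-year total lands near the quoted ~130 scrams; they are not fitted to the study's data:

```python
import math

# Illustrative scram-frequency curve (scrams per year): early transient + base rate.
A, TAU, BASE = 10.0, 2.0, 2.75   # hypothetical parameters

def scram_rate(t):
    """Bathtub-like early-life curve without a wearout phase."""
    return A * math.exp(-t / TAU) + BASE

def total_scrams(years=40, steps=4000):
    """Trapezoidal integration of the scram-frequency curve over plant life."""
    h = years / steps
    total = 0.5 * (scram_rate(0) + scram_rate(years))
    for i in range(1, steps):
        total += scram_rate(i * h)
    return total * h

print(f"expected scrams over 40 years: {total_scrams():.0f}")
```

The transient contributes A*TAU ≈ 20 scrams and the base rate 2.75/year contributes 110 over 40 years, matching the quoted total of about 130.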

  13. Intraobserver and intermethod reliability for using two different computer programs in preoperative lower limb alignment analysis

    Mohamed Kenawey


    Conclusion: Computer assisted lower limb alignment analysis is reliable whether using a graphics editing program or specialized planning software. However, slightly higher variability can be expected for angles measured away from the knee joint.

  14. Reliability of 3D upper limb motion analysis in children with obstetric brachial plexus palsy.

    Mahon, Judy; Malone, Ailish; Kiernan, Damien; Meldrum, Dara


    Kinematics, measured by 3D upper limb motion analysis (3D-ULMA), can potentially increase understanding of movement patterns by quantifying individual joint contributions. Reliability in children with obstetric brachial plexus palsy (OBPP) has not been established.

  15. Analysis methods for structure reliability of piping components

    Schimpfke, T.; Grebner, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)


    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

  16. Research on the Strategy for the Development of a University's Sci-tech Novelty Search Institution Based on SWOT Analysis: Taking the Sci-tech Novelty Search Workstation (Z11) of Central South University as an Example



    This paper analyzes the strengths, weaknesses, opportunities and threats of Central South University's Sci-tech Novelty Search Workstation by using SWOT analysis, and on this basis probes into its development strategies.

  17. Use of Fault Tree Analysis for Automotive Reliability and Safety Analysis

    Lambert, H


    Fault tree analysis (FTA) evolved from the aerospace industry in the 1960s. A fault tree is a deductive logic model that is generated with a top undesired event in mind. FTA answers the question "how can something occur?", as opposed to failure modes and effects analysis (FMEA), which is inductive and answers the question "what if?" FTA is used in risk, reliability and safety assessments, and is currently used by several industries such as nuclear power and chemical processing. Typically the automotive industry uses failure modes and effects analysis (FMEA), such as design FMEAs and process FMEAs, but the use of FTA has spread to the automotive industry as well. This paper discusses the use of FTA for automotive applications. With the addition of automotive electronics for various applications in systems such as engine/power control, cruise control and braking/traction, FTA is well suited to address failure modes within these systems. FTA can determine the importance of these failure modes from various perspectives such as cost, reliability and safety. A fault tree analysis of a car starting system is presented as an example.
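A minimal sketch of fault-tree gate evaluation for a car starting system, assuming independent basic events; the event names and probabilities are hypothetical, not taken from the paper's example:

```python
# Basic-event probabilities for a hypothetical car starting system.
P = {"battery_dead": 0.01, "starter_motor": 0.005, "ignition_switch": 0.002,
     "fuel_pump": 0.008, "ecu_a": 0.01, "ecu_b": 0.01}

def p_or(*probs):
    """OR gate for independent events: 1 - prod(1 - p_i)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """AND gate for independent events: prod(p_i)."""
    r = 1.0
    for p in probs:
        r *= p
    return r

# Top event "car fails to start": no crank OR no fuel OR loss of engine control
# (the redundant ECU pair must both fail).
no_crank = p_or(P["battery_dead"], P["starter_motor"], P["ignition_switch"])
no_control = p_and(P["ecu_a"], P["ecu_b"])
top = p_or(no_crank, P["fuel_pump"], no_control)
print(f"P(car fails to start) = {top:.4f}")
```

The AND gate shows why redundancy matters: the ECU pair contributes only 1e-4 to the top event, two orders of magnitude below the single-point failures.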

  18. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Shirley, Rachel Elizabeth [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States)


    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  19. Uncertainty analysis with reliability techniques of fluvial hydraulic simulations

    Oubennaceur, K.; Chokmani, K.; Nastev, M.


    Flood inundation models are commonly used to simulate hydraulic and floodplain inundation processes, a prerequisite to successful floodplain management and the preparation of appropriate flood risk mitigation plans. Selecting statistically significant ranges of the variables involved in inundation modelling is crucial for model performance. This involves various levels of uncertainty, which due to their cumulative nature can lead to considerable uncertainty in the final results. Therefore, in addition to the validation of the model results, there is a need for a clear understanding and identification of the sources of uncertainty and for measuring the model uncertainty. A reliability approach called the Point Estimate Method is presented to quantify the uncertainty effects of the input data and to calculate the propagation of uncertainty through the inundation modelling process. The Point Estimate Method is a special case of numerical quadrature based on orthogonal polynomials. It allows evaluation of the low-order moments of performance functions of independent random variables such as the water depth. The variables considered in the analyses include elevation data, flow rate and Manning's coefficient n, each given with its own probability distribution. The approach is applied to a 45 km reach of the Richelieu River, Canada, between Rouses Point and Fryers Rapids. The finite element hydrodynamic model H2D2 was used to solve the shallow water equations (SWE) and provide maps of expected water depths and the associated spatial distributions of standard deviations as a measure of uncertainty. The results indicate that for the simulated flow rates of 1113, 1206, and 1282, the uncertainties in water depths have a range of 25 cm, 30 cm, and 60 cm, respectively. This kind of information is useful for decision making and risk management in the context of flood risk assessment.
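The Point Estimate Method can be illustrated with Rosenblueth's classical two-point scheme: the model is evaluated at mu ± sigma for each input and the 2^n results are averaged to estimate the output mean and standard deviation. The wide-channel depth relation and the parameter values below are simplified assumptions for illustration, not the paper's H2D2 setup:

```python
from itertools import product
from statistics import mean

def normal_depth_proxy(q, n, s=0.001, width=100.0):
    """Rough wide-channel Manning relation: h = (n*q / (width*sqrt(s)))**0.6."""
    return (n * q / (width * s ** 0.5)) ** 0.6

def two_point_estimate(func, means, sigmas):
    """Rosenblueth PEM: evaluate func at all mu +/- sigma sign combinations."""
    points = [func(*(m + sign * s for m, s, sign in zip(means, sigmas, combo)))
              for combo in product((-1.0, 1.0), repeat=len(means))]
    m1 = mean(points)
    var = mean((p - m1) ** 2 for p in points)
    return m1, var ** 0.5

# Hypothetical input distributions: flow rate and Manning's n (mean, std).
mu_h, sd_h = two_point_estimate(normal_depth_proxy,
                                means=[1113.0, 0.03],
                                sigmas=[100.0, 0.005])
print(f"depth mean ~ {mu_h:.2f} m, std ~ {sd_h:.2f} m")
```

With two inputs only four model runs are needed, which is the method's appeal over full Monte Carlo for expensive hydrodynamic models.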

  20. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    Nikulin, M; Mesbah, M; Limnios, N


    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.
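As a small worked example of the parametric models discussed in the volume, the Weibull family is a standard choice in both reliability and survival analysis; the shape and scale values here are illustrative:

```python
import math

# Weibull reliability model with illustrative parameters.
BETA, ETA = 1.5, 1000.0   # shape, scale (hours)

def weibull_reliability(t, beta=BETA, eta=ETA):
    """Survival function R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_mttf(beta=BETA, eta=ETA):
    """Mean time to failure: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

print(f"R(500 h) = {weibull_reliability(500):.3f}, MTTF = {weibull_mttf():.0f} h")
```

A shape parameter beta > 1 models wearout (increasing hazard), beta < 1 infant mortality, and beta = 1 reduces to the exponential model.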

  1. Modelling of Energy Expenditure at Welding Workstations: Effect of ...

    Particular emphasis is placed on the effect of temperature on work performance. The principle of conduction is applied through the human flesh, which experiences temperature changes at the welding workstation.

  2. Motivating ergonomic computer workstation setup: sometimes training is not enough.

    Sigurdsson, Sigurdur O; Artnak, Melissa; Needham, Mick; Wirth, Oliver; Silverman, Kenneth


    Musculoskeletal disorders lead to pain and suffering and result in high costs to industry. There is evidence to suggest that whereas conventional ergonomics training programs result in knowledge gains, they may not necessarily translate to changes in behavior. There were 11 participants in an ergonomics training program, and a subsample of participants received a motivational intervention in the form of incentives for correct workstation setup. Training did not yield any changes in ergonomics measures for any participant. Incentives resulted in marked and durable changes in targeted workstation measures. The data suggest that improving worker knowledge about ergonomically correct workstation setup does not necessarily lead to correct workstation setup, and that motivational interventions may be needed to achieve lasting behavior change.

  3. Ergonomic Aspects And Health Hazards On Computer Workstations ...

    Objectives: To determine the prevalence of self assessment of physical ... working hours in front of computer more than 5 hours are the significant risk factors for ... practices towards modification and adaptation of their workstations regarding ...

  4. Reliability analysis of a gravity-based foundation for wind turbines

    Vahdatirad, Mohammad Javad; Griffiths, D. V.; Andersen, Lars Vabbersgaard


    Deterministic code-based designs proposed for wind turbine foundations are typically biased on the conservative side and overestimate the probability of failure, which can lead to higher than necessary construction cost. In this study, reliability analysis of a gravity-based foundation concerning … technique to perform the reliability analysis. The calibrated code-based design approach leads to savings of up to 20% in the concrete foundation volume, depending on the target annual reliability level. The study can form the basis for future optimization of deterministic-based designs for wind turbine foundations.

  5. Task analysis and computer aid development for human reliability analysis in nuclear power plants

    Yoon, W. C.; Kim, H.; Park, H. S.; Choi, H. H.; Moon, J. M.; Heo, J. Y.; Ham, D. H.; Lee, K. K.; Han, B. T. [Korea Advanced Institute of Science and Technology, Taejeon (Korea)


    The importance of human reliability analysis (HRA), which predicts the possibility of error occurrence in quantitative and qualitative manners, is gradually increasing because of the effects of human errors on system safety. HRA needs task analysis as a prerequisite step, but extant task analysis techniques have the problem that the collection of information about the situations in which human errors occur depends entirely on the HRA analyst. This problem makes the results of task analysis inconsistent and unreliable. To address it, KAERI developed the structural information analysis (SIA) method, which helps to analyze a task's structure and situations systematically. In this study, the SIA method was evaluated by HRA experts, and a prototype computerized supporting system named CASIA (Computer Aid for SIA) was developed for the purpose of supporting HRA using the SIA method. Additionally, by applying the SIA method to emergency operating procedures, we derived generic task types used in emergencies and accumulated the analysis results in the database of the CASIA. The CASIA is expected to help HRA analysts perform the analysis more easily and consistently. If more analyses are performed and more data are accumulated in the CASIA's database, HRA analysts can freely share and smoothly spread their analysis experience, and thereby the quality of HRA will be improved. 35 refs., 38 figs., 25 tabs. (Author)

  6. A simultaneous 2D/3D autostereo workstation

    Chau, Dennis; McGinnis, Bradley; Talandis, Jonas; Leigh, Jason; Peterka, Tom; Knoll, Aaron; Sumer, Aslihan; Papka, Michael; Jellinek, Julius


    We present a novel immersive workstation environment that scientists can use for 3D data exploration and as their everyday 2D computer monitor. Our implementation is based on an autostereoscopic dynamic parallax barrier 2D/3D display, interactive input devices, and a software infrastructure that allows client/server software modules to couple the workstation to scientists' visualization applications. This paper describes the hardware construction and calibration, software components, and a demonstration of our system in nanoscale materials science exploration.

  7. C.A.D. and ergonomic workstations conception

    Keravel, Francine


    Computer-aided design can be used for workstation design. Ergonomic data can complement this approach and guarantee a coherent, reliable design. Tools for representing complex forms, anthropometric data and environmental factors make it possible to perceive the limits between the human operator and a new technological situation. Users' work capability, safety, comfort and human efficiency can also be included. Such a program, with expert-system integration, will give a complete appraisal of workstation design.

  8. A Review: Passive System Reliability Analysis – Accomplishments and Unresolved Issues



    Full Text Available Reliability assessment of passive safety systems is an important issue, since the safety of advanced nuclear reactors relies on several passive features. In this context, a few methodologies such as Reliability Evaluation of Passive Safety System (REPAS), Reliability Methods for Passive Safety Functions (RMPS) and Analysis of Passive Systems ReliAbility (APSRA) have been developed in the past. These methodologies have been used to assess the reliability of various passive safety systems. While they have certain features in common, they differ in their treatment of certain issues, for example, model uncertainties and deviation of geometric and process parameters from their nominal values. This paper presents the state of the art on passive system reliability assessment methodologies, the accomplishments and the remaining issues. In this review three critical issues pertaining to passive system performance and reliability have been identified. The first issue is the applicability of best estimate codes and model uncertainty. Best-estimate-code-based phenomenological simulations of natural convection passive systems can carry a significant amount of uncertainty, and these uncertainties must be incorporated in an appropriate manner in the performance and reliability analysis of such systems. The second issue is the treatment of dynamic failure characteristics of components of passive systems. The REPAS, RMPS and APSRA methodologies do not consider dynamic failures of components or processes, which may have a strong influence on the failure of passive systems. The influence of dynamic failure characteristics of components on system failure probability is presented with the help of a dynamic reliability methodology based on Monte Carlo simulation. The analysis of a benchmark hold-up tank problem shows the error in failure probability estimation when the dynamism of components is not considered. It is thus suggested that dynamic reliability
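The Monte Carlo based dynamic reliability idea can be sketched in miniature: sample component failure times, apply the system logic, and estimate the failure probability from the trial fraction. The rates and series structure below are illustrative assumptions, not the hold-up tank benchmark:

```python
import math
import random

# Hypothetical component failure rates (per hour) and mission time.
RATES = {"valve": 1e-4, "controller": 5e-5, "sensor": 2e-4}
MISSION = 1000.0  # hours

def system_fails(rng):
    """Series logic: the system fails if any component fails before MISSION."""
    return any(rng.expovariate(lam) < MISSION for lam in RATES.values())

def monte_carlo_pf(trials=200_000, seed=7):
    """Estimate failure probability as the fraction of failed trials."""
    rng = random.Random(seed)
    return sum(system_fails(rng) for _ in range(trials)) / trials

# For this simple series case an analytic check exists: 1 - exp(-sum(rates)*t).
analytic = 1.0 - math.exp(-sum(RATES.values()) * MISSION)
print(f"Monte Carlo: {monte_carlo_pf():.4f}  analytic: {analytic:.4f}")
```

The value of the simulation approach is that the system logic inside `system_fails` can be made as dynamic as needed (time-dependent demands, process excursions), where no closed-form check exists.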

  9. Stochastic Response and Reliability Analysis of Hysteretic Structures

    Mørk, Kim Jørgensen

    During the last 30 years, response analysis of structures under random excitation has been studied in detail. These studies are motivated by the fact that most of nature's excitations, such as earthquakes, wind and wave loads, exhibit randomly fluctuating characters. For safety reasons this randomness...

  10. Reliability analysis of a two-span floor designed according to ...


    The Structural analysis and design of the timber floor system was carried out using deterministic approach ... The cell structure of hardwoods is more complex than ..... [12] BS EN -1-1: Eurocode 5: Design of Timber Structures, Part. 1-1.

  11. Reliability analysis of the control system of large-scale vertical mixing equipment


    The control system of vertical mixing equipment is a concentrated distributed monitoring system (CDMS). A reliability analysis model was built and its analysis was conducted based on reliability modeling theories such as graph theory, Markov processes, and redundancy theory. Analysis and operational results show that the control system can meet all technical requirements for high energy composite solid propellant manufacturing. The reliability performance of the control system can be considerably improved by adopting a control strategy that combines hot-spare redundancy of the primary system with cold-spare redundancy of the emergency one. The reliability performance of the control system can also be improved by adopting a redundancy strategy or by improving the quality of each component and cable of the system.
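The benefit of the redundancy strategies mentioned above can be illustrated by comparing two-unit hot and cold standby under exponential failures, with ideal switching and no repair; the rate and mission time are assumed values:

```python
import math

# Illustrative failure rate and mission time for a two-unit subsystem.
LAM = 1e-3   # failures per hour
T = 1000.0   # mission time in hours

def hot_standby(lam, t):
    """Both units energized (parallel): R = 1 - (1 - e^{-lt})^2."""
    return 1.0 - (1.0 - math.exp(-lam * t)) ** 2

def cold_standby(lam, t):
    """Second unit dormant until switched in: R = e^{-lt} * (1 + lt)."""
    return math.exp(-lam * t) * (1.0 + lam * t)

print(f"hot: {hot_standby(LAM, T):.4f}  cold: {cold_standby(LAM, T):.4f}")
```

Under these idealized assumptions the cold (spared) unit gives higher mission reliability because it accumulates no failure exposure while dormant; in practice imperfect switching erodes part of that advantage.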

  12. Structured information analysis for human reliability analysis of emergency tasks in nuclear power plants

    Jung, Won Dea; Kim, Jae Whan; Park, Jin Kyun; Ha, Jae Joo [Korea Atomic Energy Research Institute, Taejeon (Korea)


    More than twenty HRA (Human Reliability Analysis) methodologies have been developed and used for the safety analysis in nuclear field during the past two decades. However, no methodology appears to have universally been accepted, as various limitations have been raised for more widely used ones. One of the most important limitations of conventional HRA is insufficient analysis of the task structure and problem space. To resolve this problem, we suggest SIA (Structured Information Analysis) for HRA. The proposed SIA consists of three parts. The first part is the scenario analysis that investigates the contextual information related to the given task on the basis of selected scenarios. The second is the goals-means analysis to define the relations between the cognitive goal and task steps. The third is the cognitive function analysis module that identifies the cognitive patterns and information flows involved in the task. Through the three-part analysis, systematic investigation is made possible from the macroscopic information on the tasks to the microscopic information on the specific cognitive processes. It is expected that analysts can attain a structured set of information that helps to predict the types and possibility of human error in the given task. 48 refs., 12 figs., 11 tabs. (Author)


  14. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Xin-Jia Meng


    Full Text Available Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of most probable failure point (MPP) search of inverse reliability, and then the process of searching for the MPP of multidisciplinary inverse reliability is performed within the framework of CLA-CO. This method improves the MPP search process through two elements. One is treating the discipline analyses as equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as the replacement of the consistency equality constraint in system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline, and it also has a higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with nonnormal distribution variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.
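The MPP search at the heart of such methods is classically performed with the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration in standard normal space. A minimal single-discipline sketch with an invented limit-state function (not the paper's multidisciplinary formulation):

```python
import math

def limit_state(u):
    # g(u) <= 0 denotes failure; a mildly nonlinear illustrative limit state.
    return 3.0 - u[0] - 0.5 * u[1] - 0.1 * u[0] * u[1]

def gradient(g, u, h=1e-6):
    """Forward-difference gradient of g at u."""
    g0 = g(u)
    return [(g([v + (h if i == j else 0.0) for j, v in enumerate(u)]) - g0) / h
            for i in range(len(u))]

def hlrf_mpp(g, n=2, iters=100, tol=1e-6):
    """HL-RF update: u_{k+1} = [(grad.u_k - g(u_k)) / |grad|^2] * grad."""
    u = [0.0] * n
    for _ in range(iters):
        grad = gradient(g, u)
        norm2 = sum(c * c for c in grad)
        scale = (sum(c * v for c, v in zip(grad, u)) - g(u)) / norm2
        u_new = [scale * c for c in grad]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(v * v for v in u))
    return u, beta

mpp, beta = hlrf_mpp(limit_state)
print(f"MPP = {[round(v, 3) for v in mpp]}, reliability index beta = {beta:.3f}")
```

At convergence the iterate sits on the limit-state surface (g = 0) at minimum distance beta from the origin of standard normal space, which is exactly the point the inverse-reliability search targets.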

  15. Automated migration analysis based on cell texture: method & reliability

    Chittenden Thomas W


    Full Text Available Abstract Background In this paper, we present and validate a way to measure automatically the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison with manual placement of the leading edge shows complete equivalence of automated vs. manual leading edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.

  16. Sensitivity analysis for reliable design verification of nuclear turbosets

    Zentner, Irmela, E-mail: irmela.zentner@edf.f [Lamsid-Laboratory for Mechanics of Aging Industrial Structures, UMR CNRS/EDF, 1, avenue Du General de Gaulle, 92141 Clamart (France); EDF R and D-Structural Mechanics and Acoustics Department, 1, avenue Du General de Gaulle, 92141 Clamart (France); Tarantola, Stefano [Joint Research Centre of the European Commission-Institute for Protection and Security of the Citizen, T.P. 361, 21027 Ispra (Italy); Rocquigny, E. de [Ecole Centrale Paris-Applied Mathematics and Systems Department (MAS), Grande Voie des Vignes, 92 295 Chatenay-Malabry (France)


    In this paper, we present an application of sensitivity analysis for design verification of nuclear turbosets. Before the acquisition of a turbogenerator, energy power operators perform independent design assessment in order to assure safe operating conditions of the new machine in its environment. Variables of interest are related to the vibration behaviour of the machine: its eigenfrequencies and dynamic sensitivity to unbalance. In the framework of design verification, epistemic uncertainties are preponderant. This lack of knowledge is due to inexistent or imprecise information about the design as well as to interaction of the rotating machinery with supporting and sub-structures. Sensitivity analysis enables the analyst to rank sources of uncertainty with respect to their importance and, possibly, to screen out insignificant sources of uncertainty. Further studies, if necessary, can then focus on predominant parameters. In particular, the constructor can be asked for detailed information only about the most significant parameters.

  17. A Reliable Method for Rhythm Analysis during Cardiopulmonary Resuscitation

    U. Ayala


    Full Text Available Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies.
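The reported test-set figures are simple ratios over the confusion counts of the shock/no-shock decision. A sketch of the bookkeeping, with hypothetical counts chosen only to reproduce the quoted 91.0% sensitivity and 96.6% specificity (the paper's actual test-set split is not given here):

```python
def sensitivity(tp, fn):
    """Fraction of shockable rhythms correctly advised a shock."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of nonshockable rhythms correctly advised no shock."""
    return tn / (tn + fp)

tp, fn = 455, 45     # hypothetical shockable test segments
tn, fp = 3100, 109   # hypothetical nonshockable test segments
print(f"sensitivity = {sensitivity(tp, fn):.1%}, "
      f"specificity = {specificity(tn, fp):.1%}")
```

Because nonshockable segments far outnumber shockable ones, even a small false-positive rate inflates unnecessary CPR interruptions, which is why the paper optimizes for specificity.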


  19. Probability maps as a measure of reliability for intervisibility analysis

    Joksić Dušan


    Full Text Available Digital terrain models (DTMs) represent segments of spatial databases related to the presentation of terrain features and landforms. Square-grid elevation models (DEMs) have emerged as the most widely used structure during the past decade because of their simplicity and simple computer implementation. They have become an important segment of Topographic Information Systems (TIS), storing natural and artificial landscapes in the form of digital models. This kind of data structure is especially suitable for morphometric terrain evaluation and analysis, which is very important in environmental and urban planning and Earth surface modeling applications. One of the most often used functionalities of Geographical Information System software packages is intervisibility, or viewshed, analysis of terrain. Intervisibility determination from analog topographic maps may be very exhausting, because of the large number of profiles that have to be extracted and compared. Terrain representation in the form of DEM databases facilitates this task. This paper describes a simple algorithm for terrain viewshed analysis using DEM database structures, taking into consideration the influence of uncertainties in such data on the results obtained. The concept of probability maps is introduced as a means for evaluating the results, and is presented as a thematic display.
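The core operation of viewshed (intervisibility) analysis on a square-grid DEM is a line-of-sight test between two cells. A minimal sketch on an invented toy grid, ignoring Earth curvature and the DEM uncertainty the paper focuses on:

```python
# Toy 4x4 DEM (elevations in metres); the 20 m cell is a blocking ridge.
DEM = [
    [10, 10, 10, 10],
    [10, 12, 20, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 11],
]

def visible(dem, obs, target, obs_height=1.7):
    """Sample elevations along the ray obs -> target; the target is blocked if
    the terrain rises above the straight sight line at any intermediate sample."""
    (r0, c0), (r1, c1) = obs, target
    z0 = dem[r0][c0] + obs_height   # eye level above the observer cell
    z1 = dem[r1][c1]
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for k in range(1, steps):
        t = k / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        sight_z = z0 + t * (z1 - z0)   # height of the sight line at this sample
        if dem[r][c] > sight_z:
            return False
    return True

print(visible(DEM, (1, 0), (1, 3)))   # blocked by the 20 m cell
```

A full viewshed repeats this test from one observer cell to every other cell; the paper's probability maps arise when the elevations themselves carry uncertainty, so each cell gets a visibility probability instead of a binary flag.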

  20. Finite State Machine Based Evaluation Model for Web Service Reliability Analysis

    M, Thirumaran; Abarna, S; P, Lakshmi


    Nowadays there is great pressure to accommodate changes in ever shorter times, since acceptable reaction times keep decreasing. The Business Logic Evaluation Model (BLEM) is the proposed solution targeting business logic automation, enabling business experts to write sophisticated business rules and complex calculations without costly custom programming. BLEM is powerful enough to handle service manageability issues by analyzing and evaluating the computability, traceability, and other criteria of modified business logic at run time. The value of a web service and its QoS depends strongly on the reliability of the service. Hence today's service providers regard reliability as the major factor, and any problem in the reliability of the service should be overcome promptly in order to achieve the expected level of reliability. In this paper we propose a business logic evaluation model for web service reliability analysis using a Finite State Machine (FSM), where the FSM will be extended to analy...

  1. Reliability analysis and risk-based methods for planning of operation & maintenance of offshore wind turbines

    Sørensen, John Dalsgaard


    Reliability analysis and probabilistic models for wind turbines are considered with special focus on structural components and application for reliability-based calibration of partial safety factors. The main design load cases to be considered in the design of wind turbine components are presented for extreme and fatigue limit states, including the effects of the control system and possible faults due to failure of electrical/mechanical components. Considerations are presented on the target reliability level for wind turbine structural components. Application is shown for reliability-based calibration of partial safety factors. Operation & Maintenance planning often follows corrective and preventive strategies based on information from condition monitoring and structural health monitoring systems. A reliability- and risk-based approach is presented where a life-cycle approach ...

  2. Reliability analysis of M/G/1 queues with general retrial times and server breakdowns

    WANG Jinting


    This paper concerns reliability issues as well as the queueing analysis of M/G/1 retrial queues with general retrial times and a server subject to breakdowns and repairs. We assume that the server is unreliable and that customers who find the server busy or down are queued in the retrial orbit in accordance with a first-come, first-served discipline. Only the customer at the head of the orbit queue is allowed access to the server. The necessary and sufficient condition for the system to be stable is given. Using a supplementary variable method, we obtain the Laplace-Stieltjes transform of the reliability function of the server and a steady-state solution for both queueing and reliability measures of interest. Some main reliability indexes, such as the availability, failure frequency, and reliability function of the server, are obtained.

  3. Fatigue damage reliability analysis for Nanjing Yangtze river bridge using structural health monitoring data

    HE Xu-hui; CHEN Zheng-qing; YU Zhi-wu; HUANG Fang-lin


    To evaluate the fatigue damage reliability of critical members of the Nanjing Yangtze river bridge, the corresponding expressions for calculating structural fatigue damage reliability were derived according to the stress-number (S-N) curve and Miner's rule. Fatigue damage reliability analysis of some critical members of the bridge was carried out using the strain-time histories measured by the structural health monitoring system of the bridge. The corresponding stress spectra were obtained by the real-time rain-flow counting method. Fatigue damage was calculated by the reliability method at different reliability levels and compared with the results of Miner's rule. The results show that the fatigue damage of critical members of the Nanjing Yangtze river bridge is very small due to its low live-load stress level.
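
    The deterministic half of this procedure, Miner's rule applied to a rain-flow stress spectrum under an S-N curve of the form N(S) = C / S^m, can be sketched as below. The S-N constants and the spectrum are invented for illustration, not the bridge's detail categories or measured data.

```python
# Miner's-rule damage accumulation from a rain-flow stress spectrum,
# assuming an S-N curve N(S) = C / S**m. Constants are illustrative.

def cycles_to_failure(stress_range_mpa, C=2.0e12, m=3.0):
    return C / stress_range_mpa ** m

def miner_damage(spectrum):
    """spectrum: list of (stress_range_mpa, n_cycles) pairs."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

spectrum = [(40.0, 2.0e5), (25.0, 1.0e6), (10.0, 5.0e6)]
damage = miner_damage(spectrum)  # failure predicted when damage reaches 1.0
```

    A damage value far below 1.0, as here, corresponds to the paper's finding that low live-load stress levels leave the fatigue damage very small.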

  4. Cyber-workstation for computational neuroscience

    Jack DiGiovanna


    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. a recursive least-squares regressor) by specifying appropriate connections in a block diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility are important as experimental and theoretical neuroscience evolve based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper briefly describes the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo neuroscience experiment. Furthermore, a co-adaptive brain-machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavioral task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.

  5. Strategy for Synthesis of Flexible Heat Exchanger Networks Embedded with System Reliability Analysis

    YI Dake; HAN Zhizhong; WANG Kefeng; YAO Pingjing


    System reliability can strongly influence the performance of a heat exchanger network (HEN). In this paper, an optimization method with system reliability analysis for flexible HENs using genetic/simulated annealing algorithms (GA/SA) is presented. An initial flexible arrangement of the HEN is obtained from the pseudo-temperature-enthalpy diagram. To determine the system reliability of the HEN, the connections of heat exchangers (HEXs) and independent subsystems in the HEN are analyzed by the connection sequence matrix (CSM), and the system reliability is measured by the independent subsystem containing the maximum number of HEXs in the HEN. For a HEN that does not meet the system reliability criterion, HEN decoupling is applied: the independent subsystems in the HEN are changed by removing the decoupling HEX, and the system reliability is thus elevated. After that, the heat duty redistribution based on the relevant elements of the heat load loops and the HEX areas are optimized by GA/SA. Then the favorable network configuration, which matches both the most economical cost and the system reliability criterion, is located. Moreover, particular features of suitable decoupling HEXs are extracted from the calculations. A numerical example is presented to verify that the proposed strategy is effective in formulating an optimal flexible HEN with system reliability measurement.
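
    The SA half of a GA/SA search accepts worse configurations with a temperature-dependent probability so the optimization can escape local optima. A generic Metropolis acceptance step is sketched below; it illustrates the mechanism only, not the paper's annealing schedule or cost model.

```python
import math
import random

# Simulated-annealing acceptance step: a worse candidate (higher cost)
# is accepted with probability exp(-delta_cost / T), which shrinks as
# the temperature T is lowered during the search.

def accept(current_cost, candidate_cost, temperature, rng=random.random):
    delta = candidate_cost - current_cost
    if delta <= 0:            # improvement: always accept
        return True
    return rng() < math.exp(-delta / temperature)
```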

  6. The Revised Child Anxiety and Depression Scale: A systematic review and reliability generalization meta-analysis.

    Piqueras, Jose A; Martín-Vivar, María; Sandin, Bonifacio; San Luis, Concepción; Pineda, David


    Anxiety and depression are among the most common mental disorders during childhood and adolescence. Among the instruments for the brief screening assessment of symptoms of anxiety and depression, the Revised Child Anxiety and Depression Scale (RCADS) is one of the more widely used. Previous studies have demonstrated the reliability of the RCADS for different assessment settings and different versions. The aims of this study were to examine the mean reliability of the RCADS and the influence of the moderators on the RCADS reliability. We searched in EBSCO, PsycINFO, Google Scholar, Web of Science, and NCBI databases and other articles manually from lists of references of extracted articles. A total of 146 studies were included in our meta-analysis. The RCADS showed robust internal consistency reliability in different assessment settings, countries, and languages. We only found that reliability of the RCADS was significantly moderated by the version of RCADS. However, these differences in reliability between different versions of the RCADS were slight and can be due to the number of items. We did not examine factor structure, factorial invariance across gender, age, or country, and test-retest reliability of the RCADS. The RCADS is a reliable instrument for cross-cultural use, with the advantage of providing more information with a low number of items in the assessment of both anxiety and depression symptoms in children and adolescents. Copyright © 2017. Published by Elsevier B.V.

  7. A Reliability-Based Analysis of Bicyclist Red-Light Running Behavior at Urban Intersections

    Mei Huan


    This paper describes the red-light running behavior of bicyclists at urban intersections based on a reliability analysis approach. Bicyclists' crossing behavior was collected by video recording. Four proportional hazard models using the Cox, exponential, Weibull, and Gompertz distributions were proposed to analyze the covariate effects on safety crossing reliability. The influential variables include personal characteristics, movement information, and situation factors. The results indicate that the Cox hazard model gives the best description of bicyclists' red-light running behavior. Bicyclists' safety crossing reliabilities decrease as their waiting times increase. About 15.5% of bicyclists have negligible waiting times; they are at high risk of red-light running and have very low safety crossing reliabilities. The proposed reliability models can capture the covariates' effects on bicyclists' crossing behavior at signalized intersections. Both personal characteristics and traffic conditions have significant effects on bicyclists' safety crossing reliability. A bicyclist is more likely to have low safety crossing reliability and high violation risk when more riders are crossing against the red light and when they wait closer to the motorized lane. These findings provide valuable insights into understanding bicyclists' violation behavior, and their implications for assessing bicyclists' safety crossing reliability are discussed.
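
    The hazard-model result implies a survival-style reliability function that decays with waiting time. A minimal Weibull sketch of that idea is below; the shape and scale values are invented for illustration and are not the study's fitted coefficients (the paper's preferred Cox model is semi-parametric and covariate-dependent).

```python
import math

# Survival-analysis view of red-light running: the probability that a
# bicyclist is still waiting safely at time t is a reliability function
# R(t). A Weibull form is shown with illustrative parameters.

def crossing_reliability(t_seconds, shape=1.5, scale=40.0):
    """R(t) = exp(-(t/scale)**shape): decreases as waiting time grows."""
    return math.exp(-((t_seconds / scale) ** shape))
```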

  8. Report on the analysis of field data relating to the reliability of solar hot water systems.

    Menicucci, David F. (Building Specialists, Inc., Albuquerque, NM)


    Utilities are overseeing the installation of thousands of solar hot water (SHW) systems. Utility planners have begun to ask for quantitative measures of the expected lifetimes of these systems so that they can properly forecast their loads. This report, which augments a 2009 reliability analysis effort by Sandia National Laboratories (SNL), addresses this need. Additional reliability data have been collected, added to the existing database, and analyzed. The results are presented. Additionally, formal reliability theory is described, including the bathtub curve, which is the most common model used to characterize the lifetime reliability of systems and to predict failures in the field. Reliability theory is used to assess the SNL reliability database. This assessment shows that the database is heavily weighted with data that describe the reliability of SHW systems early in their lives, during the warranty period, but contains few measured data that describe the ends of SHW systems' lives. End-of-life data are the most critical for defining the reliability of SHW systems sufficiently to answer the questions that the utilities pose. Several ideas are presented for collecting the required data, including photometric analysis of aerial photographs of installed collectors, statistical and neural-network analysis of energy bills from solar homes, and the development of simple algorithms to allow conventional SHW controllers to announce system failures and record the details of the event, similar to how aircraft black-box recorders perform. Some information is also presented about public expectations for the longevity of a SHW system, information that is useful in developing reliability goals.
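
    The bathtub curve mentioned above is commonly built as the sum of a decreasing (infant-mortality), constant (useful-life), and increasing (wear-out) hazard, the first and last often taken as Weibull. The sketch below uses invented parameter values, not SHW field data.

```python
# Bathtub hazard as the sum of three components. All parameter values
# are illustrative; real systems require fitted field data.

def weibull_hazard(t, shape, scale):
    """h(t) = (k/s) * (t/s)**(k-1); decreasing for k<1, increasing for k>1."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t_years):
    infant  = weibull_hazard(t_years, shape=0.5, scale=2.0)    # burn-in
    random_ = 0.01                                             # constant rate
    wearout = weibull_hazard(t_years, shape=4.0, scale=20.0)   # end of life
    return infant + random_ + wearout
```

    The warranty-period bias described in the report means the field data mostly constrain the left (infant-mortality) side of this curve, leaving the wear-out side poorly determined.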

  9. Reliability and life-cycle analysis of deteriorating systems

    Sánchez-Silva, Mauricio


    This book compiles and critically discusses modern engineering system degradation models and their impact on engineering decisions. In particular, the authors focus on modeling the uncertain nature of degradation, considering both conceptual discussions and formal mathematical formulations. It also describes the basic concepts and the various modeling aspects of life-cycle analysis (LCA). It highlights the role of degradation in LCA and defines optimum design and operation parameters. Given the relationship between operational decisions and the performance of the system's condition over time, maintenance models are also discussed. The concepts and models presented have applications in a large variety of engineering fields such as Civil, Environmental, Industrial, Electrical and Mechanical engineering. However, special emphasis is given to problems related to large infrastructure systems. The book is intended to be used both as a reference resource for researchers and practitioners and as an academic text ...

  10. Reliability Analysis of Repairable Systems Using Stochastic Point Processes

    TAN Fu-rong; JIANG Zhi-bin; BAI Tong-shuo


    In order to analyze failure data from repairable systems, the homogeneous Poisson process (HPP) is usually used. In general, the HPP cannot be applied to analyze the entire life cycle of a complex, repairable system because the rate of occurrence of failures (ROCOF) of the system changes over time rather than remains stable. However, from a practical point of view, it is always preferred to apply the simplest method to address problems and to obtain useful practical results. Therefore, we attempted to use the HPP model to analyze failure data from real repairable systems. A graphical method and the Laplace test were also used in the analysis. Results of numerical applications show that the HPP model may be a useful tool for the entire life cycle of repairable systems.
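
    The Laplace test cited above checks failure times for a trend in the ROCOF. A minimal implementation for failures observed over an interval (0, T] is sketched below; U near 0 is consistent with an HPP, while large positive (negative) U indicates a deteriorating (improving) system.

```python
import math

# Laplace trend test statistic for repairable-system failure times.
# Under the HPP (no trend), U is approximately standard normal.

def laplace_u(failure_times, T):
    """failure_times: chronological failure times in (0, T]."""
    n = len(failure_times)
    mean_time = sum(failure_times) / n
    return (mean_time - T / 2.0) * math.sqrt(12.0 * n) / T
```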

  11. Mechanical system reliability analysis using a combination of graph theory and Boolean function

    Tang, J


    A new method based on graph theory and Boolean functions for assessing the reliability of mechanical systems is proposed. The procedure for this approach consists of two parts. By using graph theory, the formula for the reliability of a mechanical system that considers the interrelations of subsystems or components is generated. By using the Boolean function to examine the failure interactions of two particular elements of the system, and then demonstrating how to incorporate such failure dependencies into the analysis of larger systems, a constructive algorithm for quantifying the genuine interconnections between the subsystems or components is provided. The combination of graph theory and Boolean functions provides an effective way to evaluate the reliability of a large, complex mechanical system. A numerical example demonstrates that this method is an effective approach to system reliability analysis.
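
    The paper's contribution is handling dependent failures; the baseline it builds on is the Boolean structure-function view for independent components, where series blocks multiply reliabilities and parallel blocks multiply failure probabilities. That baseline is sketched below with an invented two-pump, one-valve example.

```python
# Reliability of series/parallel blocks of independent components.
# (The paper's graph/Boolean method extends this to dependent failures.)

def series(*r):
    out = 1.0
    for x in r:
        out *= x            # all components must survive
    return out

def parallel(*r):
    out = 1.0
    for x in r:
        out *= (1.0 - x)    # all components must fail
    return 1.0 - out

# Example: two redundant pumps (0.9 each) feeding one valve (0.95).
system_reliability = series(parallel(0.9, 0.9), 0.95)
```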

  12. Analysis and Application of Mechanical System Reliability Model Based on Copula Function

    An Hai


    There are complicated correlations in mechanical systems. Using the advantages of the copula function in solving such correlation issues, this paper proposes a mechanical system reliability model based on the copula function. It makes a detailed study of the series and parallel mechanical system models and obtains their respective reliability functions. Finally, application research is carried out on the series mechanical system reliability model to prove its validity by example. Using copula theory for mechanical system reliability modeling, and studying the distributions of the random variables (the marginal distributions of the mechanical product's life) and the dependence structure of the variables separately, can reduce the difficulty of multivariate probabilistic modeling and analysis and make the modeling and analysis process clearer.
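
    One common copula construction for a two-component series system couples the component survival probabilities through a survival copula; with a Clayton copula this gives a closed form. The sketch below is illustrative only and is not necessarily the copula family or parameterization used by the authors.

```python
# Series-system reliability with dependent lifetimes via a Clayton
# survival copula: R = (R1**(-theta) + R2**(-theta) - 1)**(-1/theta).
# theta -> 0 recovers the independent-components product R1 * R2;
# larger theta models stronger positive dependence.

def clayton_series_reliability(r1, r2, theta):
    return (r1 ** (-theta) + r2 ** (-theta) - 1.0) ** (-1.0 / theta)
```

    Positive dependence raises series-system reliability relative to the independence product, which is exactly the effect a copula model lets the analyst quantify separately from the marginals.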

  13. Technology development of maintenance optimization and reliability analysis for safety features in nuclear power plants

    Kim, Tae Woon; Choi, Seong Soo; Lee, Dong Gue; Kim, Young Il


    The reliability data management system (RDMS) for safety systems of PHWR-type plants has been developed and utilized in the reliability analysis of the special safety systems of Wolsong Units 1 and 2, whose plant overhaul period has been lengthened. The RDMS was developed for periodic, efficient reliability analysis of the safety systems of Wolsong Units 1 and 2. In addition, this system provides the function of analyzing the effects on safety system unavailability if the test period of a test procedure changes, as well as the function of optimizing the test periods of safety-related test procedures. The RDMS can be utilized in actively handling the requests of the regulatory institute with regard to the reliability validation of safety systems. (author)

  14. Methodological Approach for Performing Human Reliability and Error Analysis in Railway Transportation System

    Fabio De Felice


    Today, billions of dollars are spent annually worldwide to develop, manufacture, and operate transportation systems such as trains, ships, aircraft, and motor vehicles. Around 70 to 90 percent of transportation crashes are, directly or indirectly, the result of human error. In fact, with the development of technology, system reliability has increased dramatically during the past decades, while human reliability has remained unchanged over the same period. Accordingly, human error is now considered the most significant source of accidents or incidents in safety-critical systems. The aim of this paper is to propose a methodological approach to improve transportation system reliability, and in particular railway transportation system reliability. The methodology presented is based on Failure Modes, Effects and Criticality Analysis (FMECA) and Human Reliability Analysis (HRA).
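
    A core step in FMECA is ranking failure modes by a Risk Priority Number, RPN = severity x occurrence x detection, each rated on a 1-10 scale. The sketch below uses invented railway failure modes and ratings, not data from the paper.

```python
# FMECA prioritization: compute RPN per failure mode and rank.
# Modes and ratings below are invented examples.

def rpn(severity, occurrence, detection):
    """Each input rated 1 (best) to 10 (worst); higher RPN = higher risk."""
    return severity * occurrence * detection

failure_modes = [
    ("signal misread by driver", 9, 4, 6),
    ("brake actuator sticks",    8, 2, 3),
    ("door sensor false alarm",  3, 6, 2),
]
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
```

    HRA then refines the occurrence rating for the human-error modes, which (per the 70-90 percent figure above) tend to dominate the ranking.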

  15. Tensile reliability analysis for gravity dam foundation surface based on FEM and response surface method

    Tong-chun LI; Dan-dan LI; Zhi-qiang WANG


    In this study, the limit state equation for tensile reliability analysis of the foundation surface of a gravity dam was established. The possible crack length was set as the action effect and the allowable crack length was set as the resistance in the limit state. Nonlinear FEM was used to obtain the crack length of the foundation surface of the gravity dam, and the linear response surface method based on the orthogonal test design method was used to calculate the reliability, providing a reasonable and simple method for calculating the reliability of the serviceability limit state. The Longtan RCC gravity dam was chosen as an example. An orthogonal test, including eleven factors and two levels, was conducted, and the tensile reliability was calculated. The analysis shows that this method is reasonable.

  16. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    Duffy, Stephen F.; Palko, Joseph L.


    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented in a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  17. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method

    XI Jia-mi; YANG Geng-she


    This paper discusses the advantages of an improved Monte Carlo method and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability. On the basis of deterministic parsing of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte Carlo method. The computing method considers the randomness of the related parameters and therefore satisfies the relativity among parameters. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific method for discriminating and checking surrounding rock stability.
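
    The core of any Monte Carlo reliability computation is the sampling loop over a limit state g = R - S (strength minus load), counting the fraction of samples with g < 0. The sketch below uses normal distributions with invented parameters, not the rock-mechanics model of the paper.

```python
import random

# Crude Monte Carlo estimate of failure probability for g = R - S.
# Distribution choices and parameters are illustrative only.

def monte_carlo_pf(n, mu_r=10.0, sd_r=1.0, mu_s=7.0, sd_s=1.0, seed=42):
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0
    )
    return failures / n

pf = monte_carlo_pf(20000)   # true value here is Phi(-3/sqrt(2)), about 0.017
```

    Improved Monte Carlo schemes such as the one in the paper reduce the number of samples needed for a given accuracy, e.g. by importance sampling or by exploiting correlation among the input parameters.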

  18. Latency Analysis of Systems with Multiple Interfaces for Ultra-Reliable M2M Communication

    Nielsen, Jimmy Jessen; Popovski, Petar


    One of the ways to satisfy the requirements of ultra-reliable low-latency communication for mission-critical Machine-type Communications (MTC) applications is to integrate multiple communication interfaces. In order to estimate the performance, in terms of latency and reliability, of such an integrated communication system, we propose an analysis framework that combines traditional reliability models with technology-specific latency probability distributions. In our proposed model we demonstrate how failure correlation between technologies can be taken into account. We show for the considered ...
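
    Why failure correlation matters can be seen in a simple common-cause (beta-factor) sketch for two parallel interfaces: a fraction beta of each interface's failure probability is a shared cause that takes both links down together. This is a generic illustration, not the paper's analysis framework, and the values are invented.

```python
# Failure probability of two parallel interfaces under a beta-factor
# common-cause model. beta = 0 gives independent failures (p**2);
# beta = 1 makes the second interface useless (p).

def dual_interface_failure(p_fail, beta):
    """p_fail: failure prob. of each interface; beta in [0, 1]."""
    common = beta * p_fail                 # shared failure cause
    independent = (1.0 - beta) * p_fail    # interface-specific part
    # System fails if the common cause hits, or both independent parts fail.
    return common + (1.0 - common) * independent ** 2
```

    Even a small beta dominates the sum, which is why integrating interfaces over genuinely independent technologies is key to ultra-reliability.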

  19. Reliability and error analysis on xenon/CT CBF

    Zhang, Z. [Diversified Diagnostic Products, Inc., Houston, TX (United States)


    This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors, such as CT noise, motion artifacts, a lower percentage of xenon supply, and lower tissue enhancement. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. The motion artifact is treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four levels of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments, and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies are fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of individual error sources. Mixed-error testing is also provided to inspect the combined effect of errors. The experiment shows that CT noise is still a major error source. The motion artifact affects the CBF results more geometrically than quantitatively. A lower xenon supply has a lesser effect on the results, but will reduce the signal-to-noise ratio. Lower xenon enhancement will lower the flow values in all areas of the brain. (author)

  20. Reliability of Foundation Pile Based on Settlement and a Parameter Sensitivity Analysis

    Shujun Zhang; Luo Zhong; Zhijun Xu


    Based on an uncertainty analysis of the settlement calculation model, the formula for the reliability index of a foundation pile is derived. Based on this formula, the influence on reliability of the coefficient of variation of the calculated settlement at the pile head, the coefficient of variation of the permissible settlement limit, the coefficient of variation of the measured settlement, the safety coefficient, and the mean value of the calculation model coefficient is analyzed. The results indicate that (1) hig...

  1. Investigation of Common Symptoms of Cancer and Reliability Analysis


    Objective: To identify cancer distribution and treatment requirements, a questionnaire survey of cancer patients was conducted. It was our objective to validate a series of symptoms commonly used in traditional Chinese medicine (TCM). Methods: The M. D. Anderson Symptom Inventory (MDASI) was used with 10 TCM items added. Questions regarding the TCM applications requested in cancer care were also asked. A multi-center, cross-sectional study was conducted in 340 patients from 4 hospitals in Beijing and Dalian. SPSS and Excel software were adopted for statistical analysis. The internal consistency of the questionnaire was evaluated with Cronbach's alpha. Results: The most common symptoms were fatigue 89.4%, sleep disturbance 74.4%, dry mouth 72.9%, poor appetite 72.9%, and difficulty remembering 71.2%. These symptoms affected work (89.8%), mood (82.6%), and activity (76.8%), resulting in poor quality of life. Eighty percent of the patients wanted to regulate the body with TCM. Almost 100% of the patients were interested in acquiring knowledge regarding integrated traditional Chinese medicine (TCM) and Western medicine (WM) in the treatment and rehabilitation of cancer. Cronbach's alpha scores indicated that there was acceptable internal consistency within both the MDASI and TCM items: 0.86 for the MDASI, 0.78 for the TCM items, and 0.90 for the MDASI-TCM (23 items). Conclusions: Fatigue, sleep disturbance, dry mouth, poor appetite, and difficulty remembering are the most common symptoms in cancer patients. These greatly affect the quality of life for these patients. Patients expressed a strong desire for TCM holistic regulation. The MDASI and its TCM-adapted model could be a critical tool for the quantitative study of TCM symptoms.
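
    Cronbach's alpha, the internal-consistency measure reported above (0.86, 0.78, 0.90), can be computed directly from raw item scores with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below uses toy data, not the study's responses.

```python
# Cronbach's alpha from a list of per-item score lists (equal lengths).
# Uses population variance throughout, as is conventional for alpha.

def cronbach_alpha(items):
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(i) for i in items) / var(totals))
```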

  2. Reliability and Validity of Quantitative Video Analysis of Baseball Pitching Motion.

    Oyama, Sakiko; Sosa, Araceli; Campbell, Rebekah; Correa, Alexandra


    Video recordings are used to quantitatively analyze pitchers' techniques. However, the reliability and validity of such analysis are unknown. The purpose of the study was to investigate the reliability and validity of joint and segment angles identified during a pitching motion using video analysis. Thirty high school baseball pitchers participated. The pitching motion was captured using 2 high-speed video cameras and a motion capture system. Two raters reviewed the videos to digitize the body segments to calculate 2-dimensional angles. The corresponding 3-dimensional angles were calculated from the motion capture data. Intrarater reliability, interrater reliability, and validity of the 2-dimensional angles were determined. The intrarater and interrater reliability of the 2-dimensional angles were high for most variables. The trunk contralateral flexion at maximum external rotation was the only variable with high validity. Trunk contralateral flexion at ball release, trunk forward flexion at foot contact and ball release, shoulder elevation angle at foot contact, and maximum shoulder external rotation had moderate validity. Two-dimensional angles at the shoulder, elbow, and trunk could be measured with high reliability. However, the angles are not necessarily anatomically correct, and thus use of quantitative video analysis should be limited to angles that can be measured with good validity.

  3. Stochastic data-flow graph models for the reliability analysis of communication networks and computer systems

    Chen, D.J.


    The literature is abundant with combinatorial reliability analyses of communication networks and fault-tolerant computer systems. However, it is very difficult to formulate reliability indexes using combinatorial methods. These limitations have led to the development of time-dependent reliability analysis using stochastic processes. In this research, time-dependent reliability-analysis techniques using Dataflow Graphs (DFG) are developed. The chief advantages of DFG models over other models are their compactness, structural correspondence with the systems, and general amenability to direct interpretation. This makes it possible to verify the correspondence of the dataflow-graph representation to the actual system. Several DFG models are developed and used to analyze the reliability of communication networks and computer systems. Specifically, Stochastic Dataflow Graphs (SDFG), in both discrete-time and continuous-time versions, are developed and used to compute the time-dependent reliability of communication networks and computer systems. The repair and coverage phenomena of communication networks are also analyzed using SDFG models.

  4. Multiobject Reliability Analysis of Turbine Blisk with Multidiscipline under Multiphysical Field Interaction

    Chun-Yi Zhang


    To accurately study the influence of the deformation, stress, and strain of a turbine blisk on the performance of an aeroengine, a comprehensive reliability analysis of the turbine blisk with multiple disciplines and multiple objects was performed based on the multiple response surface method (MRSM) and fluid-thermal-solid coupling technique. Firstly, the basic idea of the MRSM was introduced. Then the mathematical model of the MRSM was established with quadratic polynomials. Finally, the multiple reliability analyses of deformation, stress, and strain of the turbine blisk were completed under multiphysical field coupling by the MRSM, and the comprehensive performance of the turbine blisk was evaluated. The reliability analysis demonstrates that the reliability degrees of the deformation, stress, and strain for the turbine blisk are 0.9942, 0.9935, and 0.9954, respectively, when the allowable deformation, stress, and strain are 3.7 × 10−3 m, 1.07 × 109 Pa, and 1.12 × 10−2 m/m, respectively; besides, the comprehensive reliability degree of the turbine blisk is 0.9919, which basically satisfies the engineering requirement of an aeroengine. The efforts of this paper provide a promising approach for multidiscipline, multiobject reliability analysis.

  5. Segmental analysis of indocyanine green pharmacokinetics for the reliable diagnosis of functional vascular insufficiency

    Kang, Yujung; Lee, Jungsul; An, Yuri; Jeon, Jongwook; Choi, Chulhee


    Accurate and reliable diagnosis of functional insufficiency of the peripheral vasculature is essential, since Raynaud phenomenon (RP), the most common form of peripheral vascular insufficiency, is commonly associated with systemic vascular disorders. We have previously demonstrated that dynamic imaging of the near-infrared fluorophore indocyanine green (ICG) can be a noninvasive and sensitive tool to measure tissue perfusion. In the present study, we demonstrated that combined analysis of multiple parameters, especially onset time and modified Tmax (the time from the onset of ICG fluorescence to Tmax), can be used as a reliable diagnostic tool for RP. To validate the method, we performed conventional thermographic analysis combined with cold challenge and rewarming, along with ICG dynamic imaging and segmental analysis. A case-control analysis demonstrated that the segmental pattern of ICG dynamics in both hands was significantly different between normal and RP cases, suggesting the possibility of clinical application of this novel method for the convenient and reliable diagnosis of RP.

  6. Analysis of strain gage reliability in F-100 jet engine testing at NASA Lewis Research Center

    Holanda, R.


    A reliability analysis was performed on 64 strain gage systems mounted on the 3 rotor stages of the fan of a YF-100 engine. The strain gages were used in a 65-hour fan flutter research program which included about 5 hours of blade flutter. The analysis was part of a reliability improvement program. Eighty-four percent of the strain gages survived the test and performed satisfactorily. A post-test analysis determined most failure causes. Five failures were caused by open circuits, three failed gages showed elevated circuit resistance, and one gage circuit was grounded. One failure was undetermined.

  7. Problems Related to Use of Some Terms in System Reliability Analysis

    Nadezda Hanusova


    The paper deals with problems of using dependability terms, defined in the current standard STN IEC 50 (191): International electrotechnical dictionary, chap. 191: Dependability and quality of service (1993), in technical systems dependability analysis. The goal of the paper is to find a relation between the terms introduced in the mentioned standard and used in technical systems dependability analysis and the rules and practices used in system analysis within systems theory. A description of the part of the system life cycle related to reliability is used as a starting point. This part of the system life cycle is described by a state diagram, and the reliability-relevant terms are assigned to it.

  8. Content Analysis in Mass Communication: Assessment and Reporting of Intercoder Reliability.

    Lombard, Matthew; Snyder-Duch, Jennifer; Bracken, Cheryl Campanella


    Reviews the importance of intercoder agreement for content analysis in mass communication research. Describes several indices for calculating this type of reliability (varying in appropriateness, complexity, and apparent prevalence of use). Presents a content analysis of content analyses reported in communication journals to establish how…
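
    Of the intercoder reliability indices this review surveys, the two simplest are percent agreement and Cohen's kappa (which corrects for chance agreement). A minimal sketch with made-up coding data for two coders:

    ```python
    from collections import Counter

    def percent_agreement(a, b):
        # Fraction of units on which the two coders assigned the same category.
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b):
        po = percent_agreement(a, b)                     # observed agreement
        n = len(a)
        ca, cb = Counter(a), Counter(b)
        # Expected chance agreement from each coder's marginal distribution.
        pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
        return (po - pe) / (1 - pe)

    # Hypothetical codings of 10 units by two coders.
    coder1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "pos", "pos"]
    coder2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "pos"]
    print(percent_agreement(coder1, coder2), cohens_kappa(coder1, coder2))
    ```

    Percent agreement here is 0.8, but kappa is noticeably lower because much of that agreement would be expected by chance alone.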

  9. Dynamic Scapular Movement Analysis: Is It Feasible and Reliable in Stroke Patients during Arm Elevation?

    De Baets, Liesbet; Van Deun, Sara; Desloovere, Kaat; Jaspers, Ellen


    Knowledge of three-dimensional scapular movements is essential to understand post-stroke shoulder pain. The goal of the present work is to determine the feasibility and the within and between session reliability of a movement protocol for three-dimensional scapular movement analysis in stroke patients with mild to moderate impairment, using an optoelectronic measurement system. Scapular kinematics of 10 stroke patients and 10 healthy controls was recorded on two occasions during active anteflexion and abduction from 0° to 60° and from 0° to 120°. All tasks were executed unilaterally and bilaterally. The protocol’s feasibility was first assessed, followed by within and between session reliability of scapular total range of motion (ROM), joint angles at start position and of angular waveforms. Additionally, measurement errors were calculated for all parameters. Results indicated that the protocol was generally feasible for this group of patients and assessors. Within session reliability was very good for all tasks. Between sessions, scapular angles at start position were measured reliably for most tasks, while scapular ROM was more reliable during the 120° tasks. In general, scapular angles showed higher reliability during anteflexion compared to abduction, especially for protraction. Scapular lateral rotations resulted in smallest measurement errors. This study indicates that scapular kinematics can be measured reliably and with precision within one measurement session. In case of multiple test sessions, further methodological optimization is required for this protocol to be suitable for clinical decision-making and evaluation of treatment efficacy. PMID:24244414

  10. Dynamic scapular movement analysis: is it feasible and reliable in stroke patients during arm elevation?

    Liesbet De Baets

    Knowledge of three-dimensional scapular movements is essential to understand post-stroke shoulder pain. The goal of the present work is to determine the feasibility and the within- and between-session reliability of a movement protocol for three-dimensional scapular movement analysis in stroke patients with mild to moderate impairment, using an optoelectronic measurement system. Scapular kinematics of 10 stroke patients and 10 healthy controls was recorded on two occasions during active anteflexion and abduction from 0° to 60° and from 0° to 120°. All tasks were executed unilaterally and bilaterally. The protocol's feasibility was first assessed, followed by within- and between-session reliability of scapular total range of motion (ROM), joint angles at start position, and angular waveforms. Additionally, measurement errors were calculated for all parameters. Results indicated that the protocol was generally feasible for this group of patients and assessors. Within-session reliability was very good for all tasks. Between sessions, scapular angles at start position were measured reliably for most tasks, while scapular ROM was more reliable during the 120° tasks. In general, scapular angles showed higher reliability during anteflexion compared to abduction, especially for protraction. Scapular lateral rotations resulted in the smallest measurement errors. This study indicates that scapular kinematics can be measured reliably and with precision within one measurement session. In case of multiple test sessions, further methodological optimization is required for this protocol to be suitable for clinical decision-making and evaluation of treatment efficacy.

  11. Muscle activity patterns and spinal shrinkage in office workers using a sit-stand workstation versus a sit workstation.

    Gao, Ying; Cronin, Neil J; Pesola, Arto J; Finni, Taija


    Reducing sitting time by means of sit-stand workstations is an emerging trend, but further evidence is needed regarding their health benefits. This cross-sectional study compared work time muscle activity patterns and spinal shrinkage between office workers (aged 24-62, 58.3% female) who used either a sit-stand workstation (Sit-Stand group, n = 10) or a traditional sit workstation (Sit group, n = 14) for at least the past three months. During one typical workday, muscle inactivity and activity from quadriceps and hamstrings were monitored using electromyography shorts, and spinal shrinkage was measured using stadiometry before and after the workday. Compared with the Sit group, the Sit-Stand group had less muscle inactivity time (66.2 ± 17.1% vs. 80.9 ± 6.4%, p = 0.014) and more light muscle activity time (26.1 ± 12.3% vs. 14.9 ± 6.3%, p = 0.019) with no significant difference in spinal shrinkage (5.62 ± 2.75 mm vs. 6.11 ± 2.44 mm). This study provides evidence that working with sit-stand workstations can promote more light muscle activity time and less inactivity without negative effects on spinal shrinkage. Practitioner Summary: This cross-sectional study compared the effects of using a sit-stand workstation to a sit workstation on muscle activity patterns and spinal shrinkage in office workers. It provides evidence that working with a sit-stand workstation can promote more light muscle activity time and less inactivity without negative effects on spinal shrinkage.

  12. Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis

    Zhou, Z.-G.; Tang, P.; Zhou, M.


    Normally, the status of land cover is inherently dynamic, changing continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is of importance for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most of the present methods only label the detection results with "Change/No change", while few methods focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
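
    The confidence-level idea in step (3) can be illustrated with a simplified stand-in for BFAST: fit a season-trend model to the stable history, forecast forward, and convert a new observation's deviation (in residual standard deviations) into a confidence level. All data and model details below are illustrative assumptions:

    ```python
    from math import erf, sqrt

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic NDVI-like history: trend + annual harmonic + noise (a toy
    # stand-in for the season-trend model fitted to the stable period).
    t_hist = np.arange(120)  # 10 years of monthly observations
    y_hist = (0.001 * t_hist + 0.2 * np.sin(2 * np.pi * t_hist / 12)
              + 0.5 + rng.normal(0, 0.02, t_hist.size))

    def design(t):
        # Harmonic-trend design matrix: intercept, trend, annual sin/cos.
        return np.column_stack([np.ones_like(t, dtype=float), t,
                                np.sin(2 * np.pi * t / 12),
                                np.cos(2 * np.pi * t / 12)])

    coef, *_ = np.linalg.lstsq(design(t_hist), y_hist, rcond=None)
    resid_sd = np.std(y_hist - design(t_hist) @ coef)

    # New observation: a flood-like drop far below the model forecast.
    t_new = np.array([120])
    y_new = 0.25
    z = abs(y_new - (design(t_new) @ coef)[0]) / resid_sd

    # Two-sided confidence level that this deviation is a real disturbance.
    confidence = erf(z / sqrt(2))
    print(f"z = {z:.1f}, disturbance confidence ~ {confidence:.3f}")
    ```

    A deviation of many residual standard deviations yields a confidence level close to 1, i.e. the disturbance flag is highly reliable.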

  13. Reliability reallocation models as support tools in traffic safety analysis.

    Bačkalić, Svetlana; Jovanović, Dragan; Bačkalić, Todor


    One of the essential questions placed before a road authority is where to act first, i.e. which road sections should be treated in order to achieve the desired level of reliability of a particular road; this is also the subject of this research. The paper shows how reliability reallocation theory can be applied in the safety analysis of a road consisting of sections. The model has been successfully tested using two apportionment techniques: ARINC and the minimum effort algorithm. The given methods were applied in traffic safety analysis as a basic step, for the purpose of achieving a higher level of reliability. The previous methods used for selecting hazardous locations do not provide precise values for the required frequency of accidents, i.e. the time period between the occurrences of two accidents. In other words, they do not allow for the establishment of a connection between a precise demand for increased reliability (expressed as a percentage) and the selection of particular road sections for further analysis. The paper shows that reallocation models can also be applied in road safety analysis, or more precisely, as part of the measures for increasing the level of road safety. A tool has been developed for selecting road sections for treatment on the basis of a precisely defined increase in the level of reliability of a particular road, i.e. the mean time between the occurrences of two accidents.
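
    The ARINC apportionment technique named above allocates a required system failure rate among sections in proportion to their current rates, so sections that currently fail more often absorb more of the allowed budget. A minimal sketch with hypothetical accident data:

    ```python
    # ARINC-style apportionment: distribute a required system failure rate
    # among sections in proportion to their current (observed) failure rates.
    # Here "failure" = traffic accident; all rates are illustrative toy data.

    current_rates = {"section A": 4.0, "section B": 1.0, "section C": 3.0}  # acc/yr
    system_rate_required = 4.0  # target: at most 4 accidents/yr on the whole road

    total = sum(current_rates.values())
    allocated = {s: system_rate_required * r / total
                 for s, r in current_rates.items()}

    for s, lam in allocated.items():
        print(f"{s}: allocated rate {lam:.2f}/yr -> mean time between accidents "
              f"{1.0 / lam:.2f} yr")
    ```

    The allocated section rates sum exactly to the system target, and each section's mean time between accidents follows as the reciprocal of its allocated rate.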

  14. Reliability analysis of supporting pressure in tunnels based on three-dimensional failure mechanism

    罗卫华; 李闻韬


    Based on a nonlinear failure criterion, a three-dimensional failure mechanism of the possible collapse of a deep tunnel is presented with limit analysis theory. Support pressure is taken into consideration in the virtual work equation performed under the upper bound theorem. It is necessary to point out that the properties of the surrounding rock mass play a vital role in the shape of the collapsing rock mass. The first-order reliability method and the Monte Carlo simulation method are then employed to analyze the stability of the presented mechanism. Different rock parameters are treated as random variables to evaluate the corresponding reliability index with an increasing applied support pressure. The reliability indexes calculated by the two methods are in good agreement. Sensitivity analysis was performed and the influence of the coefficient of variation of the rock parameters was discussed. It is shown that the tensile strength plays a much more important role in the reliability index than the dimensionless parameter, and that small changes in the coefficient of variation have a great influence on the reliability index. Thus, significant attention should be paid to the properties of the surrounding rock mass, and the applied support pressure needed to maintain the stability of the tunnel can be determined for a given reliability index.
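
    The Monte Carlo side of such an analysis reduces to estimating the failure probability of a limit state and converting it to a reliability index via the inverse normal CDF. The limit-state function and parameter values below are toy assumptions, not the paper's 3D collapse mechanism:

    ```python
    from statistics import NormalDist

    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative limit state for a supported tunnel: g < 0 means collapse.
    # st = rock tensile strength (MPa, random); q = applied support pressure (MPa).
    def g(st, q):
        return st + 0.8 * q - 2.0

    st = rng.normal(loc=2.5, scale=0.5, size=1_000_000)

    results = {}
    for q in (0.0, 0.5):
        pf = float(np.mean(g(st, q) < 0.0))        # Monte Carlo failure probability
        beta = -NormalDist().inv_cdf(pf)           # reliability index
        results[q] = (pf, beta)
        print(f"q = {q}: Pf = {pf:.4f}, reliability index beta = {beta:.2f}")
    ```

    As in the paper, increasing the applied support pressure raises the reliability index, so the pressure required for a target index can be read off by sweeping q.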

  15. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    Zio, Enrico


    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
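
    The core of Monte Carlo reliability estimation can be shown on a small system whose exact reliability is known in closed form, so the simulation can be checked against it (the component layout and values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 500_000  # number of Monte Carlo trials

    # Toy system: components 1 and 2 in parallel, in series with component 3.
    p = {1: 0.9, 2: 0.9, 3: 0.95}                      # component reliabilities
    up = {i: rng.random(N) < prob for i, prob in p.items()}  # sampled states

    system_up = (up[1] | up[2]) & up[3]                # system structure function
    mc = float(np.mean(system_up))
    exact = (1 - (1 - p[1]) * (1 - p[2])) * p[3]       # closed-form reliability
    print(f"Monte Carlo: {mc:.4f}, exact: {exact:.4f}")
    ```

    For complex systems the closed form is unavailable, but the sampling loop stays the same; only the structure function grows.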




    A two-point adaptive nonlinear approximation (referred to as TANA4) suitable for reliability analysis is proposed. Transformed and normalized random variables in probabilistic analysis could become negative and pose a challenge to the earlier developed two-point approximations; thus a suitable method that can address this issue is needed. In the method proposed, the nonlinearity indices of intervening variables are limited to integers. Then, on the basis of the present method, an improved sequential approximation of the limit state surface for reliability analysis is presented. With the gradient projection method, the data points for the limit state surface approximation are selected on the original limit state surface, which effectively represents the nature of the original response function. On the basis of this new approximation, the reliability is estimated using a first-order second-moment method. Various examples, including both structural and non-structural ones, are presented to show the effectiveness of the method proposed.
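
    The first-order second-moment step mentioned at the end can be sketched for the classic g = R − S limit state: linearize g at the mean point and take the reliability index as the mean of g over its approximate standard deviation. The means, deviations, and limit state below are hypothetical, not the paper's TANA4 surrogate:

    ```python
    import numpy as np

    # First-order second-moment (FOSM) reliability index for g(X) = R - S
    # with independent variables R (resistance) and S (load).
    mu = np.array([5.0, 3.0])      # means of R, S (hypothetical)
    sigma = np.array([0.8, 0.6])   # standard deviations (hypothetical)

    def g(x):
        return x[0] - x[1]

    # Gradient of g at the mean point by central finite differences.
    eps = 1e-6
    grad = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2 * eps)
                     for e in np.eye(2)])

    # beta = mean(g) / std(g), with std(g) from first-order error propagation.
    beta = g(mu) / np.sqrt(np.sum((grad * sigma) ** 2))
    print(f"FOSM reliability index beta = {beta:.3f}")
    ```

    In practice the same finite-difference gradient would be taken on the response-surface approximation rather than on an explicit g.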

  17. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes


    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice.

  18. Asymptotic Sampling for Reliability Analysis of Adhesive Bonded Stepped Lap Composite Joints

    Kimiaeifar, Amin; Lund, Erik; Thomsen, Ole Thybo


    Reliability analysis coupled with finite element analysis (FEA) of composite structures is computationally very demanding and requires a large number of simulations to achieve an accurate prediction of the probability of failure with a small standard error. In this paper Asymptotic Sampling, which … Three-dimensional (3D) FEA is used for the structural analysis together with a design equation that is associated with a deterministic code-based design equation where reliability is secured by partial safety factors. The Tsai-Wu and the maximum principal stress failure criteria are used to predict … failure in the composite and adhesive layers, respectively, and the results are compared with the target reliability level implicitly used in the wind turbine standard IEC 61400-1. The accuracy and efficiency of Asymptotic Sampling is investigated by comparing the results with predictions obtained using …

  19. Structure buckling and non-probabilistic reliability analysis of supercavitating vehicles

    AN Wei-guang; ZHOU Ling; AN Hai


    To perform structure buckling and reliability analysis on supercavitating vehicles travelling at high velocity underwater, supercavitating vehicles were first simplified as a variable cross-section beam. Structural buckling analysis of supercavitating vehicles with or without engine thrust was then conducted, and the structural buckling safety margin equation of supercavitating vehicles was established. The indefinite information was described by an interval set, and the structural reliability analysis was performed using the non-probabilistic reliability method. Considering the interval variables as random variables satisfying a uniform distribution, the Monte-Carlo method was used to calculate the non-probabilistic failure degree. Numerical examples of supercavitating vehicles were presented. Under different ratios of base diameter to cavitator diameter, the change tendency of the non-probabilistic failure degree of structural buckling of supercavitating vehicles with or without engine thrust was studied as the speed varied.

  20. The application of emulation techniques in the analysis of highly reliable, guidance and control computer systems

    Migneault, Gerard E.


    Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.

  1. Reliability analysis of shoulder balance measures: comparison of the 4 available methods.

    Hong, Jae-Young; Suh, Seung-Woo; Yang, Jae-Hyuk; Park, Si-Young; Han, Ji-Hoon


    Observational study with 3 examiners. To compare the reliability of shoulder balance measurement methods. There are several measurement methods for shoulder balance, but no reliability analysis has been performed despite the clinical importance of this measurement. Whole spine posteroanterior radiographs (n = 270) were collected to compare the reliability of the 4 shoulder balance measures in patients with adolescent idiopathic scoliosis. Each radiograph was measured twice by each of the 3 examiners using the 4 measurement methods. The data were analyzed statistically to determine the inter- and intraobserver reliability. Overall, the 4 radiographical methods showed an excellent intraclass correlation coefficient regardless of severity in intra- and interobserver comparisons (>0.904 and >0.787, respectively). In addition, the mean absolute difference values in all methods were low and comparatively similar regardless of severity, and the mean absolute difference values in the clavicular angle method were lower, suggesting it as the preferred shoulder balance measurement method clinically. Level of evidence: 3.
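
    The mean absolute difference statistics used in such rater comparisons are straightforward to compute. A sketch with made-up shoulder-balance angles for two observers, each measuring every radiograph twice:

    ```python
    import numpy as np

    # Toy shoulder-balance angles (degrees): 6 radiographs, each measured
    # twice by each of 2 observers (all values are illustrative).
    obs1 = np.array([[5.1, 5.3], [2.0, 2.2], [7.8, 7.6],
                     [3.3, 3.1], [6.0, 6.2], [4.4, 4.5]])
    obs2 = np.array([[5.4, 5.2], [2.3, 2.1], [7.5, 7.7],
                     [3.0, 3.2], [6.3, 6.1], [4.6, 4.4]])

    # Intraobserver MAD: difference between one observer's two readings.
    intra_mad = float(np.mean(np.abs(obs1[:, 0] - obs1[:, 1])))
    # Interobserver MAD: difference between the observers' mean readings.
    inter_mad = float(np.mean(np.abs(obs1.mean(axis=1) - obs2.mean(axis=1))))
    print(f"intraobserver MAD = {intra_mad:.3f}, interobserver MAD = {inter_mad:.3f}")
    ```

    Lower MAD values indicate that repeated measurements agree more closely, which is how the methods in the study are ranked.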

  2. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.


    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.

  3. Optimizing Workstation Design for Standing Work System in an Electronics Assembly Work

    Baba Mohd DEROS


    Background: A standing workstation can be a strategic approach for many electronics manufacturers to achieve work optimization. However, the well-being of the workers has become a great issue for both workers and employers. The main objective of this research was to study the effects of standing working posture on the workers and its impact on workers' health and productivity, and then to re-design and optimize their workstations for a better working posture. Methods: The methods used in this study included ergonomics risk assessment using Standing Risk Assessment (SRA), Body Parts Symptoms Analysis (BPSA) and anthropometric data measurements. The subjects in this study were 146 female workers. This case study was carried out in 2011 in a multinational electronics company situated in Beranang Industrial Area, Selangor, Malaysia. Results: After the re-design, a 26% floor space saving was achieved, as well as a 30% improvement in productivity and quality and a reduction in Work In Progress (WIP). The risk level was at level 2, which is considerably low. Moreover, the calculated numbers of industrial accidents and total lost hours were reduced sharply by implementing correct standing cell operation. Conclusion: Standing while working might be the most productive posture in manufacturing and assembly work. However, it can be the opposite if the workers are exposed to musculoskeletal disorders and fatigue because of standing for too long. Keywords: Standing work, Workstation design, Ergonomics, Standing risk assessment

  4. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru


    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing, for lung cancer mass screening with a helical CT scanner, a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation incorporating these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used on the telemedicine site makes "Encryption of file" and "Success in login" effective. As a result, patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation, and our telemedicine network system can increase diagnostic speed and diagnostic accuracy and improve the security of medical information.

  5. Reliability analysis of production ships with emphasis on load combination and ultimate strength

    Wang, Xiaozhi


    This thesis deals with ultimate strength and reliability analysis of offshore production ships, accounting for stochastic load combinations, using a typical North Sea production ship for reference. A review of methods for structural reliability analysis is presented. Probabilistic methods are established for the still water and vertical wave bending moments. Linear stress analysis of a midships transverse frame is carried out, and four different finite element models are assessed. Upon verification of the general finite element code ABAQUS with a typical ship transverse girder example, for which test results are available, ultimate strength analysis of the reference transverse frame is made to obtain the ultimate load factors associated with the specified pressure loads in Det norske Veritas Classification rules for ships and rules for production vessels. Reliability analysis is performed to develop appropriate design criteria for the transverse structure. It is found that the transverse frame failure mode does not seem to contribute to the system collapse. Ultimate strength analysis of the longitudinally stiffened panels is performed, accounting for the combined biaxial and lateral loading. Reliability-based design of the longitudinally stiffened bottom and deck panels is accomplished with regard to the collapse mode under combined biaxial and lateral loads. 107 refs., 76 figs., 37 tabs.

  6. Assessing the Reliability of Digitalized Cephalometric Analysis in Comparison with Manual Cephalometric Analysis

    Farooq, Mohammed Umar; Khan, Mohd. Asadullah; Imran, Shahid; Qureshi, Arshad; Ahmed, Syed Afroz; Kumar, Sujan; Rahman, Mohd. Aziz Ur


    Introduction: For more than seven decades orthodontists have used cephalometric analysis as one of the main diagnostic tools, which can be performed manually or by software. The use of computers in treatment planning is expected to avoid errors and make it less time consuming, with effective evaluation and high reproducibility. Aim: This study was done to evaluate and compare the accuracy and reliability of cephalometric measurements between the computerized method using direct digital radiographs and conventional tracing. Materials and Methods: Digital and conventional hand-tracing cephalometric analyses of 50 patients were done. Thirty anatomical landmarks were defined on each radiograph by a single investigator, and 5 skeletal analyses (Steiner, Wits, Tweed, McNamara, Rakosi Jarabak) and 28 variables were calculated. Results: The variables showed consistency between the two methods, except for the 1-NA, Y-axis and interincisal angle measurements, which were higher in manual tracing, and a higher facial axis angle in digital tracing. Conclusion: Most of the commonly used measurements were accurate, except for some measurements between the digital tracing with FACAD® and manual methods. The advantages of digital imaging, such as enhancement, transmission, archiving and low radiation dosages, make it preferable to the conventional method in daily use. PMID:27891451




    Load balancing is the task of distributing application tasks to different processors in an efficient manner to minimize program execution time. It involves assigning work to each processor in proportion to its computing power, hence minimizing idle time and, at the same time, avoiding overloading any processor. In a Network of Workstations (NOWs) based system, heterogeneity exists in processor, memory and network parameters. In this paper, we introduce a workstation priority mechanism, which assigns a priority and an assignment factor to each workstation based on its computing power. The basic idea is to use this priority and assignment factor to allocate to processors tasks proportional to their performance. The major advantage of this technique is that all processors finish their tasks almost at the same time, so as to minimize the occurrence of a rebalancing state. This technique proves to be cost-effective as the communication overheads are substantially reduced.
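
    The proportional-assignment idea can be sketched as follows: each workstation receives a share of the tasks proportional to its relative computing power, with largest-remainder rounding so the shares sum exactly to the task count (workstation names and powers are illustrative):

    ```python
    # Allocate N tasks to heterogeneous workstations in proportion to an
    # assignment factor derived from each one's computing power, so that all
    # workstations finish at roughly the same time.

    def allocate(tasks, powers):
        total = sum(powers.values())
        # Ideal fractional shares, then largest-remainder rounding so the
        # integer allocations sum exactly to `tasks`.
        raw = {w: tasks * p / total for w, p in powers.items()}
        alloc = {w: int(r) for w, r in raw.items()}
        leftover = tasks - sum(alloc.values())
        for w in sorted(raw, key=lambda w: raw[w] - alloc[w],
                        reverse=True)[:leftover]:
            alloc[w] += 1
        return alloc

    powers = {"ws1": 4.0, "ws2": 2.0, "ws3": 1.0}  # relative computing power
    alloc = allocate(100, powers)
    print(alloc)
    ```

    With these powers, the expected completion times (tasks divided by power) come out nearly equal across workstations, which is exactly the rebalancing-avoidance property the abstract describes.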

  8. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.


    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring the failure, in order to increase the design life and eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high cycle rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model were illustrated in the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model has the ability to generate data closely matching the field data with a minimal percentage of error and, for practical application, the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.
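
    A two-state Markov chain like the one described can be written down directly; its stationary distribution gives the long-run fraction of cycles spent in each damage mode. The transition probabilities below are illustrative, not fitted to crankshaft data:

    ```python
    import numpy as np

    # Two-state Markov chain for mixed-mode damage: state 0 = bending-dominated,
    # state 1 = torsion-dominated, with bending weighted as the more likely state.
    P = np.array([[0.9, 0.1],   # from bending: stay / switch to torsion
                  [0.3, 0.7]])  # from torsion: switch to bending / stay

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    print(f"long-run fraction of cycles: bending {pi[0]:.2f}, torsion {pi[1]:.2f}")
    ```

    The stationary probabilities could then weight the bending and torsion damage contributions when building the Weibull reliability curves.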

  9. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Nikabdullah, N. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia and Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Singh, S. S. K.; Alebrahim, R.; Azizi, M. A. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); K, Elwaleed A. [Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Noorani, M. S. M. [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia (Malaysia)


    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring the failure, in order to increase the design life and eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high cycle rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model were illustrated in the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model has the ability to generate data closely matching the field data with a minimal percentage of error and, for practical application, the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.

  10. Application of FTA Method to Reliability Analysis of Vacuum Resin Shot Dosing Equipment


    Faults of vacuum resin shot dosing equipment are studied systematically, and the fault tree of the system is constructed using the fault tree analysis (FTA) method. Qualitative and quantitative analyses of the tree are then carried out, and, based on the results, measures to improve the system are worked out and implemented. As a result, the reliability of the equipment is greatly enhanced.
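    The quantitative step of FTA, assuming independent basic events, reduces to gate-by-gate probability algebra. The following sketch uses hypothetical event probabilities and a hypothetical tree structure, not values from the paper:

```python
# Illustrative fault tree quantification (independent basic events assumed).
# Gate formulas: OR -> 1 - prod(1 - p_i), AND -> prod(p_i).
from functools import reduce

def or_gate(probs):
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def and_gate(probs):
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical basic-event probabilities for a dosing system.
p_valve, p_pump, p_sensor_a, p_sensor_b = 0.01, 0.02, 0.05, 0.05
p_sensing = and_gate([p_sensor_a, p_sensor_b])   # redundant sensors: both must fail
p_top = or_gate([p_valve, p_pump, p_sensing])    # any branch fails the system
print(p_top)
```

    The qualitative step (minimal cut sets) identifies which combinations feed each gate; the quantitative step above then turns those combinations into a top-event probability.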

  11. Aviation Fuel System Reliability and Fail-Safety Analysis. Promising Alternative Ways for Improving the Fuel System Reliability

    I. S. Shumilov


    Full Text Available The paper deals with design requirements for an aviation fuel system (AFS): basic design requirements, reliability, and design precautions to avoid AFS failure. It compares the reliability and fail-safety of the AFS and the aircraft hydraulic system (AHS), considers promising alternative ways to raise the reliability of fuel systems, and elaborates recommendations to improve the reliability of pipeline system components and pipeline systems in general, based on the selection of design solutions. It is highly advisable to design the AFS and AHS in accordance with Aviation Regulations АП25 and the Accident Prevention Guidelines of ICAO (International Civil Aviation Organization), which will reduce the risk of emergency situations and in some cases even avoid heavy disasters. AFS and AHS designs should be based on uniform principles to ensure the highest reliability and safety. Currently, however, this principle is not sufficiently observed, and the AFS loses in reliability and fail-safety compared with the AHS. For the examined failures (single and in combination), the guidelines for ensuring AFS efficiency should be the same as those adopted in Regulations АП25 for the AHS. This will significantly increase the reliability and fail-safety of fuel systems and of aircraft flights in general, despite a slight increase in AFS mass. The proposed improvements, through redundancy of fuel system components, will greatly raise the reliability of the fuel system of a passenger aircraft, which will then withstand up to two failures without serious consequences for the flight; its reliability and fail-safety will be similar to those of the AHS, although the above measures will slightly increase the total mass of the fuel system. It is advisable to set a second pump on the engine in parallel with the first one, to run in case the first fails for some reason. The second pump, like the first pump, can be driven from the

  12. Proposed teleworking platform for workstations supporting multimedia medical applications

    Orphanos, George; Kanellopoulos, Dimitris; Prentzas, Lambros; Koubias, Stavros


    Teleworking refers to the use of telecommunication facilities to improve human-to-human collaboration and enhance work performance. This paper focuses on the ways teleworking affects medicine. In particular, a teleworking platform is proposed to support multimedia medical applications embedded in RISC-based workstations. To support the teleworking platform, currently available commercial products have to be taken into consideration and a range of new technologies need to be developed and made available. In this paper, we put emphasis on a RISC-based workstation, the UNIX™ operating system, communication protocols capable of supporting the teleworking platform, and ISDN network capabilities.

  13. Analysis of the Kinematic Accuracy Reliability of a 3-DOF Parallel Robot Manipulator

    Guohua Cui


    Full Text Available Kinematic accuracy reliability is an important performance index in the evaluation of mechanism quality. Using a 3-DOF 3-PUU parallel robot manipulator as the research object, the position and orientation error model was derived by mapping the relation between the input and output of the mechanism. Three error sensitivity indexes that evaluate the kinematic accuracy of the parallel robot manipulator were obtained by applying singular value decomposition to the error translation matrix. Considering the influence of controllable and uncontrollable factors on the kinematic accuracy, a mathematical model of reliability based on random probability was employed. A measurement and calculation method for evaluating the mechanism's kinematic reliability level is also provided. By analysing the mechanism's errors and reliability, the law of error sensitivity with respect to the location and structure parameters was obtained. The kinematic reliability of the parallel robot manipulator was statistically computed using the Monte Carlo simulation method. The reliability analysis of kinematic accuracy provides a theoretical basis for design optimization and error compensation.

  14. Markov Chain Modelling of Reliability Analysis and Prediction under Mixed Mode Loading

    SINGH Salvinder; ABDULLAH Shahrum; NIK MOHAMED Nik Abdullah; MOHD NOORANI Mohd Salmi


    The reliability assessment of an automobile crankshaft provides an important understanding for dealing with the design life of the component, in order to eliminate or reduce the likelihood of failure and safety risks. Failures of crankshafts are considered catastrophic, leading to severe failure of the engine block and its other connected subcomponents. The reliability of an automotive crankshaft under mixed mode loading is studied using the Markov Chain Model. The Markov Chain is modelled with a two-state condition to represent the bending and torsion loads that occur on the crankshaft. The automotive crankshaft represents a good case study of a component under mixed mode loading due to the rotating bending and torsion stresses. An estimate of the Weibull shape parameter is used to obtain the probability density function, cumulative distribution function, hazard and reliability rate functions, the bathtub curve and the mean time to failure. It is shown how the various properties of the shape parameter can be used to model the failure characteristics through the bathtub curve. Likewise, an understanding of the patterns posed by the hazard rate can be used to improve the design and increase the life cycle based on the reliability and dependability of the component. The proposed reliability assessment provides an accurate, efficient, fast and cost-effective reliability analysis in contrast to costly and lengthy experimental techniques.
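    The Weibull quantities named above all follow from the shape parameter β and scale parameter η. A minimal sketch with illustrative parameters (not the paper's fitted values); β < 1 gives the infant-mortality region of the bathtub curve, β = 1 a constant hazard, and β > 1 wear-out:

```python
# Weibull reliability quantities behind the bathtub-curve discussion.
import math

def weibull_reliability(t, beta, eta):
    """Survival probability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Hazard rate h(t) = (beta/eta) * (t/eta)^(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def weibull_mttf(beta, eta):
    """Mean time to failure: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# Illustrative parameters: shape beta = 2 (wear-out), scale eta = 1e5 cycles.
print(weibull_reliability(5e4, 2.0, 1e5))  # survival probability at 5e4 cycles
```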

  15. Reliability Analysis of Distributed Grid-connected Photovoltaic System Monitoring Network

    Fu Zhixin


    Full Text Available A large number of distributed grid-connected photovoltaic (PV) systems have brought new challenges to the dispatching of the power network. Real-time monitoring of PV systems can help improve the ability of the power network to accept and control distributed PV systems, and thus mitigate the impact imposed on the power network by the uncertainty of their power output. To study the reliability of a distributed PV monitoring network, it is of great significance to find a method for building a highly reliable monitoring system and to analyze the weak links and key nodes of its monitoring performance. First, a reliability model of the PV system was constructed based on WSN technology. Then, in view of the dynamic characteristics of the network's reliability, fault tree analysis was used to identify the possible causes of network failure and the logical relationships between them. Finally, the reliability of the monitoring network was analyzed to identify the weak links and key nodes. This paper provides guidance for building a stable and reliable monitoring network for a distributed PV system.

  16. Reduced Expanding Load Method for Simulation-Based Structural System Reliability Analysis

    远方; 宋丽娜; 方江生


    The current situation and difficulties of structural system reliability analysis are discussed. Then, on the basis of the Monte Carlo method and computer simulation, a new analysis method, the reduced expanding load method (RELM), is presented, which can be used to solve structural reliability problems effectively and conveniently. In this method, the uncertainties of loads, structural material properties and dimensions can be fully considered. If the statistical parameters of the stochastic variables are known, the probability of failure can be estimated rather accurately by this method. In contrast with traditional approaches, the RELM method gives a much better understanding of structural failure frequency, and its reliability index β is more meaningful. A specific example is given to illustrate this new idea.
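    While the RELM procedure itself is not detailed in the abstract, the underlying Monte Carlo estimate of the probability of failure can be sketched for a simple R − S (resistance minus load) limit state. The distribution parameters below are illustrative assumptions:

```python
# Monte Carlo estimate of failure probability for the limit state g = R - S;
# failure occurs when g < 0. Parameters are illustrative only.
import random

def mc_failure_probability(n, mu_r=3.0, sd_r=0.3, mu_s=2.0, sd_s=0.4, seed=1):
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0
    )
    return failures / n

pf = mc_failure_probability(100_000)
print(pf)  # close to Phi(-beta) with beta = (3-2)/sqrt(0.3**2 + 0.4**2) = 2
```

    For this normal-normal case the exact answer is Φ(−2) ≈ 0.0228, which the sampled estimate approaches as n grows.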

  17. Vibration reliability analysis for aeroengine compressor blade based on support vector machine response surface method

    GAO Hai-feng; BAI Guang-chen


    To improve the efficiency of reliability analysis for aeroengine components such as compressor blades, a support vector machine response surface method (SRSM) is proposed. SRSM integrates the advantages of the support vector machine (SVM) and the traditional response surface method (RSM), and uses experimental samples to construct a suitable response surface function (RSF) to replace the complicated and abstract finite element model. Moreover, the randomness of material parameters, structural dimensions and operating conditions is considered when extracting data, so that the response surface function agrees better with the practical model. The results indicate that, based on the same experimental data, SRSM comes closer than RSM to approximating the Monte Carlo method (MCM), while SRSM (17.296 s) needs far less running time than MCM (10,958 s) and RSM (9,840 s). Therefore, under the same simulation conditions, SRSM has the highest analysis efficiency, and can be considered a feasible and valid method for analyzing structural reliability.

  18. Method and Application for Reliability Analysis of Measurement Data in Nuclear Power Plant

    Yun, Hun; Hwang, Kyeongmo; Lee, Hyoseoung [KEPCO E and C, Seoungnam (Korea, Republic of); Moon, Seungjae [Hanyang University, Seoul (Korea, Republic of)


    Pipe wall-thinning by flow-accelerated corrosion and various types of erosion is significant damage in the secondary system piping of nuclear power plants (NPPs). All NPPs in Korea have management programs to ensure pipe integrity against degradation mechanisms. Ultrasonic testing (UT) is widely used for pipe wall thickness measurement, and numerous UT measurements have been performed during scheduled outages. Wall-thinning rates are determined conservatively according to several evaluation methods developed by the Electric Power Research Institute (EPRI). The issue of reliability caused by measurement error should be considered in the evaluation process. A reliability analysis method for single and multiple measurement data was developed in previous research. This paper describes the results of applying the reliability analysis method to real measurement data from a scheduled outage and demonstrates its benefits.

  19. An Efficient Approach for the Reliability Analysis of Phased-Mission Systems with Dependent Failures

    Xing, Liudong; Meshkat, Leila; Donahue, Susan K.


    We consider the reliability analysis of phased-mission systems with common-cause failures in this paper. Phased-mission systems (PMS) are systems supporting missions characterized by multiple, consecutive, and nonoverlapping phases of operation. System components may be subject to different stresses as well as different reliability requirements throughout the course of the mission. As a result, component behavior and relationships may need to be modeled differently from phase to phase when performing a system-level reliability analysis. This consideration poses unique challenges to existing analysis methods, and the challenges increase when common-cause failures (CCF) are incorporated in the model. CCF are multiple dependent component failures within a system that are a direct result of a shared root cause, such as sabotage, flood, earthquake, power outage, or human error. Many reliability studies have shown that CCF tend to increase a system's joint failure probabilities and thus contribute significantly to the overall unreliability of systems subject to them. We propose a separable phase-modular approach to the reliability analysis of phased-mission systems with dependent common-cause failures as one way to meet the above challenges in an efficient and elegant manner. Our methodology is twofold: first, we separate the effects of CCF from the PMS analysis using the total probability theorem and a common-cause event space developed from the elementary common causes; next, we apply an efficient phase-modular approach to analyze the reliability of the PMS. The phase-modular approach employs both combinatorial binary decision diagram and Markov-chain solution methods as appropriate. We provide an example of a reliability analysis of a PMS with both static and dynamic phases as well as CCF as an illustration of our proposed approach. The example is based on information extracted from a Mars orbiter project. The reliability model for this orbiter considers
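    The total-probability separation described above can be sketched as follows. The two-phase structure and all event probabilities are illustrative assumptions, not data from the orbiter example:

```python
# Sketch of the total-probability separation used for CCF:
# P(mission fails) = sum over common-cause states c of P(fail | c) * P(c).
# Numbers are illustrative, not from the paper.

def pms_unreliability(cc_states):
    """cc_states: list of (P(common-cause state), P(mission failure | state))."""
    return sum(p_state * p_fail for p_state, p_fail in cc_states)

def mission_failure_given_state(phase_reliabilities):
    """Failure probability of a mission whose phases must all succeed,
    assuming conditional independence between phases in this state."""
    survival = 1.0
    for r in phase_reliabilities:
        survival *= r
    return 1.0 - survival

# Two-phase mission; the shared root cause fails the mission outright.
no_cc = (0.98, mission_failure_given_state([0.999, 0.995]))
cc    = (0.02, 1.0)
print(pms_unreliability([no_cc, cc]))
```

    The point of the separation is that each conditional term can then be solved with the ordinary (CCF-free) phase-modular machinery.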

  20. Reliability Index for Reinforced Concrete Frames using Nonlinear Pushover and Dynamic Analysis

    Ahmad A. Fallah


    Full Text Available In conventional design and analysis methods, influential parameters such as loads and material strengths are not treated as random variables. Safety factors in current codes and standards are usually obtained on the basis of judgment and experience, which may be improper or uneconomical. In the technical literature, a method based on nonlinear static analysis has been suggested to establish a reliability index for the strength of structural systems. In this paper, a method based on nonlinear dynamic analysis with rising acceleration (incremental dynamic analysis) is introduced, and its results are compared with those of the previous (static pushover analysis) method; two concepts, namely redundancy strength and redundancy variations, are proposed as indices of these effects. The redundancy variation factor and redundancy strength factor indices for reinforced concrete frames with varying numbers of bays and stories and different ductility potentials are computed, and ultimately the reliability index is determined using these two indices.

  1. Guidelines for reliability analysis of digital systems in PSA context. Phase 1 status report

    Authen, S.; Larsson, J. (Risk Pilot AB, Stockholm (Sweden)); Bjoerkman, K.; Holmberg, J.-E. (VTT, Helsingfors (Finland))


    Digital protection and control systems are appearing as upgrades in older nuclear power plants (NPPs) and are commonplace in new NPPs. To assess the risk of NPP operation and to determine the risk impact of digital system upgrades on NPPs, quantitative reliability models are needed for digital systems. Due to the many unique attributes of these systems, challenges exist in systems analysis, modeling and data collection. Currently there is no consensus on reliability analysis approaches. Traditional methods have clear limitations, but more dynamic approaches are still at the trial stage and can be difficult to apply in full-scale probabilistic safety assessments (PSA). The number of PSAs worldwide that include reliability models of digital I&C systems is small. A comparison of Nordic experiences and a literature review of the main international references have been performed in this pre-study project. The study shows a wide range of approaches, and also indicates that no state of the art currently exists. The study shows areas where the different PSAs agree and gives the basis for development of a common taxonomy for reliability analysis of digital systems. It is still an open matter whether software reliability needs to be explicitly modelled in the PSA. The most important issue concerning software reliability is a proper description of the impact that software-based systems have on the dependence between the safety functions and the structure of accident sequences. In general, the conventional fault tree approach seems to be sufficient for modelling reactor protection system functions. The following focus areas have been identified for further activities: 1. A common taxonomy of hardware and software failure modes of digital components for common use. 2. Guidelines regarding the level of detail in system analysis and the screening of components, failure modes and dependencies. 3. An approach for modelling CCF between components (including software). (Author)

  2. A standardized non-instrumental tool for characterizing workstations concerned with exposure to engineered nanomaterials

    Guseva Canu, I.; Ducros, C.; Ducamp, S.; Delabre, L.; Audignon-Durand, S.; Durand, C.; Iwatsubo, Y.; Jezewski-Serra, D.; Le Bihan, O.; Malard, S.; Radauceanu, A.; Reynier, M.; Ricaud, M.; Witschger, O.


    carbon nanotubes. Among the tasks observed were: nanomaterial characterisation analysis (8), weighing (7), synthesis (6), functionalization (5), and transfer (5). The quantities manipulated were usually very small. After analysis of the data gathered in logbooks, 30 workstations were classified as concerned with exposure to carbon nanotubes or TiO2. Studies of additional tool validity as well as inter- and intra-evaluator reproducibility are ongoing. The first results are promising.

  3. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    Fagundo, Arturo


    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
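    A minimal sketch of the decomposition idea, under the simplifying assumption that a small non-repairable subsystem (two identical components in parallel) is solved first as its own Markov model, and its reliability is then combined in series with the rest of the system. Failure rates are illustrative, not from the thesis:

```python
# Hierarchical sketch: solve a small subsystem Markov model analytically,
# then combine its reliability with the remaining components in series.
# Parameters are illustrative assumptions.
import math

def parallel_pair_reliability(lam, t):
    """Classic CTMC result for two identical non-repairable components in
    parallel, each with constant failure rate lam: R(t) = 2e^-lt - e^-2lt."""
    return 2.0 * math.exp(-lam * t) - math.exp(-2.0 * lam * t)

def system_reliability(lam_pair, lam_rest, t):
    """Subsystem result combined in series with the rest of the system,
    lumped here as a single exponential block with rate lam_rest."""
    return parallel_pair_reliability(lam_pair, t) * math.exp(-lam_rest * t)

print(system_reliability(1e-4, 5e-5, 1000.0))
```

    The thesis's contribution goes beyond this series combination by letting the subsystem pass multiple aggregate states upward, but the cost saving is the same: the full state space is never enumerated.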

  4. Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report

    Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn,Dick


    The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decision makers with the information necessary to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes the nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience, specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques. The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts.

  5. A continuous-time Bayesian network reliability modeling and analysis framework

    Boudali, H.; Dugan, J.B.


    We present a continuous-time Bayesian network (CTBN) framework for dynamic systems reliability modeling and analysis. Dynamic systems exhibit complex behaviors and interactions between their components; where not only the combination of failure events matters, but so does the sequence ordering of th

  6. The Stress and Reliability Analysis of HTR’s Graphite Component

    Xiang Fang


    Full Text Available The high temperature gas cooled reactor (HTR) is developing rapidly toward a modular, compact, and integral design. As the main structural material, graphite plays a very important role in HTR engineering, and the reliability of graphite components is closely related to the integrity of the reactor core. The graphite components are subjected to high temperature and fast neutron irradiation simultaneously during normal operation of the reactor. With the stress accumulation induced by high temperature and irradiation, the failure risk of graphite components increases constantly. It is therefore necessary to study and simulate the mechanical behavior of graphite components under in-core working conditions and to forecast the internal stress accumulation history and the variation of reliability. The work of this paper focuses on the mechanical analysis of the graphite brick of a pebble-bed-type HTR. The analysis comprises two procedures: stress analysis and reliability analysis. Three different creep models and two different reliability models are reviewed and taken into account in the simulation. The stress and failure probability calculation results are obtained and discussed. The results gained with the various models are highly consistent, and the discrepancies are acceptable.

  7. Reliability of ^1^H NMR analysis for assessment of lipid oxidation at frying temperatures

    The reliability of a method using ^1^H NMR analysis for assessment of oil oxidation at a frying temperature was examined. During heating and frying at 180 °C, changes of soybean oil signals in the ^1^H NMR spectrum including olefinic (5.16-5.30 ppm), bisallylic (2.70-2.88 ppm), and allylic (1.94-2.1...

  8. A continuous-time Bayesian network reliability modeling and analysis framework

    Boudali, H.; Dugan, J.B.


    We present a continuous-time Bayesian network (CTBN) framework for dynamic systems reliability modeling and analysis. Dynamic systems exhibit complex behaviors and interactions between their components; where not only the combination of failure events matters, but so does the sequence ordering of th

  9. Reliability of an Automated High-Resolution Manometry Analysis Program across Expert Users, Novice Users, and Speech-Language Pathologists

    Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M.


    Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…

  10. Human reliability analysis of the Tehran research reactor using the SPAR-H method

    Barati Ramin


    Full Text Available The purpose of this paper is to cover the human reliability analysis of the Tehran research reactor, using an appropriate method for the representation of human failure probabilities. In the present work, the technique for human error rate prediction (THERP) and the standardized plant analysis risk-human reliability (SPAR-H) method, both applied extensively to nuclear power plants, have been utilized to quantify different categories of human errors. Human reliability analysis is an integral and significant part of probabilistic safety analysis studies; without it, a probabilistic safety analysis would not be a systematic and complete representation of actual plant risks. In addition, possible human errors in research reactors constitute a significant part of the associated risk of such installations, and including them in a probabilistic safety analysis for such facilities is a complicated issue. SPAR-H can be used to address these concerns; it is a well-documented and systematic human reliability analysis system with tables of human performance choices prepared in consultation with experts in the domain. In this method, performance shaping factors are selected via tables, human action dependencies are accounted for, and the method is well designed for its intended use. In this study, in consultation with reactor operators, human errors were identified and adequate performance shaping factors were assigned to produce proper human failure probabilities. Our importance analysis revealed that the human actions involved in the possibility of an external object falling on the reactor core are the most significant human errors for the Tehran research reactor, and should be addressed in reactor emergency operating procedures and operator training programs aimed at improving reactor safety.
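    As a hedged sketch of SPAR-H-style quantification: a nominal human error probability (NHEP) is multiplied by the selected performance shaping factor (PSF) multipliers, and an adjustment formula is applied when several negative PSFs make the composite multiplier large. The numbers below are illustrative assumptions, not values from the Tehran study:

```python
# Hedged sketch of SPAR-H-style HEP quantification. The adjustment formula
# mirrors the one documented for SPAR-H when multiple negative PSFs apply;
# nominal HEPs and multipliers here are illustrative only.

def spar_h_hep(nhep, psf_multipliers, adjust=False):
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if adjust:  # used when several negative PSFs inflate the composite
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(1.0, nhep * composite)  # probability capped at 1

# Illustrative: nominal action HEP 0.001, degraded stress and ergonomics PSFs.
print(spar_h_hep(0.001, [2.0, 10.0]))  # 0.02
```

    The adjustment term keeps the product from exceeding 1 while preserving the ordering of scenarios by severity.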

  11. An efficient hybrid reliability analysis method with random and interval variables

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping


    Random and interval variables often coexist. Interval variables make reliability analysis much more computationally intensive. This work develops a new hybrid reliability analysis method so that the probability analysis (PA) loop and interval analysis (IA) loop are decomposed into two separate loops. An efficient PA algorithm is employed, and a new efficient IA method is developed. The new IA method consists of two stages. The first stage is for monotonic limit-state functions. If the limit-state function is not monotonic, the second stage is triggered. In the second stage, the limit-state function is sequentially approximated with a second order form, and the gradient projection method is applied to solve the extreme responses of the limit-state function with respect to the interval variables. The efficiency and accuracy of the proposed method are demonstrated by three examples.
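    The double-loop structure that such methods improve upon can be sketched as an inner probability analysis over the random variables and an outer search over the interval variable, reporting the worst-case failure probability. All distributions and bounds below are illustrative assumptions:

```python
# Double-loop sketch for mixed random/interval reliability analysis:
# inner loop = Monte Carlo over the random variable, outer loop = search
# over the interval variable. Illustrative parameters only.
import random

def failure_prob(threshold, n=20_000, seed=7):
    """Inner PA loop: P(Z > threshold) for a standard normal Z, by sampling."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) > threshold for _ in range(n)) / n

# Outer IA loop: the threshold is known only to lie in [1.5, 2.5];
# report the worst (largest) failure probability over that interval.
candidates = [1.5 + 0.1 * i for i in range(11)]
worst = max(failure_prob(t) for t in candidates)
print(worst)  # attained at the lower interval endpoint here
```

    The decomposition in the paper avoids nesting these loops directly, which is exactly the expense this brute-force sketch makes visible.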

  12. Reliability Analysis of Piezoelectric Truss Structures Under Joint Action of Electric and Mechanical Loading

    YANG Duo-he; AN Wei-guang; ZHU Rong-rong; MIAO Han


    Based on the finite element method (FEM) for the dynamical analysis of piezoelectric truss structures, expressions for the safety margins of strength fracture and damage electric field in the structural elements are given, considering the electromechanical coupling effect under the joint action of electric and mechanical loads. By introducing the stochastic FEM, the reliability of piezoelectric truss structures is analyzed by solving for partial derivatives in the process of obtaining the dynamical response of the structural system with the mode-superposition method. The influence of the electromechanical coupling effect on the reliability index is then analyzed through an example.

  13. Signal Quality Outage Analysis for Ultra-Reliable Communications in Cellular Networks

    Gerardino, Guillermo Andrés Pocovi; Alvarez, Beatriz Soret; Lauridsen, Mads


    , we investigate the potential of several techniques to combat these main threats. The analysis shows that traditional microscopic multiple-input multiple-output schemes with 2x2 or 4x4 antenna configurations are not enough to fulfil stringent reliability requirements. It is revealed how such antenna schemes must be complemented with macroscopic diversity as well as interference management techniques in order to ensure the necessary SINR outage performance. Based on the obtained performance results, it is discussed which of the feasible options fulfilling the ultra-reliable criteria are most promising...

  14. Fuzzy Fatigue Reliability Analysis of Offshore Platforms in Ice-Infested Waters

    方华灿; 段梦兰; 贾星兰; 谢彬


    The calculation of fatigue stress ranges due to random waves and ice loads on offshore structures is discussed, and the corresponding cumulative fatigue damage of the structural members is evaluated. To evaluate the fatigue damage to the structures more accurately, the Miner rule is modified to consider the fuzziness of the relevant parameters, and a new model for the fuzzy fatigue reliability analysis of offshore structural members is developed. Furthermore, an assessment method for predicting the dynamics of the fuzzy fatigue reliability of structural members is provided.

  15. Tensile reliability analysis for gravity dam foundation surface based on FEM and response surface method

    Tong-chun LI; Li, Dan-Dan; Wang, Zhi-Qiang


    In this paper, the limit state equation for the tensile reliability of the foundation base of a gravity dam is established. In this limit state, the possible crack length is taken as the action effect and the allowable crack length as the resistance. Nonlinear FEM is applied to obtain the crack length of the foundation base of the gravity dam, and a linear response surface method based on the orthogonal test design method is used to calculate the reliability, which offers a reasonable and simple analysis method t...


    Giovanni Francesco Spatola


    Full Text Available The use of image analysis methods has allowed us to obtain more reliable and reproducible immunohistochemistry (IHC) results. Wider use of such approaches and the simplification of software allowing colorimetric study have meant that these methods are available to everyone, and have made it possible to standardize the technique with reliable scoring systems. Moreover, the recent introduction of multispectral image acquisition systems has further refined these techniques, minimizing artefacts and easing the evaluation of the data by the observer.


    Dars, P.; Ternisien D'Ouville, T.; Mingam, H.; Merckel, G.


    Statistical analysis of asymmetry in the electrical characteristics of LDD NMOSFETs shows the influence of implantation angles on the non-overlap variation observed on devices realized on a 100 mm wafer and within the wafers of a batch. The study of the consequences of this dispersion on aging behaviour illustrates the importance of this parameter for reliability and the necessity of taking it into account for accurate analysis of stress results.

  18. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.


    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  19. WorkstationJ: workstation emulation software for medical image perception and technology evaluation research

    Schartz, Kevin M.; Berbaum, Kevin S.; Caldwell, Robert T.; Madsen, Mark T.


    We developed image presentation software that mimics the functionality available in the clinic, but also records time-stamped observer-display interactions and is readily deployable on diverse workstations, making it possible to collect comparable observer data at multiple sites. Commercial image presentation software for clinical use has limited application for research on image perception, ergonomics, computer aids and informatics because it does not collect observer responses, or other information on observer-display interactions, in real time. It is also very difficult to collect observer data from multiple institutions unless the same commercial software is available at different sites. Our software records not only observer reports of abnormalities and their locations, but also inspection time until report, inspection time for each computed radiograph and for each slice of tomographic studies, and the window/level and magnification settings used by the observer. The software is a modified version of the open source ImageJ software available from the National Institutes of Health. Our software involves changes to the base code and extensive new plugin code. Our free software is currently capable of displaying computed tomography and computed radiography images. The software is packaged as Java class files and can be used on Windows, Linux, or Mac systems. By deploying our software together with experiment-specific script files that administer experimental procedures and image file handling, multi-institutional studies can be conducted that increase reader and/or case sample sizes or add experimental conditions.

  20. Stand by Me: Qualitative Insights into the Ease of Use of Adjustable Workstations

    Jonine Jancey


    Background: Office workers sit for more than 80% of the work day, making them an important target for worksite health promotion interventions to break up prolonged sitting time. Adjustable workstations are one strategy used to reduce prolonged sitting time. This study provides both an employee and an employer perspective on the advantages, disadvantages, practicality and convenience of adjustable workstations, and on how movement in the office can be further supported by organisations. This qualitative study was part of the Uprising pilot study. Employees were from the intervention arm of a two-group study (intervention n = 18, control n = 18). Employers were the immediate line managers of the employees. Data were collected via employee focus groups (n = 17) and individual employer interviews (n = 12). The majority of participants were female (n = 18), of healthy weight, and held a post-graduate qualification. All focus group discussions and interviews were recorded, transcribed verbatim, and the data coded according to content. Qualitative content analysis was conducted. Results: Employee data identified four concepts: enhanced general wellbeing; workability and practicality; disadvantages of the retro-fit; and triggers to stand. Most employees (n = 12) reported enhanced general wellbeing; workability and practicality included less email exchange and positive interaction (n = 5), while the instability of the keyboard was a commonly cited disadvantage. Triggers to stand included time- and task-based prompts. Employer data concepts included: general health and wellbeing; work engagement; flexibility; employee morale; and injury prevention. Over half of the employers (n = 7) emphasised back care and occupational health considerations as important, as well as increased levels of staff engagement and strategies to break up prolonged periods of sitting. Discussion: The focus groups highlight the perceived general health benefits from this short

  1. Physician's Workstation as an Aid to Medical Data Capture and Display

    Dayhoff, Ruth; Kirin, Garrett; Richie, Susan; Majurski, William; Maloney, Daniel


    The Department of Veterans Affairs is developing, testing and evaluating the benefits of physicians' workstations as an aid to medical data capture in an outpatient clinic. Physicians' workstations allowing a variety of data capture methods will be demonstrated.

  2. High Performance Diskless Linux Workstations in AX-Division

    Councell, E; Busby, L


    AX Division has recently installed a number of diskless Linux workstations to meet the needs of its scientific staff for classified processing. Results so far are quite positive, although problems do remain. Some unusual requirements were met using a novel, but simple, design: Each diskless client has a dedicated partition on a server disk that contains a complete Linux distribution.

  3. MDIS (medical diagnostic imaging support) workstation issues: clinical perspective

    Smith, Donald V.; Smith, Suzy; Cawthon, Michael A.


    A joint DoD effort is in the final stages of contract acquisition to achieve a 'filmless' hospital environment in the near future. Success of the implementation rests to a large degree on an effective image workstation. This paper discusses the soft copy image display (SCID) of the MDIS system, including hardware and software.

  4. Providing Independent Reading Comprehension Strategy Practice through Workstations

    Young, Chase


    This article describes an action research project undertaken by a second grade teacher looking for research-based ways to increase his students' reading comprehension. He designed fifteen comprehension workstations and evaluated their effect on his second graders' reading comprehension scores as measured by district Imagination Station…

  5. Effect of One Carpet Weaving Workstation on Upper Trapezius Fatigue

    Neda Mahdavi


    Introduction: This study aimed to investigate the effect of carpet weaving at a proposed workstation on Upper Trapezius (UTr) fatigue during a task cycle. Fatigue in the shoulder is one of the most important precursors of upper limb musculoskeletal disorders, and disorders of the shoulder region are among the most prevalent musculoskeletal disorders in carpet weavers. Methods: This cross-sectional study included eight females and three males. During an 80-minute cycle of carpet weaving, electromyography (EMG) signals of the right and left UTr were recorded continuously by surface EMG. After the raw signals were processed, RMS and MPF were taken as the EMG amplitude and frequency parameters, respectively. Time series models and the JASA method were used to assess and classify the EMG parameter changes during working time. Results: According to the JASA method, 58%, 16%, 8% and 8% of the participants experienced fatigue, force increase, force decrease and recovery, respectively, in the right UTr. Likewise, 50%, 25%, 8% and 16% of the participants experienced fatigue, force increase, force decrease and recovery, respectively, in the left UTr. Conclusions: For the majority of the weavers, the dominant status in the left and right UTr was fatigue at the proposed workstation during a carpet weaving task cycle. The results provide detailed information for the optimal design of workstations. Further studies should focus on fatigue in various muscles and time periods in order to design an appropriate and ergonomic carpet weaving workstation.
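    The JASA classification the abstract relies on can be sketched from its published quadrant rules (rising EMG amplitude with falling frequency indicates fatigue, both rising indicates force increase, both falling indicates force decrease, falling amplitude with rising frequency indicates recovery). The synthetic RMS/MPF trends below are invented for illustration only:

    ```python
    import numpy as np

    def jasa_classify(t, rms, mpf):
        """Classify muscle state from the linear trends of EMG amplitude
        (RMS) and frequency (MPF) over time, following the JASA quadrants:
        RMS up / MPF down -> fatigue, both up -> force increase,
        both down -> force decrease, RMS down / MPF up -> recovery."""
        rms_slope = np.polyfit(t, rms, 1)[0]
        mpf_slope = np.polyfit(t, mpf, 1)[0]
        if rms_slope >= 0 and mpf_slope < 0:
            return "fatigue"
        if rms_slope >= 0 and mpf_slope >= 0:
            return "force increase"
        if rms_slope < 0 and mpf_slope < 0:
            return "force decrease"
        return "recovery"

    # Hypothetical 80-minute trend: rising amplitude, falling frequency.
    t = np.linspace(0, 80, 40)          # minutes
    rms = 0.10 + 0.001 * t              # mV, rising
    mpf = 85.0 - 0.15 * t               # Hz, falling
    print(jasa_classify(t, rms, mpf))   # fatigue
    ```

    In practice the slopes would be estimated from windowed RMS and mean-power-frequency values computed from the processed surface EMG signal rather than from synthetic trends.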

  6. BioPhotonics Workstation: a university tech transfer challenge

    Glückstad, Jesper; Bañas, Andrew Rafael; Tauro, Sandeep


    Conventional optical trapping or tweezing is often limited in the achievable trapping range because of high numerical aperture and imaging requirements. To circumvent this, we are developing a next generation BioPhotonics Workstation platform that supports extension modules through a long working...

  7. Initial experience with a nuclear medicine viewing workstation

    Witt, Robert M.; Burt, Robert W.


    Graphical user interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for the exclusive use of staff and resident physicians. The system is built on a Macintosh platform and has been available as a DELTAmanager from MedImage and more recently as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via Ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard copy output is via a screen save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both processing and viewing functions. The mouse-activated GUI has made remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of technician's time to about 5 minutes of physician's time. Overall operator functionality has been increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.

  8. Post-deployment usability evaluation of a radiology workstation

    Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi; Oudkerk, Matthijs; van Ooijen, Peter


    Objective: To evaluate the usability of a radiology workstation after deployment in a hospital. Significance: In radiology, it is difficult to perform valid pre-deployment usability evaluations due to the heterogeneity of the user group, the complexity of the radiological workflow, and the complexity

  9. The design and use of reliability data base with analysis tool

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.


    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is not standardized, and the statistical methods used in analyzing these data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possibly dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risks and clustering of failure-repair events. These ideas have been implemented in an analysis tool for analyzing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs.

  10. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.


    This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
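    Two ingredients named in the abstract, the two-parameter Weibull strength distribution and the principle of independent action (PIA) for multiaxial stresses, can be sketched compactly. The shape parameter and characteristic strength below are hypothetical, not CARES/LIFE defaults:

    ```python
    import math

    def weibull_reliability(sigma, m, sigma0):
        """Survival probability under uniaxial stress sigma for a
        two-parameter Weibull strength model: R = exp(-(sigma/sigma0)^m)."""
        return math.exp(-((sigma / sigma0) ** m))

    def pia_reliability(principal_stresses, m, sigma0):
        """Principle of independent action (PIA): each tensile principal
        stress acts independently, so the survival probabilities multiply."""
        r = 1.0
        for s in principal_stresses:
            if s > 0:                     # compressive stresses do not fail the flaw
                r *= weibull_reliability(s, m, sigma0)
        return r

    # Hypothetical element: Weibull modulus m = 10, characteristic strength 300 MPa.
    print(pia_reliability([200.0, 120.0, -50.0], m=10, sigma0=300.0))
    ```

    The full program additionally integrates such element-level risks over surface and volume flaw populations from the finite element stress field, and adds the subcritical crack growth laws for time dependence.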

  11. A model for reliability analysis and calculation applied in an example from chemical industry

    Pejović Branko B.


    The subject of this paper is reliability design for the polymerization processes that occur in reactors in the chemical industry. The designed model is used to determine the characteristics and indicators of reliability, which enables identification of the basic factors that result in poor development of a process. This would reduce the anticipated losses through the ability to control them, as well as improving the quality of production, which is the major goal of the paper. The reliability analysis and calculation use a deductive method based on a fault tree scheme for the system, built from inductive conclusions. It involves the use of standard logical symbols and the rules of Boolean algebra and mathematical logic. The paper finally gives the results of the work in the form of a quantitative and qualitative reliability analysis of the observed process, which provides complete information on the probability of the top event in the process, as well as supporting objective decision making and alternative solutions.
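    The fault-tree calculation described above, Boolean gate logic plus basic-event probabilities yielding a top-event probability, can be sketched with a toy exact evaluator. The tree and the probabilities are invented for illustration; real trees are far larger and use minimal cut sets rather than full enumeration:

    ```python
    from itertools import product

    # Gates are nested tuples ("AND", ...) / ("OR", ...) with basic-event
    # names at the leaves. The top-event probability is computed exactly by
    # enumerating all basic-event states (fine for small example trees).

    def evaluate(gate, state):
        if isinstance(gate, str):
            return state[gate]
        op, *children = gate
        vals = [evaluate(c, state) for c in children]
        return all(vals) if op == "AND" else any(vals)

    def top_event_probability(gate, probs):
        events = list(probs)
        total = 0.0
        for outcome in product([False, True], repeat=len(events)):
            state = dict(zip(events, outcome))
            if evaluate(gate, state):
                p = 1.0
                for e in events:
                    p *= probs[e] if state[e] else 1.0 - probs[e]
                total += p
        return total

    # Hypothetical tree: top = (pump fails AND valve fails) OR sensor fails.
    tree = ("OR", ("AND", "pump", "valve"), "sensor")
    probs = {"pump": 0.05, "valve": 0.10, "sensor": 0.01}
    print(top_event_probability(tree, probs))  # 0.005 + 0.01 - 0.005*0.01
    ```

    Enumeration is exponential in the number of basic events; production tools instead derive minimal cut sets with Boolean algebra and bound the top-event probability from them.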

  12. A Study on Management Techniques of Power Telecommunication System by Reliability Analysis

    Lee, B.K.; Lee, B.S.; Woy, Y.H.; Oh, M.T.; Shin, M.T.; Kwan, O.G. [Korea Electric Power Corp. (KEPCO), Taejon (Korea, Republic of). Research Center; Kim, K.H.; Kim, Y.H.; Lee, W.T.; Park, Y.H.; Lee, J.J.; Park, H.S.; Choi, M.C.; Kim, J. [Korea Electrotechnology Research Inst., Changwon (Korea, Republic of)


    The power telecommunication network is expanding rapidly as power facilities grow with increasing electric power supply. The requirements of power facility and office automation and the importance of communication services make the network complex and confusing to operate, and responding to changes in the power telecommunication network urgently calls for effective operation and management. The object of this study is therefore to establish a total reliability analysis system based on dependability, maintainability, cost effectiveness and replenishment, in order to maintain reasonable reliability, support economical maintenance and enable reasonable planning of facility investment. It will also provide effective management and administration systems and schemes for total reliability improvement. (author). 44 refs., figs.

  13. Reliability Analysis of Component Software in Wireless Sensor Networks Based on Transformation of Testing Data

    Chunyan Hou


    We develop an approach to component software reliability analysis which combines the benefits of both time-domain and structure-based approaches. This approach overcomes the deficiency of existing NHPP techniques, which fall short of addressing repair and internal system structure simultaneously. Our solution adopts a method of transformation of testing data to cover both methods and is expected to improve reliability prediction. This paradigm allows the component-based software testing process not to meet the assumptions of NHPP models, and accounts for software structure by modeling the testing process. According to the testing model, it builds the mapping relation from the testing profile to the operational profile, which enables the transformation of the testing data to build the reliability dataset required by NHPP models. Finally, an example is evaluated to validate and show the effectiveness of this approach.
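    Once the testing data have been transformed into an operational-profile failure dataset, an NHPP model is fitted to it. A minimal sketch of one common NHPP choice, the Goel-Okumoto model with mean value function m(t) = a(1 - e^(-bt)), fitted by least squares is shown below; the abstract does not name a specific NHPP model, and the data here are synthetic:

    ```python
    import numpy as np

    # Goel-Okumoto NHPP: expected cumulative failures m(t) = a*(1 - exp(-b*t)).
    # For a fixed b the least-squares optimum for a is closed-form, so only b
    # needs a one-dimensional search.

    def fit_goel_okumoto(t, n):
        t, n = np.asarray(t, float), np.asarray(n, float)
        best = None
        for b in np.linspace(1e-4, 1.0, 2000):
            f = 1.0 - np.exp(-b * t)
            a = np.dot(n, f) / np.dot(f, f)      # closed-form LS for a given b
            sse = np.sum((n - a * f) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, b)
        return best[1], best[2]

    # Synthetic cumulative failure counts generated from a = 100, b = 0.05.
    t = np.arange(1, 31)
    n = 100 * (1 - np.exp(-0.05 * t))
    a, b = fit_goel_okumoto(t, n)
    print(a, b)   # recovers roughly a = 100, b = 0.05
    ```

    With the fitted parameters, the conditional reliability over a mission of length x after test time s is R(x | s) = exp(-(m(s + x) - m(s))).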

  14. A hybrid algorithm for reliability analysis combining Kriging and subset simulation importance sampling

    Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin [Northeastern University, Shenyang (China); Wang, Shuang [Jiangxi University of Science and Technology, Ganzhou (China)


    To reduce the large computational cost of calculating failure probabilities with time-consuming numerical models, we propose an improved active learning reliability method called AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion for active learning is presented so that the number of iterations decreases dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active learning reliability calculation, and a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is demonstrated on two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and that the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth contact fatigue reliability analysis.
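    The importance-sampling idea underlying AK-IS and SSIS, recentering the sampling density near the design point so that failures stop being rare, can be illustrated on a toy linear limit state with a known answer. This is a sketch of plain design-point importance sampling, not of the AK-SSIS algorithm itself, and all numbers are illustrative:

    ```python
    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)

    # Limit state g(x) = beta - (x1 + x2)/sqrt(2), x ~ N(0, I).
    # The exact failure probability is Phi(-beta); here beta = 3.
    beta = 3.0

    def g(x):
        return beta - (x[:, 0] + x[:, 1]) / np.sqrt(2)

    def phi(z):                     # standard normal CDF via erf
        return 0.5 * (1 + erf(z / sqrt(2)))

    n = 20_000

    # Crude Monte Carlo: failures are rare, so the estimator is noisy.
    x = rng.standard_normal((n, 2))
    pf_mc = np.mean(g(x) < 0)

    # Importance sampling centered at the design point x* = beta/sqrt(2)*(1, 1).
    mu = beta / np.sqrt(2) * np.ones(2)
    y = rng.standard_normal((n, 2)) + mu
    w = np.exp(-y @ mu + mu @ mu / 2)       # N(0,I) / N(mu,I) likelihood ratio
    pf_is = np.mean((g(y) < 0) * w)

    print(phi(-beta), pf_mc, pf_is)         # exact value is about 1.35e-3
    ```

    AK-SSIS additionally replaces g with a Kriging surrogate that is refined only where the learning function says the sign of g is uncertain, which is what cuts the number of performance-function calls.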

  15. Statistical Degradation Models for Reliability Analysis in Non-Destructive Testing

    Chetvertakova, E. S.; Chimitova, E. V.


    In this paper, we consider the application of statistical degradation models to reliability analysis in non-destructive testing. Such models enable estimation of the reliability function (the dependence of non-failure probability on time) for a fixed critical level using information from the degradation paths of tested items. The most widely used are the gamma and Wiener degradation models, in which the gamma or normal distribution, respectively, is assumed for the degradation increments. Using computer simulation, we have analysed the accuracy of the reliability estimates obtained for the considered models. The number of increments can be enlarged by increasing the sample size (the number of tested items) or by increasing the frequency of measuring degradation. It has been shown that the sample size has a greater influence on the accuracy of the reliability estimates than the measuring frequency. Moreover, another important factor influencing the accuracy of reliability estimation is the duration over which the degradation process is observed.
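    For the Wiener degradation model the abstract mentions, the reliability function has a closed form (the first-passage time of a drifted Wiener process across a fixed threshold is inverse Gaussian), which makes it easy to check a simulation against. The drift, diffusion and threshold below are hypothetical:

    ```python
    import numpy as np
    from math import exp, sqrt
    from statistics import NormalDist

    rng = np.random.default_rng(1)
    Phi = NormalDist().cdf

    # Hypothetical degradation process X(t) = mu*t + sigma*W(t); the unit
    # fails when X(t) first exceeds the critical level D.
    mu, sigma, D = 1.0, 2.0, 10.0

    def simulate_reliability(t_grid, n_paths=20_000):
        """Estimate R(t) = P(no crossing of D up to time t) from paths."""
        x = np.zeros(n_paths)
        alive = np.ones(n_paths, bool)
        rel = np.empty(t_grid.size)
        t_prev = 0.0
        for k, t in enumerate(t_grid):
            dt = t - t_prev
            x += rng.normal(mu * dt, sigma * np.sqrt(dt), n_paths)
            alive &= x < D
            rel[k] = alive.mean()
            t_prev = t
        return rel

    def reliability_exact(t):
        """Closed form: first-passage time of a Wiener process with
        positive drift is inverse Gaussian distributed."""
        s = sigma * sqrt(t)
        return 1.0 - (Phi((mu * t - D) / s)
                      + exp(2 * mu * D / sigma**2) * Phi(-(mu * t + D) / s))

    t_grid = np.linspace(0.02, 40, 2000)
    r_sim = simulate_reliability(t_grid)
    print(r_sim[999], reliability_exact(t_grid[999]))  # close agreement
    ```

    The small residual gap between the two curves is the discrete-monitoring bias (crossings between grid points are missed), which shrinks as the measurement frequency grows, mirroring the frequency-versus-sample-size trade-off studied in the paper.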

  16. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)


    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, and failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present preliminary results.

  17. Probabilistic durability assessment of concrete structures in marine environments: Reliability and sensitivity analysis

    Yu, Bo; Ning, Chao-lie; Li, Bing


    A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
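    The first-order reliability method (FORM) step of the framework above can be sketched with the standard HL-RF iteration. To keep the sketch checkable, the limit state below is a generic linear resistance-minus-load function with independent normal variables (so the exact reliability index is known); the real chloride-ingress limit state is nonlinear and needs the Nataf transformation for its non-normal inputs. All numbers are hypothetical:

    ```python
    import numpy as np

    # HL-RF iteration for FORM on g = R - S with independent normals,
    # so the result can be verified against the closed-form beta.
    mu = np.array([420.0, 300.0])      # means of (R, S), hypothetical units
    sd = np.array([30.0, 40.0])        # standard deviations of (R, S)

    def g(x):
        return x[0] - x[1]

    def grad_g(x):
        return np.array([1.0, -1.0])

    def form_beta(n_iter=20):
        u = np.zeros(2)                          # start at the mean point
        for _ in range(n_iter):
            x = mu + sd * u                      # map standard space -> physical
            grad_u = grad_g(x) * sd              # chain rule dg/du
            u = (grad_u @ u - g(x)) / (grad_u @ grad_u) * grad_u
        return np.linalg.norm(u)                 # reliability index beta

    beta = form_beta()
    exact = (mu[0] - mu[1]) / np.hypot(sd[0], sd[1])
    print(beta, exact)    # both 2.4 for this linear limit state
    ```

    The failure probability then follows as Phi(-beta), and the components of the final direction vector give the parameter sensitivities used in the paper's sensitivity analysis.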

  18. Reliability analysis for the 220 kV Libyan high voltage communication system

    Saleh, O.S.A.; AlAthram, A.Y. [General Electric Company of Libya (Libyan Arab Jamahiriya). Development Dept.


    Electric utilities are expanding their networks to include fiber-optic communications, which offer high capacity with reliable performance at low cost. Fiber-optic networks offer a feasible technical solution for leasing excess capacity. They can be readily deployed under a wide range of network configurations and can be upgraded rapidly. This study evaluated the reliability index for the communication network of Libya's 220 kV high voltage subsystem operated by the General Electric Company of Libya (GECOL). Schematic diagrams of the communication networks are presented for both the power line carrier and fiber-optic networks. A reliability analysis of the two communication networks was performed using the existing communication equipment. The reliability values revealed that the fiber-optic system has several advantages: a large bandwidth for high quality data transmission; immunity to electromagnetic interference; low attenuation, which allows extended cable transmission; the ability to be used in dangerous environments; a higher degree of security; and a high capacity through existing conduits due to its light weight and small diameter. However, although fiber-optic communications may be more reliable and provide the clearest signal, the powerline communication (PLC) system has more redundancy, particularly for outdoor components, where the PLC has more power lines to carry the signals while fiber-optic communication depends only on the earthing wire of the high voltage transmission line. 4 refs., 8 tabs., 6 figs.

  19. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    桂劲松; 刘红; 康海贵


    As water depth increases, the structural safety and reliability of a system become more and more important and challenging. The structural reliability method must therefore be applied in ocean engineering design, such as offshore platform design. If the performance function is known in structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is usually used because it is conceptually clear and simple to program. However, the traditional response surface method fits a response surface of quadratic polynomials, whose accuracy is limited because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed, which can be used when the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy neural network response surface for the whole area is constructed first, and the structural reliability is then calculated by a genetic algorithm. Because all the sample points for training the network come from the whole area, the true limit state surface over the whole area can be fitted. Worked examples and comparative analysis show that the proposed method is much better than the traditional quadratic polynomial response surface method: the amount of finite element analysis is largely reduced, the accuracy of calculation is improved, and the true limit state surface can be fitted well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
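    The quadratic-polynomial baseline the abstract criticizes can be sketched in a few lines: evaluate the (expensive) performance function at a handful of axis points, fit a cross-term-free quadratic, and run cheap Monte Carlo on the fitted surface. The performance function below is a made-up stand-in for a finite element model; it happens itself to be quadratic, so here the surrogate is exact, which is precisely the favorable case the paper argues does not hold for general limit states:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # "True" performance function, standing in for an expensive FE model.
    def g(x1, x2):
        return 3.0 - x1**2 / 8.0 - x2

    mu, sd = np.zeros(2), np.ones(2)    # standard normal inputs for simplicity

    # Classic RSM: evaluate g at the mean and at +/- f*sigma along each axis,
    # fit a quadratic polynomial without cross terms, then Monte Carlo the fit.
    f = 2.0
    pts = [mu.copy()]
    for i in range(2):
        for s in (-f, f):
            p = mu.copy()
            p[i] += s * sd[i]
            pts.append(p)
    pts = np.array(pts)
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                         pts[:, 0]**2, pts[:, 1]**2])
    coef, *_ = np.linalg.lstsq(A, g(pts[:, 0], pts[:, 1]), rcond=None)

    x1, x2 = rng.standard_normal((2, 100_000))
    g_hat = (coef[0] + coef[1] * x1 + coef[2] * x2
             + coef[3] * x1**2 + coef[4] * x2**2)
    pf_surface = np.mean(g_hat < 0)
    pf_direct = np.mean(g(x1, x2) < 0)    # affordable here, not with real FE
    print(pf_surface, pf_direct)          # identical: the true g is quadratic
    ```

    For a non-quadratic limit state the fitted surface is accurate only near the fitting points, which is the gap the paper's whole-area fuzzy neural network surrogate with genetic-algorithm search is meant to close.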


    Z.-G. Zhou


    Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused for example by forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results as "Change / No change", while few focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps. (1) Segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST). (2) Forecasting and detecting disturbances in new time series data. (3) Estimating the reliability of each detected disturbance using statistical analysis based on confidence intervals (CI) and confidence levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
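    The forecast-and-flag core of steps (2) and (3) can be sketched with a plain harmonic regression in place of BFAST's full season-plus-trend segmentation. The series, the injected disturbance, and all thresholds below are synthetic and illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic monthly NDVI-like series: seasonal cycle plus noise, with an
    # abrupt drop (e.g. flooding) injected near the end of the record.
    t = np.arange(120)                       # months
    y = 0.5 + 0.2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.02, t.size)
    y[110:] -= 0.15                          # disturbance

    # (1) Fit a harmonic model to the stable history (first 96 months).
    h = 96
    X = np.column_stack([np.ones(h),
                         np.sin(2 * np.pi * t[:h] / 12),
                         np.cos(2 * np.pi * t[:h] / 12)])
    coef, *_ = np.linalg.lstsq(X, y[:h], rcond=None)
    resid_sd = np.std(y[:h] - X @ coef)

    # (2) Forecast new observations and flag those outside the interval;
    # (3) the z-score distance gives a rough confidence measure per flag.
    Xn = np.column_stack([np.ones(t.size - h),
                          np.sin(2 * np.pi * t[h:] / 12),
                          np.cos(2 * np.pi * t[h:] / 12)])
    z = (y[h:] - Xn @ coef) / resid_sd
    flags = np.abs(z) > 3                    # outside the +/- 3 sigma interval
    print(np.flatnonzero(flags) + h)         # months flagged as disturbed
    ```

    BFAST additionally segments the history at structural breaks and models a trend component, so the forecast baseline stays valid even when the pre-disturbance series itself drifts.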

  1. Implications of adjusting a gravity network with mean or independent observations: analysis of precision and reliability

    Pedro L. Faggion


    Adjustment strategies associated with the methodology applied in the implantation of a high-precision gravity network in Paraná are presented. A network was implanted with stations at 21 places in the State of Paraná and one in the State of São Paulo. To reduce the risk of losing points of the gravity network, they were established on points of the GPS High Precision Network of Paraná, which has a relatively homogeneous geographical distribution. For each of the gravity lines belonging to the loops of the network, it was possible to obtain three or six observations. In the first adjustment strategy investigated, the mean value of the observations obtained for each gravity line was taken as the observation. In the second strategy, the observations were considered independent. The comparison of these strategies revealed that the precision criterion alone is not enough to indicate the optimal solution for a gravity network; an additional criterion is needed to analyse the adjusted solution. The reliability criterion for geodetic networks, which separates into internal and external reliability, was therefore used. Internal reliability was used to verify the rigidity with which the network reacts in the detection and quantification of gross errors existing in the observations, and external reliability to quantify the influence of non-located errors on the adjusted parameters. The aspects that differentiate the solutions obtained when the precision and reliability criteria are combined in the analysis of the quality of a gravity network are presented.

  2. Reliability analysis of stochastic structural system considering static strength, stiffness and fatigue

    AN WeiGuang; ZHAO WeiTao; AN Hai


    Multiple failure modes may appear while a structural system is in service, such as dead load failure, fatigue failure and stiffness failure. In this paper, an expression for residual resistance is given based on the impact of random crack propagation induced by fatigue load on the critical limit stress and section modulus. The failure modes of every element of the structural system are analyzed under dead and fatigue loads, and the influence of the correlation of failure modes on the reliability of the element is considered. The failure mechanism and the correlation of failure modes under dead and fatigue loads are discussed, and a method of reliability analysis considering static strength, fatigue and stiffness is given. A numerical example is analyzed, which indicates that the failure probability differs for different service lives and that the influence of dead and fatigue loads on the reliability of the structural system differs as well. For practical engineering, this method of reliability analysis is better than methods that consider only a single factor (static strength, fatigue, or stiffness, etc.).

  3. Application of Support Vector Machine to Reliability Analysis of Engine Systems

    Zhang Xinfeng


    Full Text Available Reliability analysis plays a very important role in assessing the performance and making maintenance plans of engine systems. This research presents a comparative study of the predictive performance of support vector machines (SVM), least squares support vector machines (LSSVM) and neural network time series models for forecasting failures and reliability in engine systems. Further, the reliability indexes of the engine systems are computed by Weibull probability paper programmed in MATLAB. The results show that the probability distribution of the forecasting outcomes is consistent with the distribution of the actual data, which both follow the Weibull distribution, and that the predictions by SVM and LSSVM provide accurate estimates of the characteristic life. SVM and LSSVM are therefore both viable choices for engine system reliability analysis. Moreover, the predictive precision of the LSSVM-based method is higher than that of SVM. For small samples, prediction by LSSVM is preferable, because its computational cost is lower and its precision is more satisfactory.
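Weibull probability paper amounts to a linear regression of ln(-ln(1-F)) on ln t. The abstract's computation was done in MATLAB; the sketch below is a stand-in in Python that assumes Bernard's median-rank plotting positions (an assumption on our part, since the paper does not state its plotting rule):

```python
import math

def weibull_probability_paper(failure_times):
    """Fit a two-parameter Weibull by least squares on probability paper:
    ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        F = (i - 0.3) / (n + 0.4)  # Bernard's median-rank plotting position
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - F)))
    mx, my = sum(xs) / n, sum(ys) / n
    # ordinary least squares slope and intercept
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    beta = slope                        # shape parameter
    eta = math.exp(-intercept / beta)   # characteristic life
    return beta, eta
```

On synthetic data drawn from a Weibull with shape 2 and characteristic life 100, the fit recovers both parameters closely.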

  4. Reliability and Sensitivity Analysis of Transonic Flutter Using Improved Line Sampling Technique

    Song Shufang; Lu Zhenzhou; Zhang Weiwei; Ye Zhengyin


    The improved line sampling (LS) technique, an effective numerical simulation method, is employed to analyze the probabilistic characteristics and reliability sensitivity of flutter with random structural parameters in transonic flow. The improved LS technique is a novel methodology for the reliability and sensitivity analysis of high-dimensionality, low-probability problems with implicit limit state functions, and it does not require any approximating surrogate of the implicit limit state equation. The improved LS is used to estimate the flutter reliability and sensitivity of a two-dimensional wing, in which some structural properties, such as frequency, gravity-center parameters and mass ratio, are treated as random variables. A computational fluid dynamics (CFD) based unsteady aerodynamic reduced-order model (ROM) method is used to construct the aerodynamic state equations. Coupling the structural state equations with the aerodynamic state equations, the flutter safety margin is formulated using the critical flutter velocity. The results show that the improved LS technique can effectively decrease the computational cost of the random uncertainty analysis of flutter. The reliability sensitivity, defined as the partial derivative of the failure probability with respect to the distribution parameter of a random variable, can help to identify the important parameters and guide structural optimization design.
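The core of line sampling can be sketched as follows: draw samples in standard normal space, project each onto the hyperplane orthogonal to an assumed important direction, locate where the resulting line crosses the limit state, and average the one-dimensional normal tail probabilities. The snippet below is a generic illustration with a cheap explicit limit state, not the paper's CFD/ROM-coupled implementation; it assumes each line starts in the safe domain and crosses the limit state once within distance 10:

```python
import math
import random

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def line_sampling(g, alpha, dim, n_lines=200, seed=0):
    """Line sampling estimate of P[g(U) <= 0] for U ~ N(0, I).

    alpha: assumed important direction pointing toward the failure region.
    """
    rng = random.Random(seed)
    norm = math.sqrt(sum(a * a for a in alpha))
    alpha = [a / norm for a in alpha]
    probs = []
    for _ in range(n_lines):
        u = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        # project the sample onto the hyperplane orthogonal to alpha
        dot = sum(ui * ai for ui, ai in zip(u, alpha))
        u_perp = [ui - dot * ai for ui, ai in zip(u, alpha)]
        # locate the root c of g(u_perp + c * alpha) = 0 by bisection
        lo, hi = 0.0, 10.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            x = [up + mid * ai for up, ai in zip(u_perp, alpha)]
            if g(x) > 0.0:   # still in the safe domain
                lo = mid
            else:
                hi = mid
        probs.append(std_normal_cdf(-0.5 * (lo + hi)))
    return sum(probs) / n_lines
```

For a linear limit state g(u) = 3 - u[0] with alpha along the first axis, every line yields the exact answer Phi(-3), which is why line sampling is so efficient near-linear limit states.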

  5. Evaluating the safety risk of roadside features for rural two-lane roads using reliability analysis.

    Jalayer, Mohammad; Zhou, Huaguo


    The severity of roadway departure crashes mainly depends on roadside features, including the sideslope, fixed-object density, offset from fixed objects, and shoulder width. Common engineering countermeasures to improve roadside safety include cross-section improvements, hazard removal or modification, and delineation. It is not always feasible to maintain an object-free and smooth roadside clear zone as recommended in design guidelines. Currently, clear zone width and sideslope are used to determine roadside hazard ratings (RHRs) to quantify the roadside safety of rural two-lane roadways on a seven-point pictorial scale. Since these two variables are continuous and can be treated as random, probabilistic analysis can be applied as an alternative method to address existing uncertainties. Specifically, using reliability analysis, it is possible to quantify roadside safety levels by treating the clear zone width and sideslope as two continuous, rather than discrete, variables. The objective of this manuscript is to present a new approach for defining a reliability index for measuring roadside safety on rural two-lane roads. To evaluate the proposed approach, we gathered five years (2009-2013) of Illinois run-off-road (ROR) crash data and identified the roadside features (i.e., clear zone widths and sideslopes) of 4,500 300-ft roadway segments. Based on the obtained results, we confirm that reliability indices can serve as indicators to gauge safety levels, such that the greater the reliability index value, the lower the ROR crash rate.
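For a linear safety margin in independent normal variables, the reliability index reduces to beta = mu_G / sigma_G, with failure probability Phi(-beta). A minimal sketch of that computation follows; the coefficients, means, standard deviations and threshold are hypothetical placeholders, not the study's calibrated clear-zone/sideslope model:

```python
import math

def reliability_index(mu, sigma, coeffs, threshold):
    """First-order reliability index for a linear safety margin
    G = sum(c_i * X_i) - threshold with independent normal X_i.

    Returns (beta, probability of failure Phi(-beta))."""
    mean_g = sum(c * m for c, m in zip(coeffs, mu)) - threshold
    std_g = math.sqrt(sum((c * s) ** 2 for c, s in zip(coeffs, sigma)))
    beta = mean_g / std_g
    pf = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))  # Phi(-beta)
    return beta, pf
```

With a single variable of mean 9 and standard deviation 2 against a threshold of 5, beta = 2 and the failure probability is about 0.0228; a larger beta corresponds to a lower expected ROR crash rate, as the abstract reports.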

  6. Moment Method Based on Fuzzy Reliability Sensitivity Analysis for a Degradable Structural System

    Song Jun; Lu Zhenzhou


    For a degradable structural system with a fuzzy failure region, a moment method based on a fuzzy reliability sensitivity algorithm is presented. According to the values of the performance function, the integral region for calculating the fuzzy failure probability is first split into a series of subregions in which the membership function values of the performance function within the fuzzy failure region can be approximated by a set of constants. The fuzzy failure probability is then transformed into a sum of products of the random failure probabilities and the approximate constants of the membership function in the subregions. Furthermore, the fuzzy reliability sensitivity analysis is transformed into a series of random reliability sensitivity analyses, and the random reliability sensitivity can be obtained by the constructed moment method. The primary advantages of the presented method are higher efficiency for implicit performance functions of low and medium dimensionality and wide applicability to multiple failure modes and non-normal basic random variables. The limitation is that the required computational effort grows exponentially with the dimensionality of the basic random variables; hence, it is not suitable for high-dimensionality problems. Compared with the available methods, the presented one is quite competitive when the dimensionality is lower than 10. The presented examples are used to verify the advantages and indicate the limitations.
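The transformation the abstract describes, approximating the membership function by constants over subregions and summing the products with crisp probabilities, can be illustrated in one dimension for a normal performance variable. This is a simplified stand-in for the paper's moment method, with all bands and membership values invented for illustration:

```python
import math

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fuzzy_failure_probability(thresholds, memberships, mu=0.0, sigma=1.0):
    """Fuzzy failure probability as a weighted sum of crisp probabilities.

    The fuzzy failure region of a normal performance variable G ~ N(mu,
    sigma^2) is split into bands (thresholds[j], thresholds[j+1]) on which
    the membership function is approximated by the constant memberships[j].
    """
    total = 0.0
    for j, m in enumerate(memberships):
        lo, hi = thresholds[j], thresholds[j + 1]
        p_band = std_normal_cdf((hi - mu) / sigma) - \
                 std_normal_cdf((lo - mu) / sigma)
        total += m * p_band
    return total
```

With full membership on one band the result collapses to the ordinary (crisp) failure probability of that band, which is the sanity check for the decomposition.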

  7. Analysis of reliability metrics and quality enhancement measures in current density imaging.

    Foomany, F H; Beheshti, M; Magtibay, K; Masse, S; Foltz, W; Sevaptsidis, E; Lai, P; Jaffray, D A; Krishnan, S; Nanthakumar, K; Umapathy, K


    Low frequency current density imaging (LFCDI) is a magnetic resonance imaging (MRI) technique which enables calculation of current pathways within the medium of study. The induced current produces a magnetic flux which presents itself in phase images obtained through MRI scanning. A class of LFCDI challenges arises from the subject rotation requirement, which calls for reliability analysis metrics and specific image registration techniques. In this study these challenges are formulated, and in light of the proposed discussion, a reliability analysis of the calculation of current pathways in a designed phantom and a pig heart is presented. The current passed was measured with less than 5% error for the phantom using the CDI method. It is shown that Gauss's law for magnetism can be treated as a reliability metric for matching the images in the two orientations. For the phantom and the pig heart, the usefulness of image registration for mitigating rotation errors is demonstrated. The reliability metric provides a good representation of the degree of correspondence between images in the two orientations for the phantom and pig heart. In our CDI experiments this metric produced values of 95% and 26% for the phantom, and 88% and 75% for the pig heart, for mismatch rotations of 0 and 20 degrees, respectively.

  8. Assessing validity and reliability of Resting Metabolic Rate in six gas analysis systems

    Cooper, Jamie A.; Watras, Abigail C.; O’Brien, Matthew J.; Luke, Amy; Dobratz, Jennifer R.; Earthman, Carrie P.; Schoeller, Dale A.


    The Deltatrac Metabolic Monitor (DTC), one of the most popular indirect calorimetry systems for measuring resting metabolic rate (RMR) in human subjects, is no longer being manufactured. This study compared five different gas analysis systems to the DTC. Resting metabolic rate was measured by the DTC and at least one other instrument at three study sites for a total of 38 participants. The five indirect calorimetry systems included: MedGraphics CPX Ultima, MedGem, Vmax Encore 29 System, TrueOne 2400, and Korr ReeVue. Validity was assessed using paired t-tests to compare means, while reliability was assessed using both paired t-tests and root mean square calculations with F tests for significance. Within-subject comparisons for validity of RMR revealed a significant difference between the DTC and Ultima. Bland-Altman plot analysis showed significant bias with increasing RMR values for the Korr and MedGem. Respiratory exchange ratio (RER) analysis showed a significant difference between the DTC and the Ultima and a trend toward a difference with the Vmax (p = 0.09). Reliability assessment for RMR revealed that all instruments had a significantly larger coefficient of variation (CV) (ranging from 4.8% to 10.9%) for RMR compared to the 3.0% CV for the DTC. Reliability assessment for RER data showed that none of the instrument CVs was significantly larger than the DTC CV. The results were quite disappointing, with none of the instruments equaling the within-person reliability of the DTC. The TrueOne and Vmax were the most valid instruments in comparison with the DTC for both RMR and RER assessment. Further testing is needed to identify an instrument with the reliability and validity of the DTC. PMID:19103333

  9. Space Shuttle Rudder Speed Brake Actuator-A Case Study Probabilistic Fatigue Life and Reliability Analysis

    Oswald, Fred B.; Savage, Michael; Zaretsky, Erwin V.


    The U.S. Space Shuttle fleet was originally intended to have a life of 100 flights for each vehicle, lasting over a 10-year period, with minimal scheduled maintenance or inspection. The first space shuttle flight was that of the Space Shuttle Columbia (OV-102), launched April 12, 1981. The disaster that destroyed Columbia occurred on its 28th flight, February 1, 2003, nearly 22 years after its first launch. In order to minimize risk of losing another Space Shuttle, a probabilistic life and reliability analysis was conducted for the Space Shuttle rudder/speed brake actuators to determine the number of flights the actuators could sustain. A life and reliability assessment of the actuator gears was performed in two stages: a contact stress fatigue model and a gear tooth bending fatigue model. For the contact stress analysis, the Lundberg-Palmgren bearing life theory was expanded to include gear-surface pitting for the actuator as a system. The mission spectrum of the Space Shuttle rudder/speed brake actuator was combined into equivalent effective hinge moment loads including an actuator input preload for the contact stress fatigue and tooth bending fatigue models. Gear system reliabilities are reported for both models and their combination. Reliability of the actuator bearings was analyzed separately, based on data provided by the actuator manufacturer. As a result of the analysis, the reliability of one half of a single actuator was calculated to be 98.6 percent for 12 flights. Accordingly, each actuator was subsequently limited to 12 flights before removal from service in the Space Shuttle.
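The flight-limit logic above, finding the largest number of flights for which reliability stays at or above a target, can be illustrated with a two-parameter Weibull survival model. The shape and characteristic-life values below are hypothetical placeholders, not the actuator study's fitted Lundberg-Palmgren parameters:

```python
import math

def flights_within_reliability(weibull_shape, characteristic_flights, target):
    """Largest number of flights n for which the Weibull survival
    R(n) = exp(-(n / eta) ** beta) remains at or above the target.

    weibull_shape: beta; characteristic_flights: eta (flights at which
    reliability drops to exp(-1)); target: required reliability.
    """
    n = 0
    while math.exp(-((n + 1) / characteristic_flights) ** weibull_shape) >= target:
        n += 1
    return n
```

With an assumed shape of 2, a characteristic life of 100 flights, and the 98.6 percent target quoted in the abstract, the model allows 11 flights; the study's actual 12-flight limit came from the full contact-stress and tooth-bending analysis, not from this toy calculation.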

  10. Reliability, risk and availability analysis and evaluation of a port oil pipeline transportation system in constant operation conditions

    Kolowrocki, Krzysztof [Gdynia Maritime University, Gdynia (Poland)


    In the paper the multi-state approach to the analysis and evaluation of systems' reliability, risk and availability is applied in practice. Theoretical definitions and results are illustrated by the example of their application to the reliability, risk and availability evaluation of an oil pipeline transportation system. The pipeline transportation system is considered under operation conditions that are constant in time. The system reliability structure and its components' reliability functions do not change under constant operation conditions. The system reliability structure is fixed with high accuracy, whereas the input reliability characteristics of the pipeline components are not sufficiently exact because of the lack of statistical data necessary for their estimation. The results may be considered an illustration of the possibilities of applying the proposed methods to pipeline system reliability analysis. (author)

  11. Reliability Analysis of Brittle Material Structures - Including MEMS(?) - With the CARES/Life Program

    Nemeth, Noel N.


    Brittle materials are being used, or considered, for a wide variety of high-tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The CARES/Life code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. For this presentation an overview of the CARES/Life program will be provided. Emphasis will be placed on describing the latest enhancements to the code for reliability analysis with time-varying loads and temperatures (fully transient reliability analysis). Early efforts to investigate the validity of using Weibull statistics, the basis of the CARES/Life program, to characterize the strength of MEMS structures will also be described, as well as the version of CARES/Life for MEMS (CARES/MEMS) being prepared, which incorporates single-crystal and edge-flaw reliability analysis capability. It is hoped this talk will open a dialog for potential collaboration in the area of MEMS testing and life prediction.

  12. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.


    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparable components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on the analysis leading to the Ground Systems Preliminary Design Review milestone.
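Subsystem availability estimates of the kind described above typically start from the inherent-availability relation A = MTBF / (MTBF + MTTR), combined multiplicatively across independent subsystems that must all function for launch. A generic sketch, with placeholder MTBF/MTTR figures rather than Constellation data:

```python
def steady_state_availability(mtbf_hours, mttr_hours):
    """Inherent (steady-state) availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series_availability(subsystems):
    """Availability of independent subsystems that must all be up,
    given as (MTBF, MTTR) pairs: the product of the individual A's."""
    a = 1.0
    for mtbf, mttr in subsystems:
        a *= steady_state_availability(mtbf, mttr)
    return a
```

A single subsystem with MTBF 99 h and MTTR 1 h is available 99% of the time; two such subsystems in series drop to 98.01%, which is why allocation pushes tight availability requirements down to each contributing subsystem.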


    Galkina Elena Vladislavovna


    Full Text Available In this article, methods for the reliability analysis of bidders and their tender offers for the implementation of construction works are offered. Special attention is focused on the complexity of these processes and the necessity of engaging serious, professional and responsible executors. Application of the described methods leads to a reduction of the risks related to the selection of a participant in a construction project. The article defines the main stages of the implementation procedure, which allows considering the economic state of applicants as well as both the economic and technical indicators of the reliability of tender offers. The main characteristics to be considered at each stage are identified. The author concludes that the reliability of bidders is determined by comparing their economic state with their capacity to implement orders with the specified characteristics. In the article's terminology, the reliability of an applicant's offer is the ability to execute orders on the bidder's own conditions. In addition, the author states that determining reliability is based on comparing tender offers with the contender's characteristics of objects. Rational methods to compare economic indicators are offered. It was also found that, at present, the method of comparing the technical indicators of analogous projects with the indicators of a bidder's object is not formalized, which limits the application of this method. Finally, it is concluded that developing the methods applied to technical indicators would provide a coherent system for evaluating the reliability of construction bidders and their offers. This creates a basis for the development of an appropriate automated system that can be used both for the selection of competitive organizations and for the preparation of offers by applicants.

  14. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    Iskandar, Ismed; Satria Gondokaryono, Yudi


    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may have many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimation methods are better than maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
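The maximum-likelihood side of the comparison can be sketched for the Weibull model: the shape parameter k solves 1/k = (sum x^k ln x / sum x^k) - (1/n) sum ln x, which the snippet below finds by damped fixed-point iteration. This is a generic illustration of Weibull MLE, not the paper's competing-risk simulation code:

```python
import math

def weibull_mle(samples, iters=200):
    """Maximum-likelihood estimates of the Weibull shape k and scale lam.

    Solves the profile score equation for k by damped fixed-point
    iteration, then recovers lam in closed form."""
    n = len(samples)
    logs = [math.log(x) for x in samples]
    mean_log = sum(logs) / n
    k = 1.0
    for _ in range(iters):
        xk = [x ** k for x in samples]
        weighted = sum(xi * li for xi, li in zip(xk, logs)) / sum(xk)
        k_new = 1.0 / (weighted - mean_log)
        k = 0.5 * k + 0.5 * k_new  # damping improves stability
    lam = (sum(x ** k for x in samples) / n) ** (1.0 / k)
    return k, lam
```

On quantile-spaced synthetic data from a Weibull with shape 1.5 and scale 10, the estimates land close to the true values; in the paper's setting these MLEs are the baseline against which the Bayesian estimators are judged.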

  15. Efficient Approximate Method of Global Reliability Analysis for Offshore Platforms in the Ice Zone


    Ice load is the dominant load in the design of offshore platforms in ice zones, and the extreme ice load is the key factor affecting the safety of platforms. The present paper studies the statistical properties of the global resistance and the extreme responses of jacket platforms in Bohai Bay, considering the randomness of the ice load, dead load, steel elastic modulus, yield strength and structural member dimensions. Based on these results, an efficient approximate method of global reliability analysis for offshore platforms is proposed, which converts the implicit nonlinear performance function of conventional reliability analysis into a linear explicit one. Finally, numerical examples of the JZ20-2 MSW, JZ20-2NW and JZ20-2 MUQ offshore jacket platforms in Bohai Bay demonstrate the satisfactory efficiency, accuracy and applicability of the proposed method.

  16. Effect of wine dilution on the reliability of tannin analysis by protein precipitation

    Jensen, Jacob Skibsted; Werge, Hans Henrik Malmborg; Egebo, Max


    A reported analytical method for tannin quantification relies on selective precipitation of tannins with bovine serum albumin. The reliability of tannin analysis by protein precipitation on wines having variable tannin levels was evaluated by measuring the tannin concentration of various dilutions...... of five commercial red wines. Tannin concentrations of both very diluted and concentrated samples were systematically underestimated, which could be explained by a precipitation threshold and insufficient protein for precipitation, respectively. Based on these findings, we have defined a valid range...

  17. Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report


    FY07 planning conference, 14 Dec 06; II Marine Expeditionary Force (MEF) meeting with Major Smith, 14 Dec 06; Gulf of Mexico Tyndall Air Force Base missile task. Restructured the action item spreadsheet and reviewed the following storyboards (functional flow, graphics, and text): 050101 Main Rotor System components. Reliability Information Analysis Center, 6000 Flanagan Road.

  18. A Reliability Analysis of a Rainfall Harvesting System in Southern Italy

    Lorena Liuzzo; Vincenza Notaro; Gabriele Freni


    Rainwater harvesting (RWH) may be an effective alternative water supply solution in regions affected by water scarcity. It has recently become a particularly important option in arid and semi-arid areas (like Mediterranean basins), mostly because of its many benefits and affordable costs. This study provides an analysis of the reliability of using a rainwater harvesting system to supply water for toilet flushing and garden irrigation purposes, with reference to a single-family home in a resid...

  19. Application of the Simulation Based Reliability Analysis on the LBB methodology

    Pečínka L.; Švrček M.


    Guidelines on how to demonstrate the existence of Leak Before Break (LBB) have been developed in many western countries. These guidelines, partly based on NUREG/CR-6765, define the steps that should be fulfilled to obtain a conservative assessment of LBB acceptability. As a complement, and also to help identify the key parameters that influence the resulting leakage and failure probabilities, the application of Simulation Based Reliability Analysis is under development. The methodology used will ...

  20. Towards increased reliability by objectification of Hazard Analysis and Risk Assessment (HARA) of automated automotive systems

    Khastgir, Siddartha; Birrell, Stewart A.; Dhadyalla, Gunwant; Sivencrona, Håkan; Jennings, P. A. (Paul A.)


    Hazard Analysis and Risk Assessment (HARA) in various domains, such as automotive, aviation and the process industry, suffers from issues of validity and reliability. While there has been increasing appreciation of this subject, there have been limited approaches to overcoming these issues. In the automotive domain, HARA is influenced by the ISO 26262 international standard, which details the functional safety of road vehicles. While ISO 26262 was a major step towards analysing hazards and risks, lik...

  1. An evaluation of the reliability and usefulness of external-initiator PRA (probabilistic risk analysis) methodologies

    Budnitz, R.J.; Lambert, H.E. (Future Resources Associates, Inc., Berkeley, CA (USA))


    The discipline of probabilistic risk analysis (PRA) has become so mature in recent years that it is now used routinely to assist decision-making throughout the nuclear industry. This includes decision-making that affects design, construction, operation, maintenance, and regulation. Unfortunately, not all sub-areas within the larger discipline of PRA are equally "mature," and therefore the many different types of engineering insights from PRA are not all equally reliable. 93 refs., 4 figs., 1 tab.

  2. Intraoperative non-record-keeping usage of anesthesia information management system workstations and associated hemodynamic variability and aberrancies.

    Wax, David B; Lin, Hung-Mo; Reich, David L


    Anesthesia information management system workstations in the anesthesia workspace that allow usage of non-record-keeping applications could lead to distraction from patient care. We evaluated whether non-record-keeping usage of the computer workstation was associated with hemodynamic variability and aberrancies. Auditing data were collected on eight anesthesia information management system workstations and linked to their corresponding electronic anesthesia records to identify which application was active at any given time during the case. For each case, the periods spent using the anesthesia information management system record-keeping module were separated from those spent using non-record-keeping applications. The variability of heart rate and blood pressure was calculated, as was the incidence of hypotension, hypertension, and tachycardia. Analysis was performed to identify whether non-record-keeping activity was a significant predictor of these hemodynamic outcomes. Data were analyzed for 1,061 cases performed by 171 clinicians. Median (interquartile range) non-record-keeping activity time was 14 (1, 38) min, representing 16 (3, 33)% of a median 80 (39, 143) min of procedure time. Variables associated with greater non-record-keeping activity included attending anesthesiologists working unassisted, longer case duration, lower American Society of Anesthesiologists status, and general anesthesia. Overall, there was no independent association between non-record-keeping workstation use and hemodynamic variability or aberrancies during anesthesia, either between cases or within cases. Anesthesia providers spent sizable portions of case time using non-record-keeping applications on anesthesia information management system workstations. This use, however, was not independently associated with greater hemodynamic variability or aberrancies in patients during maintenance of general anesthesia for predominantly general surgical and gynecologic procedures.

  3. Reliability and Security Analysis on Two-Cell Dynamic Redundant System

    Hongsheng Su


    Full Text Available Based on an analysis of the reliability and security of three types of two-cell dynamic redundant systems widely applied in modern railway signal systems, an isomorphic Markov model is established in this paper. During modeling, several important factors were considered, including common-cause failure, coverage of the diagnostic system, online maintainability, and periodic inspection maintenance, as well as many failure modes, which makes the established model more credible. Through analysis and calculation of the reliability and security indexes of the three types of two-module dynamic redundant structures, the paper reaches a significant conclusion: the safety and reliability of this kind of structure have an upper limit and cannot be improved indefinitely through hardware and software comparison methods while the failure and repair rates are fixed. Finally, the paper performs simulation investigations, compares the calculation results of the three redundant systems, analyzes their respective advantages and disadvantages, and gives the application scope of each, which provides theoretical and technical support for the selection of railway signal equipment.
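A minimal Markov availability model in the spirit of the abstract treats the number of failed units as a birth-death chain. The sketch below assumes two active units, a single repair crew, and constant failure/repair rates; it is a much-simplified stand-in for the paper's isomorphic model and omits common-cause failure and diagnostic-coverage terms entirely:

```python
def two_unit_availability(lam, mu):
    """Steady-state availability of a two-unit redundant system.

    Birth-death Markov chain over the number of failed units (0, 1, 2):
    failures occur at rate 2*lam (both up) or lam (one up); a single
    repair crew restores units at rate mu. The system is up while at
    least one unit works. Returns (availability, (p0, p1, p2)).
    """
    r = lam / mu
    # balance equations: p1 = 2*r*p0, p2 = r*p1 = 2*r^2*p0, then normalize
    p0 = 1.0 / (1.0 + 2.0 * r + 2.0 * r * r)
    p1 = 2.0 * r * p0
    p2 = 2.0 * r * r * p0
    return p0 + p1, (p0, p1, p2)
```

Shrinking lam/mu drives the unavailable probability p2 down quadratically, but never to zero while the failure and repair rates are fixed, which mirrors the paper's upper-limit conclusion.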

  4. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    Bucknor, Matthew D.; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin


    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Centering on an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive reactor cavity cooling system following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. While this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability for the reactor cavity cooling system (and the reactor system in general) to the postulated transient event.
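The simulation-based side of such an assessment can be caricatured as Monte Carlo sampling of the boundary condition that drives the passive system: a trial fails when the sampled condition cannot sustain the motive force. All distributions and thresholds below are illustrative assumptions, not values from the Argonne analysis:

```python
import random

def passive_system_reliability(n_trials=100_000, seed=1):
    """Toy Monte Carlo sketch of passive-system reliability.

    The passive cooling system 'fails' in a trial when the sampled
    boundary condition (here, a driving temperature difference in K,
    assumed N(40, 5^2)) drops below the assumed minimum needed to
    sustain natural circulation (20 K). Purely illustrative numbers.
    """
    rng = random.Random(seed)
    threshold = 20.0  # minimum delta-T for natural circulation, assumed
    failures = 0
    for _ in range(n_trials):
        delta_t = rng.gauss(40.0, 5.0)
        if delta_t < threshold:
            failures += 1
    return 1.0 - failures / n_trials
```

Because the assumed threshold sits four standard deviations below the mean boundary condition, the estimated reliability is very high, loosely mirroring the abstract's conclusion that the reactor cavity cooling system remains highly reliable under the postulated transient.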

  5. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    Matthew Bucknor


    Full Text Available Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  6. Advanced reactor passive system reliability demonstration analysis for an external event

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin [Argonne National Laboratory, Argonne (United States)]


    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  7. Time-dependent Reliability Analysis of Flood Defence Assets Using Generic Fragility Curve

    Nepal Jaya


    Full Text Available Flood defence assets such as earth embankments form a vital part of linear flood defences in many countries, including the UK, and protect inland areas from flooding. The risks of flooding are likely to increase in the future due to increasing pressure on land use and to the more frequent rainfall events and rising sea levels caused by climate change, which also affect aging flood defence assets. It is therefore important that flood defence assets are maintained at a high level of safety and serviceability. The high costs associated with preserving these deteriorating assets and the limited funds available for their maintenance require the development of systematic approaches to ensure a sustainable flood-risk management system. The integration of realistic deterioration measurement and reliability-based performance assessment techniques has tremendous potential for the structural safety and economic feasibility of flood defence assets. The need for reliability-based performance assessment is therefore evident. However, investigations on the time-dependent reliability analysis of flood defence assets are limited. This paper presents a novel approach for the time-dependent reliability analysis of flood defence assets. In the analysis, a time-dependent fragility curve is developed using a state-based stochastic deterioration model. The applicability of the proposed approach is then demonstrated with a case study.
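As a sketch of the paper's approach, a state-based stochastic deterioration model can be combined with per-state fragilities to yield a time-dependent fragility estimate. The four condition states, yearly transition matrix and conditional failure probabilities below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical 4-state condition model (state 0 = good ... state 3 = severely
# deteriorated): yearly transition matrix of a state-based deterioration model.
P = np.array([
    [0.90, 0.08, 0.02, 0.00],
    [0.00, 0.92, 0.06, 0.02],
    [0.00, 0.00, 0.90, 0.10],
    [0.00, 0.00, 0.00, 1.00],
])

# Illustrative conditional failure probabilities given a design flood load,
# one per condition state (the "fragility" of each state).
frag = np.array([0.001, 0.01, 0.05, 0.30])

def fragility_at_year(t, p0=np.array([1.0, 0.0, 0.0, 0.0])):
    """P(failure | flood in year t) = sum_s P(state = s at t) * frag[s]."""
    pt = p0 @ np.linalg.matrix_power(P, t)
    return float(pt @ frag)

# The time-dependent fragility curve: failure probability drifts upward
# as probability mass migrates into the worse condition states.
curve = [fragility_at_year(t) for t in (0, 10, 25, 50)]
```

Because the chain only moves toward worse states, the resulting fragility curve is monotonically increasing, which is the qualitative behavior the abstract describes.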

  8. Reliability Analysis of a 3-Machine Power Station Using State Space Approach

    Wasiu Akande Ahmed


    Full Text Available With the advent of high-integrity fault-tolerant systems, the ability to account for repairs of partially failed (but still operational) systems becomes increasingly important. This paper presents a systematic method of determining the reliability of a 3-machine electric power station, taking into consideration the failure rates and repair rates of the individual components (machines) that make up the system. A state-space transition process for the 3-machine system with 2³ (eight) states was developed and, consequently, steady-state equations were generated based on Markov mathematical modelling of the power station. Important reliability quantities were deduced from this analysis. The simulation was implemented with code written in the Excel®-VBA programming environment. System reliability analysis using the state-space approach proves to be a viable and efficient technique of reliability prediction, as it is able to predict the state of the system under consideration. For neatness and easy entry of data, a Graphical User Interface (GUI) was designed.
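The state-space approach can be sketched for three identical machines that fail and are repaired independently. The failure and repair rates below are illustrative, not the paper's data; the steady-state probabilities come from solving the Markov balance equations over the 2³ = 8 state generator matrix:

```python
import itertools
import numpy as np

LAM, MU = 0.01, 0.5  # illustrative per-machine failure and repair rates (1/h)

states = list(itertools.product((1, 0), repeat=3))  # 2**3 = 8 up/down states
idx = {s: i for i, s in enumerate(states)}

# Build the CTMC generator: each machine fails (up->down) at rate LAM and is
# repaired (down->up) at rate MU, with one dedicated repair crew per machine.
Q = np.zeros((8, 8))
for s in states:
    for m in range(3):
        t = list(s)
        t[m] = 1 - t[m]
        Q[idx[s], idx[tuple(t)]] = LAM if s[m] == 1 else MU
    Q[idx[s], idx[s]] = -Q[idx[s]].sum()

# Steady state: pi @ Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(8)])
b = np.zeros(9)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Example reliability figure: probability that at least 2 of 3 machines are up.
avail_2of3 = sum(pi[idx[s]] for s in states if sum(s) >= 2)
```

Because the machines are independent here, the result can be cross-checked against the closed form 3a²(1-a) + a³ with per-machine availability a = MU/(LAM+MU); a real power station model would add shared repair crews or common-cause failures, which is where the full state-space solution earns its keep.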

  9. A new approach for interexaminer reliability data analysis on dental caries calibration

    Andréa Videira Assaf


    Full Text Available Objectives: (a) to evaluate the interexaminer reliability in caries detection considering different diagnostic thresholds and (b) to indicate, by using Kappa statistics, the best way of measuring interexaminer agreement during the calibration process in dental caries surveys. Methods: Eleven dentists participated in the initial training, which was divided into theoretical discussions and practical activities, and in calibration exercises performed at baseline, 3 and 6 months after the initial training. For the examinations of 6-7-year-old schoolchildren, the World Health Organization (WHO) recommendations were followed and different diagnostic thresholds were used: WHO (decayed/missing/filled teeth - DMFT index) and WHO + IL (initial lesion) diagnostic thresholds. The interexaminer reliability was calculated by Kappa statistics, according to the WHO and WHO+IL thresholds, considering: (a) the entire dentition; (b) upper/lower jaws; (c) sextants; (d) each tooth individually. Results: Interexaminer reliability was high for both diagnostic thresholds; nevertheless, it decreased in all calibration sessions when considering teeth individually. Conclusion: Interexaminer reliability was maintained over the period of 6 months under both caries diagnosis thresholds. However, great disagreement was observed for posterior teeth, especially using the WHO+IL criteria. Analysis considering dental elements individually was the best way of detecting interexaminer disagreement during the calibration sessions.
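The Kappa statistic used in this calibration study can be computed directly. A minimal sketch of Cohen's kappa for two examiners' dichotomous caries scores; the ratings below are made up for illustration:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2       # chance agreement
    return (po - pe) / (1 - pe)

# Two examiners scoring ten tooth surfaces as sound (0) or decayed (1)
ex1 = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
ex2 = [0, 0, 1, 0, 0, 1, 0, 0, 1, 1]
kappa = cohens_kappa(ex1, ex2)
```

Computing kappa per tooth (rather than over the whole dentition) is exactly what the authors found most sensitive to disagreement, since pooling all surfaces inflates the observed agreement.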

  10. Reliability analysis of the objective structured clinical examination using generalizability theory

    Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián


    Background The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. Methods An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. Results The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Conclusions Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements. PMID:27543188

  11. Reliability analysis of the objective structured clinical examination using generalizability theory

    Juan Andrés Trejo-Mejía


    Full Text Available Background: The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. Methods: An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. Results: The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Conclusions: Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
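The generalizability coefficient reported above has a simple closed form for a students-by-stations design. The variance components below are illustrative assumptions chosen so the coefficient lands near the reported 0.93; they are not the study's actual estimates:

```python
def g_coefficient(var_students, var_residual, n_stations):
    """Relative G coefficient for a students x stations design:
    G = var_p / (var_p + var_residual / n_s), where var_p is the
    student (object-of-measurement) variance component."""
    return var_students / (var_students + var_residual / n_stations)

# Assumed variance components: student variance 1.0, residual 1.35.
g18 = g_coefficient(1.0, 1.35, 18)  # the 18-station OSCE
g9 = g_coefficient(1.0, 1.35, 9)    # hypothetical shorter exam
```

This is the decision-study use the abstract mentions: holding the variance components fixed, the formula shows how many stations are needed to reach a target G, and halving the stations visibly drops the coefficient.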

  12. Reliability Analysis of Aircraft Condition Monitoring Network Using an Enhanced BDD Algorithm

    ZHAO Changxiao; CHEN Yao; WANG Hailiang; XIONG Huagang


    The aircraft condition monitoring network is responsible for collecting the status of each component in the aircraft. The reliability of this network has a significant effect on the safety of the aircraft. The aircraft condition monitoring network works in a real-time manner: all the data should be transmitted within the deadline to ensure that the control center makes proper decisions in time. Connectedness between the source node and the destination alone cannot guarantee that the data are transmitted in time. In this paper, we take the time deadline into account and build a task-based reliability model. The binary decision diagram (BDD), which has the merit of efficiency in computing time and storage space, is introduced when calculating the reliability of the network and addressing the essential variable. A case is analyzed using the algorithm proposed in this paper. The experimental results show that our method is efficient and suitable for the reliability analysis of real-time networks.
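A BDD evaluates, far more compactly, the same quantity that the brute-force sketch below computes: two-terminal network reliability as a sum over edge states. For the classic five-edge bridge network the full enumeration is tractable and serves as a reference implementation; the topology and probabilities are illustrative, not from the paper:

```python
from itertools import product

# Bridge network: source s = node 0, target t = node 3, five undirected edges.
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def connected(up):
    """Is node 3 reachable from node 0 using only the operating edges?"""
    reach, frontier = {0}, [0]
    while frontier:
        u = frontier.pop()
        for (a, b), ok in zip(EDGES, up):
            if ok:
                if a == u and b not in reach:
                    reach.add(b); frontier.append(b)
                elif b == u and a not in reach:
                    reach.add(a); frontier.append(a)
    return 3 in reach

def two_terminal_reliability(p):
    """Exact s-t reliability when every edge operates independently with
    probability p: sum the probabilities of all connected edge states."""
    r = 0.0
    for up in product((True, False), repeat=len(EDGES)):
        pr = 1.0
        for ok in up:
            pr *= p if ok else (1 - p)
        if connected(up):
            r += pr
    return r
```

The enumeration cost is 2^|E|, which is exactly the blow-up that Shannon decomposition with node sharing (the BDD) avoids; the bridge network's known polynomial 2p² + 2p³ - 5p⁴ + 2p⁵ provides a check.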

  13. Development of Markov model of emergency diesel generator for dynamic reliability analysis

    Jin, Young Ho; Choi, Sun Yeong; Yang, Joon Eon [Korea Atomic Energy Research Institute, Taejon (Korea)]


    The EDG (Emergency Diesel Generator) of a nuclear power plant is one of the most important pieces of equipment for mitigating accidents. The FT (Fault Tree) method is widely used to assess the reliability of safety systems such as an EDG in a nuclear power plant. This method, however, has limitations in exactly modeling the dynamic features of safety systems. We have, hence, developed a Markov model to represent the stochastic process of dynamic systems whose states change as time moves on. The Markov model enables us to develop a dynamic reliability model of the EDG. This model can represent all possible states of the EDG, in contrast to the FRANTIC code developed by the U.S. NRC for the reliability analysis of standby systems. To assess the regulation policy for the test interval, we performed two simulations based on the generic data and the plant-specific data of YGN 3, respectively, using the developed model. We also estimated the effects of various repair rates and of the fraction of starting failures caused by demand shock on the reliability of the EDG. Finally, the aging effect is analyzed. (author). 23 refs., 19 figs., 9 tabs.
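For a single repairable unit, the kind of time-dependent behavior a Markov model captures (and a static fault tree cannot) already has a closed form: the point availability of a two-state up/down process. The rates below are illustrative, not EDG data:

```python
import math

def availability(t, lam, mu):
    """Point availability A(t) of a two-state Markov repairable unit that
    starts in the 'up' state: A(t) = mu/(lam+mu) + lam/(lam+mu)*exp(-(lam+mu)t).
    lam = failure rate, mu = repair rate."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

lam, mu = 1e-3, 0.1           # illustrative failure and repair rates (1/h)
a0 = availability(0.0, lam, mu)     # starts up, so A(0) = 1
a_inf = availability(1e6, lam, mu)  # decays to steady state mu/(lam+mu)
```

A full EDG model of the kind the paper develops would add separate states for standby, demand (start) failure, running failure and test/maintenance outage, but each added state simply enlarges the same generator-matrix calculation.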

  14. How to Protect Patients Digital Images/Thermograms Stored on a Local Workstation

    J. Živčák


    Full Text Available To ensure the security and privacy of patient electronic medical information stored on local workstations in doctors' offices, clinic centers, etc., it is necessary to implement a secure and reliable method for logging on and accessing this information. Biometrically-based identification technologies use measurable personal properties (physiological or behavioral), such as a fingerprint, in order to identify or verify a person's identity, and provide the foundation for highly secure personal identification, verification and/or authentication solutions. The use of biometric devices (fingerprint readers) is an easy and secure way to log on to the system. We have carried out practical tests on HP notebooks with an integrated fingerprint reader. Successful and failed logons have been monitored and analyzed, and calculations have been made. This paper presents the false rejection rates, false acceptance rates and failure-to-acquire rates.
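The false acceptance and false rejection rates reported in such tests come from comparing genuine and impostor match scores against a decision threshold. A minimal sketch with made-up scores:

```python
def far_frr(genuine, impostor, threshold):
    """False acceptance / false rejection rates at a match-score threshold.
    Scores at or above the threshold are accepted by the system."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

# Made-up match scores (higher = stronger match); not real fingerprint data.
genuine = [82, 75, 90, 66, 88, 79, 95, 58, 85, 91]   # same-user attempts
impostor = [12, 35, 48, 22, 51, 18, 40, 30, 27, 44]  # different-user attempts

far, frr = far_frr(genuine, impostor, threshold=50)
```

Sweeping the threshold trades the two error rates against each other; the operating point where they cross is the equal error rate commonly quoted for fingerprint readers.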

  15. Probabilistic Structural Analysis and Reliability Using NESSUS With Implemented Material Strength Degradation Model

    Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)


    This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI) along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis encompassing the effects of high temperature and high cycle fatigue yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness. The reliability did not change significantly for a change in distribution type except for a change in

  16. Reliability of automated biometrics in the analysis of enamel rod end patterns

    K Manjunath


    Full Text Available Tooth prints are enamel rod end patterns on the tooth surface. These patterns are unique to each tooth, both within the same individual and between different individuals. The aim of this study was to analyze the reliability and sensitivity of automated biometrics software (Verifinger® Standard SDK version 5.0) in analyzing tooth prints. In the present study, enamel rod end patterns were obtained three times from a specific area on the labial surface of ten extracted teeth using the acetate peel technique. The acetate peels were subjected to analysis with the Verifinger® Standard SDK version 5.0 software to obtain the enamel rod end patterns (tooth prints) and the respective minutiae scores for each tooth print. The minutiae scores obtained for each tooth print were subjected to statistical analysis using Cronbach's test for reliability. The Verifinger® software was able to match duplicate records of the same area of the same tooth with the original records stored in the software database. Comparison of the minutiae scores using Cronbach's test also showed that there was no significant difference in the scores obtained (>0.6). Hence, the acetate peel technique with Verifinger® Standard SDK version 5.0 is a reliable technique for the analysis of enamel rod end patterns and as a forensic tool for personal identification. However, further studies are needed to verify the reliability of this technique in a clinical setting, as obtaining an acetate peel record from the same area of a tooth in vivo is difficult.
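Cronbach's alpha, the reliability statistic used in this study, can be computed directly from repeated measurements. A minimal sketch with hypothetical minutiae scores (three acetate peels of five teeth; the numbers are invented for illustration):

```python
def cronbach_alpha(items):
    """Cronbach's alpha. 'items' is a list of per-measurement score lists,
    one inner list per repeated measurement (item), aligned by subject."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three repeated minutiae-score readings for five hypothetical tooth prints
scores = [
    [41, 38, 45, 50, 36],
    [40, 37, 46, 49, 35],
    [42, 39, 44, 51, 37],
]
alpha = cronbach_alpha(scores)
```

Values above the 0.6 threshold mentioned in the abstract indicate acceptable internal consistency; near-identical repeated readings, as here, push alpha toward 1.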

  17. An integrated distributed processing interface for supercomputers and workstations

    Campbell, J.; McGavran, L.


    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation describes how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that gives users simple access to documentation via mouse-driven menus. We have also developed a distributed application that integrates a two-point boundary value problem on one of our Cray supercomputers; it is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface, using language-independent controls to show the capabilities of the workstation/supercomputer combination. 8 refs.

  18. Functionalized 2PP structures for the BioPhotonics Workstation

    Matsuoka, Tomoyo; Nishi, Masayuki; Sakakura, Masaaki


    In its standard version, our BioPhotonics Workstation (BWS) can generate multiple controllable counter-propagating beams to create real-time user-programmable optical traps for stable three-dimensional control and manipulation of a plurality of particles. The combination of the platform with microstructures fabricated by two-photon polymerization (2PP) can lead to completely new methods to communicate with micro- and nano-sized objects in 3D and potentially open enormous possibilities in nano-biophotonics applications. In this work, we demonstrate that the structures can be used as microsensors...

  19. Posture And Dorsal Shape At A Seated Workstation

    Lepoutre, F. X.; Cloup, P.; Guerra, T. M.


    The ergonomic analysis of a control or supervision workstation for a vehicle or a process requires taking into account the biomechanical visuo-postural system. The necessary measurements must give information about the spatial direction of the limbs, the dorsal shape, possibly the gaze direction, and the postural evolution during working time. Moreover, the small size of the workstation, the backrest and sometimes a vibratory environment require specific, robust and compact devices which do not disturb the operator. The measurement system we propose is based on an optical device. This system was studied in collaboration with the French "Institut de Recherche pour les Transports" for an ergonomic analysis of a truck cabin. The optical device consists of placing, on particular places on the driver's body corresponding to limb and trunk joint points, reflective markers which reflect the infra-red rays coming from a specific light source. Several cameras, whose relative positions depend on the experimental site, transmit video signals to the associated processing systems, which extract the coordinates (Xi, Yi) of each marker in the field of view of each camera. By combining the information obtained from every view, it is possible to obtain the spatial marker positions and then to reconstruct the individual's posture in three dimensions. However, because of the backrest, this device does not enable us to analyse the dorsal posture, which is important with regard to the frequency of dorsal pains. For that reason, we complete the measurements by using a "curvometer". This device consists of a flexible stick fixed on the individual's back with elastic belts, whose distortions (curvature in m-1) are measured, in the individual's sagittal plane, with 4 pairs of strain gauges located approximately at the level of vertebrae D1, D6, D10 and L3.
    A fifth measurement, concerning the inclination (in degrees) of the lower part of the stick, makes it possible to

  20. A reliable procedure for the analysis of multiexponential transients that arise in deep level transient spectroscopy

    Hanine, M. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France)]. E-mail:; Masmoudi, M. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France); Marcon, J. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France)


    In this paper, a reliable procedure that allows a fine as well as robust analysis of deep defects in semiconductors is detailed. In this procedure, where capacitance transients are considered multiexponential and corrupted with Gaussian noise, our new method of analysis, Levenberg-Marquardt deep level transient spectroscopy (LM-DLTS), is associated with two other high-resolution techniques: the Matrix Pencil, which provides an approximation of the exponential components contained in the capacitance transients, and Prony's method, recently revised by Osborne, in order to set the initial parameters.
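Prony's method, one of the techniques combined in this procedure, recovers the decay rates and amplitudes of a multiexponential transient from uniformly spaced samples. A noise-free sketch (the two-exponential test transient is synthetic, and real DLTS data would need the noise-robust variants the paper discusses):

```python
import numpy as np

def prony(y, dt, m):
    """Sketch of Prony's method for y(t) = sum_i a_i * exp(-lam_i * t),
    noise-free, uniformly sampled with step dt; m = number of exponentials."""
    n = len(y)
    # 1) Linear prediction: y[k+m] = c[0]*y[k+m-1] + ... + c[m-1]*y[k]
    A = np.column_stack([y[m - 1 - j : n - 1 - j] for j in range(m)])
    c = np.linalg.lstsq(A, y[m:], rcond=None)[0]
    # 2) Roots of the characteristic polynomial give z_i = exp(-lam_i * dt)
    z = np.roots(np.concatenate(([1.0], -c))).real
    lam = -np.log(z) / dt
    # 3) Amplitudes from the Vandermonde system y_n = sum_i a_i * z_i**n
    V = z[None, :] ** np.arange(n)[:, None]
    a = np.linalg.lstsq(V, y, rcond=None)[0]
    order = np.argsort(lam)
    return lam[order], a[order]

dt = 0.1
t = np.arange(40) * dt
y = 3.0 * np.exp(-0.5 * t) + 1.5 * np.exp(-2.0 * t)  # synthetic transient
rates, amps = prony(y, dt, m=2)
```

On clean data the decay rates and amplitudes are recovered essentially exactly; with Gaussian noise the linear-prediction step becomes ill-conditioned, which is why the paper couples it with Matrix Pencil and Levenberg-Marquardt refinement.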

  1. Integrated model for line balancing with workstation inventory management

    Dilip Roy; Debdip Khan


    In this paper, we address the optimization of an integrated line balancing process with workstation inventory management. While doing so, we have studied the interconnection between line balancing and its conversion process. Almost every moderate to large manufacturing industry depends on a long and integrated supply chain, consisting of inbound logistics, a conversion process and outbound logistics. In this sense, the approach addresses a very general problem of integrated line balancing....

  2. Motivating Ergonomic Computer Workstation Setup: Sometimes Training Is Not Enough

    Sigurdsson, Sigurdur O.; Artnak, Melissa; Needham, Mick; Wirth, Oliver; Silverman, Kenneth


    Musculoskeletal disorders lead to pain and suffering and result in high costs to industry. There is evidence to suggest that whereas conventional ergonomics training programs result in knowledge gains, they may not necessarily translate to changes in behavior. There were 11 participants in an ergonomics training program, and a subsample of participants received a motivational intervention in the form of incentives for correct workstation setup. Training did not yield any changes in ergonomics...

  3. Reliability analysis of stochastic structural system considering static strength, stiffness and fatigue


    Multiple failure modes can appear during the service of a structural system, such as dead-load failure, fatigue failure and stiffness failure. In this paper, the expression of residual resistance is given based on the impact of random crack propagation induced by the fatigue load on the critical limit stress and section modulus. The failure modes of every element of the structural system are analyzed under dead and fatigue loads, and the influence of the correlation of failure modes on the reliability of the element is considered. The failure mechanism and the correlation of failure modes under dead and fatigue loads are discussed, and a method of reliability analysis considering static strength, fatigue and stiffness is given. A numerical example is analyzed, which indicates that the failure probability differs for different service lives and that the influence of dead and fatigue loads on the reliability of the structural system differs as well. In practical engineering, this method of reliability analysis is better than methods considering only a single factor (static strength, fatigue, or stiffness, etc.).

  4. Weibull Information Fusion Analysis of Semiconductor Quality: A Key Technology for Manufacturing Execution System Reliability

    Huang, Zhi-Hui; Tang, Ying-Chun; Dai, Kai


    Semiconductor material and product qualification rates are directly related to manufacturing costs and the survival of the enterprise. A dynamic reliability growth analysis method is applied to study manufacturing execution system reliability growth and thereby improve product quality. Referring to the classical Duane model assumptions and the TGP tracking-growth-forecast programming model, a Weibull distribution model was established from the failure data. Combining the median rank with the average rank method, and using linear regression and least squares estimation, Weibull information fusion reliability growth curves were fitted. This model overcomes a weakness of the Duane model, namely that the accuracy of its MTBF point estimate is not high; analysis of the failure data shows that the method is basically consistent with the test and evaluation modeling process of the instance. Median ranks are used in statistics to determine the distribution function of a random variable, which is a good way to address problems of complex systems with limited sample sizes. Therefore, this method has great engineering application value.
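The median-rank regression step described above can be sketched directly: Bernard's median-rank approximation gives the plotting positions, and a least-squares line on the Weibull probability plot yields the shape and scale parameters. The failure times below are synthetic, constructed from known parameters rather than real semiconductor data:

```python
import math

def weibull_mrr(times):
    """Median-rank regression: fit Weibull shape (beta) and scale (eta)
    to a complete sample of failure times via the linearized CDF
    ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)."""
    ts = sorted(times)
    n = len(ts)
    xs, ys = [], []
    for i, t in enumerate(ts, start=1):
        f = (i - 0.3) / (n + 0.4)      # Bernard's median-rank approximation
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)
    return beta, eta

# Synthetic check: times placed exactly at the median-rank quantiles of a
# Weibull(beta=2, eta=100), so the regression should recover both parameters.
n = 10
times = [100.0 * (-math.log(1 - (i - 0.3) / (n + 0.4))) ** (1 / 2.0)
         for i in range(1, n + 1)]
beta, eta = weibull_mrr(times)
```

In a reliability growth analysis, the same fit is repeated over successive failure-data windows, and the trend in the fitted parameters forms the growth curve.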


    ZHAO Yongxiang; PENG Jiachun; YANG Bing


    A state-of-the-art review is given of new advances in fatigue reliability design and analysis methods for Chinese railway vehicle structures. First, the structures are subject to a complicated random fatigue stress history, and this history should be determined by combining dynamic simulation and on-line inspection. Second, the random fatigue constitutive relations are an intrinsic fatigue phenomenon, and a probabilistic model is developed to describe them with the two measures of survival probability and confidence; a similar model is also presented for the random stress-life relations and extrapolated appropriately into the long fatigue life regime. Third, the concept of the fatigue limit should be understood as the fatigue strength at a given fatigue life, and a so-called local Basquin model method is proposed for measuring the random strengths. In addition, drawing and application methods of the Goodman-Smith diagram for integrally characterizing the random fatigue strengths are established in terms of ten kilometers. Fourth, a reliability stress-based method is constructed with consideration of the random constitutive relations. These new advances form a new framework for railway fatigue reliability design and analysis.

  6. A New 3-Dimensional Dynamic Quantitative Analysis System of Facial Motion: An Establishment and Reliability Test

    Feng, Guodong; Zhao, Yang; Tian, Xu; Gao, Zhiqiang


    This study aimed to establish a 3-dimensional dynamic quantitative facial motion analysis system, and then determine its accuracy and test-retest reliability. The system could automatically reconstruct the motion of the observational points. Standardized T-shaped and L-shaped rods were used to evaluate the static and dynamic accuracy of the system. Nineteen healthy volunteers were recruited to test the reliability of the system. The average static distance error measurement was 0.19 mm, and the average angular error was 0.29°. The measurement results decreased as the distance between the cameras and the objects increased; 80 cm was considered optimal. It took only 58 seconds to perform the full facial measurement process. The average intra-class correlation coefficients for distance measurement and angular measurement were 0.973 and 0.794, respectively. The results demonstrated that we successfully established a practical 3-dimensional dynamic quantitative analysis system that is accurate and reliable enough to meet both clinical and research needs. PMID:25390881

  7. Using wavefront coding technique as an optical encryption system: reliability analysis and vulnerabilities assessment

    Konnik, Mikhail V.


    The wavefront coding paradigm can be used not only for the compensation of aberrations and depth-of-field improvement but also for optical encryption. When a diffractive optical element (DOE) with a known point spread function (PSF) is placed in the optical path, an optical convolution of the image with the PSF occurs, and an optically encoded image is registered instead of the true image. Decoding of the registered image can be performed using standard digital deconvolution methods. In this class of optical-digital systems, the PSF of the DOE is used as an encryption key. Therefore, the reliability and cryptographic resistance of such an encryption method depend on the size and complexity of the PSF used for optical encoding. This paper gives a preliminary analysis of the reliability and possible vulnerabilities of such an encryption method. Experimental results on brute-force attacks on the optically encrypted images are presented. A reliability estimation of optical coding based on the wavefront coding paradigm is evaluated, and an analysis of possible vulnerabilities is provided.
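The encode/decode scheme described above can be sketched numerically: circular convolution with a known PSF stands in for the optical encoding, and inverse filtering with the same PSF (the "key") recovers the image. The image and PSF below are synthetic stand-ins, and the PSF is deliberately chosen to be well-conditioned so that noise-free inverse filtering is stable:

```python
import numpy as np

rng = np.random.default_rng(0)

img = rng.random((32, 32))   # stand-in for the true image

# The DOE's PSF acts as the encryption key. A dominant central tap keeps the
# transfer function bounded away from zero (well-conditioned inverse filter).
psf = np.zeros((32, 32))
psf[0, 0], psf[0, 1], psf[1, 0], psf[1, 1] = 0.6, 0.2, 0.15, 0.05

# Optical encoding: circular convolution of the image with the PSF.
H = np.fft.fft2(psf)
coded = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Decoding with the correct key: inverse filtering in the frequency domain.
decoded = np.real(np.fft.ifft2(np.fft.fft2(coded) / H))
```

Without the key the attacker sees only the blurred `coded` image, and the paper's point is that a small or simple PSF leaves few unknowns for a brute-force search over candidate keys.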

  8. Resource allocation: sequential data collection for reliability analysis involving systems and component level data

    Anderson-Cook, Christine M. [Los Alamos National Laboratory]


    In analyzing the reliability of complex systems, several types of data, from full-system tests to component-level tests, are commonly available and used. After a preliminary analysis, additional resources may be available to collect new data. The goal of resource allocation is to identify the best new data to collect to maximally improve the prediction of system reliability. While several definitions of 'maximally improve' are possible, we focus on reducing the uncertainty, or the width of the uncertainty interval, for the prediction of system reliability at user-specified age(s). In this paper, we present an algorithm that allows us to estimate the anticipated improvement to the analysis from the addition of new data, based on the current understanding of all of the statistical model parameters. This quantitative assessment of the anticipated improvement can help justify the benefits of collecting new data. Additionally, by comparing different potential allocations, it is possible to determine which new data should be collected to improve our understanding of the response. The optimization takes into account the relative cost of different data types and can be based on flexible allocation options, or be subject to logistical constraints.
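The interval-width criterion described above can be illustrated with a deliberately simple binomial reliability model: compare the credible-interval width for the current data against the anticipated width after collecting additional tests at roughly the same success rate. The counts, the Beta(1, 1) prior, and the Monte Carlo approach are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def interval_width(successes, trials, level=0.95, draws=200_000):
    """Width of the central credible interval for binomial reliability under
    a Beta(1, 1) prior, estimated by Monte Carlo on the Beta posterior."""
    post = rng.beta(1 + successes, 1 + trials - successes, size=draws)
    lo, hi = np.quantile(post, [(1 - level) / 2, (1 + level) / 2])
    return hi - lo

w_now = interval_width(48, 50)             # current component test data
w_more = interval_width(48 + 19, 50 + 20)  # anticipate 20 new tests at ~96%
```

Repeating this projected-width calculation for each candidate allocation (more component tests versus more full-system tests, weighted by cost) is the essence of choosing where the next data dollar buys the most uncertainty reduction.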

  9. Stress and Reliability Analysis of a Metal-Ceramic Dental Crown

    Anusavice, Kenneth J; Sokolowski, Todd M.; Hojjatie, Barry; Nemeth, Noel N.


    Interaction of mechanical and thermal stresses with the flaws and microcracks within the ceramic region of metal-ceramic dental crowns can result in catastrophic or delayed failure of these restorations. The objective of this study was to determine the combined influence of induced functional stresses and pre-existing flaws and microcracks on the time-dependent probability of failure of a metal-ceramic molar crown. A three-dimensional finite element model of a porcelain-fused-to-metal (PFM) molar crown, consisting of body porcelain, opaque porcelain, and a metal substrate, was developed using the ANSYS finite element program. Three load cases were analyzed: a 300 N load applied perpendicular to one cusp; a 300 N load applied at 30 degrees from the perpendicular case, directed toward the center; and a 600 N vertical load. Ceramic specimens were subjected to a biaxial flexure test and the load-to-failure of each specimen was measured. The results of the finite element stress analysis and the flexure tests were incorporated in the NASA-developed CARES/LIFE program to determine the Weibull and fatigue parameters and the time-dependent fracture reliability of the PFM crown. CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof-test loading. It is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program.
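
    The fast-fracture part of the Weibull treatment underlying CARES/LIFE can be sketched as follows; the failure data, plotting positions, and service stress are illustrative assumptions, not values from the study, and the actual program additionally handles multiaxial stresses and fatigue.

```python
# Minimal sketch of ceramic Weibull reliability: fit a two-parameter Weibull
# distribution to (synthetic) flexure failure stresses, then evaluate the
# fast-fracture failure probability P_f(s) = 1 - exp(-(s/s0)^m).
import math

loads = [78, 85, 91, 96, 100, 104, 109, 115, 122, 131]   # MPa, sorted, invented

# Linearized fit: ln(-ln(1-F)) = m*ln(s) - m*ln(s0),
# with median-rank plotting positions F_i = (i - 0.3) / (n + 0.4).
n = len(loads)
xs = [math.log(s) for s in loads]
ys = [math.log(-math.log(1 - (i + 1 - 0.3) / (n + 0.4))) for i in range(n)]
xbar, ybar = sum(xs) / n, sum(ys) / n
m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
sigma0 = math.exp(xbar - ybar / m)     # characteristic strength (63.2% quantile)

def failure_probability(stress):
    return 1.0 - math.exp(-((stress / sigma0) ** m))

print(f"Weibull modulus m = {m:.1f}, sigma0 = {sigma0:.0f} MPa")
print(f"P_f at 60 MPa service stress = {failure_probability(60):.3f}")
```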

  10. Validity, reliability and factor analysis of Persian version of schizophrenia quality of life scale.

    Masaeli, Nasrin; Omranifard, Victoria; Maracy, Mohammad Reza; Kheirabadi, Gholam Reza; Khedri, Anahita


    Exact measurement of quality of life (QOL) in schizophrenia patients, both to evaluate the patient's deterioration and to assess the efficacy of therapeutic interventions, has become a daily task that requires accurate assessment tools. This study aimed to assess the psychometric properties of a Persian version of the schizophrenia QOL scale (SQLS) as a common transcultural instrument. One hundred and fifty schizophrenia patients referred to the Psychiatric Clinic of Noor Hospital (Isfahan, Iran) were selected using a simple sampling method. Along with the SQLS, the short form-36 general health survey (SF-36) and World Health Organization QOL-brief-26 (WHOQOL-BREF-26) questionnaires were completed by the cases to determine correlation coefficients. The data were analyzed using descriptive statistics, factor analysis, Cronbach's coefficient alpha, and the Pearson correlation coefficient with the Statistical Package for the Social Sciences software, version 18 (SPSS-18). Total reliability of the questionnaire by Cronbach's coefficient alpha was 0.84; the reliability of the subscales was 0.91 for interpersonal relationships, 0.87 for signs, 0.72 for symptoms, and 0.61 for motivation/energy. Correlation coefficients of the SF-36 and of the WHOQOL-BREF-26 with the total SQLS scale were acceptable. Exploratory factor analysis using varimax rotation identified four principal components (interpersonal relationships, symptoms, signs, and motivation/energy), which explained 52.7% of the variance in QOL. The Persian version of the SQLS can be used as a simple, reliable and valid tool in the Iranian population.
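
    The internal-consistency figures quoted above are Cronbach's alpha values. For reference, a minimal computation of alpha on a synthetic respondents-by-items score matrix (all data invented for illustration):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total)).
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=100)                                    # 100 respondents
items = latent[:, None] + rng.normal(scale=0.8, size=(100, 6))   # 6 correlated items

k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```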

  11. Practical applications of age-dependent reliability models and analysis of operational data

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L


    The purpose of the workshop was to present experience with the practical application of time-dependent reliability models. The program comprised the following sessions: aging management and aging PSA (Probabilistic Safety Assessment); modeling; operating experience; and accelerated aging tests. To introduce the aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the main obstacle to applying very detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating-experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it was demonstrated that combining operating-experience analysis with the results of accelerated aging tests on naturally aged equipment could provide a good basis for continued operation of instrumentation and control systems.
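
    The yearly-constant-unavailability proposal can be sketched as follows, using an assumed Weibull aging model and repair time (both invented for illustration, not taken from the workshop material):

```python
# Piecewise-constant yearly unavailability from an age-dependent hazard:
# average the hazard over each year and apply q ~ lambda * MTTR (a common
# steady-state approximation for repairable components).
import math

beta, eta = 2.5, 40.0                  # assumed Weibull shape/scale, in years

def hazard(t):
    """Weibull hazard, increasing with age because beta > 1."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def yearly_constant_unavailability(year, repair_time_h=24.0):
    steps = 100                        # midpoint average of hazard over the year
    lam = sum(hazard(year + (i + 0.5) / steps) for i in range(steps)) / steps
    return lam * repair_time_h / 8760.0

for year in (1, 10, 20, 30):
    print(f"year {year:2d}: q = {yearly_constant_unavailability(year):.2e}")
```

    Each yearly value can then be used as a constant basic-event unavailability in the PSA model for that year of plant life.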

  12. Reliability analysis of shallow foundations by means of limit analysis with random slip lines

    Pula, Wojciech; Chwała, Marcin


    To obtain credible reliability measures when the bearing capacity of a shallow foundation is considered, it is reasonable to describe soil strength properties in terms of random field theory. As a next step, the selected random field can be spatially averaged by means of the procedure introduced by Vanmarcke (1977). Earlier experience has shown that, without the spatial averaging procedure, reliability computations for a foundation's bearing capacity yield unrealistically small reliability indices (large failure probabilities) even for foundations considered relatively safe. On the other hand, the size of the averaged area strongly affects the results of reliability computations; its selection is therefore a vital problem and must depend on the failure mechanism under consideration. In the present study, local averages associated with the kinematically admissible failure mechanism proposed by Prandtl (1920) are considered. Soil strength parameters are assumed to constitute anisotropic random fields with different vertical and horizontal fluctuation scales. These fields are averaged along the potential slip lines of the mechanism under consideration. Because random fluctuations of the angle of internal friction change the location of a slip line, it was necessary to solve the problem of spatially averaging the random field along varying slip lines. To incorporate the anisotropy of the soil property random fields, the vertical correlation length was assumed to be significantly shorter than the horizontal one. Finally, reliability indices were evaluated for foundations of various widths by means of Monte Carlo simulation. Numerical examples demonstrate that, for proportions between the horizontal and vertical fluctuation scales that are reasonable from a practical viewpoint, the reliability indices resulting in two
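
    The qualitative effect described above, that spatial averaging reduces the variance of the averaged strength and thereby raises the computed reliability index beta = -Phi^(-1)(p_f), can be illustrated with a deliberately crude one-variable stand-in for the bearing-capacity problem (all numbers invented; the paper's computation uses the Prandtl mechanism, not this model):

```python
# Monte Carlo reliability index for a capacity/demand check R > S with
# lognormal strength; averaging is represented simply as a reduced COV.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

def reliability_index(strength_cov, n=1_000_000):
    mean_r, s_load = 200.0, 120.0                   # kPa, illustrative
    sigma_ln = np.sqrt(np.log(1 + strength_cov ** 2))
    r = rng.lognormal(np.log(mean_r) - 0.5 * sigma_ln ** 2, sigma_ln, n)
    pf = float(np.mean(r < s_load))
    return -NormalDist().inv_cdf(pf)

beta_point = reliability_index(strength_cov=0.40)       # no averaging
beta_avg = reliability_index(strength_cov=0.40 * 0.5)   # variance cut by averaging
print(f"beta without averaging {beta_point:.2f}, with averaging {beta_avg:.2f}")
```

    The variance-reduction factor (here a flat 0.5 on the COV) would in practice come from Vanmarcke's averaging along the slip line, so it depends on the fluctuation scales and the slip-line geometry.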

  13. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian


    Reliability analysis of fiber-reinforced composite structures is a relatively unexplored field, and it is therefore expected that engineers and researchers trying to apply such an approach will meet certain challenges until more knowledge is accumulated. While doing the analyses included in the present paper, the authors have experienced some of the possible pitfalls on the way to completing a precise and robust reliability analysis for layered composites. Results showed that in order to obtain accurate reliability estimates it is necessary to account for the various failure modes described by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved…
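
    The component-then-system step can be sketched as combining per-failure-mode FORM results into a series system. Independence of the failure modes is assumed here as a simplification (correlated modes need the joint distribution), and the reliability indices are illustrative, not values from the paper:

```python
# Series-system failure probability from component reliability indices:
# p_i = Phi(-beta_i);  p_sys = 1 - prod(1 - p_i)  under independence.
from statistics import NormalDist

phi = NormalDist().cdf
betas = {"fiber failure": 4.2, "matrix cracking": 3.1, "delamination": 3.6}

p_modes = {mode: phi(-b) for mode, b in betas.items()}
survival = 1.0
for p in p_modes.values():
    survival *= 1.0 - p
p_sys = 1.0 - survival

for mode, p in p_modes.items():
    print(f"{mode:16s} p_f = {p:.2e}")
print(f"system (series)  p_f = {p_sys:.2e}")
```

    Note that the system probability always lies between the largest component probability and the sum of all of them (the classical series-system bounds).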

  14. ARCIMBOLDO_LITE: single-workstation implementation and use.

    Sammito, Massimo; Millán, Claudia; Frieske, Dawid; Rodríguez-Freire, Eloy; Borges, Rafael J; Usón, Isabel


    ARCIMBOLDO solves the phase problem at resolutions of around 2 Å or better through massive combination of small fragments and density modification. For complex structures, this imposes a need for a powerful grid where calculations can be distributed, but for structures with up to 200 amino acids in the asymmetric unit a single workstation may suffice. The use and performance of the single-workstation implementation, ARCIMBOLDO_LITE, on a pool of test structures with 40-120 amino acids and resolutions between 0.54 and 2.2 Å is described. Inbuilt polyalanine helices and iron cofactors are used as search fragments. ARCIMBOLDO_BORGES can also run on a single workstation to solve structures in this test set using precomputed libraries of local folds. The results of this study have been incorporated into an automated, resolution- and hardware-dependent parameterization. ARCIMBOLDO has been thoroughly rewritten and three binaries are now available: ARCIMBOLDO_LITE, ARCIMBOLDO_SHREDDER and ARCIMBOLDO_BORGES. The programs and libraries can be downloaded from

  15. Energy-efficiency based classification of the manufacturing workstation

    Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.


    EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, which was subsequently implemented by several other directives. As a consequence, many products (e.g. home appliances, tyres, light bulbs, houses) now carry an EU Energy Label when offered for sale or rent. Several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy-efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of the energy efficiency of the manufacturing workstation is defined, and on this basis a classification methodology has been developed. It covers specific criteria and their evaluation modalities, together with the definition and delimitation of energy efficiency classes. The position of the energy class is defined by the amount of energy needed by the workstation at the middle point of its operating domain, while its extension is determined by the value of the first coefficient of the Taylor series that approximates the dependence between the energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of manufacturing planning and scheduling. A case study on classifying an actual lathe from the energy efficiency point of view, based on two different approaches (analytical and numerical), is also included.
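
    A hedged reading of the classification rule described above: class position from the energy consumed at the midpoint of the operating domain, class extension from the first-order Taylor coefficient of the energy-vs-regime curve. The energy model, class thresholds, and parameters below are all assumed for illustration; the paper's actual criteria may differ.

```python
# Sketch of midpoint-energy classification with Taylor-coefficient extension.

def energy(p):
    """Assumed energy model E(p) [kWh] over a working-regime parameter p."""
    return 2.0 + 0.8 * p + 0.05 * p ** 2

p_lo, p_hi = 1.0, 9.0                    # assumed operating domain
p_mid = (p_lo + p_hi) / 2

e_mid = energy(p_mid)                    # class position
h = 1e-6                                 # central difference for dE/dp at midpoint
c1 = (energy(p_mid + h) - energy(p_mid - h)) / (2 * h)
width = c1 * (p_hi - p_lo)               # class extension (assumed rule)

# Hypothetical class thresholds on midpoint energy, label A (best) to D.
thresholds = [(4.0, "A"), (6.0, "B"), (8.0, "C")]
label = next((lab for t, lab in thresholds if e_mid <= t), "D")
print(f"E(mid) = {e_mid:.2f} kWh, dE/dp = {c1:.2f}, class {label} (width {width:.1f})")
```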

  16. Multiseat workstations

    Yunier Soler Franco


    With the current development of hardware devices, modern operating systems can execute a great number of operations simultaneously without losing performance, yet most system resources remain idle. That remaining share could be used by other users, increasing the system's efficiency. This shared use can be achieved by implementing a "multi-seat configuration". This paper describes the procedures needed to turn a conventional workstation into a multi-seat workstation; it also covers the main existing solutions and the integration of the Nova operating system with this kind of workstation.

  18. Spatial reliability analysis of a wind turbine blade cross section subjected to multi-axial extreme loading

    Dimitrov, Nikolay Krasimirov; Bitsche, Robert; Blasques, José Pedro Albergaria Amaral


    This paper presents a methodology for structural reliability analysis of wind turbine blades. The study introduces several novel elements: it takes loading direction into account using a multiaxial probabilistic load model, and it considers random material strength, spatial correlation between material properties, progressive material failure, and system reliability effects. An example analysis of reliability against material failure is demonstrated for a blade cross section. Based on the study we discuss the implications of using a system reliability approach and the effect of spatial correlation length, type of material degradation algorithm, and reliability methods on the system failure probability, as well as the main factors that have an influence on the reliability. (C) 2017 Elsevier Ltd. All rights reserved.

  19. A New Method for System Reliability Analysis of Tailings Dam Stability

    Liu, X.; Tang, H.; Xiong, C.; Ni, W.


    For the purpose of stability evaluation, a tailings dam can be considered an artificial slope made of special soil materials that come mainly from mine tailings. As a particular kind of engineering project, a tailings dam generally experiences multi-loop hydraulic sedimentation as well as long-term consolidation during construction. These sedimentation and consolidation characteristics result in a unique distribution of soil layers with significant uncertainties, arising from both natural development and various human activities, and thus make the scatter and variability of the physical-mechanical properties dramatically greater than those of natural geo-materials. The location of the critical slip surface (CSS) of the dam therefore exhibits a notable drift, which means that the reliability evaluation of a tailings dam is in fact a system reliability problem. Unfortunately, previous research on the reliability of tailings dams was mainly confined to the limit equilibrium method (LEM), which has three obvious drawbacks. First, it focuses only on the variability along the slip surface rather than over the whole space of the dam. Second, in most cases a fixed CSS is considered instead of a variable one. Third, the shape of the CSS is usually simplified to a circular arc. The present paper constructs a new reliability analysis model that combines several advanced techniques: the finite difference method (FDM), Monte Carlo simulation (MCS), support vector machines (SVM) and particle swarm optimization (PSO). The new framework consists of four modules. The first is the limit equilibrium finite difference module, which employs the FLAC3D code to generate stress fields and then uses the PSO algorithm to search for the location of the CSS and the corresponding minimum factor of safety (FOS). The main value of this module is that each realization of the stress field leads to a particular CSS and its FOS. In other words, the consideration of the drift of
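
    The MCS-plus-surrogate step of such a framework can be sketched as follows. The "expensive" FOS function below is a made-up stand-in for a FLAC3D run with a PSO slip-surface search, and a quadratic least-squares fit stands in for the paper's SVM surrogate; all distributions and coefficients are invented for illustration.

```python
# Surrogate-accelerated Monte Carlo: fit a cheap model to a small design of
# experiments on the expensive FOS evaluation, then estimate P(FOS < 1) from
# a large sample evaluated on the surrogate only.
import numpy as np

rng = np.random.default_rng(4)

def expensive_fos(c, phi):                 # stand-in for FLAC3D + PSO search
    return 0.03 * c + 0.035 * phi - 0.0002 * c * phi

# 1) Small design of experiments on (cohesion, friction angle).
c_s = rng.normal(30.0, 6.0, 40)            # kPa
phi_s = rng.normal(22.0, 3.0, 40)          # degrees
y = expensive_fos(c_s, phi_s)

# 2) Fit a quadratic surrogate by least squares.
def basis(c, phi):
    return np.column_stack([np.ones_like(c), c, phi, c * phi, c**2, phi**2])

coef, *_ = np.linalg.lstsq(basis(c_s, phi_s), y, rcond=None)

# 3) Large Monte Carlo sample evaluated on the surrogate only.
c_mc = rng.normal(30.0, 6.0, 500_000)
phi_mc = rng.normal(22.0, 3.0, 500_000)
fos = basis(c_mc, phi_mc) @ coef
pf = float(np.mean(fos < 1.0))
print(f"P(FOS < 1) ~ {pf:.4f}")
```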

  20. Reliability Analysis of Ice-Induced Fatigue and Damage in Offshore Engineering Structures


    In the Bohai Gulf, offshore platforms and other installations have been destroyed by sea ice owing to fatigue and fracture of the main supporting components in the ice environment. This paper presents some results on the fatigue reliability of these structures in the Gulf, obtained by investigating the distributions of ice parameters such as floating direction and speed, sheet thickness, compressive strength, ice forces on the structures, and hot-spot stress in the structure. Low temperature, ice breaking modes and component fatigue failure modes are also taken into account in the analysis of the fatigue reliability of offshore structures experiencing both random ice loading and low temperatures. The results can be applied to the design and operation of offshore platforms in the Bohai Gulf.
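
    A minimal sketch of the fatigue-reliability idea (the paper's full model includes ice parameter distributions, breaking modes, and temperature effects): ice-induced stress cycles accumulate Miner damage D = sum(n_i / N_i) under an S-N curve N = A * S^(-m), and the failure probability is P(D > 1) over the design life. All distributions and constants below are illustrative assumptions.

```python
# Monte Carlo fatigue reliability under Miner's rule with random yearly
# ice severity scaling the hot-spot stress ranges.
import numpy as np

rng = np.random.default_rng(5)

A, m = 2.4e8, 3.0                            # assumed S-N curve: N = A * S**-m
n_sim, years, cycles_per_year = 1000, 25, 200

severity = rng.lognormal(0.0, 0.2, (n_sim, years, 1))     # year-to-year ice severity
stress = severity * rng.weibull(2.0, (n_sim, years, cycles_per_year)) * 30.0  # MPa
damage = (stress ** m).sum(axis=(1, 2)) / A               # Miner damage per life
pf = float(np.mean(damage > 1.0))
print(f"P(fatigue failure in {years} years) ~ {pf:.3f}")
```

    In a real assessment A, m and the stress-range distribution would come from hot-spot stress analysis and the measured ice parameter statistics rather than from assumed values.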