WorldWideScience

Sample records for time-variant reliability applications

  1. Time-variant reliability assessment through equivalent stochastic process transformation

    International Nuclear Information System (INIS)

    Wang, Zequn; Chen, Wei

    2016-01-01

    Time-variant reliability measures the probability that an engineering system successfully performs intended functions over a certain period of time under various sources of uncertainty. In practice, it is computationally prohibitive to propagate uncertainty in time-variant reliability assessment based on expensive or complex numerical models. This paper presents an equivalent stochastic process transformation approach for cost-effective prediction of reliability deterioration over the life cycle of an engineering system. To reduce the high dimensionality, a time-independent reliability model is developed by translating random processes and time parameters into random parameters in order to equivalently cover all potential failures that may occur during the time interval of interest. With the time-independent reliability model, an instantaneous failure surface is attained by using a Kriging-based surrogate model to identify all potential failure events. To enhance the efficacy of failure surface identification, a maximum confidence enhancement method is utilized to update the Kriging model sequentially. Then, the time-variant reliability is approximated using Monte Carlo simulations of the Kriging model, where system failures over a time interval are predicted by the instantaneous failure surface. The results of two case studies demonstrate that the proposed approach is able to accurately predict the time evolution of system reliability while requiring much less computational effort compared with the existing analytical approach. - Highlights: • Developed a new approach for time-variant reliability analysis. • Proposed a novel stochastic process transformation procedure to reduce the dimensionality. • Employed Kriging models with a confidence-based adaptive sampling scheme to enhance computational efficiency. • The approach is effective for handling random processes in time-variant reliability analysis. • Two case studies are used to demonstrate the efficacy.
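
    As a rough illustration of the core idea above (not the authors' implementation, which also includes the maximum-confidence-enhancement adaptive sampling), the sketch below treats time as an extra input dimension, fits a Kriging surrogate to an assumed limit-state function, and estimates the failure probability over the interval by Monte Carlo on the surrogate; the limit state, sample sizes and the use of scikit-learn's Gaussian process are illustrative assumptions.

    ```python
    # Sketch: time-variant reliability via a Kriging surrogate over (X, t) and Monte Carlo.
    # The limit-state function g_example and all sample sizes are illustrative assumptions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def g_example(x, t):
        """Hypothetical limit state: strength degrades with time; failure when g < 0."""
        return 4.0 - 0.3 * t - x[..., 0] ** 2 - 0.5 * x[..., 1]

    # 1) Train a Kriging model of g on a small design of experiments over (x1, x2, t).
    n_train = 80
    X_doe = rng.uniform([-3, -3, 0.0], [3, 3, 10.0], size=(n_train, 3))
    y_doe = g_example(X_doe[:, :2], X_doe[:, 2])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 2.0]), normalize_y=True)
    gp.fit(X_doe, y_doe)

    # 2) Monte Carlo over random inputs and a time grid: a sample fails on [0, T]
    #    if the (surrogate) limit state drops below zero at any grid point.
    n_mc, n_t = 20000, 50
    t_grid = np.linspace(0.0, 10.0, n_t)
    x_mc = rng.normal(size=(n_mc, 2))                    # assumed standard normal inputs
    xt = np.column_stack([np.repeat(x_mc, n_t, axis=0),
                          np.tile(t_grid, n_mc)])
    g_hat = gp.predict(xt).reshape(n_mc, n_t)
    fails = (g_hat.min(axis=1) < 0.0)                    # first passage over the interval
    print("estimated P_f over [0, T]:", fails.mean())
    ```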

  2. A new approach for reliability analysis with time-variant performance characteristics

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2013-01-01

    In industry practice, reliability represents the safety level and may vary due to time-variant operating conditions and component deterioration throughout a product life cycle. Thus, the capability to perform time-variant reliability analysis is of vital importance in practical engineering applications. This paper presents a new approach, referred to as the nested extreme response surface (NERS), that can efficiently tackle the time dependency issue in time-variant reliability analysis and allows such problems to be solved by easy integration with advanced time-independent tools. The key of the NERS approach is to build a nested response surface of time corresponding to the extreme value of the limit state function by employing a Kriging model. To obtain the data for the Kriging model, the efficient global optimization technique is integrated with NERS to extract the extreme time responses of the limit state function for any given system input. An adaptive response prediction and model maturation mechanism is developed based on the mean square error (MSE) to concurrently improve the accuracy and computational efficiency of the proposed approach. With the nested response surface of time, the time-variant reliability analysis can be converted into time-independent reliability analysis, and existing advanced reliability analysis methods can be used. Three case studies are used to demonstrate the efficiency and accuracy of the NERS approach.
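
    A minimal sketch of the NERS idea, under assumed inputs: for each sampled input, the extreme (worst) value of the limit state over the time interval is found by a bounded one-dimensional optimization (standing in for the paper's efficient global optimization), a Kriging model is fitted to those extreme responses, and a time-independent Monte Carlo analysis is run on that surface. The limit state and sample sizes are invented for illustration, and the adaptive model-maturation step is omitted.

    ```python
    # Sketch of the NERS idea: replace g(X, t) by its extreme value over the time
    # interval, then run ordinary time-independent reliability analysis on that surface.
    # g_example, bounds and sample sizes are illustrative assumptions; the adaptive
    # EGO / model-maturation machinery of the paper is omitted.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)
    T = 10.0

    def g_example(x, t):
        """Hypothetical time-variant limit state for a single input vector x."""
        return 5.0 - x[0] ** 2 - 0.4 * x[1] - 0.3 * t * np.sin(0.5 * x[0] + t)

    def extreme_response(x):
        """Worst (minimum) limit-state value over [0, T] for a given input x
        (a bounded local search; the paper uses a global EGO search instead)."""
        res = minimize_scalar(lambda t: g_example(x, t), bounds=(0.0, T), method="bounded")
        return res.fun

    # 1) Build the nested extreme response surface: Kriging of min_t g(X, t) versus X.
    X_doe = rng.uniform(-3, 3, size=(60, 2))
    y_ext = np.array([extreme_response(x) for x in X_doe])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X_doe, y_ext)

    # 2) Time-independent reliability analysis on the surrogate (plain MCS here;
    #    FORM/SORM or importance sampling could be plugged in instead).
    x_mc = rng.normal(size=(50000, 2))
    p_f = (gp.predict(x_mc) < 0.0).mean()
    print("estimated time-variant P_f on [0, T]:", p_f)
    ```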

  3. Hybrid time-variant reliability estimation for active control structures under aleatory and epistemic uncertainties

    Science.gov (United States)

    Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui

    2018-04-01

    Considering that multi-source uncertainties from inherent nature as well as the external environment are unavoidable and severely affect the controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, the uncertainty quantification analysis and time-variant reliability estimation corresponding to closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with specific implementation of random items. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for vibration control systems, and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the methodology developed.

  4. Eco-reliable path finding in time-variant and stochastic networks

    International Nuclear Information System (INIS)

    Li, Wenjie; Yang, Lixing; Wang, Li; Zhou, Xuesong; Liu, Ronghui; Gao, Ziyou

    2017-01-01

    This paper addresses a route guidance problem for finding the most eco-reliable path in time-variant and stochastic networks such that travelers can arrive at the destination with the maximum on-time probability while meeting vehicle emission standards imposed by government regulators. To characterize the dynamics and randomness of transportation networks, the link travel times and emissions are assumed to be time-variant random variables correlated over the entire network. A 0–1 integer mathematical programming model is formulated to minimize the probability of late arrival while simultaneously considering the least expected emission constraint. Using the Lagrangian relaxation approach, the primal model is relaxed into a dualized model which is further decomposed into two simple sub-problems. A sub-gradient method is developed to reduce gaps between upper and lower bounds. Three sets of numerical experiments are conducted to demonstrate the efficiency and performance of our proposed model and algorithm. - Highlights: • The most eco-reliable path is defined in time-variant and stochastic networks. • The model is developed with on-time arrival probability and emission constraints. • The sub-gradient and label correcting algorithms are integrated to solve the model. • Numerical experiments demonstrate the effectiveness of the developed approaches.
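
    The sketch below illustrates the Lagrangian-relaxation / sub-gradient scheme on a toy network: the emission constraint is priced into the arc costs with a multiplier, each relaxed problem reduces to a plain shortest-path computation, and the multiplier is updated from the constraint violation. The network, costs and budget are invented for illustration, and the paper's label-correcting algorithm is replaced by a simple Dijkstra search.

    ```python
    # Sketch of Lagrangian relaxation with a sub-gradient multiplier update.
    # The toy network, arc costs, emissions and emission budget are illustrative assumptions.
    import heapq

    # arcs: (head, cost, expected_emission); "cost" stands in for the late-arrival measure
    graph = {
        "s": [("a", 2.0, 5.0), ("b", 4.0, 1.0)],
        "a": [("t", 3.0, 4.0)],
        "b": [("t", 2.0, 2.0)],
        "t": [],
    }
    EMISSION_BUDGET = 4.0

    def shortest_path(weight):
        """Dijkstra on arc weights weight[(u, v)]; returns (value, path from s to t)."""
        dist, prev, pq = {"s": 0.0}, {}, [(0.0, "s")]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, _, _ in graph[u]:
                nd = d + weight[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, node = ["t"], "t"
        while node != "s":
            node = prev[node]
            path.append(node)
        return dist["t"], path[::-1]

    emis = {(u, v): e for u, arcs in graph.items() for v, c, e in arcs}
    lam, step = 0.0, 0.5
    for it in range(50):
        # relaxed arc weight: cost + lam * emission
        w = {(u, v): c + lam * e for u, arcs in graph.items() for v, c, e in arcs}
        _, path = shortest_path(w)
        violation = sum(emis[(a, b)] for a, b in zip(path, path[1:])) - EMISSION_BUDGET
        lam = max(0.0, lam + step / (1 + it) * violation)   # sub-gradient update on the multiplier

    print("path:", path, "final multiplier:", round(lam, 3))
    ```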

  5. Relevance of control theory to design and maintenance problems in time-variant reliability: The case of stochastic viability

    International Nuclear Information System (INIS)

    Rougé, Charles; Mathias, Jean-Denis; Deffuant, Guillaume

    2014-01-01

    The goal of this paper is twofold: (1) to show that time-variant reliability and a branch of control theory called stochastic viability address similar problems with different points of view, and (2) to demonstrate the relevance of concepts and methods from stochastic viability in reliability problems. On the one hand, reliability aims at evaluating the probability of failure of a system subjected to uncertainty and stochasticity. On the other hand, viability aims at maintaining a controlled dynamical system within a survival set. When the dynamical system is stochastic, this work shows that a viability problem belongs to a specific class of design and maintenance problems in time-variant reliability. Dynamic programming, which is used for solving Markovian stochastic viability problems, then yields the set of design states for which there exists a maintenance strategy which guarantees reliability with a confidence level β for a given period of time T. In addition, it leads to a straightforward computation of the date of the first outcrossing, indicating when the system is most likely to fail. We illustrate this approach with a simple example of population dynamics, including a case where load increases with time. - Highlights: • Time-variant reliability tools cannot devise complex maintenance strategies. • Stochastic viability is a control theory that computes a probability of failure. • Some design and maintenance problems are stochastic viability problems. • Used in viability, dynamic programming can find reliable maintenance actions. • Confronting reliability and control theories such as viability is promising
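
    A minimal sketch of the dynamic-programming connection, for an assumed finite Markov decision model: the backward recursion below computes, for every initial state, the best achievable probability of remaining in the survival set over T steps, which is exactly a time-variant reliability under an optimal maintenance (control) strategy. The tiny transition model, horizon and confidence level are illustrative assumptions.

    ```python
    # Sketch of the dynamic-programming view of stochastic viability / time-variant reliability.
    import numpy as np

    n_states, T = 5, 20
    K = np.array([0, 1, 1, 1, 0], dtype=float)      # 1 inside the survival set, 0 outside
    actions = ["do_nothing", "maintain"]

    # P[a][i, j] = probability of moving from state i to j under action a (rows sum to 1)
    P = {
        "do_nothing": np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
                                [0.3, 0.5, 0.2, 0.0, 0.0],
                                [0.0, 0.3, 0.5, 0.2, 0.0],
                                [0.0, 0.0, 0.3, 0.5, 0.2],
                                [0.0, 0.0, 0.0, 0.0, 1.0]]),
        "maintain":   np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
                                [0.1, 0.6, 0.3, 0.0, 0.0],
                                [0.0, 0.1, 0.6, 0.3, 0.0],
                                [0.0, 0.0, 0.1, 0.6, 0.3],
                                [0.0, 0.0, 0.0, 0.0, 1.0]]),
    }

    V = K.copy()                                     # V_T(s): survive at the final time
    policy = np.zeros((T, n_states), dtype=int)
    for t in reversed(range(T)):
        Q = np.stack([P[a] @ V for a in actions])    # Q[a, s]: survival prob. if a is taken in s
        policy[t] = Q.argmax(axis=0)
        V = K * Q.max(axis=0)                        # must also be inside K at time t

    # States whose best survival probability meets a confidence level beta are the
    # "reliable designs with a guaranteeing maintenance strategy" the abstract refers to.
    beta = 0.9
    print("V_0:", np.round(V, 3), "-> reliable states:", np.where(V >= beta)[0])
    ```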

  6. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2018-03-01

    In the degradation process, the randomness and multiplicity of variables are difficult to describe by mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time point. In addition, degradation experiments on the copper bending pipe were performed and the thickness at each time point was obtained; the response of maximum stress was then calculated by simulation. Further, with the help of a Monte Carlo method we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it is more convenient and accurate for predicting the replacement cycle of copper bending pipe under seawater-active corrosion.
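
    The stress-strength interference calculation can be sketched as follows, with invented distributions in place of the measured copper-pipe data: strength degrades along random paths, the maximum stress is sampled at each time, and R(t) is the Monte Carlo fraction of histories with no strength-stress crossing up to time t.

    ```python
    # Sketch of stress-strength interference with a stochastic degradation process.
    # Distributions, degradation rates and the time grid are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    n_mc = 100_000
    t_grid = np.arange(0.0, 21.0)                          # years in service, say

    s0 = rng.normal(240.0, 12.0, n_mc)                     # initial limit strength, MPa
    rate = rng.gamma(shape=4.0, scale=0.5, size=n_mc)      # random corrosion-driven loss, MPa/year
    stress = rng.normal(150.0, 15.0, (n_mc, t_grid.size))  # maximum stress at each time, MPa

    strength = s0[:, None] - rate[:, None] * t_grid[None, :]   # monotone degradation paths
    survived_so_far = np.cumprod(strength > stress, axis=1)    # no failure up to and including t
    R_t = survived_so_far.mean(axis=0)

    for t, r in zip(t_grid[::5], R_t[::5]):
        print(f"t = {t:4.0f}  R(t) = {r:.4f}")
    ```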

  7. Time-variant flexural reliability of RC beams with externally bonded CFRP under combined fatigue-corrosion actions

    International Nuclear Information System (INIS)

    Bigaud, David; Ali, Osama

    2014-01-01

    The time-variant reliability of RC highway bridges strengthened with carbon fibre reinforced polymer (CFRP) laminates, under four possible competing damage modes (concrete crushing, steel rupture after yielding, CFRP rupture and FRP plate debonding) and three degradation factors, is analyzed in terms of the reliability index β using FORM. The first degradation factor is chloride-attack corrosion, which induces reduction in steel area and concrete cover cracking at characteristic key times (corrosion initiation, severe surface cover cracking). The second degradation factor considered is fatigue, which leads to damage in concrete and steel rebar. Interaction between corrosion and fatigue crack growth in steel reinforcing bars is implemented. The third degradation phenomenon is the deterioration of CFRP properties due to aging. Considering these three degradation factors, the time-dependent flexural reliability profile of a typical simple 15 m-span intermediate girder of a RC highway bridge is constructed under various traffic volumes and under different corrosion environments. The bridge design options follow AASHTO-LRFD specifications. Results of the study have shown that the reliability is very sensitive to factors governing the corrosion. Concrete damage due to fatigue slightly affects the reliability profile of the non-strengthened section, while service life after strengthening is strongly related to fatigue damage in concrete. - Highlights: • We propose a method to follow the time-variant reliability of strengthened RC beams. • We consider multiple competing failure modes of CFRP strengthened RC beams. • We consider combined degradation mechanisms (corrosion, fatigue, ageing of CFRP)
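
    For context, a single FORM evaluation of the reliability index (the Hasofer-Lind / Rackwitz-Fiessler iteration) can be sketched as below; repeating it over a time grid with degraded resistance traces out a β(t) profile of the kind the abstract describes. The limit state, distributions and degradation law are illustrative assumptions, not the bridge model of the paper.

    ```python
    # Sketch of a FORM reliability-index computation (HL-RF iteration) repeated over time.
    import numpy as np

    def g(u, t):
        """Hypothetical limit state in standard normal space at time t (resistance - load)."""
        R = 180.0 * (1.0 - 0.01 * t) + 15.0 * u[0]      # degrading resistance
        S = 120.0 + 20.0 * u[1]                          # load effect
        return R - S

    def grad_g(u, t, h=1e-6):
        """Forward-difference gradient of g with respect to u."""
        g0 = g(u, t)
        return np.array([(g(u + h * np.eye(2)[i], t) - g0) / h for i in range(2)])

    def form_beta(t, n_iter=50, tol=1e-8):
        """Hasofer-Lind / Rackwitz-Fiessler iteration for the reliability index beta(t)."""
        u = np.zeros(2)
        for _ in range(n_iter):
            gv, gr = g(u, t), grad_g(u, t)
            u_new = (gr @ u - gv) / (gr @ gr) * gr       # projection onto the linearized surface
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        return np.linalg.norm(u)

    for t in (0, 10, 20, 30):
        print(f"t = {t:2d} years  beta = {form_beta(t):.3f}")
    ```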

  8. Mechanical reliability of structures subjected to time-variant physical phenomena

    International Nuclear Information System (INIS)

    Lemaire, Celine

    1999-01-01

    This work deals with two-phase critical flows in order to improve the way safety systems are dimensioned. It makes numerical, physical and experimental contributions. We emphasize the importance of validating the numerical method and the physical model separately. Reference numerical solutions, comparable to quasi-analytical solutions, were developed for a stationary one-dimensional restriction. They made it possible to validate, in space, non-stationary numerical schemes converged in time, and constitute a space-convergence indicator (2 schemes validated). With this reliable numerical solution, we studied the physical model. The potential of a particular existing dispersed-flow model has been validated against experimental data. The validity domain of such a model is inevitably reduced. During this study, particular behaviors have been exhibited, such as the pseudo-critical nature of the flow with a relaxation process, the non-characteristic nature of critical parameters where disequilibrium is largely reduced, and the predominance of pressure due to interfacial transfers. The multidimensional aspect has been studied. A database including local parameters corresponding to a simplified geometry has been compiled. The flow impact on the disk has been characterized and multidimensional effects identified. These effects form an additional step towards the validation of multidimensional physical models. (author) [fr]

  9. New approaches for the reliability-oriented structural optimization considering time-variant aspects; Neue Ansaetze fuer die zuverlaessigkeitsorientierte Strukturoptimierung unter Beachtung zeitvarianter Aspekte

    Energy Technology Data Exchange (ETDEWEB)

    Kuschel, N.

    2000-07-01

    The optimization of structures with respect to cost, weight or performance is a well-known application of nonlinear optimization. However, reliability-based structural optimization has been the subject of only very few studies. The approaches suggested up to now have been unsatisfactory regarding general applicability or ease of handling by the user. The objective of this thesis is the development of general approaches to solve both optimization problems: the minimization of cost under reliability constraints and the maximization of reliability under cost constraints. The extended approach of a one-level method is introduced in detail for the time-invariant problems. Here, the reliability of the structure is analysed in the framework of the First-Order Reliability Method (FORM). The use of time-variant reliability analysis is necessary for a realistic modelling of many practical problems. Therefore, several generalizations of the new approaches are derived for time-variant reliability-based structural optimization. Some important properties of the optimization problems are proved. In addition, some interesting extensions of the one-level method, for example the cost optimization of structural series systems and the cost optimization in the framework of the Second-Order Reliability Method (SORM), are presented in the thesis. (orig.)

  10. Adaptive lattice decision-feedback equalizers - Their performance and application to time-variant multipath channels

    Science.gov (United States)

    Ling, F.; Proakis, J. G.

    1985-04-01

    This paper presents two types of adaptive lattice decision-feedback equalizers (DFE), the least squares (LS) lattice DFE and the gradient lattice DFE. Their performance has been investigated on both time-invariant and time-variant channels through computer simulations and compared to other kinds of equalizers. An analysis of the self-noise and tracking characteristics of the LS DFE and the DFE employing the Widrow-Hoff least mean square adaptive algorithm (LMS DFE) is also given. The analysis and simulation results show that the LS lattice DFE has the faster initial convergence rate, while the gradient lattice DFE is computationally more efficient. The main advantages of the lattice DFEs are their numerical stability, their computational efficiency, the flexibility to change their length, and their excellent capabilities for tracking rapidly time-variant channels.
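
    A minimal sketch of the benchmark LMS DFE that the lattice equalizers are compared against (not the lattice recursions themselves), with an assumed channel, tap counts and step size:

    ```python
    # Sketch of a decision-feedback equalizer adapted with the Widrow-Hoff LMS rule.
    # The channel, tap counts, step size and training arrangement are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    n_sym, n_ff, n_fb, mu = 4000, 7, 3, 0.01

    symbols = rng.choice([-1.0, 1.0], size=n_sym)              # BPSK training sequence
    channel = np.array([0.4, 1.0, 0.5, 0.2])                   # dispersive multipath channel
    rx = np.convolve(symbols, channel)[:n_sym]
    rx += 0.05 * rng.normal(size=n_sym)                        # additive noise

    w_ff = np.zeros(n_ff)                                      # feedforward taps (on rx samples)
    w_fb = np.zeros(n_fb)                                      # feedback taps (on past decisions)
    decisions = np.zeros(n_sym)
    errors = np.zeros(n_sym)

    for k in range(n_ff, n_sym):
        x_ff = rx[k - n_ff + 1 : k + 1][::-1]                  # most recent received samples
        x_fb = decisions[k - n_fb : k][::-1]                    # most recent past decisions
        y = w_ff @ x_ff - w_fb @ x_fb
        d = symbols[k - 1]                                      # known training symbol (one-sample channel delay assumed)
        decisions[k] = np.sign(y) if np.sign(y) != 0 else 1.0
        e = d - y
        errors[k] = e
        w_ff += mu * e * x_ff                                   # LMS weight updates
        w_fb -= mu * e * x_fb

    print("mean squared error, last 500 symbols:", np.mean(errors[-500:] ** 2))
    ```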

  11. Analysis and Application of Reliability

    International Nuclear Information System (INIS)

    Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju

    1999-05-01

    This book covers the analysis and application of reliability, including the definition, importance and historical background of reliability; the reliability function and failure rate; life distributions and reliability assumptions; reliability of non-repairable systems; reliability of repairable systems; reliability sampling tests; failure analysis, such as failure analysis by FMEA and FTA, and cases; accelerated life testing, including basic concepts, acceleration and acceleration factors, and analysis of accelerated life testing data; and maintenance policies for replacement and inspection.

  12. Time variant layer control in atmospheric pressure chemical vapor deposition based growth of graphene

    KAUST Repository

    Qaisi, Ramy M.; Smith, Casey; Hussain, Muhammad Mustafa

    2013-01-01

    Graphene is a semi-metallic, transparent, atomic crystal structure material which is promising for its high mobility, strength and transparency - potentially applicable for radio frequency (RF) circuitry and energy harvesting and storage applications. Uniform (same number of layers), continuous (not torn or discontinuous), large-area (100 mm to 200 mm wafer scale), low-cost and reliable growth are the first-hand challenges for its commercialization prospects. We show time-variant, uniform (layer-controlled) growth of bi- to multi-layer graphene using an atmospheric pressure chemical vapor deposition system. We use Raman spectroscopy for physical characterization, supported by electrical property analysis. © 2013 IEEE.

  13. Time variant layer control in atmospheric pressure chemical vapor deposition based growth of graphene

    KAUST Repository

    Qaisi, Ramy M.

    2013-04-01

    Graphene is a semi-metallic, transparent, atomic crystal structure material which is promising for its high mobility, strength and transparency - potentially applicable for radio frequency (RF) circuitry and energy harvesting and storage applications. Uniform (same number of layers), continuous (not torn or discontinuous), large-area (100 mm to 200 mm wafer scale), low-cost and reliable growth are the first-hand challenges for its commercialization prospects. We show time-variant, uniform (layer-controlled) growth of bi- to multi-layer graphene using an atmospheric pressure chemical vapor deposition system. We use Raman spectroscopy for physical characterization, supported by electrical property analysis. © 2013 IEEE.

  14. Performance comparison of various time variant filters

    Energy Technology Data Exchange (ETDEWEB)

    Kuwata, M [JEOL Engineering Co. Ltd., Akishima, Tokyo (Japan); Husimi, K

    1996-07-01

    This paper describes the advantage of the trapezoidal filter used in semiconductor detector systems compared with other time-variant filters. The trapezoidal filter is composed of a rectangular pre-filter and a gated integrator. We indicate that the best performance is obtained by the differential-integral summing type rectangular pre-filter. This filter is not only superior in performance, but also has the useful feature that the rising edge of the output waveform is linear. We introduce an example of the use of this feature in a high-energy experiment. (author)
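
    One simple digital route to a trapezoidal weighting function, consistent with the rectangular-pre-filter-plus-integrator description above, is to convolve the differentiated pulse with two rectangular responses of different lengths; the pulse model, filter lengths and noise level below are illustrative assumptions, not the circuit of the paper.

    ```python
    # Sketch: a trapezoidal shaping response obtained from two rectangular stages.
    import numpy as np

    rng = np.random.default_rng(4)

    # Simulated preamplifier step pulse with noise (step height = deposited energy)
    n = 2000
    signal = np.zeros(n)
    signal[800:] = 1.0
    signal += 0.02 * rng.normal(size=n)

    L, G = 100, 40                        # rise/fall length L, flat-top length G
    rect1 = np.ones(L) / L                # rectangular pre-filter (moving average)
    rect2 = np.ones(L + G)                # wider rectangular stage (gated-integrator-like)

    # differentiate the step, then apply both rectangles: the impulse response is a trapezoid
    # whose flat-top height equals the step amplitude (the energy estimate)
    shaped = np.convolve(np.convolve(np.diff(signal), rect1), rect2)
    print(f"trapezoid flat-top height (energy estimate): {shaped.max():.3f}")
    ```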

  15. Application of reliability methods in Ontario Hydro

    International Nuclear Information System (INIS)

    Jeppesen, R.; Ravishankar, T.J.

    1985-01-01

    Ontario Hydro has established a reliability program in support of its substantial nuclear program. The application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as line management's commitment to reliability; selective and judicious application of reliability methods; establishing performance goals and monitoring in-service performance; and collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

  16. Fundamentals and applications of systems reliability analysis

    International Nuclear Information System (INIS)

    Boesebeck, K.; Heuser, F.W.; Kotthoff, K.

    1976-01-01

    The lecture gives a survey of the application of reliability analysis methods to assessing the safety of nuclear power plants. Possible statements of reliability analysis in connection with specifications of the atomic licensing procedure are dealt with in particular. Existing specifications of safety criteria are additionally discussed with the help of reliability analysis, using the example of the reliability analysis of a reactor protection system. Beyond the limited application to single safety systems, the significance of reliability analysis for a closed risk concept is explained in the last part of the lecture. (orig./LH) [de]

  17. On industrial application of structural reliability theory

    Energy Technology Data Exchange (ETDEWEB)

    Thoft-Christensen, P

    1998-06-01

    In this paper it is shown that modern structural reliability theory is being successfully applied in a number of different industries. This review of papers is in no way complete. In the literature there is a large number of similar applications, and also applications not touched on in this presentation. There has been some concern among scientists in this area that structural reliability theory is not being used by industry. It is probably correct that structural reliability theory is not being used by industry as much as it should be. However, the work by the ESReDA Working Group clearly shows the very wide application of structural reliability theory by many different industries. One must also bear in mind that industry is often reluctant to publish data related to safety and reliability. (au) 32 refs.

  18. On industrial application of structural reliability theory

    International Nuclear Information System (INIS)

    Thoft-Christensen, P.

    1998-01-01

    In this paper it is shown that modern structural reliability theory is being successfully applied in a number of different industries. This review of papers is in no way complete. In the literature there is a large number of similar applications, and also applications not touched on in this presentation. There has been some concern among scientists in this area that structural reliability theory is not being used by industry. It is probably correct that structural reliability theory is not being used by industry as much as it should be. However, the work by the ESReDA Working Group clearly shows the very wide application of structural reliability theory by many different industries. One must also bear in mind that industry is often reluctant to publish data related to safety and reliability. (au)

  19. On Industrial Application of Structural Reliability Theory

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    For the last two decades we have seen an increasing interest in applying structural reliability theory to many different industries. However, the number of real applications is much smaller than what one would expect. At the beginning most applications were in the design/analyses area especially...

  20. Quality and reliability management and its applications

    CERN Document Server

    2016-01-01

    Integrating development processes, policies, and reliability predictions from the beginning of the product development lifecycle to ensure high levels of product performance and safety, this book helps companies overcome the challenges posed by increasingly complex systems in today’s competitive marketplace.   Examining both research on and practical aspects of product quality and reliability management with an emphasis on applications, the book features contributions written by active researchers and/or experienced practitioners in the field, so as to effectively bridge the gap between theory and practice and address new research challenges in reliability and quality management in practice.    Postgraduates, researchers and practitioners in the areas of reliability engineering and management, amongst others, will find the book to offer a state-of-the-art survey of quality and reliability management and practices.

  1. Application of Reliability in Breakwater Design

    DEFF Research Database (Denmark)

    Christiani, Erik

    methods to design certain types of breakwaters. Reliability analyses of the main armour and toe berm interaction is exemplified to show the effect of a multiple set of failure mechanisms. First the limit state equations of the main armour and toe interaction are derived from laboratory tests performed...... response, but in one area information has been lacking; bearing capacity has not been treated in depth in a probabilistic manner for breakwaters. Reliability analysis of conventional rubble mound breakwaters and conventional vertical breakwaters is exemplified for the purpose of establishing new ways...... by Bologna University. Thereafter a multiple system of failure for the interaction is established. Relevant stochastic parameters are characterized prior to the reliability evaluation. Application of reliability in crown wall design is illustrated by deriving relevant single foundation failure modes...

  2. Time-variant random interval natural frequency analysis of structures

    Science.gov (United States)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under the intrinsic creep effect of concrete with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, progressively more involved in both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.

  3. Reliability of application of inspection procedures

    Energy Technology Data Exchange (ETDEWEB)

    Murgatroyd, R A

    1988-12-31

    This document deals with the reliability of the application of inspection procedures. A method to ensure that the inspection of defects based on fracture mechanics is reliable is described. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC). 3 refs.

  4. Reliability of application of inspection procedures

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1988-01-01

    This document deals with the reliability of the application of inspection procedures. A method to ensure that the inspection of defects based on fracture mechanics is reliable is described. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC)

  5. Software reliability for safety-critical applications

    International Nuclear Information System (INIS)

    Everett, B.; Musa, J.

    1994-01-01

    In this talk, the authors address the question "Can Software Reliability Engineering measurement and modeling techniques be applied to safety-critical applications?" Quantitative techniques have long been applied in engineering hardware components of safety-critical applications. The authors have seen a growing acceptance and use of quantitative techniques in engineering software systems but a continuing reluctance in using such techniques in safety-critical applications. The general case posed against using quantitative techniques for software components runs along the following lines: safety-critical applications should be engineered such that catastrophic failures occur less frequently than one in a billion hours of operation; current software measurement/modeling techniques rely on using failure history data collected during testing; one would have to accumulate over a billion operational hours to verify failure rate objectives of about one per billion hours

  6. Reliability theory with applications to preventive maintenance

    CERN Document Server

    Gertsbakh, Ilya

    2000-01-01

    The material in this book was first presented as a one-semester course in Reliability Theory and Preventive Maintenance for M.Sc. students of the Industrial Engineering Department of Ben Gurion University in the 1997/98 and 1998/99 academic years. Engineering students are mainly interested in the applied part of this theory. The value of preventive maintenance theory lies in the possibility of its implementation, which crucially depends on how we handle statistical reliability data. The very nature of the object of reliability theory - system lifetime - makes it extremely difficult to collect large amounts of data. The data available are usually incomplete, e.g. heavily censored. Thus, the desire to make the course material more applicable led me to include in the course topics such as modeling system lifetime distributions (Chaps. 1,2) and the maximum likelihood techniques for lifetime data processing (Chap. 3). A course in the theory of statistics is a prerequisite for these lectures. Standard...

  7. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  8. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  9. Probabilistic simulation applications to reliability assessments

    International Nuclear Information System (INIS)

    Miller, Ian; Nutt, Mark W.; Hill, Ralph S. III

    2003-01-01

    Probabilistic risk/reliability (PRA) analyses for engineered systems are conventionally based on fault-tree methods. These methods are mature and efficient, and are well suited to systems consisting of interacting components with known, low probabilities of failure. Even complex systems, such as nuclear power plants or aircraft, are modeled by the careful application of these approaches. However, for systems that may evolve in complex and nonlinear ways, and where the performance of components may be a sensitive function of the history of their working environments, fault-tree methods can be very demanding. This paper proposes an alternative method of evaluating such systems, based on probabilistic simulation using intelligent software objects to represent the components of such systems. Using a Monte Carlo approach, simulation models can be constructed from relatively simple interacting objects that capture the essential behavior of the components that they represent. Such models are capable of reflecting the complex behaviors of the systems that they represent in a natural and realistic way. (author)
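
    A minimal sketch of the object-based Monte Carlo idea described above, with an invented two-component series system: each component object carries its own operating history and failure law, and system reliability is the fraction of simulated histories that survive the mission. The component classes, rates and trial count are illustrative assumptions.

    ```python
    # Sketch: components as objects whose failure behaviour depends on their history,
    # with system reliability estimated by Monte Carlo replay of many random histories.
    import random

    class Component:
        """A component whose hourly failure probability grows with its operating history."""
        def __init__(self, base_rate, wear_factor):
            self.base_rate = base_rate
            self.wear_factor = wear_factor
            self.hours = 0.0

        def step(self, dt):
            """Advance dt hours; return True if the component survives the step."""
            p_fail = min(1.0, self.base_rate * (1.0 + self.wear_factor * self.hours) * dt)
            self.hours += dt
            return random.random() >= p_fail

    def run_trial(mission_hours=1000.0, dt=1.0):
        # a simple series system: the pump and the controller must both survive
        pump = Component(base_rate=2e-5, wear_factor=1e-3)
        controller = Component(base_rate=1e-5, wear_factor=5e-4)
        t = 0.0
        while t < mission_hours:
            if not (pump.step(dt) and controller.step(dt)):
                return False
            t += dt
        return True

    n_trials = 20000
    survived = sum(run_trial() for _ in range(n_trials))
    print("estimated mission reliability:", survived / n_trials)
    ```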

  10. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of nuclear power plant (NPP) digital I&C applications, reliability evaluation for safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes a reliability evaluation method based on the V&V defect density characteristics of every stage of the software development life cycle, by which the operating reliability level of the software can be predicted before its delivery; this helps to improve the reliability of NPP safety-important software. (authors)

  11. NASA Applications and Lessons Learned in Reliability Engineering

    Science.gov (United States)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of the case studies discussed are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbo-pump development, the impact of ET foam reliability on Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  12. Design for ASIC reliability for low-temperature applications

    Science.gov (United States)

    Chen, Yuan; Mojaradi, Mohammad; Westergard, Lynett; Billman, Curtis; Cozy, Scott; Burke, Gary; Kolawa, Elizabeth

    2005-01-01

    In this paper, we present a methodology to design for reliability for low-temperature applications without requiring process improvement. The developed hot carrier aging lifetime projection model takes into account both the transistor substrate current profile and the temperature profile to determine the minimum transistor size needed in order to meet reliability requirements. The methodology is applicable to automotive, military, and space applications, where there can be varying temperature ranges. A case study utilizing this methodology to design reliability into a custom application-specific integrated circuit (ASIC) for a Mars exploration mission is given.

  13. Reliability implications for commercial Plowshare applications

    Energy Technology Data Exchange (ETDEWEB)

    Brumleve, T D [Plowshare Systems Research Division, Sandia Laboratories, Livermore, CA (United States)

    1970-05-15

    Based on the premise that there will always be a finite chance of a Plowshare project failure, the implications of such a failure are examined. It is suggested that the optimum reliability level will not necessarily be the highest attainable, but rather that which results in minimum average project cost. The type of performance guarantee that the U. S. should provide for nuclear explosive services, the determination of nuclear yield, courses of action to take in the event of failure, and methods to offset remedial costs are discussed. (author)

  14. Reliability implications for commercial Plowshare applications

    International Nuclear Information System (INIS)

    Brumleve, T.D.

    1970-01-01

    Based on the premise that there will always be a finite chance of a Plowshare project failure, the implications of such a failure are examined. It is suggested that the optimum reliability level will not necessarily be the highest attainable, but rather that which results in minimum average project cost. The type of performance guarantee that the U. S. should provide for nuclear explosive services, the determination of nuclear yield, courses of action to take in the event of failure, and methods to offset remedial costs are discussed. (author)

  15. PSA applications and piping reliability analysis: where do we stand?

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    1997-01-01

    This paper reviews a recently proposed framework for piping reliability analysis. The framework was developed to promote critical interpretations of operational data on pipe failures, and to support application-specific parameter estimation.

  16. Fundamentals of reliability engineering applications in multistage interconnection networks

    CERN Document Server

    Gunawan, Indra

    2014-01-01

    This book presents the fundamentals of reliability engineering and its applications in evaluating the reliability of multistage interconnection networks. In the first part of the book, it introduces the concept of reliability engineering, elements of probability theory, probability distributions, availability and data analysis. The second part of the book provides an overview of parallel/distributed computing, network design considerations, and more. The book covers comprehensive reliability engineering methods and their practical aspects in interconnection network systems. Students, engineers, researchers and managers will find this book a valuable reference source.

  17. Review of Industrial Applications of Structural Reliability Theory

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    For the last two decades we have seen an increasing interest in applying structural reliability theory to many different industries. However, the number of real practical applications is much smaller than what one would expect.

  18. A new measurement of workload in Web application reliability assessment

    Directory of Open Access Journals (Sweden)

    CUI Xia

    2015-02-01

    Web applications have become popular in various fields of social life, and it is increasingly important to study the reliability of Web applications. This paper first gives the definition of Web application failure and then the definition of Web application reliability. By analyzing data in the IIS server logs and selecting the corresponding usage and information-delivery failure data, the paper studies the feasibility of Web application reliability assessment from the perspective of the Web software system based on IIS server logs. Because the usage of a Web site often has a certain regularity, a new measurement of workload in Web application reliability assessment is proposed. In this method, the units are removed by a weighted-average technique, and the weights are assessed by setting up an objective function and optimizing it. Finally, an experiment was conducted for validation. The experimental results show that the assessment of Web application reliability based on the new workload measure is better.

  19. Conformal prediction for reliable machine learning theory, adaptations and applications

    CERN Document Server

    Balasubramanian, Vineeth; Vovk, Vladimir

    2014-01-01

    The conformal predictions framework is a recent development in machine learning that can associate a reliable measure of confidence with a prediction in any real-world pattern recognition application, including risk-sensitive applications such as medical diagnosis, face recognition, and financial risk prediction. Conformal Predictions for Reliable Machine Learning: Theory, Adaptations and Applications captures the basic theory of the framework, demonstrates how to apply it to real-world problems, and presents several adaptations, including active learning, change detection, and anomaly detection.

  20. Reliability analysis and utilization of PEMs in space application

    Science.gov (United States)

    Jiang, Xiujie; Wang, Zhihua; Sun, Huixian; Chen, Xiaomin; Zhao, Tianlin; Yu, Guanghua; Zhou, Changyi

    2009-11-01

    More and more plastic encapsulated microcircuits (PEMs) are used in space missions to achieve high performance. Since PEMs are designed for use in terrestrial operating conditions, the successful usage of PEMs in the harsh space environment is closely related to reliability issues, which should be considered first. However, there is no ready-made methodology for PEMs in space applications. This paper discusses the reliability aspects of the usage of PEMs in space. This reliability analysis can be divided into five categories: radiation test, radiation hardness, screening test, reliability calculation and reliability assessment. One case study is also presented to illustrate the details of the process, in which a PEM part is used in a joint space program, the Double-Star Project, between the European Space Agency (ESA) and China. The influence of environmental constraints, including radiation, humidity, temperature and mechanics, on the PEM part has been considered. Both Double-Star Project satellites are still running well in space.

  1. A critique of reliability prediction techniques for avionics applications

    Directory of Open Access Journals (Sweden)

    Guru Prasad PANDIAN

    2018-01-01

    Avionics (aeronautics and aerospace) industries must rely on components and systems of demonstrated high reliability. For this, handbook-based methods have traditionally been used to design for reliability, develop test plans, and define maintenance requirements and sustainment logistics. However, these methods have been criticized as flawed and leading to inaccurate and misleading results. In its recent report on enhancing defense system reliability, the U.S. National Academy of Sciences discredited these methods, judging the Military Handbook (MIL-HDBK-217) and its progeny to be invalid and inaccurate. This paper discusses the issues that arise with the use of handbook-based methods in commercial and military avionics applications. Alternative approaches to reliability design (and its demonstration) are also discussed, including similarity analysis, testing, physics-of-failure, and data analytics for prognostics and systems health management.

  2. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  3. The art of progressive censoring applications to reliability and quality

    CERN Document Server

    Balakrishnan, N

    2014-01-01

    This monograph offers a thorough and updated guide to the theory and methods of progressive censoring, an area that has experienced tremendous growth in recent years. Progressive censoring, originally proposed in the 1950s, is an efficient method of handling samples from industrial experiments involving lifetimes of units that have either failed or been censored in a progressive fashion during the life test, with many practical applications to reliability and quality. Key topics and features: • Data sets from the literature as well as newly simulated data sets are used to illustrate concepts throughout the text • Emphasis on real-life applications to life testing, reliability, and quality control • Discussion of parametric and nonparametric inference • Coverage of experimental design with optimal progressive censoring. The Art of Progressive Censoring is a valuable reference for graduate students, researchers, and practitioners in applied statistics, quality control, life testing, and reliability. With its accessible style...

  4. Application of subset simulation in reliability estimation of underground pipelines

    International Nuclear Information System (INIS)

    Tee, Kong Fah; Khan, Lutfor Rahman; Li, Hongshuang

    2014-01-01

    This paper presents a computational framework for implementing an advanced Monte Carlo simulation method, called Subset Simulation (SS), for time-dependent reliability prediction of underground flexible pipelines. SS can provide better resolution for the low failure probability levels of rare failure events which are commonly encountered in pipeline engineering applications. Random samples of statistical variables are generated efficiently and used for computing the probabilistic reliability model. The method gains its efficiency by expressing a small probability event as a product of a sequence of intermediate events with larger conditional probabilities. The efficiency of SS has been demonstrated by numerical studies, and attention in this work is devoted to scrutinising the robustness of the SS application in pipe reliability assessment and to comparing it with the direct Monte Carlo simulation (MCS) method. The reliability of a buried flexible steel pipe with time-dependent failure modes, namely corrosion-induced deflection, buckling, wall thrust and bending stress, has been assessed in this study. The analysis indicates that corrosion-induced excessive deflection is the most critical failure event, whereas buckling is the least susceptible during the whole service life of the pipe. The study also shows that SS is a robust method to estimate the reliability of buried pipelines and that it is more efficient than MCS, especially in small failure probability prediction.
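
    A compact sketch of Subset Simulation itself, under an assumed limit state in standard normal space (standing in for the pipe deflection/buckling/thrust/stress checks): intermediate thresholds are chosen at a fixed conditional probability p0, and conditional samples are generated by a Metropolis random walk restricted to each intermediate failure domain. The limit state, p0 and sample sizes are illustrative assumptions.

    ```python
    # Sketch of Subset Simulation: the rare event {g(X) < 0} is reached through a sequence
    # of intermediate thresholds with conditional probability p0 each, using MCMC to sample
    # conditionally on each intermediate level.
    import numpy as np

    rng = np.random.default_rng(5)

    def g_example(x):
        """Hypothetical limit state in standard normal space; failure when g < 0."""
        return 3.5 - (x[..., 0] + x[..., 1]) / np.sqrt(2.0)

    def subset_simulation(g, dim=2, n=2000, p0=0.1, max_levels=10):
        n_seed = int(n * p0)
        x = rng.normal(size=(n, dim))
        gx = g(x)
        p_f = 1.0
        for _ in range(max_levels):
            order = np.argsort(gx)
            threshold = gx[order[n_seed - 1]]            # p0-quantile of the current level
            if threshold <= 0.0:                         # final level reached
                return p_f * np.mean(gx <= 0.0)
            p_f *= p0
            seeds = x[order[:n_seed]]
            seeds_g = gx[order[:n_seed]]
            # grow each seed into a Markov chain conditional on {g <= threshold}
            chains_x, chains_g = [], []
            per_chain = n // n_seed
            for xs, gs in zip(seeds, seeds_g):
                cur_x, cur_g = xs.copy(), gs
                for _ in range(per_chain):
                    cand = cur_x + rng.normal(scale=1.0, size=dim)   # random-walk proposal
                    ratio = np.exp(-0.5 * (cand @ cand - cur_x @ cur_x))  # standard normal target
                    if rng.uniform() < ratio and g(cand) <= threshold:
                        cur_x, cur_g = cand, g(cand)
                    chains_x.append(cur_x.copy())
                    chains_g.append(cur_g)
            x, gx = np.array(chains_x), np.array(chains_g)
        return p_f * np.mean(gx <= 0.0)

    p_f = subset_simulation(g_example)
    exact = 2.3e-4   # Phi(-3.5) for this linear toy case, for comparison
    print(f"subset simulation P_f ~ {p_f:.2e}  (exact {exact:.1e})")
    ```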

  5. Distribution System Reliability Analysis for Smart Grid Applications

    Science.gov (United States)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repair and loss. To address its reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort to analyze and study the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers where most of the electricity problems occur. In this work, we will examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in conducting this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures will be the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. The goal is to design and simulate the effect of the installation of Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement of its reliability. The software used in this work is DISREL, an intelligent power distribution software package developed by General Reliability Co.
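
    The reliability indices named above are computed from outage records along the following lines (IEEE 1366-style definitions); the outage list, customer count and unserved-load figure are invented for illustration, not results for the IEEE 34-node feeder.

    ```python
    # Sketch: computing SAIFI, SAIDI and an expected-unserved-energy figure from outage records.
    total_customers_served = 1200
    avg_unserved_kw_per_customer = 1.5   # assumed average load of an interrupted customer

    # each record: (customers_interrupted, outage_duration_hours) over the study year
    outages = [(300, 2.0), (150, 0.5), (800, 1.5), (60, 4.0)]

    customer_interruptions = sum(n for n, _ in outages)
    customer_hours = sum(n * d for n, d in outages)

    saifi = customer_interruptions / total_customers_served   # interruptions per customer per year
    saidi = customer_hours / total_customers_served           # hours of interruption per customer per year
    eue = customer_hours * avg_unserved_kw_per_customer       # rough expected unserved energy, kWh

    print(f"SAIFI = {saifi:.3f}  SAIDI = {saidi:.3f} h  EUE = {eue:.1f} kWh")
    ```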

  6. Reliability of capacitors for DC-link applications - An overview

    DEFF Research Database (Denmark)

    Wang, Huai; Blaabjerg, Frede

    2013-01-01

    DC-link capacitors are an important part in the majority of power electronic converters which contribute to cost, size and failure rate on a considerable scale. From capacitor users' viewpoint, this paper presents a review on the improvement of reliability of the DC-link in power electronic converters...... from two aspects: 1) reliability-oriented DC-link design solutions; 2) condition monitoring of DC-link capacitors during operation. Failure mechanisms, failure modes and lifetime models of capacitors suitable for the applications are also discussed as a basis to understand the physics-of-failure. This review serves to provide a clear picture of the state-of-the-art research in this area and to identify the corresponding challenges and future research directions for capacitors and their DC-link applications....

  7. Application of STOPP and START criteria: interrater reliability among pharmacists.

    LENUS (Irish Health Repository)

    Ryan, Cristin

    2009-07-01

    Inappropriate prescribing is a well-documented problem in older people. The new screening tools, STOPP (Screening Tool of Older People's Prescriptions) and START (Screening Tool to Alert doctors to Right Treatment), have been formulated to identify potentially inappropriate medications (PIMs) and potential errors of omission (PEOs) in older patients. Consistent, reliable application of STOPP and START is essential for the screening tools to be used effectively by pharmacists.

  8. Highly-reliable laser diodes and modules for spaceborne applications

    Science.gov (United States)

    Deichsel, E.

    2017-11-01

    Laser applications are becoming more and more interesting for contemporary missions such as Earth observation or optical communication in space. One of these applications is light detection and ranging (LIDAR), which holds huge scientific potential for future missions. The Nd:YAG solid-state laser of such a LIDAR system is optically pumped using 808 nm-emitting pump sources based on semiconductor laser diodes in quasi-continuous-wave (qcw) operation. Therefore, reliable and efficient laser diodes with increased output powers are an important requirement for a spaceborne LIDAR system. In the past, many tests were performed regarding the performance and lifetime of such laser diodes. There were also studies for spaceborne applications, but a test with long operation times at high powers and statistical relevance is pending. Other applications, such as science packages (e.g. Raman spectroscopy) on planetary rovers, also require reliable high-power light sources. Typically, fiber-coupled laser diode modules are used for such applications. Besides high reliability and lifetime, designs compatible with the harsh environmental conditions must be taken into account. Mechanical loads, such as shock or strong vibration, are expected during take-off or landing procedures. Many temperature cycles with high change rates and large differences must be taken into account due to sun-shadow effects in planetary orbits. Cosmic radiation has a strong impact on optical components and must also be taken into account. Lastly, hermetic sealing must be considered, since vacuum can have disadvantageous effects on optoelectronic components.

  9. Reliability and radiation tolerance of robots for nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Lauridsen, K [Risoe National Lab. (Denmark); Decreton, M [SCK.CEN (Belgium); Seifert, C C [Siemens AG (Germany); Sharp, R [AEA Technology (United Kingdom)

    1996-10-01

    The reliability of a robot for nuclear applications will be affected by environmental factors such as dust, water, vibrations, heat, and, in particular, ionising radiation. The present report describes the work carried out in a project addressing the reliability and radiation tolerance of such robots. A widely representative range of components and materials has been radiation tested and the test results have been collated in a database along with data provided by the participants from earlier work and data acquired from other sources. A radiation effects guide has been written for the use by designers of electronic equipment for robots. A generic reliability model has been set up together with generic failure strategies, forming the basis for specific reliability modelling carried out in other projects. Modelling tools have been examined and developed for the prediction of the performance of electronic circuits subjected to radiation. Reports have been produced dealing with the prediction and detection of upcoming failures in electronic systems. Operational experience from the use of robots in radiation work in various contexts has been compiled in a report, and another report has been written on cost/benefit considerations about the use of robots. Also the possible impact of robots on the safety of the surrounding plant has been considered and reported. (au) 16 ills., 236 refs.

  10. Reliability and radiation tolerance of robots for nuclear applications

    International Nuclear Information System (INIS)

    Lauridsen, K.; Decreton, M.; Seifert, C.C.; Sharp, R.

    1996-10-01

    The reliability of a robot for nuclear applications will be affected by environmental factors such as dust, water, vibrations, heat, and, in particular, ionising radiation. The present report describes the work carried out in a project addressing the reliability and radiation tolerance of such robots. A widely representative range of components and materials has been radiation tested and the test results have been collated in a database, along with data provided by the participants from earlier work and data acquired from other sources. A radiation effects guide has been written for use by designers of electronic equipment for robots. A generic reliability model has been set up together with generic failure strategies, forming the basis for specific reliability modelling carried out in other projects. Modelling tools have been examined and developed for the prediction of the performance of electronic circuits subjected to radiation. Reports have been produced dealing with the prediction and detection of upcoming failures in electronic systems. Operational experience from the use of robots in radiation work in various contexts has been compiled in a report, and another report has been written on cost/benefit considerations about the use of robots. The possible impact of robots on the safety of the surrounding plant has also been considered and reported. (au) 16 ills., 236 refs.

  11. Assessment of microelectronics packaging for high temperature, high reliability applications

    Energy Technology Data Exchange (ETDEWEB)

    Uribe, F.

    1997-04-01

    This report details characterization and development activities in electronic packaging for high temperature applications. This project was conducted through a Department of Energy sponsored Cooperative Research and Development Agreement between Sandia National Laboratories and General Motors. Even though the target application of this collaborative effort is an automotive electronic throttle control system which would be located in the engine compartment, results of this work are directly applicable to Sandia's national security mission. The component count associated with the throttle control dictates the use of high density packaging not offered by conventional surface mount. An enabling packaging technology was selected and thermal models defined which characterized the thermal and mechanical response of the throttle control module. These models were used to optimize thick film multichip module design, characterize the thermal signatures of the electronic components inside the module, and to determine the temperature field and resulting thermal stresses under conditions that may be encountered during the operational life of the throttle control module. Because the need to use unpackaged devices limits the level of testing that can be performed either at the wafer level or as individual dice, an approach to assure a high level of reliability of the unpackaged components was formulated. Component assembly and interconnect technologies were also evaluated and characterized for high temperature applications. Electrical, mechanical and chemical characterizations of enabling die and component attach technologies were performed. Additionally, studies were conducted to assess the performance and reliability of gold and aluminum wire bonding to thick film conductor inks. Kinetic models were developed and validated to estimate wire bond reliability.
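
    The report's kinetic models are not reproduced here, but a common kinetic form for thermally driven wire-bond degradation is an Arrhenius temperature dependence. The sketch below only illustrates how such a model turns an assumed activation energy and two temperatures into an acceleration factor and a projected field life; all parameter values are hypothetical.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor of an accelerated test relative to use conditions."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Hypothetical values: 0.7 eV activation energy, 125 C use temperature,
# 200 C accelerated test in which the bonds survived 2,000 hours.
af = arrhenius_acceleration(0.7, 125.0, 200.0)
print(f"acceleration factor: {af:.1f}")
print(f"projected life at use conditions: {2000.0 * af:,.0f} hours")
```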

  12. Application of modern reliability database techniques to military system data

    International Nuclear Information System (INIS)

    Bunea, Cornel; Mazzuchi, Thomas A.; Sarkani, Shahram; Chang, H.-C.

    2008-01-01

    This paper focuses on analysis techniques of modern reliability databases, with an application to military system data. The analysis of the military system database consists of the following steps: cleaning the data and operating on it in order to obtain good estimators; presenting simple plots of the data; and analyzing the data with statistical and probabilistic methods. Each step is dealt with separately and the main results are presented. Competing risks theory is advocated as the mathematical support for the analysis. The general framework of competing risks theory is presented, together with simple independent and dependent competing risks models available in the literature. These models are used to identify the reliability and maintenance indicators required by the operating personnel. Model selection is based on graphical interpretation of the plotted data.
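
    As a minimal illustration of the competing risks idea (not the models of the paper), the sketch below simulates an independent exponential competing risks setup, in which each unit is removed either by a failure or by a censoring maintenance action, and recovers the cause-specific rates from the observed first events, the kind of reliability and maintenance indicators referred to above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cause-specific rates (per hour of operation)
lam_failure = 1.0 / 500.0      # degradation failure
lam_maint = 1.0 / 300.0        # preventive maintenance removal (censors failure)

n = 10_000
t_fail = rng.exponential(1.0 / lam_failure, n)
t_maint = rng.exponential(1.0 / lam_maint, n)

# Only the first event and its cause are observed (competing risks data)
t_obs = np.minimum(t_fail, t_maint)
cause = np.where(t_fail < t_maint, "failure", "maintenance")

# Under independent exponential risks, a cause-specific rate is estimated as
# (# events of that cause) / (total observed exposure time)
exposure = t_obs.sum()
for c, true_rate in [("failure", lam_failure), ("maintenance", lam_maint)]:
    est = (cause == c).sum() / exposure
    print(f"{c:12s} estimated rate {est:.5f}  (true {true_rate:.5f})")
```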

  13. Towards more accurate and reliable predictions for nuclear applications

    International Nuclear Information System (INIS)

    Goriely, S.

    2015-01-01

    The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluations and predictions are still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex, microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. In the present contribution, the reliability and accuracy of recent nuclear theories are discussed for most of the relevant quantities needed to estimate reaction cross sections and beta-decay rates, namely nuclear masses, nuclear level densities, gamma-ray strength, fission properties and beta-strength functions. It is shown that nowadays mean-field models can be tuned to the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and can therefore replace the phenomenological inputs in the prediction of nuclear data. While fundamental nuclear physicists keep on improving state-of-the-art models, e.g. within the shell model or ab initio models, nuclear applications could make use of their most recent results as quantitative constraints or guides to improve the predictions in energy or mass domains that will remain inaccessible experimentally. (orig.)

  14. Adhesives technology for electronic applications materials, processing, reliability

    CERN Document Server

    Licari, James J

    2011-01-01

    Adhesives are widely used in the manufacture and assembly of electronic circuits and products. Generally, electronics design engineers and manufacturing engineers are not well versed in adhesives, while adhesion chemists have a limited knowledge of electronics. This book bridges these knowledge gaps and is useful to both groups. The book includes chapters covering types of adhesive, the chemistry on which they are based, and their properties, applications, processes, specifications, and reliability. Coverage of toxicity, environmental impacts and the regulatory framework make this book par

  15. Application of reliability centered maintenance to Embalse NPP

    International Nuclear Information System (INIS)

    Torres, Antonio; Perdomo, Manuel; Fornero, Damian; Corchera, Roberto

    2010-01-01

    One of the most recent applications of Probabilistic Safety Analysis to Embalse NPP is the Safety Oriented Maintenance Program developed through the Reliability Centered Maintenance (RCM) methodology. The application was carried out as a cooperative effort between the staff of the nuclear safety department of the NPP and experts from the Instituto Superior de Tecnologias y Ciencias Aplicadas of Cuba. So far, six technological systems have been analyzed, with important results regarding the optimization of the preventive and predictive maintenance program of those systems. Many of the RCM tasks were automated via the MOSEG code. The results of this study focused on the elaboration and modification of the Preventive Program, prioritization of stocks, reorientation of predictive techniques and modifications to the time parameters of maintenance. (author)

  16. Application of nonparametric statistics to material strength/reliability assessment

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-01-01

    Advanced materials technology requires a database on a wide variety of material behaviors, which needs to be established experimentally. It may often happen that experiments are practically limited in terms of reproducibility or the range of test parameters. Statistical methods can be applied to quantify such uncertainties in the manner required from the reliability point of view. Statistical assessment involves determination of a most probable value and of maximum and/or minimum values as one-sided or two-sided confidence limits. A scatter of test data can be approximated by a theoretical distribution only if the goodness of fit satisfies a test criterion. Alternatively, nonparametric statistics (NPS), or distribution-free statistics, can be applied. Mathematical procedures of NPS are well established for dealing with most reliability problems; they handle only the order statistics of a sample. Mathematical formulas and some applications to engineering assessments are described. They include confidence limits of the median, population coverage of a sample, the minimum required sample size, and confidence limits of fracture probability. These applications demonstrate that nonparametric statistical estimation is useful for logical decision making when large uncertainty exists. (author)
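
    Two of the order-statistics results mentioned above can be sketched concretely: a distribution-free confidence interval for the median built from sample order statistics, and the minimum sample size needed so that the sample extreme covers a given fraction of the population. The data and target levels below are illustrative assumptions, not values from the paper.

```python
from math import ceil, log

import numpy as np
from scipy.stats import binom

def median_ci(sample, conf=0.95):
    """Distribution-free confidence interval (x_(j), x_(k)) for the median,
    using the binomial coverage of the order statistics."""
    x = np.sort(sample)
    n = len(x)
    for j in range(n // 2, 0, -1):
        k = n + 1 - j
        cover = binom.cdf(k - 1, n, 0.5) - binom.cdf(j - 1, n, 0.5)
        if cover >= conf:
            return x[j - 1], x[k - 1], cover
    return x[0], x[-1], 1.0

def min_n_coverage(p=0.95, conf=0.95):
    """Smallest n so the sample maximum bounds at least a fraction p of the
    population with confidence conf: 1 - p**n >= conf."""
    return ceil(log(1.0 - conf) / log(p))

rng = np.random.default_rng(1)
strengths = rng.lognormal(mean=5.0, sigma=0.1, size=30)  # hypothetical test data
lo, hi, cover = median_ci(strengths)
print(f"median in [{lo:.1f}, {hi:.1f}] with coverage {cover:.3f}")
print("n for 95/95 one-sided coverage:", min_n_coverage())  # -> 59
```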

  17. MAI-free performance of PMU-OFDM transceiver in time-variant environment

    Science.gov (United States)

    Tadjpour, Layla; Tsai, Shang-Ho; Kuo, C.-C. J.

    2005-06-01

    An approximately MAI-free multi-user OFDM transceiver was introduced by Tsai, Lin and Kuo to reduce the multi-access interference (MAI) due to carrier frequency offset (CFO) to a negligible amount via precoding. In this work, we investigate the performance of this precoded multi-user (PMU) OFDM system in a time-variant channel environment. We analyze and compare the MAI effect caused by time-variant channels in the PMU-OFDM and OFDMA systems. Generally speaking, the MAI effect consists of two parts. The first part is due to the loss of orthogonality among subchannels for all users, while the second part is due to the CFO effect caused by the Doppler shift. Simulation results show that, although OFDMA outperforms the PMU-OFDM transceiver in a fast time-variant environment without CFO, PMU-OFDM outperforms OFDMA in a slow time-variant channel via the use of M/2 symmetric or anti-symmetric codewords of the M Hadamard-Walsh codes.
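
    As a small side illustration of the codeword structure mentioned in the last sentence (not a model of the PMU-OFDM transceiver itself), the sketch below builds an M x M Hadamard-Walsh matrix and splits its rows into the M/2 codewords that are symmetric under time reversal and the M/2 that are anti-symmetric.

```python
import numpy as np
from scipy.linalg import hadamard

M = 16  # illustrative codeword length (power of two)
H = hadamard(M)

# a row is symmetric if it equals its time-reversed copy, anti-symmetric if it
# equals the negated time-reversed copy
symmetric = [r for r in range(M) if np.array_equal(H[r], H[r][::-1])]
antisymmetric = [r for r in range(M) if np.array_equal(H[r], -H[r][::-1])]

print("symmetric rows:     ", symmetric)
print("anti-symmetric rows:", antisymmetric)
assert len(symmetric) == len(antisymmetric) == M // 2
```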

  18. Exploring Continuity of Care in Patients with Alcohol Use Disorders Using Time-Variant Measures

    NARCIS (Netherlands)

    S.C. de Vries (Sjoerd); A.I. Wierdsma (André)

    2008-01-01

    Background/Aims: We used time-variant measures of continuity of care to study fluctuations in long-term treatment use by patients with alcohol-related disorders. Methods: Data on service use were extracted from the Psychiatric Case Register for the Rotterdam Region, The Netherlands.

  19. Advantages and Drawbacks of Applying Periodic Time-Variant Modal Analysis to Spur Gear Dynamics

    DEFF Research Database (Denmark)

    Pedersen, Rune; Santos, Ilmar; Hede, Ivan Arthur

    2010-01-01

    to ensure sufficient accuracy of the results. The method of time-variant modal analysis is applied, and the changes in the fundamental and the parametric resonance frequencies as a function of the rotational speed of the gears are found. By obtaining the stationary and parametric parts of the time... ...of applying the methodology to wind turbine gearboxes are addressed and elucidated.

  20. Practical application of reliability engineering in detailed design and maintenance

    International Nuclear Information System (INIS)

    Barden, S.E.

    1975-01-01

    Modern plant systems are closely coupled combinations of sophisticated and expensive equipment, some important parts of which may be in the development stage (high technology sector), and simpler, cruder but not necessarily cheap equipment (low technology sector). Manpower resources involved with such plant systems can also be placed in high and low technology categories (i.e. specialist design and construction staff, and production staff, respectively). Neither can operate effectively without the other, and both are equally important. A sophisticated on-line computer controlling the plant or analysing fault symptoms is useless, if not unsafe, if the peripheral sensing and control equipment on the plant providing input data is poorly designed, inaccurate, and/or unreliable because of inadequate maintenance. Similarly, the designer can be misled and misinformed, and subsequent design evolution can be wrongly directed, if production records do not accurately reflect what is actually happening on the plant. The application of Reliability Technology can be counter-productive if it demands more effort in the collection of data than it saves in facilitating quick, correct engineering decisions and more accurate assessments of resource requirements. Reliability Engineering techniques must be simplified so that their use is widely adopted in the important low technology sector, and they must be established in all financial and contractual procedures associated with design specification and production management. This paper develops this theme with practical examples. (author)

  1. Systems reliability analysis: applications of the SPARCS System-Reliability Assessment Computer Program

    International Nuclear Information System (INIS)

    Locks, M.O.

    1978-01-01

    SPARCS-2 (Simulation Program for Assessing the Reliabilities of Complex Systems, Version 2) is a PL/1 computer program for assessing (establishing interval estimates for) the reliability and the MTBF of a large and complex s-coherent system of any modular configuration. The system can consist of a complex logical assembly of independently failing attribute (binomial-Bernoulli) and time-to-failure (Poisson-exponential) components, without regard to their placement. Alternatively, it can be a configuration of independently failing modules, where each module has either or both attribute and time-to-failure components. SPARCS-2 also has an improved super-modularity feature. Modules with minimal-cut unreliability calculations can be mixed with those having minimal-path reliability calculations. All output has been standardized to system reliability or probability of success, regardless of the form in which the input data are presented, and whatever the configuration of modules or elements within modules.

  2. Applications of majorization and Schur functions in reliability and life testing

    International Nuclear Information System (INIS)

    Proschan, F.

    1975-01-01

    This is an expository paper presenting basic definitions and properties of majorization and Schur functions, and displaying a variety of applications of these concepts in reliability prediction and modelling, and in reliability inference and life testing

  3. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    To solve the problem that the dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirements of a fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. Then the Allan variances of the signal within the time-variant window are evaluated to obtain the DAVAR of the FOG signal and to describe the dynamic characteristics of the time-varying signal. Additionally, a performance evaluation index of the algorithm based on a radar chart is proposed. Experimental results show that, compared with DAVAR methods using different fixed window lengths, the DAVAR method with time-variant window length based on fuzzy control identifies the change of the FOG signal over time effectively and improves the performance evaluation index by at least 30%.
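
    A minimal sketch of a windowed ("dynamic") Allan variance is shown below. The fuzzy controller of the paper is replaced by a crude hypothetical rule that shortens the window where the local mean of the signal changes quickly; only the Allan-variance core follows the standard (non-overlapping) definition, and all numbers are illustrative.

```python
import numpy as np

def allan_var(y, m):
    """Non-overlapping Allan variance of rate samples y at cluster size m."""
    k = len(y) // m
    ybar = y[:k * m].reshape(k, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

def dynamic_allan_var(y, m, base_win=2000, min_win=500):
    """Allan variance in a sliding window whose length shrinks where the signal
    changes quickly (a crude stand-in for the paper's fuzzy-control rule)."""
    results = []
    for c in range(base_win, len(y) - base_win, base_win // 2):
        # local rate of change, estimated from the means on either side of c
        local_slope = abs(y[c:c + 250].mean() - y[c - 250:c].mean())
        win = int(np.clip(base_win / (1.0 + 50.0 * local_slope), min_win, base_win))
        segment = y[c - win // 2:c + win // 2]
        results.append((c, win, allan_var(segment, m)))
    return results

rng = np.random.default_rng(2)
signal = rng.normal(0.0, 0.05, 20000)
signal[8000:12000] += np.linspace(0.0, 1.0, 4000)   # hypothetical dynamic segment

for center, win, av in dynamic_allan_var(signal, m=10)[:5]:
    print(f"t={center:6d}  window={win:5d}  AVAR(m=10)={av:.3e}")
```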

  4. Joint interval reliability for Markov systems with an application in transmission line reliability

    International Nuclear Information System (INIS)

    Csenki, Attila

    2007-01-01

    We consider Markov reliability models whose finite state space is partitioned into the set of up states U and the set of down states D. Given a collection of k disjoint time intervals I_l = [t_l, t_l + x_l], l = 1,...,k, the joint interval reliability is defined as the probability of the system being in U for all time instances in I_1 ∪ ... ∪ I_k. A closed-form expression is derived here for the joint interval reliability for this class of models. The result is applied to power transmission lines in a two-state fluctuating environment. We use the Linux versions of the free packages Maxima and Scilab in our implementation for symbolic and numerical work, respectively.
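
    The closed-form result described above can be sketched numerically with a toy model. The 3-state continuous-time Markov chain below (two up states, one down state) is a hypothetical example, not the transmission-line model of the paper; it only shows how the joint interval reliability is assembled from matrix exponentials: full-generator factors for the gaps between intervals and U-restricted factors within each interval.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state CTMC generator: states 0 and 1 are up (U), state 2 is down.
Q = np.array([[-0.20,  0.15,  0.05],
              [ 0.10, -0.30,  0.20],
              [ 0.40,  0.10, -0.50]])
U = [0, 1]
alpha = np.array([1.0, 0.0, 0.0])      # initial distribution: start in state 0

def joint_interval_reliability(Q, alpha, U, intervals):
    """P(system is in U at every instant of every interval [t_l, t_l + x_l])."""
    Quu = Q[np.ix_(U, U)]              # generator restricted to the up states
    vec = np.asarray(alpha, float)
    prev_end = 0.0
    for t, x in intervals:             # intervals assumed ordered and disjoint
        vec = vec @ expm(Q * (t - prev_end))   # free evolution up to the interval
        vec = vec[U] @ expm(Quu * x)           # stay within U during the interval
        padded = np.zeros(Q.shape[0])
        padded[U] = vec
        vec = padded
        prev_end = t + x
    return vec.sum()

print(joint_interval_reliability(Q, alpha, U, [(2.0, 1.0), (5.0, 0.5)]))
```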

  5. Joint interval reliability for Markov systems with an application in transmission line reliability

    Energy Technology Data Exchange (ETDEWEB)

    Csenki, Attila [School of Computing and Mathematics, University of Bradford, Bradford, West Yorkshire, BD7 1DP (United Kingdom)]. E-mail: a.csenki@bradford.ac.uk

    2007-06-15

    We consider Markov reliability models whose finite state space is partitioned into the set of up states U and the set of down states D. Given a collection of k disjoint time intervals I_l = [t_l, t_l + x_l], l = 1,...,k, the joint interval reliability is defined as the probability of the system being in U for all time instances in I_1 ∪ ... ∪ I_k. A closed-form expression is derived here for the joint interval reliability for this class of models. The result is applied to power transmission lines in a two-state fluctuating environment. We use the Linux versions of the free packages Maxima and Scilab in our implementation for symbolic and numerical work, respectively.

  6. Reliability evaluation of deregulated electric power systems for planning applications

    International Nuclear Information System (INIS)

    Ehsani, A.; Ranjbar, A.M.; Jafari, A.; Fotuhi-Firuzabad, M.

    2008-01-01

    In a deregulated electric power utility industry, in which a competitive electricity market can influence system reliability, market risks cannot be ignored. This paper (1) proposes an analytical probabilistic model for reliability evaluation of competitive electricity markets and (2) develops a methodology for incorporating the market reliability problem into HLII reliability studies. A Markov state space diagram is employed to evaluate the market reliability. Since the market is a continuously operated system, the concept of absorbing states is applied to it in order to evaluate the reliability. The market states are identified by using market performance indices, and the transition rates are calculated by using historical data. The key point in the proposed method is the concept that the reliability level of a restructured electric power system can be calculated using the availability of the composite power system (HLII) and the reliability of the electricity market. Two case studies are carried out on the Roy Billinton Test System (RBTS) to illustrate interesting features of the proposed methodology.
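
    The absorbing-state idea can be sketched with a standard discrete-time toy model (the numbers are hypothetical, not the paper's market data): transient market-performance states form the block Q of the transition matrix, and the fundamental matrix N = (I - Q)^-1 gives the expected number of periods spent in each state before absorption into market failure.

```python
import numpy as np

# Hypothetical discrete-time market model: states 0-2 are transient
# ("healthy", "marginal", "at risk"), state 3 is absorbing ("market failure").
P = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.10, 0.80, 0.08, 0.02],
              [0.05, 0.15, 0.70, 0.10],
              [0.00, 0.00, 0.00, 1.00]])

Qm = P[:3, :3]                       # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Qm)    # fundamental matrix
steps_to_failure = N.sum(axis=1)     # expected periods before absorption

print("expected periods before market failure, by starting state:")
for s, v in zip(["healthy", "marginal", "at risk"], steps_to_failure):
    print(f"  {s:8s} {v:8.1f}")
```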

  7. Application of a truncated normal failure distribution in reliability testing

    Science.gov (United States)

    Groves, C., Jr.

    1968-01-01

    A truncated normal distribution function is applied as the time-to-failure distribution in equipment reliability estimation. The age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
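
    A brief numerical sketch of the idea, with illustrative parameters rather than values from the paper: a normal time-to-failure distribution left-truncated at zero gives a reliability function R(t) = 1 - F(t) and an age-increasing hazard rate, the age dependence the abstract refers to.

```python
import numpy as np
from scipy.stats import truncnorm

mu, sigma = 1000.0, 400.0                    # hypothetical mean life and spread (hours)
a, b = (0.0 - mu) / sigma, np.inf            # truncate at t = 0 (no negative lifetimes)
ttf = truncnorm(a, b, loc=mu, scale=sigma)

for t in (200.0, 600.0, 1000.0, 1400.0):
    R = ttf.sf(t)                            # reliability R(t) = 1 - F(t)
    h = ttf.pdf(t) / ttf.sf(t)               # hazard rate, increasing with age
    print(f"t={t:6.0f} h  R(t)={R:.3f}  hazard={h:.5f} /h")
```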

  8. Advances in methods and applications of reliability and safety analysis

    International Nuclear Information System (INIS)

    Fieandt, J.; Hossi, H.; Laakso, K.; Lyytikaeinen, A.; Niemelae, I.; Pulkkinen, U.; Pulli, T.

    1986-01-01

    The know-how in reliability and safety design and analysis techniques at VTT has been established over several years of analyzing the reliability of the Finnish nuclear power plants Loviisa and Olkiluoto. This experience has later been applied and further developed for use in the process industry, conventional power industry, automation and electronics. VTT develops and transfers methods and tools for reliability and safety analysis to the private and public sectors. The technology transfer takes place in joint development projects with potential users. Several computer-aided methods, such as RELVEC for reliability modelling and analysis, have been developed. The tools developed are today used by major Finnish companies in the fields of automation, nuclear power, shipbuilding and electronics. Development of computer-aided and other methods needed in the analysis of operating experience, reliability or safety is continuing in a number of research and development projects.

  9. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

    Reliability analysis for industrial maintenance is now increasingly demanded by industry worldwide. Indeed, modern manufacturing facilities are equipped with data acquisition and monitoring systems, and these systems generate a large volume of data. These data can be used to inform future decisions affecting the health of the facilities and the state of the operated equipment. However, in most practical cases the data used in reliability modelling are incomplete or not reliable. In this context, this work proposes to examine and treat incomplete, incorrect or aberrant data in the reliability modeling of an oil pump. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.

  10. Application of human reliability analysis methodology of second generation

    International Nuclear Information System (INIS)

    Ruiz S, T. de J.; Nelson E, P. F.

    2009-10-01

    Human reliability analysis (HRA) is a very important part of probabilistic safety analysis. The main contribution of HRA in nuclear power plants is the identification and characterization of the factors that combine to produce an error in the human tasks performed under normal operating conditions and in those carried out after an abnormal event. Additionally, the analysis of various accidents in history has found that the human component has been a contributing factor to their causes. The need to understand the forms and probability of human error led, in the 1960s, to the collection of generic data that resulted in the development of the first generation of HRA methodologies. Subsequently, methods were developed that include additional performance shaping factors, and the interactions between them, in their models. By the mid-1990s came what are considered the second-generation methodologies. Among these is the methodology A Technique for Human Event Analysis (ATHEANA). The application of this method to a generic human failure event is interesting because its modeling includes errors of commission, the quantification of additional deviations from the nominal scenario considered in the accident sequence of the probabilistic safety analysis and, for this event, the evaluation of dependency between actions. That is, the generic human failure event first required independent evaluation of the two related human failure events. Thus, obtaining the new human error probabilities involves quantifying the nominal scenario and the cases of significant deviations considered for their potential impact on the analyzed human failure events. As in probabilistic safety analysis, the analysis of the sequences allowed the extraction of the more specific factors with the highest contribution to the human error probabilities. (Author)

  11. Modeling cognition dynamics and its application to human reliability analysis

    International Nuclear Information System (INIS)

    Mosleh, A.; Smidts, C.; Shen, S.H.

    1996-01-01

    For the past two decades, a number of approaches have been proposed for the identification and estimation of the likelihood of human errors, particularly for use in the risk and reliability studies of nuclear power plants. Despite the wide-spread use of the most popular among these methods, their fundamental weaknesses are widely recognized, and the treatment of human reliability has been considered one of the soft spots of risk studies of large technological systems. To alleviate the situation, new efforts have focused on the development of human reliability models based on a more fundamental understanding of operator response and its cognitive aspects.

  12. Reliability Oriented Circuit Design For Power Electronics Applications

    DEFF Research Database (Denmark)

    Sintamarean, Nicolae Cristian

    is presented. Chapter 3 presents the electro-thermal model validation and the reliability studies performed with the proposed tool. The chapter ends with a detailed lifetime analysis, which emphasizes the impact of mission-profile variation and gate-driver parameter variation on the lifetime of the PV-inverter devices... ...Highly reliable components are required in order to minimize the downtime during the lifetime of the converter and, implicitly, the maintenance costs. Therefore, the design of highly reliable converters under constrained reliability and cost is a great challenge to be overcome in the future... ...Moreover, the impact of the mission-profile sampling time on the lifetime estimation accuracy is also determined. The second part of the thesis, introduced in Chapter 4, presents a novel gate-driver concept which reduces the dependency of the device power loss variations on the device loading variations...

  13. Trial application of reliability technology to emergency diesel generators at the Trojan Nuclear Power Plant

    International Nuclear Information System (INIS)

    Wong, S.M.; Boccio, J.L.; Karimian, S.; Azarm, M.A.; Carbonaro, J.; DeMoss, G.

    1986-01-01

    In this paper, a trial application of reliability technology to the emergency diesel generator system at the Trojan Nuclear Power Plant is presented. An approach for formulating a reliability program plan for this system is being developed. The trial application has shown that a reliability program process, using risk- and reliability-based techniques, can be interwoven into current plant operational activities to help in controlling, analyzing, and predicting faults that can challenge safety systems. With the cooperation of the utility, Portland General Electric Co., this reliability program can eventually be implemented at Trojan to track its effectiveness

  14. Demand Response Application for Reliability Enhancement in Electricity Market

    OpenAIRE

    Romera Pérez, Javier

    2015-01-01

    The term reliability is related to the adequacy and security of the electric power system during operation: supplying the electricity demand over time and withstanding possible contingencies, because every inhabitant needs to be supplied with electricity in their day-to-day life. Operating the system in this way entails spending money. The first part of the project is an analysis of reliability and its economic impact. During the last decade, electric utilities and companies had be...

  15. Noise and signal processing in a microstrip detector with a time variant readout system

    International Nuclear Information System (INIS)

    Cattaneo, P.W.

    1995-01-01

    This paper treats the noise and signal processing by a time-variant filter in a microstrip detector. In particular, the noise sources in the detector-electronics chain and the signal losses that cause a substantial decrease of the original signal are thoroughly analyzed. This work has been motivated by the analysis of the data of the microstrip detectors designed for the ALEPH minivertex detector. Hence, although the discussion is kept as general as possible, concrete examples are presented referring to the specific ALEPH design. (orig.)

  16. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  17. Safe and reliable solutions for Internet application in power sector

    International Nuclear Information System (INIS)

    Eichelburg, W. K.

    2004-01-01

    The requirements for communication between various information systems (control systems, EMS, ERP) continually increase. At present, the Internet is predominantly a universal communication medium for interconnecting distant systems. However important the communication with the outside world is, the internal system must be protected safely and reliably. The goal of the article is to acquaint experienced participants with verified solutions for safe and reliable use of the Internet for interconnecting control systems at the supervisory level, for remote management and diagnostics, and for interconnecting information systems. Added value is provided by the solutions using the Internet for image and sound transmission. (author)

  18. F-15 inlet/engine test techniques and distortion methodologies studies. Volume 2: Time variant data quality analysis plots

    Science.gov (United States)

    Stevens, C. H.; Spong, E. D.; Hammock, M. S.

    1978-01-01

    Time variant data quality analysis plots were used to determine if peak distortion data taken from a subscale inlet model can be used to predict peak distortion levels for a full scale flight test vehicle.

  19. Applications of Human Performance Reliability Evaluation Concepts and Demonstration Guidelines

    Science.gov (United States)

    1977-03-15

    [Fragmentary abstract: a simulated AN/SQS-26 sonar operator scenario (the ship stops dead in the water, the operator recommends a new heading of 000 degrees, and at T + 14 minutes the target ship begins a hard turn), with tabulated human reliability for each simulated operator under various simulated conditions and human and equipment availability.]

  20. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.

  1. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity of including software reliability and/or safety in the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes a procedure for applying software reliability growth models (RGMs), which are the most widely used means of quantifying software reliability, to NPP PSA. Through the proposed procedure, it can be determined whether a software reliability growth model can be applied to the NPP PSA before its actual application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA.
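
    The abstract does not name a particular RGM, so the sketch below uses one common choice, the Goel-Okumoto NHPP model with mean value function mu(t) = a(1 - exp(-b t)), fitted to hypothetical cumulative failure counts. A fit of this kind (checking that the model tracks the data and that the estimated parameters are sensible) is the type of applicability screening such a procedure would formalize.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of software failures detected by test time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test data: weeks of testing vs cumulative failures found
weeks = np.arange(1, 13, dtype=float)
cum_failures = np.array([4, 9, 13, 16, 19, 21, 23, 24, 25, 26, 26, 27], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(30.0, 0.2))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}/week")
residual = a_hat - goel_okumoto(12.0, a_hat, b_hat)
print(f"expected residual faults after 12 weeks: {residual:.1f}")
```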

  2. Commercial Off-The-Shelf (COTS) Electronics Reliability for Space Applications

    Science.gov (United States)

    Pellish, Jonathan

    2018-01-01

    This presentation describes the accelerating use of Commercial Off-The-Shelf (COTS) parts in space applications. It discusses component reliability and threats in the context of the mission, environment, application, and lifetime; provides an overview of traditional approaches applied to COTS parts in flight applications; and shows challenges and potential paths forward for COTS systems in flight applications: it's all about data!

  3. Photovoltaic module reliability improvement through application testing and failure analysis

    Science.gov (United States)

    Dumas, L. N.; Shumka, A.

    1982-01-01

    During the first four years of the U.S. Department of Energy (DOE) National Photovoltaic Program, the Jet Propulsion Laboratory Low-Cost Solar Array (LSA) Project purchased about 400 kW of photovoltaic modules for tests and experiments. In order to identify, report, and analyze test and operational problems with the Block Procurement modules, a problem/failure reporting and analysis system was implemented by the LSA Project with the main purpose of providing manufacturers with the feedback from test and field experience needed for the improvement of product performance and reliability. A description of the more significant types of failures is presented, taking into account interconnects, cracked cells, dielectric breakdown, delamination, and corrosion. Current design practices and reliability evaluations are also discussed. The evaluation indicates that current module designs incorporate damage-resistant and fault-tolerant features which address the field failure mechanisms observed to date.

  4. Modelling and estimating degradation processes with application in structural reliability

    International Nuclear Information System (INIS)

    Chiquet, J.

    2007-06-01

    The characteristic level of degradation of a given structure is modeled through a stochastic process called the degradation process. The random evolution of the degradation process is governed by a differential system with a Markovian environment. We set up the associated reliability framework by considering failure of the structure once the degradation process reaches a critical threshold. A closed-form solution for the reliability function is obtained thanks to Markov renewal theory. We then build an estimation methodology for the parameters of the stochastic processes involved. The estimation methods and the theoretical results, as well as the associated numerical algorithms, are validated on simulated data sets. Our method is applied to the modelling of a real degradation mechanism, known as crack growth, for which an experimental data set is considered. (authors)
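
    As a simplified stand-in for the model described above (the paper uses a differential system in a Markovian environment; the sketch below substitutes a plain stationary gamma process), reliability can be estimated by simulating degradation paths and recording the first-passage time to the critical threshold. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_failure_times(n_paths=5000, horizon=100.0, dt=1.0,
                           shape_rate=0.4, scale=0.25, threshold=8.0):
    """First time the accumulated degradation crosses the critical threshold."""
    steps = int(horizon / dt)
    increments = rng.gamma(shape_rate * dt, scale, size=(n_paths, steps))
    level = np.cumsum(increments, axis=1)
    crossed = level >= threshold
    t_fail = np.where(crossed.any(axis=1),
                      (crossed.argmax(axis=1) + 1) * dt, np.inf)
    return t_fail

t_fail = simulate_failure_times()
for t in (40, 60, 80, 100):
    print(f"R({t:3d}) ~ {np.mean(t_fail > t):.3f}")   # empirical reliability
```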

  5. Reliability estimates for selected sensors in fusion applications

    International Nuclear Information System (INIS)

    Cadwallader, L.C.

    1996-09-01

    This report presents the results of a study to define several types of sensors in use, the qualitative reliability (failure modes) and quantitative reliability (average failure rates) for these types of process sensors. Temperature, pressure, flow, and level sensors are discussed for water coolant and for cryogenic coolants. The failure rates that have been found are useful for risk assessment and safety analysis. Repair times and calibration intervals are also given when found in the literature. All of these values can also be useful to plant operators and maintenance personnel. Designers may be able to make use of these data when planning systems. The final chapter in this report discusses failure rates for several types of personnel safety sensors, including ionizing radiation monitors, toxic and combustible gas detectors, humidity sensors, and magnetic field sensors. These data could be useful to industrial hygienists and other safety professionals when designing or auditing for personnel safety

  6. Alternative ceramic circuit constructions for low cost, high reliability applications

    International Nuclear Information System (INIS)

    Modes, Ch.; O'Neil, M.

    1997-01-01

    The growth in the use of hybrid circuit technology has been challenged by recent advances in low-cost laminate technology, as well as the continued integration of functions into ICs. Size reduction of hybrid 'packages' has turned out to be a means to extend the useful life of this technology. The suppliers of thick film materials technology have responded to this challenge by developing a number of technology options to reduce circuit size, increase density, and reduce overall cost, while maintaining or increasing reliability. This paper provides an overview of the processes that have been developed and, in many cases, are widely used to produce low-cost, reliable microcircuits. Comparisons of these circuit fabrication processes are made, with a discussion of the advantages and disadvantages of each technology. (author)

  7. Guide for generic application of Reliability Centered Maintenance (RCM) recommendations

    International Nuclear Information System (INIS)

    Schwan, C.A.; Toomey, G.E.; Morgan, T.A.; Darling, S.S.

    1991-02-01

    Previously completed reliability centered maintenance (RCM) studies form the basis for developing or refining a preventive maintenance program. This report describes a generic methodology that will help utilities optimize nuclear plant maintenance programs using RCM techniques. This guide addresses the following areas: history of the generic methodology development process, and use of the generic methodology for conducting system-to-system and component-to-component evaluations. 2 refs., 2 figs., 5 tabs

  8. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing can be achieved without resorting to the typical trial-and-error approach.
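
    A minimal sketch of the underlying mechanism, with a hypothetical 2-out-of-3 system rather than the paper's benchmark: component states are sampled from biased failure probabilities and each sample is reweighted by the likelihood ratio. The per-component choice of the biased probabilities is exactly what the variance-minimization methodology above seeks to optimize; here they are simply fixed.

```python
import numpy as np

rng = np.random.default_rng(4)

p = np.array([1e-3, 2e-3, 1.5e-3])   # true component failure probabilities
q = np.array([0.3, 0.3, 0.3])        # biased sampling probabilities
n = 200_000

def system_fails(x):
    """Hypothetical structure: the system fails if at least 2 of 3 components fail."""
    return x.sum(axis=1) >= 2

x = (rng.random((n, 3)) < q).astype(float)
# likelihood ratio of the true measure w.r.t. the biased one, per sample
w = np.prod(np.where(x == 1, p / q, (1 - p) / (1 - q)), axis=1)
estimate = np.mean(system_fails(x) * w)

# exact value for independent components, for comparison
p0, p1, p2 = p
exact = p0 * p1 + p0 * p2 + p1 * p2 - 2.0 * p0 * p1 * p2
print(f"IS estimate {estimate:.3e}   exact {exact:.3e}")
```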

  9. Innovation and reliability of atomic standards for PTTI applications

    Science.gov (United States)

    Kern, R.

    1981-01-01

    Innovation and reliability in hyperfine frequency standards and clock systems are discussed. Hyperfine standards are defined as those precision frequency sources and clocks which use a hyperfine atomic transition for frequency control and which have realized significant commercial production and acceptance (cesium, hydrogen, and rubidium atoms). References to other systems such as thallium and ammonia are excluded since these atomic standards have not been commercially exploited in this country.

  10. Imaging a Time-variant Earthquake Focal Region along an Interplate Boundary

    Science.gov (United States)

    Tsuruga, K.; Kasahara, J.; Hasada, Y.; Fujii, N.

    2010-12-01

    We show a preliminary result of a trial for detecting a time-variant earthquake focal region along an interplate boundary by means of a new imaging method, through a numerical simulation. Remarkable seismic reflections from the interplate boundaries of a subducting oceanic plate have been observed in the Japan Trench (Mochizuki et al., 2005) and in the Nankai Trough (Iidaka et al., 2003). These strong seismic reflections, existing in currently aseismic zones, suggest the existence of fluid along the subduction boundary, and it is considered that they are closely related to a future huge earthquake. Seismic ACROSS has the potential to monitor changes of the transfer function along propagating ray paths by repeatedly using accurately controlled transmission and reception of steady continuous signals (Kumazawa et al., 2000). If the physical state of a focal region along the interplate boundary changed enough in time and space, for instance by increasing or decreasing fluid flow, we could detect differences in the amplitude and/or travel time of particular phases reflected from the time-variant target region. In this study, we first investigated the characteristics of the seismograms and their differences before and after the change of a target region through a numerical simulation. Then, as one of the trials, we attempted to image such a time-variant target region by applying a finite-difference back-propagation technique in time and space to the differences of the waveforms (after Kasahara et al., 2010). We here used a 2-D seismic velocity model of central Japan (Tsuruga et al., 2005), assuming a time-variant target region with a 200-m thickness along the subducting Philippine Sea plate at 30 km depth. Seismograms were calculated at a 500-m interval over 260 km by using FDM software (Larsen, 2000), in the case that P- and S-wave velocities (Vp and Vs) in the target region decreased by about 30% before to after the change (e.g., Vp=3

  11. The establish and application of equipment reliability database in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Zheng Wei; Li He

    2006-03-01

    Taking the case of the Daya Bay Nuclear Power Plant, the collection and handling of equipment reliability data, the calculation method for reliability parameters, and the establishment and application of reliability databases are discussed. The data sources involve equipment design information, operation information, maintenance information and periodic test records. The equipment reliability database is built on the basis of operating experience. It provides a valid tool for thoroughly and objectively recording the operating history and present condition of the various equipment of the plant; by supervising the performance of the equipment, especially safety-related equipment, it provides practical and valuable information for enhancing the safety and availability management of the equipment and ensuring the safe and economic operation of the plant; and it provides essential data for research and applications in safety management, reliability analysis, probabilistic safety assessment, reliability-centered maintenance and economic management in the nuclear power plant. (authors)

  12. Improved FTA methodology and application to subsea pipeline reliability design.

    Science.gov (United States)

    Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan

    2014-01-01

    An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form.
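
    For contrast with the proposed FET, the conventional quantitative FTA step that it aims to simplify can be sketched as follows: with independent basic events, OR gates combine as 1 - prod(1 - p_i) and AND gates as prod(p_i). The small tree and the probabilities below are hypothetical, not taken from the subsea pipeline case study.

```python
from functools import reduce

def or_gate(*probs):
    """Event occurs if any input event occurs (independent basic events)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def and_gate(*probs):
    """Event occurs only if all input events occur (independent basic events)."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical subsea-pipeline basic events (annual probabilities)
corrosion_leak = and_gate(0.05, 0.20)       # coating breach AND cathodic-protection failure
third_party_damage = 0.01
overpressure = and_gate(0.02, 0.10)         # pressure surge AND relief valve failure

top_event = or_gate(corrosion_leak, third_party_damage, overpressure)
print(f"P(loss of containment) = {top_event:.4f} per year")
```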

  13. Reliability of dc power supplies in nuclear power plant application

    International Nuclear Information System (INIS)

    Eisenhut, D.G.

    1978-01-01

    In June 1977 the reliability of dc power supplies at nuclear power facilities was questioned. It was postulated that a sudden gross failure of the redundant dc power supplies might occur during normal plant operation, and that this could lead to insufficient shutdown cooling of the reactor core. It was further suggested that this potential for insufficient cooling is great enough to warrant consideration of prompt remedies. The work described herein was part of the NRC staff's efforts aimed towards putting the performance of dc power supplies in proper perspective and was mainly directed towards the particular concern raised at that time. While the staff did not attempt to perform a systematic study of overall dc power supply reliability including all possible failure modes for such supplies, the work summarized herein describes how a probabilistic approach was used to supplement our more usual deterministic approach to reactor safety. Our evaluation concluded that the likelihood of dc power supply failures leading to insufficient shutdown cooling of the reactor core is sufficiently small as to not require any immediate action

  14. Design-reliability assurance program application to ACP600

    International Nuclear Information System (INIS)

    Zhichao, Huang; Bo, Zhao

    2012-01-01

    ACP600 is a new nuclear power plant technology developed by CNNC in China, based on Generation III NPP design experience and general safety goals. The ACP600 Design Reliability Assurance Program (D-RAP) is implemented as an integral part of the ACP600 design process. A RAP is a formal management system which assures the collection of important characteristic information about plant performance throughout each phase of its life and directs the use of this information in the implementation of analytical and management processes which are specifically designed to meet two objectives: confirming the plant goals and identifying cost-effective improvements. In general, a typical reliability assurance program has four broad functional elements: 1) goals and performance criteria; 2) management system and implementing procedures; 3) analytical tools and investigative methods; and 4) information management. In this paper we use the D-RAP technical and risk-informed requirements and establish the RAM and PSA models to optimize the ACP600 design. Compared with previous design processes, D-RAP is better suited to the higher design targets and requirements, allowing more creativity through an easier implementation of technical breakthroughs. By using D-RAP, the plant goals, system goals, performance criteria and safety criteria are easier to realize, and the design can be optimized and made more rational.

  15. Capacity and reliability analyses with applications to power quality

    Science.gov (United States)

    Azam, Mohammad; Tu, Fang; Shlapak, Yuri; Kirubarajan, Thiagalingam; Pattipati, Krishna R.; Karanam, Rajaiah

    2001-07-01

    The deregulation of energy markets, the ongoing advances in communication networks, the proliferation of intelligent metering and protective power devices, and the standardization of software/hardware interfaces are creating a dramatic shift in the way facilities acquire and utilize information about their power usage. The currently available power management systems gather a vast amount of information in the form of power usage, voltages, currents, and their time-dependent waveforms from a variety of devices (for example, circuit breakers, transformers, energy and power quality meters, protective relays, programmable logic controllers, motor control centers). What is lacking is an information processing and decision support infrastructure to harness this voluminous information into usable operational and management knowledge to manage the health of the equipment and power quality, minimize downtime and outages, and optimize operations to improve productivity. This paper considers the problem of capacity and reliability analysis of power systems with very high availability requirements (e.g., systems providing energy to data centers and communication networks with desired availability of up to 0.9999999). The real-time capacity and margin analysis helps operators to plan for additional loads and to schedule repair/replacement activities. The reliability analysis, based on a computationally efficient sum of disjoint products, enables analysts to decide the optimum levels of redundancy, and aids operators in prioritizing the maintenance options for a given budget and in monitoring the system for capacity margin. The resulting analytical and software tool is demonstrated on a sample data center.

  16. Applicability of simplified human reliability analysis methods for severe accidents

    Energy Technology Data Exchange (ETDEWEB)

    Boring, R.; St Germain, S. [Idaho National Lab., Idaho Falls, Idaho (United States); Banaseanu, G.; Chatri, H.; Akl, Y. [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)

    2016-03-15

    Most contemporary human reliability analysis (HRA) methods were created to analyse design-basis accidents at nuclear power plants. As part of a comprehensive expansion of risk assessments at many plants internationally, HRAs will begin considering severe accident scenarios. Severe accidents, while extremely rare, constitute high consequence events that significantly challenge successful operations and recovery. Challenges during severe accidents include degraded and hazardous operating conditions at the plant, the shift in control from the main control room to the technical support center, the unavailability of plant instrumentation, and the need to use different types of operating procedures. Such shifts in operations may also test key assumptions in existing HRA methods. This paper discusses key differences between design basis and severe accidents, reviews efforts to date to create customized HRA methods suitable for severe accidents, and recommends practices for adapting existing HRA methods that are already being used for HRAs at the plants. (author)

  17. Reliability Investigation of GaN HEMTs for MMICs Applications

    Directory of Open Access Journals (Sweden)

    Alessandro Chini

    2014-08-01

    Results obtained during the evaluation of radio frequency (RF) reliability carried out on several devices fabricated with different epi-structures and field-plate geometries are presented and discussed. Devices without a field-plate structure experienced more severe degradation when compared to their counterparts, while no significant correlation has been observed with respect to the different epi-structures tested. RF stress induced two main changes in the device electrical characteristics, i.e., an increase in drain current dispersion and a reduction in gate-leakage currents. Both of these phenomena can be explained by assuming a density increase of an acceptor trap located beneath the gate contact and in the device barrier layer. Numerical simulations carried out with the aim of supporting the proposed mechanism are also presented.

  18. An Application of Graph Theory in Markov Chains Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Pavel Skalny

    2014-01-01

    The paper presents a reliability analysis which was realized for an industrial company. The aim of the paper is to present the usage of discrete-time Markov chains and the flow-in-network approach. Discrete Markov chains, a well-known method of stochastic modelling, describe the issue. The method is suitable for many systems occurring in practice where we can easily distinguish a set of states. Markov chains are used to describe transitions between the states of the process. The industrial process is described as a graph network. The maximal flow in the network corresponds to the production. The Ford-Fulkerson algorithm is used to quantify the production for each state. The combination of both methods is utilized to quantify the expected value of the amount of manufactured products for the given time period.
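
    The combination described above can be sketched as follows (using networkx, an assumption, since the paper does not name a library): each Markov state switches a machine on or off, an Edmonds-Karp maximum flow (a Ford-Fulkerson implementation) gives the throughput of that configuration, and the stationary distribution weights the per-state throughputs into an expected production. The two-stage line and transition probabilities are hypothetical.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

def build_line(m2_up):
    """Hypothetical two-stage production line; machine M2 may be down."""
    g = nx.DiGraph()
    g.add_edge("src", "M1", capacity=10.0)
    g.add_edge("M1", "M2", capacity=8.0 if m2_up else 0.0)
    g.add_edge("M2", "sink", capacity=9.0)
    return g

# Two-state Markov chain for machine M2: state 0 = up, state 1 = down
P = np.array([[0.95, 0.05],
              [0.60, 0.40]])

# stationary distribution: solve pi (P - I) = 0 with sum(pi) = 1
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

throughput = []
for state, up in enumerate([True, False]):
    flow_value, _ = nx.maximum_flow(build_line(up), "src", "sink",
                                    flow_func=edmonds_karp)
    throughput.append(flow_value)
    print(f"state {state}: maximum flow = {flow_value}")

print(f"expected production per period = {float(np.dot(pi, throughput)):.2f}")
```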

  19. Application of system reliability analytical method, GO-FLOW

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Matsukura, Hiroshi; Kobayashi, Michiyuki

    1999-01-01

    The Ship Research Institute has proceeded with a developmental study of the GO-FLOW method, adding various advanced functionalities to this system reliability analysis method, which occupies a main part of PSA (Probabilistic Safety Assessment). The aims were to upgrade the functionality of the GO-FLOW method, to develop an analysis capability integrating dynamic behavior analysis, physical behavior and probabilistic state transitions, and to prepare a function for extracting the main accident sequences. In fiscal year 1997, an analysis function was developed for the dynamic event-tree analysis system by adding dependency between headings. In the accident-sequence simulation analysis function, it became possible to cover completely the main accident sequences of the MRX improved marine propulsion reactor. In addition, a function was prepared that allows the input data for analysis to be set up easily by an analysis operator. (G.K.)

  20. Application of artificial intelligence techniques to reliability data banks

    International Nuclear Information System (INIS)

    Carlesso, S.; Barbas, T.; Capobianchi, S.; Koletsos, A.; Mancini, G.

    1987-01-01

    This paper refers to ERDS (European Reliability Data System), which contains data on the operational behaviour of nuclear power reactors in Europe and in the USA. Information on outages, incidents and component failures is organized in database structures and handled with the ADABAS Data Base Management System; the system has been built up over the last six years at the JRC of the Commission of the European Communities and offers a good example of a complex technical data bank. The effective use of ERDS is difficult and requires specific, skilled experience. A feasibility study and a preliminary design have been carried out concerning the development of an expert interface to ERDS. This paper illustrates the main results of this work, focusing on the types of problems involved in the design of an expert interface to a technical data bank and on the solutions proposed. The implementation of the expert interface to ERDS is presently in progress. (orig./HSCH)

  1. reliability reliability

    African Journals Online (AJOL)

    eobe


  2. Application of reliability worth concepts in power system operational planning

    Energy Technology Data Exchange (ETDEWEB)

    Mello, J C.O. [Centro de Pesquisas de Energia Eletrica (CEPEL), Rio de Janeiro, RJ (Brazil); Silva, A.M. Leite da [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil); Pereira, M V.F. [Power System Research (PSR), Inc., Rio de Janeiro, RJ (Brazil)

    1994-12-31

    This work describes the application of a new methodology for calculating total system interruption costs in power system operational planning. Some important operational aspects are discussed: chronological load curves, customer damage functions for each consumer class, maintenance scheduling and non-exponential repair times. The calculation of the probability distribution of the system interruption cost is also presented, to improve the decision-making process associated with alternative operational strategies. The Brazilian Southeastern system is used to illustrate all the previous applications. (author) 24 refs., 8 figs., 4 tabs.

  3. Development of RBDGG Solver and Its Application to System Reliability Analysis

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2010-01-01

    For the purpose of making system reliability analysis easier and more intuitive, the RBDGG (Reliability Block Diagram with General Gates) methodology was introduced as an extension of the conventional reliability block diagram. The advantage of the RBDGG methodology is that the structure of an RBDGG model is very similar to the actual structure of the analyzed system, and therefore the modeling of a system for system reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that behind the development of the RGGG (Reliability Graph with General Gates) methodology, which is an extension of the conventional reliability graph. The newly proposed methodology is now implemented in a software tool, RBDGG Solver. RBDGG Solver was developed as a WIN32 console application. RBDGG Solver receives information on the failure modes and failure probabilities of each component in the system, along with the connection structure and connection logic among the components in the system. Based on the received information, RBDGG Solver automatically generates a system reliability analysis model for the system and then provides the analysis results. In this paper, the application of RBDGG Solver to the reliability analysis of an example system, and verification of the calculation results, are provided for the purpose of demonstrating how RBDGG Solver is used for system reliability analysis
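
    As a rough illustration of the kind of evaluation such a tool performs (a generic Python sketch, not the RBDGG Solver or its input format), the snippet below computes the reliability of a block diagram built from independent components combined by series, parallel and k-out-of-n connections; the example system and its numbers are hypothetical.

      from math import comb

      def series(*rs):
          # all blocks must work
          p = 1.0
          for r in rs:
              p *= r
          return p

      def parallel(*rs):
          # at least one block must work
          q = 1.0
          for r in rs:
              q *= (1.0 - r)
          return 1.0 - q

      def k_out_of_n(k, r, n):
          # at least k of n identical blocks with reliability r must work
          return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

      # hypothetical system: two redundant pumps, one valve, a 2-out-of-3 sensor set
      r_system = series(parallel(0.95, 0.95), 0.99, k_out_of_n(2, 0.90, 3))
      print(f"system reliability = {r_system:.4f}")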

  4. [Reliability of iWitness photogrammetry in maxillofacial application].

    Science.gov (United States)

    Jiang, Chengcheng; Song, Qinggao; He, Wei; Chen, Shang; Hong, Tao

    2015-06-01

    This study aims to test the accuracy and precision of iWitness photogrammetry for measuring the facial tissues of a mannequin head. Under ideal circumstances, 3D landmark coordinates were repeatedly obtained from a mannequin head using the iWitness photogrammetric system with different parameters, to examine the precision of this system. The differences between the 3D data and the true distance values of the mannequin head were computed. Operator errors of the 3D system in non-zoom and zoom modes were 0.20 mm and 0.09 mm, respectively, and the difference was significant (P<0.05). The image capture error of the 3D system was 0.283 mm, with no significant difference compared with the same group of images (P>0.05). The error of the 3D system with recalibration was 0.251 mm, and the difference was not statistically significant compared with the image capture error (P>0.05). Good congruence was observed between means derived from the 3D photos and direct anthropometry, with differences ranging from -0.4 mm to +0.4 mm. This study provides further evidence of the high reliability of iWitness photogrammetry for several craniofacial measurements, including landmarks and inter-landmark distances. The evaluated system can be recommended for the evaluation and documentation of the facial surface.

  5. Application of analytical procedure on system reliability, GO-FLOW

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Matsukura, Hiroshi; Kobayashi, Michiyuki

    2000-01-01

    At the Ship Research Institute, research and development of the GO-FLOW procedure, a system reliability analysis method with various advanced functions that occupies a central part of probabilistic safety assessment (PSA), was promoted. In this study, intended as a fundamental upgrade of the GO-FLOW procedure and as an important evaluation technique for carrying out PSA below level 3, a safety assessment system using GO-FLOW was developed, together with an analysis capability coupling dynamic behaviour analysis and the physical behaviour of the system with stochastic phenomenon changes. In fiscal year 1998, various functions were prepared and verified, such as adding dependencies between headings, rearranging events in time order, placing the same heading at plural positions, and calculating occurrence frequencies as a function of elapsed time. For the accident-sequence simulation function, it was confirmed that the analysis covers all the main accident sequences of the improved marine reactor MRX. In addition, a function that produces the input data for analysis nearly automatically was also prepared. As a result, the previous situation, in which the analysis results were not always easy to understand except for PSA experts, was resolved, and understanding of the accident phenomena, verification of the validity of the analysis, and feedback to the analysis and to the design can now be carried out easily. (G.K.)

  6. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    Science.gov (United States)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
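
    A common way to encode such a prior (a Python sketch of general practice, not the specific heuristics of this presentation) is a lognormal distribution whose median comes from the generic database and whose error factor is widened when the data source is judged less applicable; the numbers below are illustrative.

      import numpy as np

      def lognormal_prior(median, error_factor):
          # error factor EF = 95th percentile / median = exp(1.645 * sigma)
          mu = np.log(median)
          sigma = np.log(error_factor) / 1.645
          return mu, sigma

      generic_median = 1.0e-5            # failures per hour, illustrative generic estimate
      for ef in (3.0, 10.0):             # tighter vs. wider applicability judgement
          mu, sigma = lognormal_prior(generic_median, ef)
          mean = np.exp(mu + 0.5 * sigma**2)
          p95 = np.exp(mu + 1.645 * sigma)
          print(f"EF={ef:4.1f}: prior mean = {mean:.2e}/h, 95th percentile = {p95:.2e}/h")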

  7. Application of response surfaces for reliability analysis of marine structures

    International Nuclear Information System (INIS)

    Leira, Bernt J.; Holmas, Tore; Herfjord, Kjell

    2005-01-01

    Marine structures subjected to multiple environmental loads (i.e. waves, current, wind) are considered. These loads are characterized by a set of corresponding parameters. The structural fatigue damage and long-term response are expressed in terms of these environmental parameters based on the application of polynomial response surfaces. For both types of analysis, an integration across the range of variation of all the environmental parameters is required. The location of the intervals which give rise to the dominant contribution to these integrals depends on the relative magnitude of the coefficients defining the polynomials. The degree of numerical subdivision required to obtain accurate results is also of interest. These issues are studied in a non-dimensional form. The loss of accuracy which results when applying response surfaces of too low an order is also investigated. Response surfaces with cut-off limits at specific lower-bound values for the environmental parameters are further investigated. Having obtained general expressions in non-dimensional form, examples which correspond to specific response quantities for marine structures are considered. Typical values for the polynomial coefficients, and for the statistical distributions representing the environmental parameters, are applied. Convergence studies are subsequently performed for the particular example response quantities in order to make a comparison with the general formulation. For the extreme response, the application of 'extreme contours' obtained from the statistical distributions of the environmental parameters is explored
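
    The basic mechanics of the approach can be sketched in Python as follows (illustrative damage model, coefficients and Weibull distribution of significant wave height, not the paper's data): a polynomial response surface is fitted to a few evaluations of the damage rate and then integrated over the long-term distribution of the environmental parameter, showing how the result depends on the polynomial order.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      hs_grid = np.linspace(0.5, 12.0, 25)                  # significant wave height Hs (m)
      # hypothetical "exact" damage rate evaluated at a few points, with small noise
      damage_exact = 1e-6 * hs_grid**3.2 * (1 + 0.02 * rng.standard_normal(hs_grid.size))

      hs_dist = stats.weibull_min(c=1.6, scale=2.5)         # assumed long-term Hs distribution
      hs_fine = np.linspace(0.5, 12.0, 2000)
      dx = hs_fine[1] - hs_fine[0]

      for order in (2, 3, 5):
          coeffs = np.polyfit(hs_grid, damage_exact, order)  # response surface of given order
          surface = np.polyval(coeffs, hs_fine)
          # long-term expected damage rate = integral of surface * pdf(Hs)
          expected = float(np.sum(surface * hs_dist.pdf(hs_fine)) * dx)
          print(f"order {order}: expected damage rate = {expected:.3e}")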

  8. Non-linear time variant model intended for polypyrrole-based actuators

    Science.gov (United States)

    Farajollahi, Meisam; Madden, John D. W.; Sassani, Farrokh

    2014-03-01

    Polypyrrole-based actuators are of interest due to their biocompatibility, low operation voltage and relatively high strain and force. Modeling and simulation are very important to predict the behaviour of each actuator. To develop an accurate model, we need to know the electro-chemo-mechanical specifications of polypyrrole. In this paper, a non-linear time-variant model of a polypyrrole film is derived and proposed using a combination of an RC transmission line model and a state space representation. The model incorporates the potential-dependent ionic conductivity. A function of the ionic conductivity of polypyrrole vs. local charge is proposed and implemented in the non-linear model. Matching of the measured and simulated electrical responses suggests that the ionic conductivity of polypyrrole decreases significantly at negative potential vs. silver/silver chloride and leads to reduced current in the cyclic voltammetry (CV) tests. The next stage is to relate the distributed charging of the polymer to actuation via the strain-to-charge ratio. Further work is also needed to identify ionic and electronic conductivities as well as capacitance as a function of oxidation state so that a fully predictive model can be created.

  9. Adaptive time-variant models for fuzzy-time-series forecasting.

    Science.gov (United States)

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  10. Time-variant partial directed coherence in analysis of the cardiovascular system. A methodological study

    International Nuclear Information System (INIS)

    Milde, T; Schwab, K; Walther, M; Eiselt, M; Witte, H; Schelenz, C; Voss, A

    2011-01-01

    Time-variant partial directed coherence (tvPDC) is used for the first time in a multivariate analysis of heart rate variability (HRV), respiratory movements (RMs) and (systolic) arterial blood pressure. It is shown that respiration-related HRV components which also occur at other frequencies besides the RM frequency (= respiratory sinus arrhythmia, RSA) can be identified. These additional components are known to be an effect of the 'half-the-mean-heart-rate-dilemma' ('cardiac aliasing' CA). These CA components may contaminate the entire frequency range of HRV and can lead to misinterpretation of the RSA analysis. TvPDC analysis of simulated and clinical data (full-term neonates and sedated patients) reveals these contamination effects and, in addition, the respiration-related CA components can be separated from the RSA component and the Traube–Hering–Mayer wave. It can be concluded that tvPDC can be beneficially applied to avoid misinterpretations in HRV analyses as well as to quantify partial correlative interaction properties between RM and RSA
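
    For orientation only, the stationary core of the partial directed coherence computation can be sketched in Python as below (illustrative VAR coefficients, not the clinical models of the paper); the time-variant version would re-estimate the coefficient matrices adaptively, e.g. with a recursive or Kalman-type scheme, and repeat this computation at each time step.

      import numpy as np

      def pdc(A, freqs):
          # A: (p, n, n) VAR coefficient matrices A_1..A_p
          # returns |PDC| with entry [f, i, j] = directed influence of channel j on channel i
          p, n, _ = A.shape
          out = np.empty((len(freqs), n, n))
          for fi, f in enumerate(freqs):
              Abar = np.eye(n, dtype=complex)
              for k in range(p):
                  Abar -= A[k] * np.exp(-2j * np.pi * f * (k + 1))
              col_norm = np.sqrt((np.abs(Abar) ** 2).sum(axis=0))
              out[fi] = np.abs(Abar) / col_norm
          return out

      # illustrative 2-channel VAR(1): channel 0 drives channel 1, not vice versa
      A = np.array([[[0.5, 0.0],
                     [0.4, 0.3]]])
      print(np.round(pdc(A, np.linspace(0.0, 0.5, 6))[2], 3))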

  11. A Multistage Decision-Feedback Receiver Design for LTE Uplink in Mobile Time-Variant Environments

    Directory of Open Access Journals (Sweden)

    Juinn-Horng Deng

    2012-01-01

    Single-carrier frequency division multiple access (SC-FDMA) has recently become the preferred uplink transmission scheme in long-term evolution (LTE) systems. Similar to orthogonal frequency division multiple access (OFDMA), SC-FDMA is highly sensitive to frequency offsets caused by oscillator inaccuracies and Doppler spread, which lead to intercarrier interference (ICI). This work proposes a multistage decision-feedback structure to mitigate the ICI effect and enhance system performance in time-variant environments. Based on the block-type pilot arrangement of the LTE uplink type 1 frame structure, the time-domain least squares (TDLS) method and a polynomial-based curve-fitting algorithm are employed for channel estimation. Instead of using a conventional equalizer, this work uses a group frequency-domain equalizer (GFDE) to reduce computational complexity. Furthermore, this work utilizes a dual iterative structure of group parallel interference cancellation (GPIC) and frequency-domain group parallel interference cancellation (FPIC) to mitigate the ICI effect. Finally, to optimize system performance, this work applies a novel error-correction scheme. Simulation results demonstrate that the bit error rate (BER) performance is markedly superior to that of the conventional full-size receiver based on minimum mean square error (MMSE). This structure performs well and is a flexible choice in mobile environments using the SC-FDMA scheme.

  12. Reliability in mechanics: the application of experience feedback

    International Nuclear Information System (INIS)

    Coudray, R.

    1994-01-01

    After a short overview of the available methods for statistical multi-dimensional studies, an application of these methods is described using the experience feedback of French nuclear reactors. The equipment studied is the RCV (chemical and volumetric control system) pump of the 900 MW PWR type reactors, for which the data used in the study are explained. The aim of the study is to show the pertinence of the failure rate as an indicator of equipment aging. This aging is illustrated by the most significant characteristics with an indication of their significance level. The method used combines the results from a mixed classification with those from a multiple correspondence analysis in several steps or evolutions. (J.S.). 8 refs., 6 figs., 3 tabs

  13. Materials and processes for spacecraft and high reliability applications

    CERN Document Server

    D Dunn, Barrie

    2016-01-01

    The objective of this book is to assist scientists and engineers select the ideal material or manufacturing process for particular applications; these could cover a wide range of fields, from light-weight structures to electronic hardware. The book will help in problem solving as it also presents more than 100 case studies and failure investigations from the space sector that can, by analogy, be applied to other industries. Difficult-to-find material data is included for reference. The sciences of metallic (primarily) and organic materials presented throughout the book demonstrate how they can be applied as an integral part of spacecraft product assurance schemes, which involve quality, material and processes evaluations, and the selection of mechanical and component parts. In this successor edition, which has been revised and updated, engineering problems associated with critical spacecraft hardware and the space environment are highlighted by over 500 illustrations including micrographs and fractographs. Sp...

  14. Selecting reliable and robust freshwater macroalgae for biomass applications.

    Directory of Open Access Journals (Sweden)

    Rebecca J Lawton

    Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash-free dry weight m⁻² day⁻¹), lowest ash content (3-8%), lowest water content (fresh weight:dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value of 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO₂ across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with

  15. Rater reliability and construct validity of a mobile application for posture analysis.

    Science.gov (United States)

    Szucs, Kimberly A; Brown, Elena V Donoso

    2018-01-01

    [Purpose] Measurement of posture is important for those with a clinical diagnosis as well as researchers aiming to understand the impact of faulty postures on the development of musculoskeletal disorders. A reliable, cost-effective and low tech posture measure may be beneficial for research and clinical applications. The purpose of this study was to determine rater reliability and construct validity of a posture screening mobile application in healthy young adults. [Subjects and Methods] Pictures of subjects were taken in three standing positions. Two raters independently digitized the static standing posture image twice. The app calculated posture variables, including sagittal and coronal plane translations and angulations. Intra- and inter-rater reliability were calculated using the appropriate ICC models for complete agreement. Construct validity was determined through comparison of known groups using repeated measures ANOVA. [Results] Intra-rater reliability ranged from 0.71 to 0.99. Inter-rater reliability was good to excellent for all translations. ICCs were stronger for translations versus angulations. The construct validity analysis found that the app was able to detect the change in the four variables selected. [Conclusion] The posture mobile application has demonstrated strong rater reliability and preliminary evidence of construct validity. This application may have utility in clinical and research settings.

  16. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    Science.gov (United States)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of applicability for the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  17. A time-variant analysis of the 1/f^(2) phase noise in CMOS parallel LC-Tank quadrature oscillators

    DEFF Research Database (Denmark)

    Andreani, Pietro

    2006-01-01

    This paper presents a study of 1/f² phase noise in quadrature oscillators built by connecting two differential LC-tank oscillators in a parallel fashion. The analysis clearly demonstrates the necessity of adopting a time-variant theory of phase noise, where a more simplistic, time...

  18. Addressing the problem of the relevance of reliability data to varied applications

    International Nuclear Information System (INIS)

    McIntyre, P.J.; Gibson, I.K.

    1989-01-01

    Reliability data is collected for many reasons on a wide range of components and applications. Sometimes data is collected for a specific purpose whilst in other situations data may be collected simply to provide an available pool of historical data. Data can also be extracted from information that was gathered without recognition that it could be adapted for use as reliability data at a later stage. It is not surprising that there should be significant differences in the strengths and weaknesses of data obtained in such different circumstances. This paper describes work undertaken to investigate how to make best use of available data to provide specific and reliable predictions of valve reliability for nuclear power station applications. (orig.)

  19. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.
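
    The Fussell-Vesely measure used in this validation step can be computed, under the usual rare-event approximation over minimal cut sets, as in the short Python sketch below; the cut sets and basic-event probabilities are hypothetical, not plant data.

      def cutset_prob(cutset, p):
          prob = 1.0
          for comp in cutset:
              prob *= p[comp]
          return prob

      def fussell_vesely(cutsets, p):
          # FV(c) = sum of cut-set probabilities containing c / total top-event probability
          top = sum(cutset_prob(cs, p) for cs in cutsets)
          comps = {c for cs in cutsets for c in cs}
          return {c: sum(cutset_prob(cs, p) for cs in cutsets if c in cs) / top
                  for c in comps}

      cutsets = [{"pump_A", "pump_B"}, {"valve"}, {"pump_A", "power"}]   # hypothetical
      p = {"pump_A": 1e-2, "pump_B": 2e-2, "valve": 1e-4, "power": 5e-3}
      for comp, fv in sorted(fussell_vesely(cutsets, p).items(), key=lambda kv: -kv[1]):
          print(f"{comp:8s} FV = {fv:.3f}")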

  20. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  1. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  2. Risk and reliability analysis theory and applications : in honor of Prof. Armen Der Kiureghian

    CERN Document Server

    2017-01-01

    This book presents a unique collection of contributions from some of the foremost scholars in the field of risk and reliability analysis. Combining the most advanced analysis techniques with practical applications, it is one of the most comprehensive and up-to-date books available on risk-based engineering. All the fundamental concepts needed to conduct risk and reliability assessments are covered in detail, providing readers with a sound understanding of the field and making the book a powerful tool for students and researchers alike. This book was prepared in honor of Professor Armen Der Kiureghian, one of the fathers of modern risk and reliability analysis.

  3. Incorporating temporal variation in seabird telemetry data: time variant kernel density models

    Science.gov (United States)

    Gilbert, Andrew; Adams, Evan M.; Anderson, Carl; Berlin, Alicia; Bowman, Timothy D.; Connelly, Emily; Gilliland, Scott; Gray, Carrie E.; Lepage, Christine; Meattey, Dustin; Montevecchi, William; Osenkowski, Jason; Savoy, Lucas; Stenhouse, Iain; Williams, Kathryn

    2015-01-01

    A key component of the Mid-Atlantic Baseline Studies project was tracking the individual movements of focal marine bird species (Red-throated Loon [Gavia stellata], Northern Gannet [Morus bassanus], and Surf Scoter [Melanitta perspicillata]) through the use of satellite telemetry. This element of the project was a collaborative effort with the Department of Energy (DOE), the Bureau of Ocean Energy Management (BOEM), the U.S. Fish and Wildlife Service (USFWS), and the Sea Duck Joint Venture (SDJV), among other organizations. Satellite telemetry is an effective and informative tool for understanding individual animal movement patterns, allowing researchers to mark an individual once and thereafter follow the movements of the animal in space and time. Aggregating telemetry data from multiple individuals can provide information about the spatial use and temporal movements of populations. Tracking data is three-dimensional, with the first two dimensions, X and Y, ordered along the third dimension, time. GIS software has many capabilities to store, analyze and visualize the location information, but little or no support for visualizing the temporal data, and tools for processing temporal data are lacking. We explored several ways of analyzing the movement patterns using the spatiotemporal data provided by satellite tags. Here, we present the results of one promising method: time-variant kernel density analysis (Keating and Cherry, 2009). The goal of this chapter is to demonstrate new methods in spatial analysis to visualize and interpret tracking data for a large number of individual birds across time in the mid-Atlantic study area and beyond. In this chapter, we placed greater emphasis on analytical methods than on the behavior and ecology of the animals tracked. For more detailed examinations of the ecology and wintering habitat use of the focal species in the mid-Atlantic, see Chapters 20-22.
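
    The essence of the time-variant kernel density idea can be sketched in Python as follows (a simplified Gaussian-kernel version with illustrative bandwidths and a synthetic track, not the exact estimator of Keating and Cherry, 2009): the density surface at a query time is a spatial kernel density of all fixes, weighted by a temporal kernel so that locations recorded near that time dominate.

      import numpy as np

      def tv_kde(xy, t, grid_xy, t0, h_space=10.0, h_time=7.0):
          # xy: (n, 2) locations, t: (n,) observation times, grid_xy: (m, 2) query points
          w = np.exp(-0.5 * ((t - t0) / h_time) ** 2)            # temporal weights
          w /= w.sum()
          d2 = ((grid_xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=2)
          k = np.exp(-0.5 * d2 / h_space**2) / (2 * np.pi * h_space**2)
          return k @ w                                            # weighted density surface

      # illustrative synthetic track drifting south over 60 days
      rng = np.random.default_rng(1)
      t = np.arange(60.0)
      xy = np.column_stack([rng.normal(0, 5, 60), 100 - 1.5 * t + rng.normal(0, 5, 60)])
      grid = np.column_stack([np.zeros(5), np.linspace(0, 100, 5)])
      print(np.round(tv_kde(xy, t, grid, t0=10.0), 5))
      print(np.round(tv_kde(xy, t, grid, t0=50.0), 5))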

  4. A Fast Optimization Method for Reliability and Performance of Cloud Services Composition Application

    Directory of Open Access Journals (Sweden)

    Zhao Wu

    2013-01-01

    At present, cloud computing is one of the newest trends in distributed computation and is propelling another important revolution in the software industry. Cloud services composition is one of the key techniques in software development. The optimization of the reliability and performance of cloud services composition applications, which is a typical stochastic optimization problem, is confronted with severe challenges due to its randomness and long transactions, as well as the characteristics of cloud computing resources such as openness and dynamics. The traditional reliability and performance optimization techniques, for example, Markov models and state-space analysis, have some defects: they are time-consuming, prone to state-space explosion, and rely on the assumption of component execution independence. To overcome these defects, we propose a fast optimization method for the reliability and performance of cloud services composition applications based on the universal generating function and a genetic algorithm. First, a reliability and performance model for cloud services composition applications based on multi-state system theory is presented. Then the reliability and performance definitions based on the universal generating function are proposed. Based on this, a fast reliability and performance optimization algorithm is presented. In the end, illustrative examples are given.
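
    The universal generating function idea referred to above can be sketched in Python as follows (illustrative components and performance levels, not the paper's model): each component is a discrete distribution over performance levels, components are composed with an operator (sum for parallel capacity, min for a series bottleneck), and service reliability is the probability that the composed performance meets the demand.

      from collections import defaultdict

      def compose(u1, u2, op):
          # u1, u2: dicts {performance_level: probability}; returns the composed UGF
          out = defaultdict(float)
          for g1, p1 in u1.items():
              for g2, p2 in u2.items():
                  out[op(g1, g2)] += p1 * p2
          return dict(out)

      def prob_meet_demand(u, demand):
          return sum(p for g, p in u.items() if g >= demand)

      # two parallel servers (capacities add) followed by a network link (bottleneck)
      server = {100: 0.90, 50: 0.08, 0: 0.02}
      link = {150: 0.95, 0: 0.05}
      servers = compose(server, server, lambda a, b: a + b)
      system = compose(servers, link, min)
      print(f"P(capacity >= 120) = {prob_meet_demand(system, 120):.4f}")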

  5. Solutions to time variant problems of real-time expert systems

    Science.gov (United States)

    Yeh, Show-Way; Wu, Chuan-Lin; Hung, Chaw-Kwei

    1988-01-01

    Real-time expert systems for monitoring and control are driven by input data which changes with time. One of the subtle problems of this field is the propagation of time variant problems from rule to rule. This propagation problem is even complicated under a multiprogramming environment where the expert system may issue test commands to the system to get data and to access time consuming devices to retrieve data for concurrent reasoning. Two approaches are used to handle the flood of input data. Snapshots can be taken to freeze the system from time to time. The expert system treats the system as a stationary one and traces changes by comparing consecutive snapshots. In the other approach, when an input is available, the rules associated with it are evaluated. For both approaches, if the premise condition of a fired rule is changed to being false, the downstream rules should be deactivated. If the status change is due to disappearance of a transient problem, actions taken by the fired downstream rules which are no longer true may need to be undone. If a downstream rule is being evaluated, it should not be fired. Three mechanisms for solving this problem are discussed: tracing, backward checking, and censor setting. In the forward tracing mechanism, when the premise conditions of a fired rule become false, the premise conditions of downstream rules which have been fired or are being evaluated due to the firing of that rule are reevaluated. A tree with its root at the rule being deactivated is traversed. In the backward checking mechanism, when a rule is being fired, the expert system checks back on the premise conditions of the upstream rules that result in evaluation of the rule to see whether it should be fired. The root of the tree being traversed is the rule being fired. In the censor setting mechanism, when a rule is to be evaluated, a censor is constructed based on the premise conditions of the upstream rules and the censor is evaluated just before the rule is

  6. System-level Reliability Assessment of Power Stage in Fuel Cell Application

    DEFF Research Database (Denmark)

    Zhou, Dao; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    Highly efficient and less polluting fuel cell stacks are emerging as strong candidates for the power solution used for mobile base stations. In backup power applications, availability and reliability have the highest priority. This paper considers the reliability metrics from the component level to the system level for the power stage used in a fuel cell application. It starts with an estimation of the annual accumulated damage for the key power electronic components according to the real mission profile of the fuel cell system. Then, considering the parameter variations in both ... reliability. In a case study of a 5 kW fuel cell power stage, the parameter variations of the lifetime model show that the exponential factor of the junction temperature fluctuation is the most sensitive parameter. Besides, if a 5-out-of-6 redundancy is used, it is concluded that both the B10 and the B1 system ...
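
    The B10/B1 comparison can be illustrated with a short Python sketch (Weibull module parameters are assumed for illustration, not taken from the paper's mission-profile results): Bx is the time by which x% of units or systems are expected to have failed, and a 5-out-of-6 power stage is compared with a non-redundant 6-out-of-6 arrangement.

      import numpy as np
      from math import comb
      from scipy.optimize import brentq

      eta, shape = 80_000.0, 2.5                  # assumed Weibull scale (h) and shape per module

      def unit_rel(t):
          return np.exp(-(t / eta) ** shape)

      def k_out_of_n_rel(t, k, n=6):
          r = unit_rel(t)
          return sum(comb(n, i) * r**i * (1 - r) ** (n - i) for i in range(k, n + 1))

      def bx(x, rel):
          # time at which the fraction failed reaches x percent
          return brentq(lambda t: rel(t) - (1 - x / 100.0), 1.0, 1e6)

      for k in (6, 5):                            # 6-out-of-6 (no redundancy) vs 5-out-of-6
          rel = lambda t, k=k: k_out_of_n_rel(t, k)
          print(f"{k}-out-of-6: B10 = {bx(10, rel):,.0f} h, B1 = {bx(1, rel):,.0f} h")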

  7. Reliability of Two Smartphone Applications for Radiographic Measurements of Hallux Valgus Angles.

    Science.gov (United States)

    Mattos E Dinato, Mauro Cesar; Freitas, Marcio de Faria; Milano, Cristiano; Valloto, Elcio; Ninomiya, André Felipe; Pagnano, Rodrigo Gonçalves

    The objective of the present study was to assess the reliability of 2 smartphone applications compared with the traditional goniometer technique for measurement of radiographic angles in hallux valgus and the time required for analysis with the different methods. The radiographs of 31 patients (52 feet) with a diagnosis of hallux valgus were analyzed. Four observers, 2 with >10 years' experience in foot and ankle surgery and 2 in-training surgeons, measured the hallux valgus angle and intermetatarsal angle using a manual goniometer technique and 2 smartphone applications (Hallux Angles and iPinPoint). The interobserver and intermethod reliability were estimated using intraclass correlation coefficients (ICCs), and the time required for measurement of the angles among the 3 methods was compared using the Friedman test. A very good or good interobserver reliability was found among the 4 observers measuring the hallux valgus angle and intermetatarsal angle using the goniometer (ICC 0.913 and 0.821, respectively) and iPinPoint (ICC 0.866 and 0.638, respectively). Using the Hallux Angles application, a very good interobserver reliability was found for measurements of the hallux valgus angle (ICC 0.962) and intermetatarsal angle (ICC 0.935) only among the more experienced observers. The time required for the measurements was significantly shorter for the measurements using both smartphone applications compared with the goniometer method. One smartphone application (iPinPoint) was reliable for measurements of the hallux valgus angles by either experienced or nonexperienced observers. The use of these tools might save time in the evaluation of radiographic angles in the hallux valgus.

  8. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

    A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating the reliability of nuclear power plant systems. Reliability information is extracted from model parameters which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data which have been classified as either the time type or the demand type of failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) Distribution of sparse failure rate component data. 2) Failure rate inference and reliability prediction from time type of failure data. 3) Analyses of demand type of failure data. 4) Common mode failure model applicable to time type of failure data. 5) Estimation of common mode failures from 'near-miss' demand type of failure data
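
    The two data types mentioned above are commonly handled with conjugate Bayesian updates that remain stable for sparse data; the short Python sketch below (Jeffreys priors and illustrative counts, not the report's models) shows a gamma-Poisson update for time-type data and a beta-binomial update for demand-type data.

      from scipy import stats

      # time type: k failures observed in T component-hours (Jeffreys prior Gamma(0.5, 0))
      k_t, T = 2, 150_000.0
      post_rate = stats.gamma(a=0.5 + k_t, scale=1.0 / T)
      print(f"failure rate:       mean = {post_rate.mean():.2e}/h, "
            f"95th = {post_rate.ppf(0.95):.2e}/h")

      # demand type: k failures in n demands (Jeffreys prior Beta(0.5, 0.5))
      k_d, n = 1, 400
      post_p = stats.beta(0.5 + k_d, 0.5 + n - k_d)
      print(f"failure per demand: mean = {post_p.mean():.2e}, "
            f"95th = {post_p.ppf(0.95):.2e}")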

  9. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    DEFF Research Database (Denmark)

    Kozine, Igor; Christensen, P.; Winther-Jensen, M.

    2000-01-01

    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The report consists of a description of the theoretical ... similar safety systems. The database was established with the Microsoft Access Database Management System; the software for reliability and availability assessments was created with Visual Basic.

  10. Distributed Information and Control system reliability enhancement by fog-computing concept application

    Science.gov (United States)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-03-01

    The paper focuses on the reliability of information and control systems. The authors propose a new complex approach to information and control system reliability enhancement by applying elements of the fog-computing concept. The proposed approach consists of a complex of optimization problems to be solved: estimation of the computational load that can be shifted to the edge of the network and the fog layer, distribution of computations among the data-processing elements, and distribution of computations among the sensors. The problems, as well as some simulation results and a discussion, are formulated and presented in this paper.

  11. A Survey on the Reliability of Power Electronics in Electro-Mobility Applications

    DEFF Research Database (Denmark)

    Gadalla, Brwene Salah Abdelkarim; Schaltz, Erik; Blaabjerg, Frede

    2015-01-01

    Reliability is an important issue in the field of power electronics since most of the electrical energy is today processed by power electronics. In most of the electro-mobility applications, e.g. electric and hybrid-electric vehicles, power electronics are commonly used in very harsh environments ... and extending the service lifetime as well. Research within power electronics is of high interest as it has an important impact on the industry of electro-mobility applications. According to the aforementioned explanations, this paper provides an overview of the common factors (thermal cycles, power cycles, vibrations, voltage stress and current ripple stress) affecting the reliability of power electronics in electro-mobility applications. Also, the researchers' perspective is summarized from 2001 to 2015.

  12. Application of Fault Tree Analysis for Estimating Temperature Alarm Circuit Reliability

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.; El-Shanshoury, G.I.

    2011-01-01

    Fault Tree Analysis (FTA) is one of the most widely used methods in system reliability analysis. It is a graphical technique that provides a systematic description of the combinations of possible occurrences in a system which can result in an undesirable outcome. The present paper deals with the application of the FTA method to the analysis of a temperature alarm circuit. The critical failure of this circuit is failing to alarm when the temperature exceeds a certain limit. In order for the circuit to be safe, a detailed analysis of the faults causing circuit failure is performed by constructing a fault tree diagram (qualitative analysis). Calculations of quantitative circuit reliability parameters such as the Failure Rate (FR) and the Mean Time Between Failures (MTBF) are also performed using the Relex 2009 computer program. Benefits of FTA are assessing system reliability or safety during operation, improving understanding of the system, and identifying root causes of equipment failures
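
    The quantitative part of such an analysis reduces to propagating basic-event probabilities through the gates; the Python sketch below uses hypothetical basic events for a temperature alarm circuit (not the paper's Relex model) and converts an assumed constant failure rate into an MTBF.

      from functools import reduce

      def or_gate(*ps):
          # output event occurs if any input event occurs (independent events)
          return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)

      def and_gate(*ps):
          # output event occurs only if all input events occur
          return reduce(lambda acc, p: acc * p, ps, 1.0)

      # hypothetical basic-event probabilities of "fails to alarm on demand"
      p_sensor, p_comparator, p_buzzer = 2e-3, 5e-4, 1e-3
      p_psu_a, p_psu_b = 1e-2, 1e-2                  # redundant supplies -> AND gate

      p_top = or_gate(p_sensor, p_comparator, p_buzzer, and_gate(p_psu_a, p_psu_b))
      print(f"P(fails to alarm) = {p_top:.3e}")

      lam = 4.0e-6                                   # assumed constant failure rate, per hour
      print(f"MTBF = {1.0 / lam:,.0f} h")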

  13. Improving the reliability of nuclear reprocessing by application of computers and mathematical modelling

    International Nuclear Information System (INIS)

    Gabowitsch, E.; Trauboth, H.

    1982-01-01

    After a brief survey of the present and expected future state of nuclear energy utilization, which should demonstrate the significance of nuclear reprocessing, safety and reliability aspects of nuclear reprocessing plants (NRP) are considered. Then, the principal possibilities of modern computer technology, including computer systems architecture and application-oriented software, for improving reliability and availability are outlined. In this context, two information systems being developed at the Nuclear Research Center Karlsruhe (KfK) are briefly described. For the design evaluation of certain areas of a large NRP, mathematical methods and computer-aided tools developed, used or being designed by KfK are discussed. In conclusion, future research to be pursued in information processing and applied mathematics in support of reliable operation of NRP's is proposed. (Auth.)

  14. Sensitivity Weaknesses in Application of some Statistical Distribution in First Order Reliability Methods

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Enevoldsen, I.

    1993-01-01

    It has been observed and shown that in some examples a sensitivity analysis of the first order reliability index results in an increasing reliability index when the standard deviation of a stochastic variable is increased while the expected value is kept fixed. This unfortunate behaviour can occur when a stochastic variable is modelled by an asymmetrical density function. For lognormally, Gumbel and Weibull distributed stochastic variables it is shown for which combinations of the β-point, the expected value and the standard deviation the weakness can occur. In relation to practical application the behaviour is probably rather infrequent. A simple example is shown as illustration and to exemplify that for second order reliability methods and for exact calculations of the probability of failure this behaviour is much more infrequent.
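
    The effect can be reproduced numerically with a few lines of Python (an illustration constructed here, not the paper's example): for a lognormally distributed load with fixed mean and failure defined as exceeding a threshold, the implied reliability index beta = -Phi^{-1}(Pf) first decreases and then increases again as the standard deviation grows.

      import numpy as np
      from scipy import stats

      def beta_lognormal_exceedance(mean, std, threshold):
          sigma_ln = np.sqrt(np.log(1.0 + (std / mean) ** 2))
          mu_ln = np.log(mean) - 0.5 * sigma_ln**2
          pf = stats.norm.sf((np.log(threshold) - mu_ln) / sigma_ln)   # P(load > threshold)
          return -stats.norm.ppf(pf)

      mean, threshold = 1.0, 1.5
      for std in (0.3, 0.6, 1.0, 1.5, 2.0):
          print(f"std = {std:.1f}  beta = {beta_lognormal_exceedance(mean, std, threshold):.3f}")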

  15. Reliability-Centric Analysis of Offloaded Computation in Cooperative Wearable Applications

    Directory of Open Access Journals (Sweden)

    Aleksandr Ometov

    2017-01-01

    Motivated by the unprecedented penetration of mobile communications technology, this work carefully brings into perspective the challenges related to heterogeneous communications and offloaded computation operating in cases of fault-tolerant computation, computing, and caching. We specifically focus on the emerging augmented reality applications that require reliable delegation of the computing and caching functionality to proximate resource-rich devices. The corresponding mathematical model proposed in this work becomes of value to assess system-level reliability in cases where one or more nearby collaborating nodes become temporarily unavailable. Our analytical and simulation results corroborate the asymptotic insensitivity of the stationary reliability of the system in question (under the "fast" recovery of its elements) to the type of the "repair" time distribution, thus supporting the fault-tolerant system operation.

  16. Application of safety and reliability approaches in the power sector: Inside-sectoral overview

    DEFF Research Database (Denmark)

    Kozine, Igor

    2010-01-01

    This chapter summarizes the state-of-the-art and state-of-practice on the applications of safety and reliability approaches in the power sector. The nature and composition of this industrial sector, including the characteristics of major hazards, are summarized. The present situation with regard to a number of key technical aspects involved in the use of safety and reliability approaches in the power sector is discussed. Based on this review a Technology Maturity Matrix is synthesized. Barriers to the wider use of risk and reliability methods in the design and operation of power installations are identified and possible ways of overcoming these barriers are suggested. Key issues and priorities for research are identified.

  17. Instrument reliability for high-level nuclear-waste-repository applications

    International Nuclear Information System (INIS)

    Rogue, F.; Binnall, E.P.; Armantrout, G.A.

    1983-01-01

    Reliable instrumentation will be needed to evaluate the characteristics of proposed high-level nuclear-waste-repository sites and to monitor the performance of selected sites during the operational period and into repository closure. A study has been done to assess the reliability of instruments used in Department of Energy (DOE) waste-repository-related experiments and in other similar geological applications. The study included experiences with geotechnical, hydrological, geochemical, environmental, and radiological instrumentation and associated data acquisition equipment. Though this paper includes some findings on the reliability of instruments in each of these categories, the emphasis is on experiences with geotechnical instrumentation in hostile repository-type environments. We review the failure modes, rates, and mechanisms, along with manufacturers' modifications and design changes to enhance and improve instrument performance, and include recommendations on areas where further improvements are needed

  18. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Science.gov (United States)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  19. Finding an acceleration function for calculating the reliability of redundant systems - Application to common mode failures

    International Nuclear Information System (INIS)

    Gonnot, R.

    1975-01-01

    While it may be reasonable to assume that the reliability of a system - the design of which is perfectly known - can be evaluated, it seems less easy to be sure that overall reliability is correctly estimated in the case of multiple redundancies arranged in sequence. Framatome is trying to develop a method of evaluating overall reliability correctly for its installations. For example, the protection systems in its power stations, considered as a whole, are such that several scram signals may be relayed in sequence when an incident occurs. These signals all involve the same components for a given type of action, but the components themselves are in fact subject to different stresses and constraints, which tend to reduce their reliability. Whatever the sequence in which these signals are transmitted (in a fast-developing accident, for example), it is possible to evaluate the actual reliability of a given system (or component) for different constraints, as the latter are generally obtained via the transient codes. By applying the so-called 'equal probability' hypothesis one can estimate a reliability acceleration function taking into account the constraints imposed. This function is linear for the principal failure probability distribution laws. By generalizing such a method one can: (1) Perform failure calculations for redundant systems (or components) in a more general way than is possible with event trees, since one of the main parameters is the constraint exercised on that system (or component); (2) Determine failure rates of components on the basis of accelerated tests (up to complete failure of the component) which are quicker than the normal long-term tests (statistical results of operation); (3) Evaluate the multiplication factor for the reliability of a system or component in the case of common mode failures. The author presents the mathematical tools required for such a method and describes their application in the cases mentioned above

  20. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    Energy Technology Data Exchange (ETDEWEB)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz [Air Force Institute of Technology ul. Księcia Bolesława 6 01-494 Warsaw (Poland)

    2016-06-08

    The forecasting of the reliability and life of aeronautical hardware requires recognition of the many and various destructive processes that deteriorate its health/maintenance status. The aging of the technical components of an aircraft as an armament system is of outstanding significance to the reliability and safety of the whole system. The aging process is usually induced by many and various factors, such as mechanical, biological, climatic, or chemical ones. Aging is an irreversible process and considerably affects (i.e. reduces) the reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict the reliability and lifetime of aeronautical hardware. Increments in the values of diagnostic parameters are introduced and then, using the characteristic function and after some rearrangements, a partial differential equation is formulated. An analytical expression for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the reliability and lifetime of the aeronautical equipment. Data collected in service or delivered by life tests are used to attain this goal. Coefficients in this relationship are found using the likelihood function.

  1. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    The reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and are subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling nonconvexities and nonsmoothness in reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO). PSO is a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on a Gaussian distribution and a chaotic sequence (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by the PSO-GC are better than the previously best-known solutions available in the recent literature
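
    As a rough illustration of how PSO attacks this class of problems (a plain PSO on a toy redundancy-allocation problem with made-up data, not the paper's PSO-GC variant or its benchmarks), the Python sketch below maximizes the reliability of a three-subsystem series system with parallel redundancy under a cost budget, handling the integer redundancy levels by rounding.

      import numpy as np

      rng = np.random.default_rng(0)
      cost_unit = np.array([2.0, 3.0, 1.5])        # cost per redundant unit (illustrative)
      r_unit = np.array([0.90, 0.85, 0.95])        # fixed unit reliabilities (illustrative)
      budget = 25.0

      def fitness(n):
          n = np.clip(np.round(n), 1, 6)
          rel = np.prod(1.0 - (1.0 - r_unit) ** n)
          penalty = max(0.0, cost_unit @ n - budget)
          return rel - 10.0 * penalty              # penalized objective to maximize

      n_particles, n_iter, dim = 20, 100, 3
      x = rng.uniform(1, 6, (n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
      gbest = pbest[pbest_val.argmax()].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_particles, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = np.clip(x + v, 1, 6)
          vals = np.array([fitness(p) for p in x])
          improved = vals > pbest_val
          pbest[improved], pbest_val[improved] = x[improved], vals[improved]
          gbest = pbest[pbest_val.argmax()].copy()

      best_n = np.clip(np.round(gbest), 1, 6).astype(int)
      print("redundancy levels:", best_n, " system reliability:",
            round(float(np.prod(1 - (1 - r_unit) ** best_n)), 5))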

  2. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    International Nuclear Information System (INIS)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz

    2016-01-01

    The forecasting of the reliability and life of aeronautical hardware requires recognition of the many and various destructive processes that deteriorate its health/maintenance status. The aging of the technical components of an aircraft as an armament system is of outstanding significance to the reliability and safety of the whole system. The aging process is usually induced by many and various factors, such as mechanical, biological, climatic, or chemical ones. Aging is an irreversible process and considerably affects (i.e. reduces) the reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict the reliability and lifetime of aeronautical hardware. Increments in the values of diagnostic parameters are introduced and then, using the characteristic function and after some rearrangements, a partial differential equation is formulated. An analytical expression for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the reliability and lifetime of the aeronautical equipment. Data collected in service or delivered by life tests are used to attain this goal. Coefficients in this relationship are found using the likelihood function.

  3. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br

    2009-04-15

    Reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefit, subject to cost, weight, and volume constraints. Many classical mathematical methods have failed to handle the nonconvexities and nonsmoothness of reliability-redundancy optimization problems. As an alternative to classical optimization approaches, meta-heuristics have received much attention from researchers because of their ability to find near-globally optimal solutions. One such meta-heuristic is particle swarm optimization (PSO), a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on a Gaussian distribution and a chaotic sequence (PSO-GC) for solving reliability-redundancy optimization problems. Two reliability-redundancy design examples are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique: it performs well on the two mixed-integer programming examples considered in this paper, and the solutions it obtains are better than the previously best-known solutions reported in the recent literature.
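    As a rough illustration of the algorithm family named above, the sketch below shows a PSO velocity update in which the usual uniform random coefficients are replaced by a Gaussian draw and a logistic-map chaotic sequence. It is a minimal continuous-variable sketch with placeholder parameters and a toy objective; it does not reproduce the paper's PSO-GC operators, constraint handling, or mixed-integer benchmark problems.

```python
import numpy as np

def pso_gc_sketch(objective, bounds, n_particles=30, n_iter=200,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch with Gaussian and chaotic (logistic-map) coefficients."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    z = 0.48  # seed of the logistic map that supplies the chaotic coefficient

    for _ in range(n_iter):
        z = 4.0 * z * (1.0 - z)                          # chaotic coefficient in (0, 1)
        g = np.abs(rng.normal(0.0, 1.0, size=x.shape))   # Gaussian coefficient
        v = w * v + c1 * g * (pbest - x) + c2 * z * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Toy usage (not a reliability-redundancy benchmark):
best_x, best_f = pso_gc_sketch(lambda p: float(np.sum((p - 0.3) ** 2)),
                               bounds=[(0.0, 1.0)] * 4)
```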

  4. Reliability of structures of industrial installations. Theory and applications of probabilistic mechanics

    International Nuclear Information System (INIS)

    Procaccia, H.; Morilhat, P.; Carle, R.; Menjon, G.

    1996-01-01

    The management of the service life of mechanical equipment implies an evaluation of its risk of failure during use. To evaluate this risk, the following methods are used: classical frequency statistics applied to experience feedback data on failures observed during the operation of active components (pumps, valves, exchangers, circuit breakers, etc.); the Bayesian approach when statistical data are scarce and expert judgment is needed to compensate for the lack of information; and the structural reliability approach when no data are available and a theoretical degradation model must be used, in particular for passive structures (pressure vessels, pipes, tanks, etc.). The aim of this book is to describe the principles and applications of this third approach to industrial installations. Chapter 1 recalls the historical aspects of the probabilistic approach to the reliability of structures and the existing codes. Chapter 2 presents the level 1 deterministic method applied so far to the design of passive structures. The Cornell reliability index, already used in civil engineering codes, is defined in chapter 3. The Hasofer-Lind reliability index, a generalization of the Cornell index, is defined in chapter 4. Chapter 5 concerns the application of probabilistic approaches to optimization studies, introducing the economic variables linked to the risk and the possible actions to limit it (in-service inspection, maintenance, repair, etc.). Chapters 6 and 7 describe the Monte Carlo simulation and approximation methods for failure probability calculations, and recall the fracture mechanics basis and the load and degradation models of industrial installations. Applications are given in chapter 9, including the quantification of safety margins for a cracked pipe and the optimization of the in-service inspection policy of a steam generator. Chapter 10 raises the problem of the coupling between mechanical and reliability analyses.
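    For orientation, the two reliability indices treated in chapters 3 and 4 can be stated in their standard textbook forms (these are general definitions, not formulas quoted from the book):

```latex
% Cornell index for a safety margin M = R - S with mean \mu_M and standard deviation \sigma_M
\beta_C = \frac{\mu_M}{\sigma_M}
% Hasofer-Lind index: shortest distance from the origin to the limit-state surface g(\mathbf{u}) = 0
% in the space of standardized, uncorrelated normal variables \mathbf{u}
\beta_{HL} = \min_{g(\mathbf{u}) = 0} \lVert \mathbf{u} \rVert
% First-order estimate of the failure probability
P_f \approx \Phi(-\beta)
```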

  5. Techniques and applications of the human reliability analysis in nuclear facilities

    International Nuclear Information System (INIS)

    Pinto, Fausto C.

    1995-01-01

    The analysis and prediction of the man-machine interaction are the objectives of human reliability analysis. In this work the subject is presented in a manner that can be used by experts in the field of Probabilistic Safety Assessment, considering primarily the aspects of human error. The Technique for Human Error Rate Prediction (THERP) is used on a large scale to obtain data on human error. Applications of this technique are presented, as well as aspects of the state of the art and of research and development in this particular field, where the construction of a reliable data bank is considered essential. An application of THERP is also developed for the TRIGA Mark 1 IPR R-1 reactor of the Centro de Desenvolvimento de Tecnologia Nuclear, the Brazilian nuclear technology research institute. The results indicate that some changes must be made in the emergency procedures of the reactor in order to achieve a higher level of safety.

  6. Reliability of Capacitors for DC-Link Applications in Power Electronic Converters

    DEFF Research Database (Denmark)

    Wang, Huai; Blaabjerg, Frede

    2014-01-01

    DC-link capacitors are an important part of the majority of power electronic converters and contribute to cost, size and failure rate on a considerable scale. From the capacitor users' viewpoint, this paper presents a review of the improvement of reliability of the dc link in power electronic converters from two aspects: 1) reliability-oriented dc-link design solutions; 2) condition monitoring of dc-link capacitors during operation. Failure mechanisms, failure modes and lifetime models of capacitors suitable for these applications are also discussed as a basis for understanding the physics-of-failure. This review serves to provide a clear picture of the state-of-the-art research in this area and to identify the corresponding challenges and future research directions for capacitors and their dc-link applications.
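    For context, capacitor lifetime models of the kind mentioned above are often written in the following empirical form, which combines a voltage-stress term with the rule-of-thumb doubling of life for every 10 °C reduction in hot-spot temperature; this is a generic textbook form with its symbols defined here, not an equation quoted from the paper:

```latex
% L_0: rated lifetime at rated voltage V_0 and upper-category temperature T_0
% n: voltage-stress exponent; V, T: actual operating voltage and hot-spot temperature
L = L_0 \left( \frac{V_0}{V} \right)^{n} \cdot 2^{\frac{T_0 - T}{10}}
```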

  7. Reliability design of a critical facility: An application of PRA methods

    International Nuclear Information System (INIS)

    Souza Vieira Neto, A.; Souza Borges, W. de

    1987-01-01

    Although general agreement concerning the enforcement of reliability (probabilistic) design criteria for nuclear utilities is yet to be achieved, PRA methodology can still be used successfully as a project design and review tool, aimed at improving a system's prospective performance or minimizing expected accident consequences. In this paper, the potential of such an application of PRA methods is examined in the special case of a critical design project currently being developed in Brazil. (orig.)

  8. Accounting for Model Uncertainties Using Reliability Methods - Application to Carbon Dioxide Geologic Sequestration System. Final Report

    International Nuclear Information System (INIS)

    Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn

    2010-01-01

    A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and the University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkeley for geotechnical applications, and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and transport applications. The integration of the two codes provides a new approach to dealing with uncertainties in flow and transport modeling of the subsurface, such as those associated with hydrogeology parameters, boundary conditions, and initial conditions, using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and of the components that make up the system, instead of calculating the complete probability distributions of model predictions at all locations at all times. The new CALRELTOUGH code has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase II was not awarded.

  9. An overall methodology for reliability prediction of mechatronic systems design with industrial application

    International Nuclear Information System (INIS)

    Habchi, Georges; Barthod, Christine

    2016-01-01

    We propose in this paper an overall ten-step methodology dedicated to the analysis and quantification of reliability during the design phase of a mechatronic system, considered as a complex system. The ten steps of the methodology are detailed according to the downward side of the V-development cycle usually used for the design of complex systems. Two complementary phases of analysis cover the ten steps: qualitative analysis and quantitative analysis. The qualitative phase analyzes the functional and dysfunctional behavior of the system and then determines its different failure modes and degradation states, based on external and internal functional analysis, organic and physical implementation, and dependencies between components, with consideration of customer specifications and the mission profile. The quantitative phase is used to calculate the reliability of the system and its components, based on the qualitative behavior patterns, and considering data gathering and processing and reliability targets. A systemic approach is used to calculate the reliability of the system, taking into account the different technologies of a mechatronic system (mechanics, electronics, electrical, etc.), dependencies and interactions between components, and external influencing factors. To validate the methodology, the ten steps are applied to an industrial system, the smart actuator of Pack'Aero Company. - Highlights: • A ten-step methodology for reliability prediction of mechatronic systems design. • Qualitative and quantitative analysis for reliability evaluation using PN and RBD. • A dependency matrix proposal, based on the collateral and functional interactions. • Models consider mission profile, deterioration, interactions and influencing factors. • Application and validation of the methodology on the “Smart Actuator” of PACK’AERO.

  10. Quantum Dynamics of Multi Harmonic Oscillators Described by Time Variant Conic Hamiltonian and their Use in Contemporary Sciences

    International Nuclear Information System (INIS)

    Demiralp, Metin

    2010-01-01

    This work focuses on the dynamics of a system of quantum multi harmonic oscillators whose Hamiltonian is conic in positions and momenta with time variant coefficients. While it is simple, this system is useful for modeling the dynamics of a number of systems in contemporary sciences where the equations governing spatial or temporal changes are described by sets of ODEs. The dynamical causal models used readily in neuroscience can be indirectly described by these systems. In this work, we want to show that it is possible to describe these systems using quantum wave function type entities and expectations if the dynamic of the system is related to a set of ODEs.
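    A generic form of the Hamiltonian class referred to above — quadratic ("conic") in positions and momenta with time-variant coefficients — can be written as follows; the block matrices and vectors are assumed notation for illustration:

```latex
% N coupled oscillators with positions \mathbf{q}, momenta \mathbf{p}, and time-variant coefficients
H(t) = \tfrac{1}{2}\,\mathbf{p}^{T} A(t)\, \mathbf{p}
     + \mathbf{p}^{T} B(t)\, \mathbf{q}
     + \tfrac{1}{2}\,\mathbf{q}^{T} C(t)\, \mathbf{q}
     + \mathbf{a}(t)^{T} \mathbf{p} + \mathbf{b}(t)^{T} \mathbf{q} + c(t)
% For such a quadratic Hamiltonian the expectations \langle \mathbf{q} \rangle and
% \langle \mathbf{p} \rangle obey a closed, linear set of ODEs, which is what links
% the model to systems governed by sets of ordinary differential equations.
```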

  11. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  12. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Balan, I.; Ionescu-Bujor, M.

    2008-01-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
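    To make the Markov-chain setting concrete, the sketch below builds and integrates the Kolmogorov equations for a small '2-out-of-3' availability model with assumed failure and repair rates and a single repair crew; it only illustrates the kind of model the paper analyzes and is not the QUEFT/MARKOMAG-S/MCADJSEN code or the IFMIF model with its 186 parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 1e-3, 1e-1      # assumed per-hour failure and repair rates

# States = number of failed components (0..3); the system is up in states 0 and 1.
# Q[i, j] is the transition rate from state i to state j.
Q = np.zeros((4, 4))
for k in range(3):
    Q[k, k + 1] = (3 - k) * lam   # one more component fails
for k in range(1, 4):
    Q[k, k - 1] = mu              # a single repair crew restores one component
np.fill_diagonal(Q, -Q.sum(axis=1))

def kolmogorov(t, p):
    return p @ Q                  # dp/dt = p Q (row-vector convention)

t_end = 10_000.0
sol = solve_ivp(kolmogorov, (0.0, t_end), [1.0, 0.0, 0.0, 0.0])

availability = sol.y[0, -1] + sol.y[1, -1]
print(f"point availability at t = {t_end:.0f} h: {availability:.6f}")
```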

  13. Proceeding of 35th domestic symposium on applications of structural reliability and risk assessment methods to nuclear power plants

    International Nuclear Information System (INIS)

    2005-06-01

    As the 35th domestic symposium of the Atomic Energy Research Committee of the Japan Welding Engineering Society, a symposium was held titled 'Applications of structural reliability/risk assessment methods to nuclear energy'. Six speakers gave lectures titled 'Structural reliability and risk assessment methods', 'Risk-informed regulation of US nuclear energy and role of probabilistic risk assessment', 'Reliability and risk assessment methods in chemical plants', 'Practical structural design methods based on reliability in architectural and civil areas', 'Maintenance activities based on reliability in thermal power plants' and 'LWR maintenance strategies based on Probabilistic Fracture Mechanics'. (T. Tanaka)

  14. Validity and Reliability of Assessing Body Composition Using a Mobile Application.

    Science.gov (United States)

    Macdonald, Elizabeth Z; Vehrs, Pat R; Fellingham, Gilbert W; Eggett, Dennis; George, James D; Hager, Ronald

    2017-12-01

    The purpose of this study was to determine the validity and reliability of the LeanScreen (LS) mobile application, which estimates percent body fat (%BF) using estimates of circumferences from photographs. The %BF of 148 weight-stable adults was estimated once using dual-energy x-ray absorptiometry (DXA). Each of two administrators assessed the %BF of each subject twice using the LS app and manually measured circumferences. A mixed-model ANOVA and Bland-Altman analyses were used to compare the estimates of %BF obtained from each method. Interrater and intrarater reliability values were determined using multiple measurements taken by each of the two administrators. The LS app and manually measured circumferences significantly underestimated (P < 0.05) the %BF determined using DXA by an average of -3.26 and -4.82 %BF, respectively. The LS app (6.99 %BF) and manually measured circumferences (6.76 %BF) had large limits of agreement. All interrater and intrarater reliability coefficients of estimates of %BF using the LS app and manually measured circumferences exceeded 0.99. The estimates of %BF from manually measured circumferences and the LS app were highly reliable. However, these field measures are not currently recommended for the assessment of body composition because of significant bias and large limits of agreement.

  15. Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications

    Science.gov (United States)

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

    Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from making optimal decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497
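    As a toy illustration of the series/parallel reasoning that a generated fault tree encodes, the sketch below computes the steady-state availability of a single sensor-to-sink path with two redundant routers; the MTBF/MTTR figures are assumed values, and the paper's methodology of course handles arbitrary topologies and failure conditions automatically.

```python
# Steady-state availability of one device from assumed MTBF/MTTR figures (hours)
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

a_sensor = availability(50_000, 8)
a_router = availability(80_000, 8)
a_sink = availability(200_000, 4)

# Path is up if: sensor AND (router1 OR router2) AND sink are up
a_path = a_sensor * (1.0 - (1.0 - a_router) ** 2) * a_sink
print(f"path availability: {a_path:.6f}")
```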

  16. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L

    2005-07-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operating experience, and -) accelerated aging tests. In order to introduce the time aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over short periods of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems.

  17. Practical applications of age-dependent reliability models and analysis of operational data

    International Nuclear Information System (INIS)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L.

    2005-01-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operating experience, and -) accelerated aging tests. In order to introduce the time aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over short periods of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems.
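    The proposal to replace an age-dependent model by piecewise-constant yearly unavailability values can be illustrated with a small calculation; the Weibull parameters, repair time and averaging scheme below are assumptions for the sketch, not figures from the workshop.

```python
import numpy as np

beta, eta = 2.5, 40.0        # assumed Weibull shape and scale (years)
mttr_hours = 24.0            # assumed mean time to repair
hours_per_year = 8760.0

def hazard(t_years):
    """Weibull hazard rate h(t), failures per year."""
    return (beta / eta) * (t_years / eta) ** (beta - 1.0)

for year in range(1, 11):
    # Average the age-dependent failure rate over one year and convert to per-hour
    t = np.linspace(year - 1.0, year, 100)
    lam = np.trapz(hazard(t), t) / hours_per_year
    # Asymptotic unavailability of an alternating failure/repair process
    q = lam * mttr_hours / (1.0 + lam * mttr_hours)
    print(f"year {year:2d}: constant unavailability = {q:.2e}")
```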

  18. Remote Sensing Applications with High Reliability in Changjiang Water Resource Management

    Science.gov (United States)

    Ma, L.; Gao, S.; Yang, A.

    2018-04-01

    Remote sensing technology has been widely used in many fields, but most applications cannot obtain information with high reliability and high accuracy at large scale, especially applications using automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which can achieve over 85 % correctness in Water Resource Management from the viewpoints of photogrammetry and expert knowledge. The system is composed of spatial positioning techniques from the viewpoint of photogrammetry, feature interpretation techniques from the viewpoint of expert knowledge, and rationality analysis techniques from the viewpoint of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang Water Resource Management, including water pollution investigation, illegal construction inspection, and water conservation monitoring.

  19. REMOTE SENSING APPLICATIONS WITH HIGH RELIABILITY IN CHANGJIANG WATER RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    L. Ma

    2018-04-01

    Full Text Available Remote sensing technology has been widely used in many fields, but most applications cannot obtain information with high reliability and high accuracy at large scale, especially applications using automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which can achieve over 85 % correctness in Water Resource Management from the viewpoints of photogrammetry and expert knowledge. The system is composed of spatial positioning techniques from the viewpoint of photogrammetry, feature interpretation techniques from the viewpoint of expert knowledge, and rationality analysis techniques from the viewpoint of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang Water Resource Management, including water pollution investigation, illegal construction inspection, and water conservation monitoring.

  20. Reliability analysis of an offshore structure

    DEFF Research Database (Denmark)

    Sorensen, J. D.; Faber, M. H.; Thoft-Christensen, P.

    1992-01-01

    A jacket type offshore structure from the North Sea is considered. The time variant reliability is estimated for failure defined as brittle fracture and crack through the tubular member walls. The stochastic modelling is described. The hot spot stress spectral moments as functions of the stochastic ...

  1. The application of cognitive models to the evaluation and prediction of human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.; Reason, J.T.

    1986-01-01

    The first section of the paper provides a brief overview of a number of important principles relevant to human reliability modeling that have emerged from cognitive models, and presents a synthesis of these approaches in the form of a Generic Error Modeling System (GEMS). The next section illustrates the application of GEMS to some well known nuclear power plant (NPP) incidents in which human error was a major contributor. The way in which design recommendations can emerge from analyses of this type is illustrated. The third section describes the use of cognitive models in the classification of human errors for prediction and data collection purposes. The final section addresses the predictive modeling of human error as part of human reliability assessment in Probabilistic Risk Assessment

  2. Stochastic Petri nets for the reliability analysis of communication network applications with alternate-routing

    International Nuclear Information System (INIS)

    Balakrishnan, Meera; Trivedi, Kishor S.

    1996-01-01

    In this paper, we present a comparative reliability analysis of an application on a corporate B-ISDN network under various alternate-routing protocols. For simple cases, the reliability problem can be cast into fault-tree models and solved rapidly by means of known methods. For more complex scenarios, state space (Markov) models are required. However, generation of large state space models can get very labor intensive and error prone. We advocate the use of stochastic reward nets (a variant of stochastic Petri nets) for the concise specification, automated generation and solution of alternate-routing protocols in networks. This paper is written in a tutorial style so as to make it accessible to a large audience

  3. Standard semiconductor packaging for high-reliability low-cost MEMS applications

    Science.gov (United States)

    Harney, Kieran P.

    2005-01-01

    Microelectronic packaging technology has evolved over the years in response to the needs of IC technology. The fundamental purpose of the package is to provide protection for the silicon chip and to provide electrical connection to the circuit board. Major change has been witnessed in packaging and today wafer level packaging technology has further revolutionized the industry. MEMS (Micro Electro Mechanical Systems) technology has created new challenges for packaging that do not exist in standard ICs. However, the fundamental objective of MEMS packaging is the same as traditional ICs, the low cost and reliable presentation of the MEMS chip to the next level interconnect. Inertial MEMS is one of the best examples of the successful commercialization of MEMS technology. The adoption of MEMS accelerometers for automotive airbag applications has created a high volume market that demands the highest reliability at low cost. The suppliers to these markets have responded by exploiting standard semiconductor packaging infrastructures. However, there are special packaging needs for MEMS that cannot be ignored. New applications for inertial MEMS devices are emerging in the consumer space that adds the imperative of small size to the need for reliability and low cost. These trends are not unique to MEMS accelerometers. For any MEMS technology to be successful the packaging must provide the basic reliability and interconnection functions, adding the least possible cost to the product. This paper will discuss the evolution of MEMS packaging in the accelerometer industry and identify the main issues that needed to be addressed to enable the successful commercialization of the technology in the automotive and consumer markets.

  4. Reliability Evaluation of Base-Metal-Electrode Multilayer Ceramic Capacitors for Potential Space Applications

    Science.gov (United States)

    Liu, David (Donhang); Sampson, Michael J.

    2011-01-01

    Base-metal-electrode (BME) ceramic capacitors are being investigated for possible use in high-reliability space-level applications. This paper focuses on how the construction and microstructure of BME capacitors affect their lifetime and reliability. Examination of the construction and microstructure of commercial off-the-shelf (COTS) BME capacitors reveals great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and 0.5 µm, which is much less than that of most PME capacitors. BME capacitors can be fabricated with more internal electrode layers and thinner dielectric layers than PME capacitors because they have a fine-grained microstructure and do not shrink much during ceramic sintering. This makes it possible for BME capacitors to achieve a very high capacitance volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT). Most BME capacitors were found to fail with an early avalanche breakdown, followed by a regular dielectric wearout failure during the HALT test. When most of the early failures, characterized by avalanche breakdown, were removed, BME capacitors exhibited a minimum mean time-to-failure (MTTF) of more than 10^5 years at room temperature and rated voltage. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically around 12 for a number of BME capacitors with a rated voltage of 25V. This may suggest that the number of grains per dielectric layer is more critical than the thickness itself.
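    HALT results for multilayer ceramic capacitors are commonly extrapolated between stress conditions with the Prokopowicz-Vask acceleration model; the standard form is given below for context (the exponents and activation energies actually used in the study are not quoted here):

```latex
% Median times to failure t_1, t_2 at voltage/temperature conditions (V_1, T_1) and (V_2, T_2);
% n: voltage acceleration exponent, E_a: activation energy, k: Boltzmann constant, T in kelvin
\frac{t_1}{t_2} = \left( \frac{V_2}{V_1} \right)^{n}
                  \exp\!\left[ \frac{E_a}{k} \left( \frac{1}{T_1} - \frac{1}{T_2} \right) \right]
```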

  5. Reliability analysis of operator's monitoring behavior in digital main control room of nuclear power plants and its application

    International Nuclear Information System (INIS)

    Zhang Li; Hu Hong; Li Pengcheng; Jiang Jianjun; Yi Cannan; Chen Qingqing

    2015-01-01

    In order to build a quantitative model for analyzing operators' monitoring behavior reliability in the digital main control room of nuclear power plants, and based on an analysis of the design characteristics of the digital main control room of a nuclear power plant and of operators' monitoring behavior, monitoring behavior reliability was divided, following the operators' monitoring process, into three parts: information transfer reliability among screens, inside-screen information sampling reliability, and information detection reliability. A quantitative calculation model of information transfer reliability among screens was established based on Senders' monitoring theory; the inside-screen information sampling reliability model was established based on the theory of attention resource allocation; and, considering performance shaping factor causality, a fuzzy Bayesian method was presented to quantify information detection reliability, with an example of application given. The results show that the established monitoring behavior reliability model gives an objective description of the monitoring process, can quantify monitoring reliability, and overcomes the shortcomings of traditional methods. It therefore provides theoretical support for the analysis of operators' monitoring behavior reliability in digital main control rooms of nuclear power plants and improves the precision of human reliability analysis. (authors)

  6. Design for Reliability and Robustness Tool Platform for Power Electronic Systems – Study Case on Motor Drive Applications

    DEFF Research Database (Denmark)

    Vernica, Ionut; Wang, Huai; Blaabjerg, Frede

    2018-01-01

    Due to its conventional approach, mainly based on failure statistics from the field, the reliability evaluation of power devices is still a challenging task. In order to address this problem, a MATLAB based reliability assessment tool has been developed. The Design for Reliability and Robustness (DfR2) tool allows the user to easily investigate the reliability performance of power electronic components (or sub-systems) under given input mission profiles and operating conditions. The main concept of the tool and its framework are introduced, highlighting the reliability assessment procedure for power semiconductor devices. Finally, a motor drive application is implemented, the reliability performance of the power devices is investigated with the help of the DfR2 tool, and the resulting reliability metrics are presented.

  7. Approaches of data combining for reliability assessments with taking into account the priority of data application

    International Nuclear Information System (INIS)

    Zelenyj, O.V.; Pecheritsa, A.V.

    2004-01-01

    Based on the available experience with risk assessments of operational events at Ukrainian NPPs, as well as on the results of the State review of PSA studies for pilot units, it should be noted that historical information on the operation of domestic NPPs is not always available, or not always used properly, in the implementation of these activities. Several approaches to combining available generic and specific information for the assessment of reliability parameters (taking into account the priority of data application) are briefly described in the article, along with some recommendations on how to apply these approaches.
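    A common concrete realization of combining generic and specific information is a conjugate Bayesian update of a failure rate, where the generic data define the prior and the plant-specific operating experience updates it; the sketch below uses a gamma-Poisson model with assumed numbers and is illustrative only, not the procedure of the article.

```python
# Generic industry data expressed as a gamma prior for the failure rate (per hour):
# alpha ~ pseudo-failures, beta ~ pseudo-exposure time (assumed values)
alpha_prior, beta_prior = 0.5, 1.0e6

# Plant-specific operating experience (assumed values)
failures_observed = 2
exposure_hours = 4.0e5

# Conjugate gamma-Poisson update: posterior gamma(alpha + failures, beta + exposure)
alpha_post = alpha_prior + failures_observed
beta_post = beta_prior + exposure_hours

print(f"prior mean rate    : {alpha_prior / beta_prior:.2e} per hour")
print(f"posterior mean rate: {alpha_post / beta_post:.2e} per hour")
```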

  8. Porting Your Applications and Saving Data In Cloud As Reliable Entity.

    Directory of Open Access Journals (Sweden)

    Cosmin Cătălin Olteanu

    2013-12-01

    Full Text Available The main purpose of the paper is to illustrate the importance of a reliable service in the context of cloud computing. The dynamics of an organization show that porting custom applications to the cloud can make the difference in being a successful company and delivering what the client needs just in time. Every employee should be able to access and enter data from anywhere; nowadays the office moves along with the employee. But this concept also comes with a disadvantage: how safe is your data when the machines holding it are not under the direct control of your employees?

  9. A reliable, fast and low cost maximum power point tracker for photovoltaic applications

    Energy Technology Data Exchange (ETDEWEB)

    Enrique, J.M.; Andujar, J.M.; Bohorquez, M.A. [Departamento de Ingenieria Electronica, de Sistemas Informaticos y Automatica, Universidad de Huelva (Spain)

    2010-01-15

    This work presents a new maximum power point tracker system for photovoltaic applications. The developed system is an analog version of the ''P and O-oriented'' algorithm. It maintains its main advantages: simplicity, reliability and easy practical implementation, and avoids its main disadvantages: inaccuracy and relatively slow response. Additionally, the developed system can be implemented in a practical way at low cost, which is an added value. The system also shows excellent behavior for very fast variations in incident radiation levels. (author)
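    For orientation, the digital form of the perturb-and-observe logic that the analog circuit emulates looks roughly like the sketch below; measure_pv and set_voltage are hypothetical callbacks standing in for the power stage, and the sketch is not the authors' analog implementation.

```python
def perturb_and_observe(measure_pv, set_voltage, v_start=30.0, step=0.5, n_steps=1000):
    """Generic P&O maximum power point tracking sketch.

    measure_pv(v) -> (voltage, current) at the operating point applied by set_voltage(v);
    both callbacks are hypothetical placeholders for the real converter hardware.
    """
    v_ref = v_start
    set_voltage(v_ref)
    v, i = measure_pv(v_ref)
    p_prev = v * i
    direction = +1

    for _ in range(n_steps):
        v_ref += direction * step          # perturb the operating voltage
        set_voltage(v_ref)
        v, i = measure_pv(v_ref)
        p = v * i
        if p < p_prev:                     # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return v_ref
```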

  10. Design for Reliability and Robustness Tool Platform for Power Electronic Systems – Study Case on Motor Drive Applications

    DEFF Research Database (Denmark)

    Vernica, Ionut; Wang, Huai; Blaabjerg, Frede

    2018-01-01

    Because of the high cost of failure, the reliability performance of power semiconductor devices is becoming a more and more important and stringent factor in many energy conversion applications. Thus, the need for appropriate reliability analysis of the power electronics emerges. Due to its conventional approach, mainly based on failure statistics from the field, the reliability evaluation of power devices is still a challenging task. ...

  11. Coupling finite elements and reliability methods - application to safety evaluation of pressurized water reactor vessels

    International Nuclear Information System (INIS)

    Pitner, P.; Venturini, V.

    1995-02-01

    When reliability studies extend deterministic calculations in mechanics, it is necessary to take into account the variability of input parameters, which is linked to the different sources of uncertainty. Integrals must then be calculated to evaluate the failure risk. This can be performed either by simulation methods or by approximation methods (FORM/SORM). Models in mechanics often rely on calculation codes, which must then be coupled with the reliability calculations. These codes can involve long calculation times when they are invoked many times during simulation sequences or in complex iterative procedures. The response surface method gives an approximation of the real response from a reduced number of points for which the finite element code is run. Thus, when it is combined with FORM/SORM methods, a coupling can be carried out that gives results in a reasonable calculation time. An application of the response surface method to mechanics-reliability coupling is presented for a mechanical model that calls a finite element code. It corresponds to a probabilistic fracture mechanics study of a pressurized water reactor vessel. (authors). 5 refs., 3 figs
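    A minimal sketch of the coupling idea: run the expensive model at a small number of design points, fit a quadratic response surface, and evaluate the failure probability on the cheap surrogate (here by crude Monte Carlo rather than FORM/SORM). The limit-state function, distributions and sample sizes are illustrative assumptions, not the reactor-vessel model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x):
    """Stand-in for a finite-element limit-state function g(x); g < 0 means failure."""
    return 5.0 - x[0] ** 2 - 0.5 * x[1]

# 1) Small design of experiments around the mean of the random variables
mean, std = np.array([1.0, 2.0]), np.array([0.3, 0.5])
doe = mean + std * rng.uniform(-2.0, 2.0, size=(30, 2))
g_doe = np.array([expensive_model(x) for x in doe])

# 2) Quadratic response surface fitted by least squares
def features(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(features(doe), g_doe, rcond=None)

# 3) Monte Carlo on the surrogate only; the expensive model is no longer called
samples = rng.normal(mean, std, size=(200_000, 2))
g_hat = features(samples) @ coef
print("estimated failure probability:", float(np.mean(g_hat < 0.0)))
```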

  12. Application of a digital technique in evaluating the reliability of shade guides.

    Science.gov (United States)

    Cal, E; Sonugelen, M; Guneri, P; Kesercioglu, A; Kose, T

    2004-05-01

    There appears to be a need for a reliable method for the quantification of tooth colour and the analysis of shade. Therefore, the primary objective of this study was to show the applicability of graphic software to colour analysis and, secondly, to investigate the reliability of commercial shade guides produced by the same manufacturer, using this digital technique. After confirming the reliability and reproducibility of the digital method using self-assessed coloured images, three shade guides from the same manufacturer were photographed in daylight and in a studio environment with a digital camera and saved in tagged image file format (TIFF). Colour analysis of each photograph was performed using the Adobe Photoshop 4.0 graphics program. Luminosity and red, green, blue (L and RGB) values of each shade tab of each shade guide were measured, and the data were subjected to statistical analysis using a repeated-measures ANOVA test. The L and RGB values of the images taken in daylight differed significantly from those of the images taken in the studio environment (P < 0.05). In both environments, the luminosity and red values of the shade tabs were significantly different from each other (P < 0.05). It was concluded that, when the environmental conditions are kept constant, the Adobe Photoshop 4.0 colour analysis program can be used to analyse the colour of images. On the other hand, the results revealed that the accuracy of the shade tabs widely used in colour matching should be readdressed.

  13. Reliability Evaluation of Base-Metal-Electrode (BME) Multilayer Ceramic Capacitors for Space Applications

    Science.gov (United States)

    Liu, David (Donghang)

    2011-01-01

    This paper reports a reliability evaluation of BME ceramic capacitors for possible high-reliability space-level applications. The study is focused on the construction and microstructure of BME capacitors and their impact on the capacitors' life reliability. First, examinations of the construction and microstructure of commercial-off-the-shelf (COTS) BME capacitors show great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and approximately 0.5 micrometers, which is much less than that of most PME capacitors. The primary reason that a BME capacitor can be fabricated with more internal electrode layers and thinner dielectric layers is that it has a fine-grained microstructure and does not shrink much during ceramic sintering. This gives BME capacitors a very high volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT) and regular life testing as per MIL-PRF-123. Most BME capacitors were found to fail with an early dielectric wearout, followed by a rapid wearout failure mode during the HALT test. When most of the early wearout failures were removed, BME capacitors exhibited a minimum mean time-to-failure of more than 10^5 years. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically between 10 and 20. This may suggest that the number of grains per dielectric layer is more critical than the thickness itself for determining the rated voltage and the life

  14. V2X application-reliability analysis of data-rate and message-rate congestion control algorithms

    NARCIS (Netherlands)

    Math, C. Belagal; Li, H.; Heemstra de Groot, S.M.; Niemegeers, I.G.M.M.

    2017-01-01

    Intelligent Transportation Systems (ITS) require Vehicle-to-Everything (V2X) communication. In dense traffic, the communication channel may become congested, impairing the reliability of the ITS safety applications. Therefore, the European Telecommunications Standards Institute (ETSI) demands

  15. Adapting Human Reliability Analysis from Nuclear Power to Oil and Gas Applications

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory

    2015-09-01

    Human reliability analysis (HRA), as currently used in risk assessments, largely derives its methods and guidance from application in the nuclear energy domain. While there are many similarities between nuclear energy and other safety critical domains such as oil and gas, there remain clear differences. This paper provides an overview of the HRA state of the practice in nuclear energy and then describes areas where refinements to the methods may be necessary to capture the operational context of oil and gas. Many key distinctions important to nuclear energy HRA, such as Level 1 vs. Level 2 analysis, may prove insignificant for oil and gas applications. On the other hand, existing HRA methods may not be sensitive enough to factors like the extensive use of digital controls in oil and gas. This paper provides an overview of these considerations to assist in the adaptation of existing nuclear-centered HRA methods to the petroleum sector.

  16. In-plant application of industry experience to enhance human reliability

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Singh, A.

    1993-01-01

    This paper describes the way that modern data-base computer tools can enhance the ability to collect, organize, evaluate, and use industry experience. By combining the computer tools with knowledge from human reliability assessment tools, data, and frameworks, the data base can become a tool for collecting and assessing the lessons learned from past events. By integrating the data-base system with plant risk models, engineers can focus on those activities that can enhance overall system reliability. The evaluation helps identify technology and tools to reduce human errors during operations and maintenance. Learning from both in-plant and industry experience can help enhance safety and reduce the cost of plant operations. Utility engineers currently assess events that occur in nuclear plants throughout the world for in-plant applicability. Established computer information networks, documents, bulletins, and other information sources provide a large number of event descriptions to help individual plants benefit from this industry experience. The activities involved in coordinating reviews of event descriptions from other plants for in-plant application require substantial engineering time to collect, organize, evaluate, and apply. Data-base tools can help engineers efficiently handle and sort the data so that they can concentrate on understanding the importance of the event, developing cost-effective interventions, and communicating implementation plans for plant improvement. An Electric Power Research Institute human reliability project has developed a classification system with modern data-base software to help engineers efficiently process, assess, and apply information contained in the events to enhance plant operation. Plant-specific classification of industry experience provides a practical method for efficiently taking industry experience into account when planning maintenance activities and reviewing plant safety.

  17. Validity and Reliability of 2 Goniometric Mobile Apps: Device, Application, and Examiner Factors.

    Science.gov (United States)

    Wellmon, Robert H; Gulick, Dawn T; Paterson, Mark L; Gulick, Colleen N

    2016-12-01

    Smartphones are being used in a variety of practice settings to measure joint range of motion (ROM). A number of factors can affect the validity of the measurements generated. However, there are no studies examining smartphone-based goniometer applications focusing on measurement variability and error arising from the electromechanical properties of the device being used. To examine the concurrent validity and interrater reliability of 2 goniometric mobile applications (Goniometer Records, Goniometer Pro), an inclinometer, and a universal goniometer (UG). Nonexperimental, descriptive validation study. University laboratory. 3 physical therapists having an average of 25 y of experience. Three standardized angles (acute, right, obtuse) were constructed to replicate the movement of a hinge joint in the human body. Angular changes were measured and compared across 3 raters who used 3 different devices (UG, inclinometer, and 2 goniometric apps installed on 3 different smartphones: Apple iPhone 5, LG Android, and Samsung SIII Android). Intraclass correlation coefficients (ICCs) and Bland-Altman plots were used to examine interrater reliability and concurrent validity. Interrater reliability for each of the smartphone apps, inclinometer and UG were excellent (ICC = .995-1.000). Concurrent validity was also good (ICC = .998-.999). Based on the Bland-Altman plots, the means of the differences between the devices were low (range = -0.4° to 1.2°). This study identifies the error inherent in measurement that is independent of patient factors and due to the smartphone, the installed apps, and examiner skill. Less than 2° of measurement variability was attributable to those factors alone. The data suggest that 3 smartphones with the 2 installed apps are a viable substitute for using a UG or an inclinometer when measuring angular changes that typically occur when examining ROM and demonstrate the capacity of multiple examiners to accurately use smartphone-based goniometers.

  18. Testing viability of cross subsidy using time-variant price elasticities of industrial demand for electricity: Indian experience

    International Nuclear Information System (INIS)

    Chattopadhyay, Pradip

    2007-01-01

    Indian electric tariffs are characterized by very high rates for industrial and commercial classes to permit subsidized electric consumption by residential and agricultural customers. We investigate the viability of this policy using monthly data for 1997-2003 on electric consumption by a few large industrial customers under the aegis of a small distribution company in the state of Uttar Pradesh. For a given price/cost ratio, it can be shown that if the cross-subsidizing class' electricity demand is sufficiently elastic, increasing the class' rates fails to recover the incremental cross-subsidy necessary to support additional revenues for the subsidized classes. This suboptimality is tested by individually estimating time-variant price elasticities of demand for these industrial customers using Box-Cox and linear regressions. We find that, at least for some of these customers, cross-subsidy was suboptimal prior to as late as October 2001, when rates were changed following reforms.

  19. Testing viability of cross subsidy using time-variant price elasticities of industrial demand for electricity: Indian experience

    Energy Technology Data Exchange (ETDEWEB)

    Chattopadhyay, Pradip [New Hampshire Public Utilities Commission, 21 South Fruit Street, Suite 10, Concord NH 03301 (United States)]. E-mail: pradip.chattopadhyay@puc.nh.gov

    2007-01-15

    Indian electric tariffs are characterized by very high rates for industrial and commercial classes to permit subsidized electric consumption by residential and agricultural customers. We investigate the viability of this policy using monthly data for 1997-2003 on electric consumption by a few large industrial customers under the aegis of a small distribution company in the state of Uttar Pradesh. For a given price/cost ratio, it can be shown that if the cross-subsidizing class' electricity demand is sufficiently elastic, increasing the class' rates fails to recover the incremental cross-subsidy necessary to support additional revenues for the subsidized classes. This suboptimality is tested by individually estimating time-variant price elasticities of demand for these industrial customers using Box-Cox and linear regressions. We find that, at least for some of these customers, cross-subsidy was suboptimal prior to as late as October 2001, when rates were changed following reforms.

  20. Sub-nanosecond jitter, repetitive impulse generators for high reliability applications

    International Nuclear Information System (INIS)

    Krausse, G.J.; Sarjeant, W.J.

    1981-01-01

    Low jitter, high reliability impulse generator development has recently become of ever increasing importance for developing nuclear physics and weapons applications. The research and development of very low jitter (< 30 ps), multikilovolt generators for high reliability, minimum maintenance trigger applications utilizing a new class of high-pressure tetrode thyratrons now commercially available are described. The overall system design philosophy is described followed by a detailed analysis of the subsystem component elements. A multi-variable experimental analysis of this new tetrode thyratron was undertaken, in a low-inductance configuration, as a function of externally available parameters. For specific thyratron trigger conditions, rise times of 18 ns into 6.0-Ω loads were achieved at jitters as low as 24 ps. Using this database, an integrated trigger generator system with solid-state front-end is described in some detail. The generator was developed to serve as the Master Trigger Generator for a large neutrino detector installation at the Los Alamos Meson Physics Facility

  1. The reliability of vertical jump tests between the Vertec and My Jump phone application.

    Science.gov (United States)

    Yingling, Vanessa R; Castro, Dimitri A; Duong, Justin T; Malpartida, Fiorella J; Usher, Justin R; O, Jenny

    2018-01-01

    The vertical jump is used to estimate sports performance capabilities and physical fitness in children, elderly, non-athletic and injured individuals. Different jump techniques and measurement tools are available to assess vertical jump height and peak power; however, their use is limited by access to laboratory settings, excessive cost and/or time constraints thus making these tools oftentimes unsuitable for field assessment. A popular field test uses the Vertec and the Sargent vertical jump with countermovement; however, new low cost, easy to use tools are becoming available, including the My Jump iOS mobile application (app). The purpose of this study was to assess the reliability of the My Jump relative to values obtained by the Vertec for the Sargent stand and reach vertical jump (VJ) test. One hundred and thirty-five healthy participants aged 18-39 years (94 males, 41 females) completed three maximal Sargent VJ with countermovement that were simultaneously measured using the Vertec and the My Jump . Jump heights were quantified for each jump and peak power was calculated using the Sayers equation. Four separate ICC estimates and their 95% confidence intervals were used to assess reliability. Two analyses (with jump height and calculated peak power as the dependent variables, respectively) were based on a single rater, consistency, two-way mixed-effects model, while two others (with jump height and calculated peak power as the dependent variables, respectively) were based on a single rater, absolute agreement, two-way mixed-effects model. Moderate to excellent reliability relative to the degree of consistency between the Vertec and My Jump values was found for jump height (ICC = 0.813; 95% CI [0.747-0.863]) and calculated peak power (ICC = 0.926; 95% CI [0.897-0.947]). However, poor to good reliability relative to absolute agreement for VJ height (ICC = 0.665; 95% CI [0.050-0.859]) and poor to excellent reliability relative to absolute agreement for peak power
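    The peak-power estimate mentioned above relies on the Sayers equation; the coefficients below are the commonly cited ones from the general literature (worth verifying against the original source) rather than values stated in this abstract.

```python
def sayers_peak_power(jump_height_cm: float, body_mass_kg: float) -> float:
    """Sayers peak anaerobic power estimate in watts (coefficients as commonly cited)."""
    return 60.7 * jump_height_cm + 45.3 * body_mass_kg - 2055.0

# Example: a 45 cm countermovement jump by a 75 kg participant
print(f"{sayers_peak_power(45.0, 75.0):.0f} W")
```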

  2. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    Science.gov (United States)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin hypercube sampling were applied to estimate the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
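    The reliability-physics idea of two competing random variables can be illustrated with a small Monte Carlo sketch in which the human error probability is the probability that the crew's performance time exceeds the phenomenological time available; the lognormal distributions and their parameters are placeholders, not the fitted values of the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Assumed distributions (minutes); the thesis fits these from simulation and interviews
phenomenological_time = rng.lognormal(mean=np.log(60.0), sigma=0.3, size=n)
performance_time = rng.lognormal(mean=np.log(35.0), sigma=0.5, size=n)

hep = float(np.mean(performance_time > phenomenological_time))
print(f"estimated HEP for the manual core relocation step: {hep:.3e}")
```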

  3. Application of reliability centred maintenance to optimize operation and maintenance in nuclear power plants

    International Nuclear Information System (INIS)

    2007-05-01

    In order to increase Member States' capabilities in utilizing good engineering and management practices, the Agency has developed a series of Technical Documents (TECDOCs) describing best practices and Member States' experience in applying them. This TECDOC describes the concept of Reliability Centred Maintenance (RCM), which is the term used to describe a systematic approach to the evaluation, design and development of cost-effective maintenance programmes for plant and equipment. The concept has been in existence for over 25 years, originating in the civil aviation sector. This TECDOC supplements previous IAEA publications on the subject and seeks to reflect Member States' experience in the application of the principles involved. The process focuses on the functionality of the plant and equipment and the critical failure mechanisms that could result in the loss of functionality. When employed effectively, the process can result in the elimination of unnecessary maintenance activities and the identification and introduction of measures to address deficiencies in the maintenance programme. Overall, the process can result in higher levels of reliability for the plant and equipment at reduced cost and with reduced demands on finite maintenance resources. The application of the process requires interaction between the operators and the maintenance practitioners, which is often lacking in traditional maintenance programmes. The imposition of this discipline produces the added benefit of improved information flows between the key players in plant and equipment management, with the result that maintenance activities and operational practices are better informed. This publication was produced within the IAEA programme on nuclear power plant operating performance and life cycle management

  4. Tariff design for communication-capable metering systems in conjunction with time-variant electricity consumption rates. An empirical analysis of private electricity customers' preferences in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Gerpott, Torsten J.; Paukert, Mathias [Duisburg-Essen Univ., Duisburg (Germany). Lehrstuhl Unternehmens- und Technologieplanung, Schwerpunkt Telekommunikationswirtschaft

    2013-06-15

    In Germany too, communication-capable electricity metering systems (CMS), together with time-based differentiation of kWh-rates for energy consumption, are increasingly being rolled out to household customers. Nevertheless, empirical evidence with respect to preferences of members of this customer group for the design of CMS tariff elements and of time-variant electricity consumption rates is still scarce. The present study captures such preferences by means of conjoint analysis of data obtained in an online survey of 754 German-speaking adults. The examined CMS tariff elements are a one-off installation fee and monthly recurring use charges. The studied characteristics of time-based rates are the number of time/tariff blocks, the maximum spread between kWh-rates for different time windows and the adaptability/predictability of kWh-rates. Most respondents judged multidimensional CMS and electricity consumption tariff offerings mainly in light of the CMS tariff characteristics. The vast majority of the participants perceived kWh-rates that may change with a minimum lead time of one day as reducing the benefit of CMS and consumption tariff bundles. Tariff preferences were only rarely significantly related to customers' socio-demographic and electricity procurement characteristics or to their CMS-related expectations/assessments. The willingness to accept CMS-related one-off installation and recurring service charges, as well as the propensity to opt for time-dependent electricity consumption tariff variants differing clearly from non-differentiated electricity price schemes, appear to be positively affected by customers' practical experience with CMS and time-variant electricity consumption rates. Conclusions are drawn for energy suppliers seeking to propagate CMS-based time-variant tariffs among household customers in Germany and for future scholarly research. (orig.)

  5. Go-flow: a reliability analysis methodology applicable to piping system

    International Nuclear Information System (INIS)

    Matsuoka, T.; Kobayashi, M.

    1985-01-01

    Since the completion of the Reactor Safety Study, the use of probabilistic risk assessment techniques has become more widespread in the nuclear community. Several analytical methods are used for the reliability analysis of nuclear power plants. The GO methodology is one of these methods. Using the GO methodology, the authors performed a reliability analysis of the emergency decay heat removal system of the nuclear ship Mutsu, in order to examine its applicability to piping systems. Through this analysis, the authors identified some disadvantages of the GO methodology. In the GO methodology, a signal is either an on-to-off or an off-to-on signal; GO therefore identifies the time point at which the state of a system changes and cannot treat a system whose state changes as off-on-off. Several computer runs are required to obtain the time-dependent failure probability of a system. In order to overcome these disadvantages, the authors propose a new analytical methodology: GO-FLOW. In GO-FLOW, the modeling method (chart) and the calculation procedure are similar to those in the GO methodology, but the meanings of signals and time points, and the definitions of operators, are essentially different. In the paper, the GO-FLOW methodology is explained and two examples of analysis by GO-FLOW are given

  6. Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

    Energy Technology Data Exchange (ETDEWEB)

    Emery, John M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Coffin, Peter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Robbins, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Field, Richard V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jeremy Yoo, Yung Suk [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kacher, Josh [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.
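    The "coarse prediction refined by expensive simulation" idea can be illustrated generically: a cheap engineering-scale model screens the Monte Carlo samples, and only samples near the predicted failure surface are re-evaluated with a stand-in for the expensive fine-scale model. This is not Sandia's algorithm; both limit-state functions, the screening band and all numbers are invented to show the mechanics only.

```python
# Generic two-stage Monte Carlo sketch: refine only samples whose coarse margin
# lies inside a band around g = 0. Assumes coarse and fine models agree well
# outside that band; models and thresholds are illustrative inventions.
import numpy as np

rng = np.random.default_rng(0)

def coarse_limit_state(x):
    # Cheap engineering-scale estimate of the safety margin g(x); g < 0 is failure.
    return 3.0 - x[:, 0] - 0.5 * x[:, 1]

def fine_limit_state(x):
    # Stand-in for an expensive multiscale simulation (slightly different margin).
    return 3.0 - x[:, 0] - 0.5 * x[:, 1] - 0.1 * np.sin(4.0 * x[:, 0])

n = 100_000
x = rng.normal(size=(n, 2))
g_coarse = coarse_limit_state(x)

band = 0.3                                # refine only samples near g ~ 0
refine = np.abs(g_coarse) < band
g = g_coarse.copy()
g[refine] = fine_limit_state(x[refine])   # expensive calls restricted to the band

pf = np.mean(g < 0.0)
print(f"failure probability ~ {pf:.4e}, fine-scale calls: {refine.sum()} of {n}")
```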

  7. Application of reliability centered maintenance for nuclear power station in Japan

    International Nuclear Information System (INIS)

    Kumano, Haruyuki; Honda, Hironobu.

    1990-01-01

    The reliability centered maintenance (RCM) method has been widely used with good results in aviation companies in the U.S. to ensure positive preventive maintenance and management. In addition, the Electric Power Research Institute has been making studies and tests in an effort to apply the RCM method to nuclear power plants. The present report shows and discusses some results of a preliminary study aimed at the introduction of the RCM method to nuclear power plants in Japan. The history of the development and application of RCM is outlined first, and the procedure of its implementation is then described and discussed. The procedure consists of five major steps: collection of data, identification of system components, analysis of the functions of the system, selection of required tasks for preventive management, and packaging. Some actual examples of the application of RCM to nuclear power plants in the U.S. are described. And finally, the report discusses some major problems to be solved to permit the application of RCM to nuclear power plants in Japan. (N.K.)

  8. Reliability of the MDi Psoriasis® Application to Aid Therapeutic Decision-Making in Psoriasis.

    Science.gov (United States)

    Moreno-Ramírez, D; Herrerías-Esteban, J M; Ojeda-Vila, T; Carrascosa, J M; Carretero, G; de la Cueva, P; Ferrándiz, C; Galán, M; Rivera, R; Rodríguez-Fernández, L; Ruiz-Villaverde, R; Ferrándiz, L

    2017-09-01

    Therapeutic decisions in psoriasis are influenced by disease factors (e.g., severity or location), comorbidity, and demographic and clinical features. We aimed to assess the reliability of a mobile telephone application (MDi-Psoriasis) designed to help the dermatologist make decisions on how to treat patients with moderate to severe psoriasis. We analyzed interobserver agreement between the advice given by an expert panel and the recommendations of the MDi-Psoriasis application in 10 complex cases of moderate to severe psoriasis. The experts were asked their opinion on which treatments were most appropriate, possible, or inappropriate. Data from the same 10 cases were entered into the MDi-Psoriasis application. Agreement was analyzed in 3 ways: paired interobserver concordance (Cohen's κ), multiple interobserver concordance (Fleiss's κ), and percent agreement between recommendations. The mean percent agreement across the total of 1210 observations was 51.3% (95% CI, 48.5-54.1%). Cohen's κ statistic was 0.29 and Fleiss's κ was 0.28. Mean agreement between pairs of human observers only, excluding the MDi-Psoriasis recommendations, was 50.5% (95% CI, 47.6-53.5%). Paired agreement between the recommendations of the MDi-Psoriasis tool and the majority opinion of the expert panel (Cohen's κ) was 0.44 (68.2% agreement). The MDi-Psoriasis tool can generate recommendations that are comparable to those of experts in psoriasis.
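    For readers unfamiliar with the two paired-agreement statistics quoted above, the short sketch below computes percent agreement and Cohen's κ for two raters assigning the same categorical treatment labels used in the study ("appropriate", "possible", "inappropriate"). The example labels are invented, not the study's case data.

```python
# Illustrative computation of percent agreement and Cohen's kappa for paired
# categorical treatment recommendations. Labels are made-up example data.
import numpy as np

def percent_agreement(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a == b)

def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                         # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)    # chance agreement
    return (po - pe) / (1.0 - pe)

expert = ["appropriate", "possible", "inappropriate", "appropriate", "possible",
          "appropriate", "inappropriate", "possible", "appropriate", "possible"]
mdi    = ["appropriate", "appropriate", "inappropriate", "appropriate", "possible",
          "possible", "inappropriate", "possible", "appropriate", "appropriate"]

print(f"agreement = {percent_agreement(expert, mdi):.1%}, "
      f"kappa = {cohens_kappa(expert, mdi):.2f}")
```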

  9. Application of structural reliability and risk assessment to life prediction and life extension decision making

    International Nuclear Information System (INIS)

    Meyer, T.A.; Balkey, K.R.; Bishop, B.A.

    1987-01-01

    There can be numerous uncertainties involved in performing component life assessments. In addition, sufficient data may be unavailable to make a useful life prediction. Structural Reliability and Risk Assessment (SRRA) is primarily an analytical methodology or tool that quantifies the impact of uncertainties on the structural life of plant components and can address the lack of data in component life prediction. As a prelude to discussing the technical aspects of SRRA, a brief review of general component life prediction methods is first made so as to better develop an understanding of the role of SRRA in such evaluations. SRRA is then presented as it is applied in component life evaluations with example applications being discussed for both nuclear and non-nuclear components

  10. Improved mechanical reliability of MEMS electret based vibration energy harvesters for automotive applications

    International Nuclear Information System (INIS)

    Renaud, M; Goedbloed, M; De Nooijer, C; Van Schaijk, R; Fujita, T

    2014-01-01

    Current commercial wireless tire pressure monitoring systems (TPMS) require a battery as the electrical power source. The battery limits the lifetime of the TPMS. This limit can be circumvented by replacing the battery with a vibration energy harvester. Autonomous wireless TPMS powered by MEMS electret-based vibration energy harvesters have been demonstrated. A remaining technical challenge for these autonomous TPMS to attain the grade of a commercial product is the mechanical reliability of the MEMS harvester. It should survive the harsh conditions imposed by the tire environment, particularly in terms of mechanical shocks. As shown in this article, our first generation of harvesters has a shock resilience of 400 g, which is far from sufficient for the targeted application. In order to improve this aspect, several types of shock-absorbing structures are investigated. With the best proposed solution, the shock resilience of the harvesters is brought above 2500 g

  11. Reliability and applications of statistical methods based on oligonucleotide frequencies in bacterial and archaeal genomes

    DEFF Research Database (Denmark)

    Bohlin, J; Skjerve, E; Ussery, David

    2008-01-01

    with here are mainly used to examine similarities between archaeal and bacterial DNA from different genomes. These methods compare observed genomic frequencies of fixed-sized oligonucleotides with expected values, which can be determined by genomic nucleotide content, smaller oligonucleotide frequencies......, or be based on specific statistical distributions. Advantages of these statistical methods include measurements of phylogenetic relationship with relatively small pieces of DNA sampled from almost anywhere within genomes, detection of foreign/conserved DNA, and homology searches. Our aim was to explore...... the reliability and best suited applications for some popular methods, which include relative oligonucleotide frequencies (ROF), di- to hexanucleotide zeroth-order Markov methods (ZOM) and second-order Markov chain methods (MCM). Tests were performed on distant homology searches with large DNA sequences, detection...
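    The observed-versus-expected comparison at the heart of relative oligonucleotide frequency (ROF) methods is easy to sketch: count k-mers in a sequence and divide by the frequency expected from the mononucleotide composition alone (a zeroth-order Markov assumption). The toy sequence below is invented; the sketch does not reproduce the specific ROF/ZOM/MCM implementations compared in the paper.

```python
# Simple observed/expected tetranucleotide profile under a zeroth-order model.
from collections import Counter
from itertools import product

def rof_profile(seq, k=4):
    seq = seq.upper()
    mono = Counter(seq)
    total = sum(mono.values())
    base_freq = {b: mono[b] / total for b in "ACGT"}

    kmer_counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    n_kmers = sum(kmer_counts.values())

    profile = {}
    for kmer in map("".join, product("ACGT", repeat=k)):
        observed = kmer_counts[kmer] / n_kmers if n_kmers else 0.0
        expected = 1.0
        for b in kmer:                       # product of base frequencies
            expected *= base_freq.get(b, 0.0)
        profile[kmer] = observed / expected if expected > 0 else 0.0
    return profile

toy = "ATGCGCGTATATAGCGCGCATATATGCGCGCGTATAAGCGC" * 5
profile = rof_profile(toy)
top = sorted(profile.items(), key=lambda kv: kv[1], reverse=True)[:5]
print(top)   # most over-represented tetranucleotides relative to base composition
```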

  12. Computing interval-valued statistical characteristics: What is the stumbling block for reliability applications?

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, V.G.

    2009-01-01

    The application of interval-valued statistical models is often hindered by the rapid growth in imprecision that occurs when intervals are propagated through models. Is this deficiency inherent in the models? If so, what is the underlying cause of imprecision in mathematical terms? What kind...... of additional information can be incorporated to make the bounds tighter? The present paper gives an account of the source of this imprecision that prevents interval-valued statistical models from being widely applied. Firstly, the mathematical approach to building interval-valued models (discrete...... and continuous) is delineated. Secondly, a degree of imprecision is demonstrated on some simple reliability models. Thirdly, the root mathematical cause of sizeable imprecision is elucidated and, finally, a method of making the intervals tighter is described. A number of examples are given throughout the paper....
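    The imprecision growth the paper describes can be seen in a tiny example: component unavailabilities known only as intervals are pushed through a series-parallel model with naive interval arithmetic, and the output interval is wider than any input interval. The system structure and numbers below are invented for illustration and do not come from the paper.

```python
# Naive interval propagation through a small series-parallel unavailability model.
def i_mul(a, b):
    """Interval product for non-negative intervals (lo, hi)."""
    return (a[0] * b[0], a[1] * b[1])

def i_complement(a):
    """Interval for 1 - x when x lies in [lo, hi] within [0, 1]."""
    return (1.0 - a[1], 1.0 - a[0])

# Component unavailabilities, each known only to within an interval.
q1, q2, q3 = (0.01, 0.03), (0.02, 0.05), (0.01, 0.04)

# Parallel pair (1,2) in series with component 3:
# Q_sys = 1 - (1 - q1*q2) * (1 - q3)
q12 = i_mul(q1, q2)
avail = i_mul(i_complement(q12), i_complement(q3))
q_sys = i_complement(avail)

width_inputs = max(q[1] - q[0] for q in (q1, q2, q3))
print(f"system unavailability in [{q_sys[0]:.4f}, {q_sys[1]:.4f}]")
print(f"max input width {width_inputs:.4f}, output width {q_sys[1] - q_sys[0]:.4f}")
```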

  13. A note on the application of probabilistic structural reliability methodology to nuclear power plants

    International Nuclear Information System (INIS)

    Maurer, H.A.

    1978-01-01

    The interest shown in the general prospects of primary energy in European countries prompted a description of the actual European situation. An explanation of the need for installation of nuclear power plants in most countries of the European Communities is given. Activities of the Commission of the European Communities to initiate a progressive harmonization of already existing European criteria, codes and complementary requirements in order to improve the structural reliability of components and systems of nuclear power plants are summarized. Finally, the applicability of a probabilistic safety analysis to facilitate decision-making as to safety by defining acceptable target and limit values, coupled with a subjective estimate as it is applied in the safety analyses performed in most European countries, is demonstrated. (Auth.)

  14. Solving advanced multi-objective robust designs by means of multiple objective evolutionary algorithms (MOEA): A reliability application

    Energy Technology Data Exchange (ETDEWEB)

    Salazar A, Daniel E. [Division de Computacion Evolutiva (CEANI), Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Universidad de Las Palmas de Gran Canaria. Canary Islands (Spain)]. E-mail: danielsalazaraponte@gmail.com; Rocco S, Claudio M. [Universidad Central de Venezuela, Facultad de Ingenieria, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve

    2007-06-15

    This paper extends the approach proposed by the second author in [Rocco et al. Robust design using a hybrid-cellular-evolutionary and interval-arithmetic approach: a reliability application. In: Tarantola S, Saltelli A, editors. SAMO 2001: Methodological advances and useful applications of sensitivity analysis. Reliab Eng Syst Saf 2003;79(2):149-59 [special issue

  15. Application case study of AP1000 automatic depressurization system (ADS) for reliability evaluation by GO-FLOW methodology

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, Muhammad, E-mail: hashimsajid@yahoo.com; Hidekazu, Yoshikawa, E-mail: yosikawa@kib.biglobe.ne.jp; Takeshi, Matsuoka, E-mail: mats@cc.utsunomiya-u.ac.jp; Ming, Yang, E-mail: myang.heu@gmail.com

    2014-10-15

    Highlights: • Discussion of the reasons why AP1000 is equipped with an ADS system compared with a conventional PWR. • Clarification of full and partial depressurization of the reactor coolant system by the ADS system. • Application case study of the four-stage ADS system for reliability evaluation in a LBLOCA. • The GO-FLOW tool is capable of evaluating the dynamic reliability of passive safety systems. • The calculated ADS reliability result significantly increased the dynamic reliability of the PXS. - Abstract: The AP1000 nuclear power plant (NPP) uses passive means for its safety systems to ensure safety in the event of transients or severe accidents. One of the unique safety systems of AP1000 compared with a conventional PWR is the “four-stage Automatic Depressurization System (ADS)”, which works as an active safety system. In the present study, the authors first discuss the reasons why the four-stage ADS system is added in the AP1000 plant compared with a conventional PWR from the reliability point of view, and then explain the full and partial depressurization of the RCS by the four-stage ADS in events of transients and loss of coolant accidents (LOCAs). Lastly, an application case study of the four-stage ADS system of AP1000 is conducted for the reliability evaluation of the ADS system under postulated conditions of full RCS depressurization during a large-break loss of coolant accident (LBLOCA) in one of the RCS cold legs. In this case study, the reliability evaluation is made by the GO-FLOW methodology to determine the influence of the ADS system on the dynamic reliability of the passive core cooling system (PXS) of AP1000, i.e. what will happen if the ADS system fails or successfully actuates. GO-FLOW is a success-oriented reliability analysis tool capable of evaluating system reliability/unavailability as an alternative to Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). Under these specific conditions of LBLOCA, the GO-FLOW calculated reliability results indicated

  16. Optimal reliability design for over-actuated systems based on the MIT rule: Application to an octocopter helicopter testbed

    International Nuclear Information System (INIS)

    Chamseddine, Abbas; Theilliol, Didier; Sadeghzadeh, Iman; Zhang, Youmin; Weber, Philippe

    2014-01-01

    This paper addresses the problem of optimal reliability in over-actuated systems. Overloading an actuator decreases its overall lifetime and reduces its average performance over a long time. Therefore, performance and reliability are two conflicting requirements. While appropriate reliability is related to average loads, good performance is related to fast response and sufficient loads generated by actuators. Actuator redundancy allows us to address both performance and reliability at the same time by properly allocating desired loads among redundant actuators. The main contribution of this paper is the on-line optimization of the overall plant reliability according to performance objective using an MIT (Massachusetts Institute of Technology) rule-based method. The effectiveness of the proposed method is illustrated through an experimental application to an octocopter helicopter testbed

  17. Characteristics and application study of AP1000 NPPs equipment reliability classification method

    International Nuclear Information System (INIS)

    Guan Gao

    2013-01-01

    The AP1000 nuclear power plant applies an integrated approach to establish equipment reliability classification, which includes the probabilistic risk assessment technique, the maintenance rule administrative process, power production reliability classification and the functional equipment group bounding method, and eventually classifies equipment reliability into four levels. This classification process and result are very different from classical RCM and streamlined RCM. This paper studies the characteristics of the AP1000 equipment reliability classification approach, argues that equipment reliability classification should effectively support maintenance strategy development and work process control, and recommends using a combined RCM method to establish the future equipment reliability program of AP1000 nuclear power plants. (authors)

  18. Reliability assessment platform for the power semiconductor devices - Study case on 3-phase grid-connected inverter application

    DEFF Research Database (Denmark)

    Vernica, Ionut; Ma, Ke; Blaabjerg, Frede

    2017-01-01

    provide valuable reliability information based on given mission profiles and system specification is first developed and its main concept is presented. In order to facilitate the test and access to the loading and lifetime information of the power devices, a novel mission profile based stress emulator...... experimental setup is proposed and designed. The link between the stress emulator setup and the reliability tool software is highlighted. Finally, the reliability assessment platform is demonstrated on a 3-phase grid-connected inverter application study case....

  19. Investigation of low glass transition temperature on COTS PEM's reliability for space applications

    Science.gov (United States)

    Sandor, M.; Agarwal, S.; Peters, D.; Cooper, M. S.

    2003-01-01

    Plastic Encapsulated Microelectronics (PEM) reliability is affected by many factors; glass transition temperature (Tg) is one such factor. This presentation discusses issues relating to PEM reliability and the effect of epoxy mold compounds with a low glass transition temperature.

  20. Engineering reliability in design phase: An application to AP-600 reactor passive safety system

    International Nuclear Information System (INIS)

    Majumdr, D.; Siahpush, A.S.; Hills, S.W.

    1992-01-01

    A computerized reliability enhancement methodology is described that can be used at the engineering design phase to help the designer achieve a desired reliability of the system. It can take into account the limitation imposed by a constraint such as budget, space, or weight. If the desired reliability of the system is known, it can determine the minimum reliabilities of the components, or how many redundant components are needed to achieve the desired reliability. This methodology is applied to examine the Automatic Depressurization System (ADS) of the new passively safe AP-600 reactor. The safety goal of a nuclear reactor dictates a certain reliability level of its components. It is found that a series parallel valve configuration instead of the parallel-series configuration of the four valves in one stage would improve the reliability of the ADS. Other valve characteristics and arrangements are explored to examine different reliability options for the system
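    One question such a design-phase methodology answers can be sketched in a few lines: how many redundant (parallel) components of a given reliability are needed to meet a target system reliability, and what a series-parallel arrangement delivers. The numbers below are illustrative only and are not AP-600 values.

```python
# Redundancy needed for a target reliability, assuming independent identical components.
import math

def redundancy_needed(r_component, r_target):
    """Smallest n with 1 - (1 - r)**n >= R_target."""
    return math.ceil(math.log(1.0 - r_target) / math.log(1.0 - r_component))

def series_parallel(r, n_series, n_parallel):
    """Reliability of n_series stages, each a parallel group of n_parallel components."""
    stage = 1.0 - (1.0 - r) ** n_parallel
    return stage ** n_series

print(redundancy_needed(0.95, 0.9999))         # e.g. 4 valves in parallel
print(f"{series_parallel(0.95, 2, 2):.6f}")     # two series stages of parallel pairs
```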

  1. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings; models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  2. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety...... and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic...... approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...
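    As a minimal illustration of the Monte Carlo side of such calculations, the sketch below estimates a structural failure probability P(R - S < 0) for a normally distributed resistance and load; the distributions and parameters are assumed for illustration and are unrelated to the programs described in the record.

```python
# Crude Monte Carlo estimate of a structural failure probability.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

resistance = rng.normal(loc=300.0, scale=30.0, size=n)    # e.g. member strength
load = rng.normal(loc=200.0, scale=25.0, size=n)           # e.g. applied load

g = resistance - load                                       # limit state, failure if g < 0
pf = np.mean(g < 0.0)
beta = np.mean(g) / np.std(g)                               # rough reliability index
print(f"P_f ~ {pf:.2e}, approx reliability index beta ~ {beta:.2f}")
```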

  3. Key attributes of the SAPHIRE risk and reliability analysis software for risk-informed probabilistic applications

    International Nuclear Information System (INIS)

    Smith, Curtis; Knudsen, James; Kvarfordt, Kellie; Wood, Ted

    2008-01-01

    The Idaho National Laboratory is a primary developer of probabilistic risk and reliability analysis (PRRA) tools, dating back over 35 years. Evolving from mainframe-based software, the current state-of-the-practice has led to the creation of the SAPHIRE software. Currently, agencies such as the Nuclear Regulatory Commission, the National Aeronautics and Aerospace Agency, the Department of Energy, and the Department of Defense use version 7 of the SAPHIRE software for many of their risk-informed activities. In order to better understand and appreciate the power of software as part of risk-informed applications, we need to recall that our current analysis methods and solution methods have built upon pioneering work done 30-40 years ago. We contrast this work with the current capabilities in the SAPHIRE analysis package. As part of this discussion, we provide information for both the typical features and special analysis capabilities, which are available. We also present the application and results typically found with state-of-the-practice PRRA models. By providing both a high-level and detailed look at the SAPHIRE software, we give a snapshot in time for the current use of software tools in a risk-informed decision arena

  4. An application of modulated poisson processes to the reliability analysis of repairable systems

    Energy Technology Data Exchange (ETDEWEB)

    Saldanha, Pedro L.C. [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). Coordenacao de Reatores]. E-mail: saldanha@cnen.gov.br; Melo, P.F. Frutuoso e [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: frutuoso@con.ufrj.br; Noriega, Hector C. [Universidad Austral de Chile (UACh), Valdivia (Chile). Faculdad de Ciencias de la Ingeniaria]. E-mail: hnoriega@uach.cl

    2005-07-01

    This paper discusses the application of the modulated power law process (MPLP) model to the rate of occurrence of failures of active repairable systems in reliability engineering. Traditionally, two ways of modeling repairable systems with respect to maintenance policies are a pessimistic approach (the non-homogeneous Poisson process, NHPP) and a very optimistic approach (renewal processes, RP). It is important to build a generalized model that includes the characteristics and properties of both the NHPP and the RP models as particular cases. In practice, by considering the pattern of times between failures, the MPLP appears more realistic for representing the occurrence of failures of repairable systems and for deciding whether they can be modeled by a homogeneous or a non-homogeneous process. The study has shown that the model can be used to make decisions concerning the evaluation of the qualified life of plant equipment. By controlling and monitoring two of the three parameters of the MPLP model during equipment operation, it is possible to check whether and how the equipment is following the basis of its qualification process, and so identify how the effects of time, degradation and operation modes are influencing the equipment performance. The discussion is illustrated by an application to the service water pumps of a typical PWR plant. (author)
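    A feel for this class of models can be had from the simpler power-law process (Crow-AMSAA), the NHPP that the modulated power-law process generalises: the maximum-likelihood shape parameter indicates whether the failure intensity is increasing (beta > 1), decreasing (beta < 1) or roughly constant (beta ~ 1). The sketch below fits only that simpler special case, and the pump failure times are invented.

```python
# Time-truncated MLE for the power-law process intensity u(t) = lam*beta*t**(beta-1).
import numpy as np

def power_law_process_mle(failure_times, t_end):
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(t_end / t))
    lam = n / t_end ** beta
    return beta, lam

# Hypothetical cumulative operating hours at which a repairable pump failed.
times = [320.0, 810.0, 1400.0, 1750.0, 2100.0, 2330.0, 2500.0, 2610.0]
beta, lam = power_law_process_mle(times, t_end=2700.0)

rocof_now = lam * beta * 2700.0 ** (beta - 1.0)   # rate of occurrence of failures
print(f"beta = {beta:.2f} ({'deteriorating' if beta > 1 else 'improving/stable'}), "
      f"ROCOF at 2700 h = {rocof_now:.4f} per hour")
```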

  5. Application of reliability based design concepts to transmission line structure foundations. Part 2

    International Nuclear Information System (INIS)

    DiGioia, A.M. Jr.; Rojas-Gonzalez, L.F.

    1991-01-01

    The application of reliability based design (RBD) methods to transmission line structure foundations has developed somewhat more slowly than that for the other structural components in line systems. In a previous paper, a procedure was proposed for the design of transmission line structure foundations using a probability based load and resistance factor design (LRFD) format. This procedure involved the determination of a foundation strength factor, φF, which was used as a multiplier of the calculated nominal design strength to estimate the five percent exclusion limit strength required in the LRFD equation. Statistical analyses of results from full-scale load tests were used to obtain φF values applicable to various nominal design strength equations and for drilled shafts subjected to uplift loads. These results clearly illustrated the significant economic benefits of conducting more detailed subsurface investigations for the design of transmission line structure foundations. A design example was also presented. In this paper the proposed procedure is extended to laterally loaded drilled shafts

  6. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  7. Human reliability analysis of errors of commission: a review of methods and applications

    International Nuclear Information System (INIS)

    Reer, B.

    2007-06-01

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  8. Reliable, Low-Cost, Low-Weight, Non-Hermetic Coating for MCM Applications

    Science.gov (United States)

    Jones, Eric W.; Licari, James J.

    2000-01-01

    Through an Air Force Research Laboratory sponsored STM program, reliable, low-cost, low-weight, non-hermetic coatings for multi-chip-module (MCM) applications were developed. Using the combination of Sandia Laboratory ATC-01 test chips, AvanTeco's moisture sensor chips (MSCs), and silicon slices, we have shown that organic and organic/inorganic overcoatings are reliable and practical non-hermetic moisture and oxidation barriers. The use of the MSC and unpassivated ATC-01 test chips provided rapid test results and comparison of the moisture barrier quality of the overcoatings. The organic coatings studied were Parylene and Cyclotene. The inorganic coatings were Al2O3 and SiO2. The choice of coating(s) depends on the environment that the device(s) will be exposed to. We have defined four (4) classes of environments: Class I (moderate temperature/moderate humidity), Class II (high temperature/moderate humidity), Class III (moderate temperature/high humidity), and Class IV (high temperature/high humidity). By subjecting the components to adhesion, FTIR, temperature-humidity (TH), pressure cooker (PCT), and electrical tests, we have determined that it is possible to reduce failures by 50-70% for organic/inorganic-coated components compared to organic-coated components. All materials and equipment used are readily available commercially or are standard in most semiconductor fabrication lines. It is estimated that the production cost for the developed technology would range from $1-10/module, compared to $20-200 for hermetically sealed packages.

  9. Bayesian belief networks for human reliability analysis: A review of applications and gaps

    International Nuclear Information System (INIS)

    Mkrtchyan, L.; Podofillini, L.; Dang, V.N.

    2015-01-01

    The use of Bayesian Belief Networks (BBNs) in risk analysis (and in particular Human Reliability Analysis, HRA) is fostered by a number of features, attractive in fields with a shortage of data and consequent reliance on subjective judgments: the intuitive graphical representation, the possibility of combining diverse sources of information, and the use of the probabilistic framework to characterize uncertainties. In HRA, BBN applications are steadily increasing, each emphasizing a different BBN feature or a different HRA aspect to improve. This paper aims at a critical review of these features as well as at suggesting research needs. Five groups of BBN applications are analysed: modelling of organizational factors, analysis of the relationships among failure influencing factors, BBN-based extensions of existing HRA methods, dependency assessment among human failure events, and assessment of situation awareness. Further, the paper analyses the process for building BBNs and in particular how expert judgment is used in the assessment of the BBN conditional probability distributions. The gaps identified in the review suggest the need for establishing more systematic frameworks to integrate the different sources of information relevant for HRA (cognitive models, empirical data, and expert judgment) and to investigate algorithms to avoid elicitation of many relationships via expert judgment. - Highlights: • We analyze BBN uses for HRA applications, but some conclusions can be generalized. • Special review focus is on BBN building approaches, which are key for model acceptance. • Gaps relate to the transparency of the BBN building and quantification phases. • Need for a more systematic framework to integrate different sources of information. • Need for ways to avoid elicitation of many relationships via expert judgment
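    The basic mechanics a BBN brings to HRA, forward propagation and diagnostic (Bayesian) updating over a conditional probability table, can be shown with a toy two-node network. The influencing factor, the human failure event and all probabilities below are invented purely to illustrate the arithmetic.

```python
# Toy two-node BBN: an organisational factor influences a human failure event (HFE).

# Prior on the organisational factor being "poor" (e.g. poor supervision).
p_poor = 0.2

# Conditional probability of the HFE given the factor state.
p_hfe_given = {"poor": 0.15, "good": 0.02}

# Forward propagation: marginal probability of the HFE.
p_hfe = p_poor * p_hfe_given["poor"] + (1.0 - p_poor) * p_hfe_given["good"]

# Diagnostic reasoning: given that the HFE occurred, how likely was the poor state?
p_poor_given_hfe = p_poor * p_hfe_given["poor"] / p_hfe

print(f"P(HFE) = {p_hfe:.3f}, P(poor factor | HFE) = {p_poor_given_hfe:.3f}")
```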

  10. Reliability of hybrid photovoltaic DC micro-grid systems for emergency shelters and other applications

    Science.gov (United States)

    Dhere, Neelkanth G.; Schleith, Susan

    2014-10-01

    Improvement of energy efficiency in the SunSmart Schools Emergency Shelters requires new methods for optimizing the energy consumption within the shelters. One major limitation in current systems is the requirement of converting direct current (DC) power generated from the PV array into alternating current (AC) power which is distributed throughout the shelters. Oftentimes, this AC power is then converted back to DC to run certain appliances throughout the shelters resulting in a significant waste of energy due to DC to AC and then again AC to DC conversion. This paper seeks to extract the maximum value out of PV systems by directly powering essential load components within the shelters that already run on DC power without the use of an inverter and above all to make the system reliable and durable. Furthermore, additional DC applications such as LED lighting, televisions, computers and fans operated with DC brushless motors will be installed as replacements to traditional devices in order to improve efficiency and reduce energy consumption. Cost of energy storage technologies continue to decline as new technologies scale up and new incentives are put in place. This will provide a cost effective way to stabilize the energy generation of a PV system as well as to provide continuous energy during night hours. It is planned to develop a pilot program of an integrated system that can provide uninterrupted DC power to essential base load appliances (heating, cooling, lighting, etc.) at the Florida Solar Energy Center (FSEC) command center for disaster management. PV arrays are proposed to be installed on energy efficient test houses at FSEC as well as at private homes having PV arrays where the owners volunteer to participate in the program. It is also planned to monitor the performance of the PV arrays and functioning of the appliances with the aim to improve their reliability and durability. After a successful demonstration of the hybrid DC microgrid based emergency

  11. Application of genetic algorithm for reliability allocation in nuclear power plants

    International Nuclear Information System (INIS)

    Yang, Joon-Eon; Hwang, Mee-Jung; Sung, Tae-Yong; Jin, Youngho

    1999-01-01

    Reliability allocation is an optimization process of minimizing the total plant costs subject to the overall plant safety goal constraints. Reliability allocation was applied to determine the reliability characteristics of reactor systems, subsystems, major components and plant procedures that are consistent with a set of top-level performance goals: core melt frequency, acute fatalities and latent fatalities. Reliability allocation can be performed to improve the design, operation and safety of new and/or existing nuclear power plants. Reliability allocation is a difficult multi-objective as well as global optimization problem. The genetic algorithm, known as one of the most powerful tools for such optimization problems, is applied to the reliability allocation problem of a typical pressurized water reactor in this article. One of the main problems of reliability allocation is defining realistic objective functions. Hence, in order to optimize the reliability of the system, the cost of improving and/or degrading the reliability of the system should be included in the reliability allocation process. We used techniques derived from value-impact analysis to define a realistic objective function in this article
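    A very small genetic-algorithm sketch of the allocation idea is given below: choose component reliabilities for a four-component series system so that an assumed total cost is minimised while a system reliability goal is met. The cost model, bounds, penalty and GA settings are all inventions for illustration; the paper's value-impact-based objective function is not reproduced here.

```python
# Toy GA for reliability allocation on a series system (illustrative only).
import numpy as np

rng = np.random.default_rng(7)
N_COMP, R_GOAL = 4, 0.995
COST_COEF = np.array([1.0, 2.0, 1.5, 3.0])        # assumed relative cost weights
LOW, HIGH = 0.95, 0.9999

def cost(r):
    return np.sum(COST_COEF * r / (1.0 - r), axis=-1)   # cost grows as r -> 1

def fitness(pop):
    r_sys = np.prod(pop, axis=1)                          # series system reliability
    infeasible = r_sys < R_GOAL
    penalty = np.where(infeasible, 1e6 + 1e6 * (R_GOAL - r_sys), 0.0)
    return cost(pop) + penalty                            # lower is better

pop = rng.uniform(LOW, HIGH, size=(60, N_COMP))
for _ in range(300):
    f = fitness(pop)
    # Tournament selection of parents.
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Blend crossover with a shuffled copy, then Gaussian mutation.
    mates = parents[rng.permutation(len(parents))]
    alpha = rng.uniform(size=(len(parents), 1))
    children = np.clip(alpha * parents + (1 - alpha) * mates
                       + rng.normal(0.0, 0.002, parents.shape), LOW, HIGH)
    # Replace individual i by its candidate child when the child is better.
    pop = np.where(fitness(children)[:, None] < f[:, None], children, pop)

best = pop[np.argmin(fitness(pop))]
print(best.round(4), f"system R = {np.prod(best):.4f}, cost = {cost(best):.1f}")
```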

  12. Human Reliability Analysis. Applicability of the HRA-concept in maintenance shutdown

    International Nuclear Information System (INIS)

    Obenius, Aino

    2007-08-01

    work tasks. Errors and mistakes during this plant operating state may have severe consequences, both for the immediate work and for future power production. The human influence on the technical system is of great importance when analysing the LPSD condition. This should also affect the basis and performance of the analysis, to make the analysis as realistic as possible. When analysing human operation during LPSD, a holistic perspective should be used. A way to take the human abilities and performance variability into consideration is important. The study of performed human reliability analyses for the LPSD condition shows that the normative and/or descriptive approach and the linear cause-effect model are used. The main objective of HRAs performed within SPSAs is the quantification of human interaction and error frequency. Modelling of human behaviour in complex, sociotechnical systems differs in theory and practice. A reason may be that models such as the one for functional resonance are not yet applicable for practising analysts, due to a lack of well-tried methods and the fact that analysis of the LPSD condition is performed within the PSA concept, which defines the type of results sought from the HRA, i.e. probabilities of human error. LPSD analysis methods need to be further evaluated, validated and developed. The basis for the analysis should, instead of PSA, be a holistic analysis of how Man, Technology and Organization affect the system and plant safety. To achieve this, further activities could be to perform an in-depth study of existing analyses of the LPSD condition, to develop specifications of requirements for LPSD analysis, to further validate the HRA work process and to further develop practically applicable methods for human performance and variability analysis in sociotechnical systems

  13. Application of REPAS Methodology to Assess the Reliability of Passive Safety Systems

    Directory of Open Access Journals (Sweden)

    Franco Pierro

    2009-01-01

    The paper deals with the presentation of the Reliability Evaluation of Passive Safety System (REPAS) methodology developed by the University of Pisa. The general objective of REPAS is to characterize in an analytical way the performance of a passive system in order to increase the confidence in its operation and to compare the performances of active and passive systems and the performances of different passive systems. REPAS can be used in the design of passive safety systems to assess their goodness and to optimize their costs. It may also provide numerical values that can be used in more complex safety assessment studies, and it can be seen as a support to Probabilistic Safety Analysis studies. With regard to this, some examples of the application of the methodology are reported in the paper. A best-estimate thermal-hydraulic code, RELAP5, has been used to support the analyses and to model the selected systems. Probability distributions have been assigned to the uncertain input parameters through engineering judgment. The Monte Carlo method has been used to propagate uncertainties, and Wilks' formula has been taken into account to select the sample size. Failure criteria are defined in terms of non-fulfilment of the defined design targets.
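    The Wilks-formula sample-size choice mentioned above is a standard result and easy to compute: the number of code runs needed so that the largest (or m-th largest) output bounds the 95th percentile with 95% confidence. The sketch below is generic, not the paper's implementation; it should reproduce the familiar 59 and 93 runs for first and second order.

```python
# Wilks one-sided tolerance-limit sample sizes.
import math

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest n such that the order-th largest of n runs is a one-sided
    (coverage, confidence) tolerance limit."""
    n = order
    while True:
        n += 1
        # Probability that fewer than `order` runs exceed the coverage quantile.
        tail = sum(math.comb(n, k) * (1 - coverage) ** k * coverage ** (n - k)
                   for k in range(order))
        if 1.0 - tail >= confidence:
            return n

print(wilks_sample_size(order=1), wilks_sample_size(order=2))   # 59, 93
```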

  14. Human factors perspective on the reliability of NDT in nuclear applications

    International Nuclear Information System (INIS)

    Bertovic, Marija; Mueller, Christina; Fahlbruch, Babette

    2013-01-01

    A series of research studies have been conducted over the course of five years venturing into the fields of in-service inspections (ISI) in nuclear power plants (NPPs) and inspection of manufactured components to be used for permanent nuclear waste disposal. This paper will provide an overview of four research studies, present selected experimental results and suggest ways for optimization of the NDT process, procedures, and training. The experimental results have shown that time pressure and mental workload negatively influence the quality of the manual inspection performance. Noticeable were influences of the organization of the working schedule, communication, procedures, supervision, and demonstration task. Customized Failure Mode and Effects Analysis (FMEA) was used to identify potential human risks, arising during acquisition and evaluation of NDT data. Several preventive measures were suggested and furthermore discussed, with respect to problems that could arise from their application. Experimental results show that implementing human redundancy in critical tasks, such as defect identification, as well as using an automated aid (software) to help operators in decision making about the existence and size of defects, could lead to other kinds of problems, namely social loafing and automation bias that might affect the reliability of NDT in an undesired manner. Shifting focus from the operator, as the main source of errors, to the organization, as the underlying source, is a recommended approach to ensure safety. (orig.) [de

  15. Application of reliability analysis methods to the comparison of two safety circuits

    International Nuclear Information System (INIS)

    Signoret, J.-P.

    1975-01-01

    Two circuits of different design, intended to perform the ''Low Pressure Safety Injection'' function in PWR reactors, are analyzed using reliability methods. The reliability analysis of these circuits allows the fault trees to be established and the failure probabilities to be derived. The dependence of these results on testing and maintenance is emphasized, as are the critical paths. The large number of results obtained allows a well-informed choice that takes into account the reliability required for this type of circuit

  16. Application of Metric-based Software Reliability Analysis to Example Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Smidts, Carol

    2008-07-01

    The software reliability of the TELLERFAST ATM software is analyzed using two metric-based software reliability analysis methods: a state-transition-diagram-based method and a test-coverage-based method. The procedures for the software reliability analysis with the two methods and the analysis results are provided in this report. It is found that the two methods are complementary, and further research on combining them so that the software reliability analysis benefits from this complementary effect is therefore recommended

  17. Piping reliability model development, validation and its applications to light water reactor piping

    International Nuclear Information System (INIS)

    Woo, H.H.

    1983-01-01

    A brief description is provided of a three-year effort undertaken by the Lawrence Livermore National Laboratory for the piping reliability project. The ultimate goal of this project is to provide guidance for nuclear piping design so that high-reliability piping systems can be built. Based on the results studied so far, it is concluded that the reliability approach can undoubtedly help in understanding not only how to assess and improve the safety of the piping systems but also how to design more reliable piping systems

  18. NDE performance demonstration in the US nuclear power industry - applications, costs, lessons learned, and connection to NDE reliability

    International Nuclear Information System (INIS)

    Ammirato, F.

    1997-01-01

    Periodic inservice inspection (ISI) of nuclear power plant components is performed in the United States to satisfy legal commitments and to provide plant owners with reliable information for managing degradation. Performance demonstration provides credible evidence that ISI will fulfill its objectives. This paper examines the technical requirements for inspection and discusses how these technical needs are used to develop effective performance demonstration applications. NDE reliability is discussed with particular reference to its role in structural integrity assessments and its connection with performance demonstration. It is shown that the role of NDE reliability can range from very small to critical depending on the particular application and must be considered carefully in design of inspection techniques and performance demonstration programs used to qualify the inspection. Finally, the costs, benefits, and problems associated with performance demonstration are reviewed along with lessons learned from more than 15 years of performance demonstration experience in the US. (orig.)

  19. Reliability and Lifetime Prediction of Remote Phosphor Plates in Solid-State Lighting Applications Using Accelerated Degradation Testing

    NARCIS (Netherlands)

    Yazdan Mehr, M.; van Driel, W.D.; Zhang, G.Q.

    2015-01-01

    A methodology, based on accelerated degradation testing, is developed to predict the lifetime of remote phosphor plates used in solid-state lighting (SSL) applications. Both thermal stress and light intensity are used to accelerate degradation reaction in remote phosphor plates. A reliability model,
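    The thermal-stress half of such an accelerated degradation test is typically extrapolated with an Arrhenius-type acceleration factor. The sketch below is a generic illustration of that step; the activation energy, temperatures and stress-test lifetime are assumed values, not parameters from the cited study.

```python
# Arrhenius acceleration factor and a projected use-condition lifetime.
import math

K_BOLTZMANN = 8.617e-5   # eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor between stress and use temperatures."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(ea_ev=0.7, t_use_c=60.0, t_stress_c=120.0)
life_at_stress_h = 3_000.0                 # assumed hours to reach the failure criterion
print(f"AF = {af:.1f}, projected life at use conditions ~ {af * life_at_stress_h:,.0f} h")
```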

  20. Application of Reliability Analysis for Optimal Design of Monolithic Vertical Wall Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Sørensen, John Dalsgaard; Christiani, E.

    1995-01-01

    Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of some of the most important failure modes are described. The failures are sliding and slip surface failure of a rubble mound and a clay foundation. Relevant design...

  1. Derating design for optimizing reliability and cost with an application to liquid rocket engines

    International Nuclear Information System (INIS)

    Kim, Kyungmee O.; Roh, Taeseong; Lee, Jae-Woo; Zuo, Ming J.

    2016-01-01

    Derating is the operation of an item at a stress that is lower than its rated design value. Previous research has indicated that reliability can be increased from operational derating. In order to derate an item in field operation, however, an engineer must rate the design of the item at a stress level higher than the operational stress level, which increases the item's nominal failure rate and development costs. At present, there is no model available to quantify the cost and reliability that considers the design uprating as well as the operational derating. In this paper, we establish the reliability expression in terms of the derating level assuming that the nominal failure rate is constant with time for a fixed rated design value. The total development cost is expressed in terms of the rated design value and the number of tests necessary to demonstrate the reliability requirement. The properties of the optimal derating level are explained for maximizing the reliability or for minimizing the cost. As an example, the proposed model is applied to the design of liquid rocket engines. - Highlights: • Modeled the effect of derating design on the reliability and the development cost. • Discovered that derating design may reduce the cost of reliability demonstration test. • Optimized the derating design parameter for reliability maximization or cost minimization.
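    The test-number side of the trade-off discussed above can be illustrated with the classical zero-failure (success-run) demonstration count for a few candidate rated-design reliabilities; this is a textbook formula used here for illustration only, not the paper's cost model.

```python
# Zero-failure reliability demonstration: all n tests must pass.
import math

def zero_failure_tests(reliability, confidence):
    """Smallest n with 1 - reliability**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for r in (0.95, 0.99, 0.999):
    print(f"R = {r}: {zero_failure_tests(r, 0.90)} consecutive successful tests "
          f"(90% confidence)")
```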

  2. Nuclear power generating station operability assurance reliability, availability, and maintainability application for maintenance management

    International Nuclear Information System (INIS)

    Cleveland, J.W.; Regenie, T.R.; Wilson, R.J.

    1985-01-01

    Environmental qualification and equipment warrantee insurance stipulations should be supplemented with a reliable maintainability program structured to identify and control fast failing subcomponents within critical equipment. Anticipation of equipment subcomponent failures can control unnecessary plant off-line occurrences. Incorporation of reliability, availability, and maintainability considerations into plant maintenance policies on power generation and safety related items have positive cost benefit advantages

  3. An overview of the reliability prediction related aspects of high power IGBTs in wind power applications

    DEFF Research Database (Denmark)

    Busca, Christian; Teodorescu, Remus; Blaabjerg, Frede

    2011-01-01

    Reliability is becoming more and more important as the size and number of installed Wind Turbines (WTs) increases. Very high reliability is especially important for offshore WTs because the maintenance and repair of such WTs in case of failures can be very expensive. WT manufacturers need...

  4. Reliability of high mobility SiGe channel MOSFETs for future CMOS applications

    CERN Document Server

    Franco, Jacopo; Groeseneken, Guido

    2014-01-01

    Due to the ever increasing electric fields in scaled CMOS devices, reliability is becoming a showstopper for further scaled technology nodes. Although several groups have already demonstrated functional Si channel devices with aggressively scaled Equivalent Oxide Thickness (EOT) down to 5Å, a 10 year reliable device operation cannot be guaranteed anymore due to severe Negative Bias Temperature Instability. This book focuses on the reliability of the novel (Si)Ge channel quantum well pMOSFET technology. This technology is being considered for possible implementation in next CMOS technology nodes, thanks to its benefit in terms of carrier mobility and device threshold voltage tuning. We observe that it also opens a degree of freedom for device reliability optimization. By properly tuning the device gate stack, sufficiently reliable ultra-thin EOT devices with a 10 years lifetime at operating conditions are demonstrated. The extensive experimental datasets collected on a variety of processed 300mm wafers and pr...

  5. A Study on the Joint Reliability Importance with Applications to the Maintenance Policy

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jung Sik; Kwon, Hong Je; Song, Mi Ja; Kim, Woong Kil; Yoong, Do Hwa [Seoul National Polytechnic University, Seoul (Korea, Republic of); Moon, Sin Myung; Cho, Bong Je; Moon, Jae Phil; Koo, Hoon Young; Lee, Jin Seung [Seoul National University, Seoul (Korea, Republic of)

    1997-09-01

    The objective of this project is to investigate the possibility of applying the Joint Reliability Importance (JRI) of two components to the establishment of system maintenance policy. Components are classified into reliability substitutes and reliability complements. If the sign of the JRI of two components is positive, they are called reliability complements. If the sign of the JRI of two components is negative, they are called reliability substitutes. In the case of reliability complements, one component becomes more important as the other one works, and in the case of reliability substitutes, one component becomes more important as the other one fails. Therefore, when preventive maintenance is carried out, two components which are reliability substitutes should not be maintained at the same time. Also, when corrective maintenance is carried out, we not only repair the failed components but also pay attention to the functioning components which are reliability substitutes with respect to the failed components. The sign of the JRI of any two components in a series (parallel) system is positive (negative). Then, what is the sign for any two components in a k-out-of-n:G system? This project presents an idea for characterizing the k-out-of-n:G system by calculating the JRI of two components in that system, assuming that the reliabilities of all components are equal. In addition to the JRI of two components, the JRI of two gates is introduced in this project. The algorithm to compute the JRI of two gates is presented. A bridge system is considered and the correlation of two min cut sets is illustrated by using the cut-set representation of the bridge system and calculating the JRI of two gates. 28 refs., 20 tabs., 32 figs. (author)
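
    A minimal sketch of the JRI definition used above, JRI(i, j) = R(1_i, 1_j) - R(1_i, 0_j) - R(0_i, 1_j) + R(0_i, 0_j), evaluated by enumerating a Boolean structure function, is given below. The series, parallel and 2-out-of-3:G examples and the component reliabilities are illustrative only.

```python
# Minimal sketch of the Joint Reliability Importance of two components,
#   JRI(i, j) = R(x_i=1, x_j=1) - R(x_i=1, x_j=0) - R(x_i=0, x_j=1) + R(x_i=0, x_j=0),
# computed by exhaustive enumeration of a Boolean structure function.
# Example systems and component reliabilities are illustrative only.
from itertools import product

def system_reliability(structure, p, fixed=None):
    """Exact system reliability; 'fixed' forces given components to a state (0 or 1)."""
    fixed = fixed or {}
    total = 0.0
    for state in product((0, 1), repeat=len(p)):
        if any(state[k] != v for k, v in fixed.items()):
            continue  # skip states inconsistent with the conditioning
        prob = 1.0
        for k, s in enumerate(state):
            if k not in fixed:
                prob *= p[k] if s else 1.0 - p[k]
        total += prob * structure(state)
    return total

def jri(structure, p, i, j):
    r = lambda si, sj: system_reliability(structure, p, {i: si, j: sj})
    return r(1, 1) - r(1, 0) - r(0, 1) + r(0, 0)

p = [0.9, 0.8, 0.85]                           # component reliabilities (assumed)
series       = lambda s: int(all(s))
parallel     = lambda s: int(any(s))
two_of_three = lambda s: int(sum(s) >= 2)      # 2-out-of-3:G system

print("series  :", round(jri(series, p, 0, 1), 4))    # positive -> reliability complements
print("parallel:", round(jri(parallel, p, 0, 1), 4))  # negative -> reliability substitutes
print("2-of-3  :", round(jri(two_of_three, p, 0, 1), 4))
```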

  6. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    The component-based software system (CBSS) development technique is an emerging discipline that promises to take software development into a new era. Just as hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it. It is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; this is known as the usage frequency of a component, and some components are of critical usage. The usage frequency determines the weight of each component, and according to their weights the components contribute to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps find the most suitable alternative (software component) from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. Through ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and the dimensionless measurement allows components to be ranked for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
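
    To make the ratio-analysis step concrete, the sketch below ranks a few hypothetical components with crisp scores using the basic MOORA procedure (vector normalization, then benefit-minus-cost sums). The fuzzy variant proposed in the paper additionally represents ratings as fuzzy numbers, which is omitted here; all criteria and data are invented.

```python
# Minimal sketch of the MOORA ratio-analysis ranking step with crisp scores.
# Criteria, data and the benefit/cost split are hypothetical.
import numpy as np

# rows = components, columns = criteria (e.g. usage frequency, failure impact, complexity)
X = np.array([
    [0.80, 0.60, 0.30],
    [0.55, 0.90, 0.50],
    [0.70, 0.40, 0.20],
])
benefit = np.array([True, True, False])  # complexity treated as a cost criterion (assumed)

# Vector normalization: x_ij / sqrt(sum_i x_ij^2)
norm = X / np.sqrt((X ** 2).sum(axis=0))

# Ratio analysis: sum of normalized benefit criteria minus sum of cost criteria
scores = norm[:, benefit].sum(axis=1) - norm[:, ~benefit].sum(axis=1)

ranking = np.argsort(-scores)  # best (highest score) first
for rank, idx in enumerate(ranking, start=1):
    print(f"rank {rank}: component {idx}, score {scores[idx]:.3f}")
```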

  7. Effect of uncertainty on the fatigue reliability of reinforced concrete ...

    African Journals Online (AJOL)

    In this paper, a time-variant fatigue reliability analysis and an assessment of the effect of uncertainty on the serviceability of a reinforced concrete bridge deck were carried out. A simply supported 15 m bridge deck was specifically used for the investigation. Mathematical models were developed and the uncertainties in structural resistance, applied ...

  8. Improved Reliability-Based Optimization with Support Vector Machines and Its Application in Aircraft Wing Design

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2015-01-01

    A new reliability-based design optimization (RBDO) method based on support vector machines (SVM) and the Most Probable Point (MPP) is proposed in this work. SVM is used to create a surrogate model of the limit-state function at the MPP with the gradient information in the reliability analysis. This guarantees that the surrogate model not only passes through the MPP but also is tangent to the limit-state function at the MPP. Then, importance sampling (IS) is used to calculate the probability of failure based on the surrogate model. This treatment significantly improves the accuracy of reliability analysis. For RBDO, the Sequential Optimization and Reliability Assessment (SORA) is employed as well, which decouples deterministic optimization from the reliability analysis. The improved SVM-based reliability analysis is used to amend the error from the linear approximation of the limit-state function in SORA. A mathematical example and a simplified aircraft wing design demonstrate that the improved SVM-based reliability analysis is more accurate than FORM and needs fewer training points than Monte Carlo simulation, and that the proposed optimization strategy is efficient.

  9. Quantifying the role of vegetation in controlling the time-variant age of evapotranspiration, soil water and stream flow

    Science.gov (United States)

    Smith, A.; Tetzlaff, D.; Soulsby, C.

    2017-12-01

    Identifying the sources of water which sustain plant water uptake is an essential prerequisite to understanding the interactions of vegetation and water within the critical zone. Estimating the sources of root-water uptake is complicated by ecohydrological separation, or the notion of "two-water worlds" which distinguishes more mobile and immobile water sources which respectively sustain streamflow and evapotranspiration. Water mobility within the soil determines both the transit time/residence time of water through/in soils and the subsequent age of root-uptake and xylem water. We used time-variant StorAge Selection (SAS) functions to conceptualise the transit/residence times in the critical zone using a dual-storage soil column differentiating gravity (mobile) and tension dependent (immobile) water, calibrated to measured stable isotope signatures of soil water. Storage-discharge relationships [Brutsaert and Nieber, 1977] were used to identify gravity and tension dependent storages. A temporally variable distribution for root water uptake was identified using simulated stable isotopes in xylem and soil water. Composition of δ2H and δ18O was measured in soil water at 4 depths (5, 10, 15, and 20 cm) on 10 occasions, and 5 times for xylem water within the dominant heather (Calluna sp. and Erica sp.) vegetation in a Scottish Highland catchment over a two-year period. Within a 50 cm soil column, we found that more than 53% of the total stored water was water that was present before the start of the simulation. Mean residence times of the mobile water in the upper 20 cm of the soil were 16, 25, 36, and 44 days, respectively. Mean evaporation transit time varied between 9 and 40 days, driven by seasonal changes and precipitation events. Lastly, mean transit times of xylem water ranged between 95-205 days, driven by changes in soil moisture. During low soil moisture (i.e. lower than mean soil moisture), root-uptake was from lower depths, while higher than mean soil

  10. Application of fault tree analysis for customer reliability assessment of a distribution power system

    International Nuclear Information System (INIS)

    Abdul Rahman, Fariz; Varuttamaseni, Athi; Kintner-Meyer, Michael; Lee, John C.

    2013-01-01

    A new method is developed for predicting customer reliability of a distribution power system using the fault tree approach with customer weighted values of component failure frequencies and downtimes. Conventional customer reliability prediction of the electric grid employs the system average (SA) component failure frequency and downtime that are weighted by only the quantity of the components in the system. These SA parameters are then used to calculate the reliability and availability of components in the system, and eventually to find the effect on customer reliability. Although this approach is intuitive, information is lost regarding customer disturbance experiences when customer information is not utilized in the SA parameter calculations, contributing to inaccuracies when predicting customer reliability indices in our study. Hence our new approach directly incorporates customer disturbance information in component failure frequency and downtime calculations by weighting these parameters with information of customer interruptions. This customer weighted (CW) approach significantly improves the prediction of customer reliability indices when applied to our reliability model with fault tree and two-state Markov chain formulations. Our method has been successfully applied to an actual distribution power system that serves over 2.1 million customers. Our results show an improved benchmarking performance on the system average interruption frequency index (SAIFI) by 26% between the SA-based and CW-based reliability calculations. - Highlights: ► We model the reliability of a power system with fault tree and two-state Markov chain. ► We propose using customer weighted component failure frequencies and downtimes. ► Results show customer weighted values perform superior to component average values. ► This method successfully incorporates customer disturbance information into the model.
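
    The sketch below illustrates, with invented feeder data, the difference between a system-average and a customer-weighted component failure frequency and the resulting SAIFI figure. It is a simplification of the idea described above, not the paper's fault tree and two-state Markov model.

```python
# Sketch of system-average (SA) vs customer-weighted (CW) component failure
# frequencies and the resulting SAIFI prediction
# (SAIFI = total customer interruptions per year / total customers served).
# All feeder data below are hypothetical.

components = [
    # (name, failures_per_year, customers_interrupted_per_failure)
    ("feeder_A", 0.5, 1200),
    ("feeder_B", 0.2, 300),
    ("feeder_C", 0.8, 2500),
]
total_customers = 4000  # customers served by the system (assumed)

# System-average failure frequency: every component counted equally
sa_frequency = sum(f for _, f, _ in components) / len(components)

# Customer-weighted failure frequency: components weighted by the customers they affect
cw_frequency = (sum(f * c for _, f, c in components)
                / sum(c for _, _, c in components))

# SAIFI from the individual component data (interruptions per customer per year)
saifi = sum(f * c for _, f, c in components) / total_customers

print(f"SA component failure frequency : {sa_frequency:.3f} /yr")
print(f"CW component failure frequency : {cw_frequency:.3f} /yr")
print(f"Predicted SAIFI                : {saifi:.3f} interruptions/customer-yr")
```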

  11. An application of the fault tree analysis for the power system reliability estimation

    International Nuclear Information System (INIS)

    Volkanovski, A.; Cepin, M.; Mavko, B.

    2007-01-01

    The power system is a complex system with its main function to produce, transfer and provide consumers with electrical energy. Combinations of failures of components in the system can result in a failure of power delivery to certain load points and in some cases in a full blackout of power system. The power system reliability directly affects safe and reliable operation of nuclear power plants because the loss of offsite power is a significant contributor to the core damage frequency in probabilistic safety assessments of nuclear power plants. The method, which is based on the integration of the fault tree analysis with the analysis of the power flows in the power system, was developed and implemented for power system reliability assessment. The main contributors to the power system reliability are identified, both quantitatively and qualitatively. (author)

  12. Application-Driven Reliability Measures and Evaluation Tool for Fault-Tolerant Real-Time Systems

    National Research Council Canada - National Science Library

    Krishna, C

    2001-01-01

    .... The measure combines graph-theoretic concepts in evaluating the underlying reliability of the network and other means to evaluate the ability of the network to support interprocessor traffic...

  13. Durability and Reliability of Large Diameter HDPE Pipe for Water Main Applications (Web Report 4485)

    Science.gov (United States)

    Research validates HDPE as a suitable material for use in municipal piping systems, and more research may help users maximize their understanding of its durability and reliability. Overall, corrosion resistance, hydraulic efficiency, flexibility, abrasion resistance, toughness, f...

  14. Application of GO methodology in reliability analysis of offsite power supply of Daya Bay NPP

    International Nuclear Information System (INIS)

    Shen Zupei; Li Xiaodong; Huang Xiangrui

    2003-01-01

    The author applies the GO methodology to reliability analysis of the offsite power supply system of Daya Bay NPP. Direct quantitative calculation formulas for the stable reliability target of the system with shared signals and dynamic calculation formulas for the state probability of a two-state unit are derived. A method to solve the fault event sets of the system is also presented, and all the fault event sets of the outer power supply system and their failure probabilities are obtained. The resumption reliability of the offsite power supply system after a stability failure of the power grid is also calculated. The results show that the GO methodology is very simple and useful in the stable and dynamic reliability analysis of repairable systems

  15. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  16. Advances in ranking and selection, multiple comparisons, and reliability methodology and applications

    CERN Document Server

    Balakrishnan, N; Nagaraja, HN

    2007-01-01

    S. Panchapakesan has made significant contributions to ranking and selection and has published in many other areas of statistics, including order statistics, reliability theory, stochastic inequalities, and inference. Written in his honor, the twenty invited articles in this volume reflect recent advances in these areas and form a tribute to Panchapakesan's influence and impact on these areas. Thematically organized, the chapters cover a broad range of topics from: Inference; Ranking and Selection; Multiple Comparisons and Tests; Agreement Assessment; Reliability; and Biostatistics. Featuring

  17. Reliability consideration of low-power-grid-tied inverter for photovoltaic application

    OpenAIRE

    Liu, J.; Henze, N.

    2009-01-01

    In recent years PV modules have improved markedly. Excellent reliability has been validated, corresponding to a Mean Time Between Failure (MTBF) of between 500 and 6000 years in commercial utility power systems. Manufacturers can provide performance guarantees for PV modules of at least 20 years. If an average inverter lifetime of 5 years is assumed, it is evident that the overall reliability of PV systems (PVSs) with an integrated inverter is determined chiefly by the inverter i...

  18. [A reliability growth assessment method and its application in the development of equipment in space cabin].

    Science.gov (United States)

    Chen, J D; Sun, H L

    1999-04-01

    Objective. To assess and predict the reliability of equipment dynamically by making full use of the various test information generated during product development. Method. A new reliability growth assessment method based on the Army Materiel Systems Analysis Activity (AMSAA) model was developed. The method is composed of the AMSAA model and test data conversion technology. Result. The assessment and prediction results for an item of space-borne equipment conform to expectations. Conclusion. It is suggested that this method should be further researched and popularized.
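
    For orientation, the sketch below shows the standard AMSAA/Crow reliability-growth calculation that such an assessment typically builds on: failure times from a development test are fitted to an NHPP with intensity λβt^(β−1) by maximum likelihood for a time-terminated test, giving the growth parameter and the current (instantaneous) MTBF. The failure times are invented, and the paper's test data conversion step is not shown.

```python
# Standard AMSAA/Crow reliability-growth fit for a time-terminated test.
# Failure times below are invented for illustration.
import numpy as np

failure_times = np.array([35., 120., 250., 410., 700., 1050., 1600., 2300.])  # cumulative test hours
T = 2500.0                                          # total test time (time-terminated)

n = len(failure_times)
beta_hat = n / np.sum(np.log(T / failure_times))    # MLE of the growth parameter
lambda_hat = n / T ** beta_hat                      # MLE of the scale parameter

current_intensity = lambda_hat * beta_hat * T ** (beta_hat - 1.0)
current_mtbf = 1.0 / current_intensity

print(f"beta = {beta_hat:.2f} (< 1 indicates reliability growth)")
print(f"instantaneous MTBF at {T:.0f} h ~ {current_mtbf:.0f} h")
```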

  19. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and to safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed over the last 20 years and have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for the implementation of importance sampling are suggested. (author)
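
    As a hedged illustration of the variance-reduction idea mentioned above, the following sketch estimates a small exceedance probability by crude Monte Carlo and by importance sampling with a shifted sampling density. The limit state, the shift and the sample sizes are illustrative assumptions.

```python
# Estimating P(g(X) < 0) for a rare event by crude Monte Carlo and by
# importance sampling with a shifted normal density. Numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g = lambda x: 4.0 - x          # failure when x > 4, with x standard normal
p_exact = stats.norm.sf(4.0)   # ~3.2e-5, for reference

N = 100_000

# Crude Monte Carlo: very few (often zero) failure samples at this N
x = rng.standard_normal(N)
p_mc = np.mean(g(x) < 0)

# Importance sampling: draw from N(4, 1), centred near the failure boundary,
# and reweight by the ratio of the true to the sampling density
x_is = rng.normal(loc=4.0, scale=1.0, size=N)
weights = stats.norm.pdf(x_is) / stats.norm.pdf(x_is, loc=4.0, scale=1.0)
p_is = np.mean((g(x_is) < 0) * weights)

print(f"exact      : {p_exact:.2e}")
print(f"crude MC   : {p_mc:.2e}")
print(f"importance : {p_is:.2e}")
```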

  20. LIF: A new Kriging based learning function and its application to structural reliability analysis

    International Nuclear Information System (INIS)

    Sun, Zhili; Wang, Jian; Li, Rui; Tong, Cao

    2017-01-01

    The main task of structural reliability analysis is to estimate the failure probability of a studied structure, taking the randomness of input variables into account. To represent structural behavior realistically, numerical models become more and more complicated and time-consuming, which increases the difficulty of reliability analysis. Therefore, sequential design of experiment (DoE) strategies have been proposed. In this research, a new learning function, named the least improvement function (LIF), is proposed to update the DoE of Kriging-based reliability analysis methods. LIF quantifies how much the accuracy of the estimated failure probability will be improved if a given point is added to the DoE. It takes both the statistical information provided by the Kriging model and the joint probability density function of the input variables into account, which is the most important difference from existing learning functions. The maximum point of LIF is approximately determined with Markov Chain Monte Carlo (MCMC) simulation. A new reliability analysis method is developed based on the Kriging model, in which LIF, MCMC and Monte Carlo (MC) simulation are employed. Three examples are analyzed. Results show that LIF and the new method proposed in this research are very efficient when dealing with nonlinear performance functions, small probabilities, complicated limit states and engineering problems with high dimension. - Highlights: • Least improvement function (LIF) is proposed for structural reliability analysis. • LIF takes both Kriging-based statistical information and the joint PDF into account. • A reliability analysis method is constructed based on Kriging, MCS and LIF.
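
    The sketch below shows a simplified adaptive Kriging reliability loop in the spirit described above. For brevity it uses the common U-function (|μ|/σ) as the point-selection criterion rather than the LIF itself, and the limit state, sample sizes and stopping threshold are illustrative assumptions.

```python
# Simplified adaptive Kriging + Monte Carlo reliability loop (U-function
# enrichment criterion, not the LIF). Limit state and settings are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def g(x):                                    # example limit state: failure when g < 0
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

pool = rng.standard_normal((20_000, 2))      # Monte Carlo population
doe_idx = list(rng.choice(len(pool), 12, replace=False))   # initial DoE

for _ in range(30):                          # adaptive enrichment iterations
    X, y = pool[doe_idx], g(pool[doe_idx])
    gp = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 1.0]),
                                  alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(pool, return_std=True)
    u = np.abs(mu) / np.maximum(sigma, 1e-12)
    best = int(np.argmin(u))                 # most ambiguous candidate sample
    if u[best] > 2.0:                        # common stopping threshold
        break
    doe_idx.append(best)                     # evaluate the true g there next round

pf = np.mean(mu < 0)                         # failure probability from the surrogate
print(f"estimated Pf = {pf:.4f} using {len(doe_idx)} limit-state evaluations")
```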

  1. Application of objective clinical human reliability analysis (OCHRA) in assessment of technical performance in laparoscopic rectal cancer surgery.

    Science.gov (United States)

    Foster, J D; Miskovic, D; Allison, A S; Conti, J A; Ockrim, J; Cooper, E J; Hanna, G B; Francis, N K

    2016-06-01

    Laparoscopic rectal resection is technically challenging, with outcomes dependent upon technical performance. No robust objective assessment tool exists for laparoscopic rectal resection surgery. This study aimed to investigate the application of the objective clinical human reliability analysis (OCHRA) technique for assessing technical performance of laparoscopic rectal surgery and to explore the validity and reliability of this technique. Laparoscopic rectal cancer resection operations were described in the format of a hierarchical task analysis. Potential technical errors were defined. The OCHRA technique was used to identify technical errors enacted in videos of twenty consecutive laparoscopic rectal cancer resection operations from a single site. The procedural task, spatial location, and circumstances of all identified errors were logged. Clinical validity was assessed through correlation with clinical outcomes; reliability was assessed by test-retest. A total of 335 execution errors were identified, with a median of 15 per operation. More errors were observed during pelvic tasks than during abdominal tasks. The OCHRA technique can be applied to assess the technical performance of laparoscopic rectal surgery.

  2. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book begins with the question of what reliability is, covering the origin of reliability problems, the definition of reliability and the uses of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions in reliability, assumption of MTBF, processes of probability distributions, down time, maintainability and availability, breakdown maintenance and preventive maintenance, reliability design, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.

  3. Assessments and applications to enhance human reliability and reduce risk during less-than-full-power operations

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Singh, A.

    1992-01-01

    Study of events, interviews with plant personnel, and applications of risk studies indicate that the risk of a potential accident during less-than-full-power (LTFP) operation is becoming a greater fraction of the risk as improvements are made to the full-power operations. Industry efforts have been increased to reduce risk and the cost of shutdown operations. These efforts consider the development and application of advanced tools to help utilities proactively identify issues and develop contingencies and interventions to enhance reliability and reduce risk of low-power operations at nuclear power plants. The role for human reliability assessments is to help improve utility outage planning to better achieve schedule and risk control objectives. Improvements are expected to include intervention tools to identify and reduce human error, definition of new instructional modules, and prioritization of risk reduction issues for operators. The Electric Power Research Institute is sponsoring a project to address the identification and quantification of factors that affect human reliability during LTFP operation of nuclear power plants. The results of this project are expected to promote the development of proactively applied interventions and contingencies for enhanced human reliability during shutdown operations

  4. APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS

    Science.gov (United States)

    Mehran, Babak; Nakamura, Hideki

    Evaluation of impacts of congestion improvement schemes on travel time reliability is very significant for road authorities since travel time reliability represents operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes based on travel time variation modeling as a function of demand, capacity, weather conditions and road accidents. For subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five minute interval by applying the Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole year analysis is performed by comparing demand and available capacity for each scenario and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways and the buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
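
    A rough sketch of the whole-year Monte Carlo idea is given below: random demand and capacity are drawn per five-minute interval, a simple queue is tracked, travel times follow a crude speed-flow relation, and the buffer time index is reported. All distributions, the speed-flow relation and the parameter values are hypothetical stand-ins for the refined models used in the paper.

```python
# Whole-year Monte Carlo sketch for travel time reliability on one segment.
# Demand/capacity distributions, speed-flow relation and parameters are invented.
import numpy as np

rng = np.random.default_rng(7)
intervals = 365 * 24 * 12             # five-minute intervals in a year
length_km = 10.0                      # segment length (assumed)

demand = rng.normal(3800, 900, intervals).clip(min=0)      # veh/h
capacity = rng.normal(4400, 300, intervals).clip(min=1)    # veh/h (weather/accidents lumped in)

queue = np.zeros(intervals)           # vehicles stored at the bottleneck
travel_time = np.zeros(intervals)     # minutes
for t in range(1, intervals):
    queue[t] = max(0.0, queue[t - 1] + (demand[t] - capacity[t]) / 12.0)
    vc = min(demand[t] / capacity[t], 1.0)
    speed = 100.0 * (1.0 - 0.4 * vc)                 # crude speed-flow relation [km/h]
    delay = 60.0 * queue[t] / capacity[t]            # queueing delay [min]
    travel_time[t] = 60.0 * length_km / speed + delay

mean_tt = travel_time[1:].mean()
p95_tt = np.percentile(travel_time[1:], 95)
buffer_time_index = (p95_tt - mean_tt) / mean_tt     # common travel time reliability measure
print(f"mean {mean_tt:.1f} min, 95th pct {p95_tt:.1f} min, buffer time index {buffer_time_index:.2f}")
```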

  5. Reliability estimation system: its application to the nuclear geophysical sampling of ore deposits

    International Nuclear Information System (INIS)

    Khaykovich, I.M.; Savosin, S.I.

    1992-01-01

    The reliability estimation system accepted in the Soviet Union for sampling data in nuclear geophysics is based on unique requirements in metrology and methodology. It involves estimating characteristic errors in calibration, as well as errors in measurement and interpretation. This paper describes the methods of estimating the levels of systematic and random errors at each stage of the problem. The data of nuclear geophysics sampling are considered to be reliable if there are no statistically significant, systematic differences between ore intervals determined by this method and by geological control, or by other methods of sampling; the reliability of the latter having been verified. The difference between the random errors is statistically insignificant. The system allows one to obtain information on the parameters of ore intervals with a guaranteed random error and without systematic errors. (Author)

  6. Reliability and validity of a simple and clinically applicable pain stimulus

    DEFF Research Database (Denmark)

    O'Neill, Søren; Graven-Nielsen, Thomas; Manniche, Claus

    2014-01-01

    and after conditioned pain modulation by cold-pressor test (CPT). Correlation to pressure pain threshold (PPT) of the infraspinatus muscle and cold-pressor test pain intensity, time to pain onset and time to non-tolerance, was examined. Test/re-test reliability of clamp pain was also assessed...... and the stimulus-response relationship was examined with a set of 6 different clamps.Conclusions: Clamp pain was sensitive to changes in pain sensitivity provoked by conditioned pain modulation (CPM). Test/re-test reliability of the spring-clamp pain was better for healthy volunteers over a period of days, than...

  7. Study and application of human reliability analysis for digital human-system interface

    International Nuclear Information System (INIS)

    Jia Ming; Liu Yanzi; Zhang Jianbo

    2014-01-01

    Knowledge of human abilities and limitations can be applied to digital human-system interface (HSI) design through human reliability analysis (HRA) technology. Further, control room system design can then achieve a good match of man, machine and environment. This research was conducted to establish an integrated HRA method. The method identifies potential HSI design flaws which may affect human performance and cause human error. A systematic approach is then adopted to optimize the HSI. It turns out that this method is practical and objective, and effectively improves the safety, reliability and economy of nuclear power plants. The method was successfully applied to CRP1000 projects under construction, with great potential. (authors)

  8. Safe and reliable solutions for Internet application in power sector. SAT automation

    International Nuclear Information System (INIS)

    Eichelburg, W. K.

    2004-01-01

    The requirements for communication among various information systems (control systems, EMS, ERP) are continually increasing. At present, the Internet is predominantly used as a universal communication medium for interconnecting distant systems. However important communication with the outside world may be, the internal system must be protected safely and reliably. The goal of the article is to inform experienced practitioners about verified solutions for safe and reliable use of the Internet for interconnection of control systems at the supervisory level, for remote management, for diagnostics, and for interconnection of information systems. Added value is provided by solutions that use the Internet for image and sound transmission. (author)

  9. Probability of extreme interference levels computed from reliability approaches: application to transmission lines with uncertain parameters

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper deals with the risk analysis of an EMC fault using a statistical approach. It is based on reliability methods from probabilistic engineering mechanics. A computation of the probability of failure (i.e. the probability of exceeding a threshold) of a current induced by crosstalk is established by taking into account uncertainties on the input parameters influencing interference levels in the context of transmission lines. The study has allowed us to evaluate the probability of failure for the induced current by using reliability methods with a relatively low computational cost compared to Monte Carlo simulation. (authors)
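
    To illustrate the kind of reliability method referred to above, the sketch below applies a basic FORM search (most probable point in standard normal space) to a stand-in coupling model and compares it with crude Monte Carlo. The coupling model, threshold and parameter values are assumptions, not the paper's transmission-line model.

```python
# FORM-style threshold-exceedance estimate vs crude Monte Carlo for a toy
# crosstalk model. All parameters are invented for illustration.
import numpy as np
from scipy import stats, optimize

threshold = 120.0                          # mA, assumed interference threshold
mu = np.array([4.0, 0.5])                  # means of (source level, coupling factor), assumed
sd = np.array([0.6, 0.08])                 # standard deviations, assumed

def induced_current(x):                    # toy coupling model [mA]
    return 40.0 * x[0] * x[1]

def g(u):                                  # limit state in standard normal space
    return threshold - induced_current(mu + sd * u)

# FORM: find the most probable failure point (min ||u|| subject to g(u) = 0)
res = optimize.minimize(lambda u: u @ u, x0=np.ones(2),
                        constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)
pf_form = stats.norm.cdf(-beta)

# Crude Monte Carlo for reference
u_mc = np.random.default_rng(3).standard_normal((200_000, 2))
x_mc = mu + sd * u_mc
pf_mc = np.mean(40.0 * x_mc[:, 0] * x_mc[:, 1] > threshold)

print(f"FORM: beta = {beta:.2f}, Pf ~ {pf_form:.2e};  MC: Pf ~ {pf_mc:.2e}")
```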

  10. Applicability of the “Gallet equation” to the vegetation clearances of NERC Reliability Standard FAC-003-2

    Energy Technology Data Exchange (ETDEWEB)

    Kirkham, Harold

    2012-03-31

    NERC has proposed a standard to use to specify clearances between vegetation and power lines. The purpose of the rule is to reduce the probability of flashover to a calculably low level. This report was commissioned by FERC’s Office of Electrical Reliability. The scope of the study was analysis of the mathematics and documentation of the technical justification behind the application of the Gallet equation and the assumptions used in the technical reference paper

  11. Enhancing thermal reliability of fiber-optic sensors for bio-inspired applications at ultra-high temperatures

    International Nuclear Information System (INIS)

    Kang, Donghoon; Kim, Heon-Young; Kim, Dae-Hyun

    2014-01-01

    The rapid growth of bio-(inspired) sensors has led to an improvement in modern healthcare and human–robot systems in recent years. Higher levels of reliability and better flexibility, essential features of these sensors, are very much required in many application fields (e.g. applications at ultra-high temperatures). Fiber-optic sensors, and fiber Bragg grating (FBG) sensors in particular, are being widely studied as suitable sensors for improved structural health monitoring (SHM) due to their many merits. To enhance the thermal reliability of FBG sensors, thermal sensitivity, generally expressed as αf + ξf and considered a constant, should be investigated more precisely. For this purpose, the governing equation of FBG sensors is modified using differential derivatives between the wavelength shift and the temperature change in this study. Through a thermal test ranging from RT to 900 °C, the thermal sensitivity of FBG sensors is successfully examined and this guarantees thermal reliability of FBG sensors at ultra-high temperatures. In detail, αf + ξf has a non-linear dependence on temperature and varies from 6.0 × 10−6 °C−1 (20 °C) to 10.6 × 10−6 °C−1 (650 °C). Also, FBGs should be carefully used for applications at ultra-high temperatures due to signal disappearance near 900 °C. (paper)

  12. Enhancing thermal reliability of fiber-optic sensors for bio-inspired applications at ultra-high temperatures

    Science.gov (United States)

    Kang, Donghoon; Kim, Heon-Young; Kim, Dae-Hyun

    2014-07-01

    The rapid growth of bio-(inspired) sensors has led to an improvement in modern healthcare and human-robot systems in recent years. Higher levels of reliability and better flexibility, essential features of these sensors, are very much required in many application fields (e.g. applications at ultra-high temperatures). Fiber-optic sensors, and fiber Bragg grating (FBG) sensors in particular, are being widely studied as suitable sensors for improved structural health monitoring (SHM) due to their many merits. To enhance the thermal reliability of FBG sensors, thermal sensitivity, generally expressed as αf + ξf and considered a constant, should be investigated more precisely. For this purpose, the governing equation of FBG sensors is modified using differential derivatives between the wavelength shift and the temperature change in this study. Through a thermal test ranging from RT to 900 °C, the thermal sensitivity of FBG sensors is successfully examined and this guarantees thermal reliability of FBG sensors at ultra-high temperatures. In detail, αf + ξf has a non-linear dependence on temperature and varies from 6.0 × 10-6 °C-1 (20 °C) to 10.6 × 10-6 °C-1 (650 °C). Also, FBGs should be carefully used for applications at ultra-high temperatures due to signal disappearance near 900 °C.
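
    The sketch below shows how a temperature-dependent sensitivity changes the predicted wavelength shift compared with treating αf + ξf as a constant. A simple linear interpolation between the two reported end values is assumed purely for illustration (the paper reports a non-linear dependence), and the reference Bragg wavelength is hypothetical.

```python
# Wavelength shift of an FBG with temperature-dependent thermal sensitivity,
# compared against the constant-sensitivity assumption. Interpolation of the
# sensitivity and the reference wavelength are illustrative assumptions.
import numpy as np

lambda_0 = 1550.0e-9          # Bragg wavelength at the reference temperature [m] (assumed)
T_ref, T_max = 20.0, 650.0    # temperature range [degC]

def sensitivity(T):
    """(alpha_f + xi_f) in 1/degC, linearly interpolated between the reported end values."""
    return np.interp(T, [20.0, 650.0], [6.0e-6, 10.6e-6])

# Integrate d(lambda)/lambda = sensitivity(T) dT from T_ref to T_max
T = np.linspace(T_ref, T_max, 2000)
relative_shift = np.trapz(sensitivity(T), T)
delta_lambda = lambda_0 * relative_shift

# Constant-sensitivity prediction using only the room-temperature value
const_shift = 6.0e-6 * (T_max - T_ref)

print(f"temperature-dependent model: d_lambda ~ {delta_lambda*1e9:.2f} nm")
print(f"constant-sensitivity model : d_lambda ~ {lambda_0*const_shift*1e9:.2f} nm")
```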

  13. Reliability Analysis of an Extended Shock Model and Its Optimization Application in a Production Line

    Directory of Open Access Journals (Sweden)

    Renbin Liu

    2014-01-01

    some important reliability indices are derived, such as availability, failure frequency, mean vacation period, mean renewal cycle, mean startup period, and replacement frequency. Finally, a production line controlled by two cold-standby computers is modeled to present numerical illustration and its optimal part-time job policy at a maximum profit.

  14. Application of statistics to VLSI circuit manufacturing : test, diagnosis, and reliability

    NARCIS (Netherlands)

    Krishnan, Shaji

    2017-01-01

    Semiconductor product manufacturing companies strive to deliver defect free, and reliable products to their customers. However, with the down-scaling of technology, increasing the throughput at every stage of semiconductor product manufacturing becomes a harder challenge. To avoid process-related

  15. Reliability considerations of a fuel cell backup power system for telecom applications

    Science.gov (United States)

    Serincan, Mustafa Fazil

    2016-03-01

    A commercial fuel cell backup power unit is tested in real life operating conditions at a base station of a Turkish telecom operator. The fuel cell system responds to 256 of 260 electric power outages successfully, providing the required power to the base station. Reliability of the fuel cell backup power unit is found to be 98.5% at the system level. On the other hand, a qualitative reliability analysis at the component level is carried out. Implications of the power management algorithm on reliability is discussed. Moreover, integration of the backup power unit to the base station ecosystem is reviewed in the context of reliability. Impact of inverter design on the stability of the output power is outlined. Significant current harmonics are encountered when a generic inverter is used. However, ripples are attenuated significantly when a custom design inverter is used. Further, fault conditions are considered for real world case studies such as running out of hydrogen, a malfunction in the system, or an unprecedented operating scheme. Some design guidelines are suggested for hybridization of the backup power unit for an uninterrupted operation.

  16. New application of dynamic reliability assessment of the mid-loop operation

    International Nuclear Information System (INIS)

    Moosung, Jae; Goon Cherl Park; Chang Hyun Chung

    1995-01-01

    This paper presents a new approach for assessing the dynamic reliability in a complex system such as a nuclear power plant. The method is applied to a dynamic analysis of the potential accident sequences that may occur during mid-loop operation

  17. Auditing data reliability in international logistics : An application of bayesian networks

    NARCIS (Netherlands)

    Liu, L.; Daniels, H.A.M.; Triepels, R.J.M.A.; Hammoudi, S.; Maciaszek, L.; Cordeiro, J.

    2014-01-01

    Data reliability closely relates to the risk management in international logistics. Unreliable data negatively affect the business in various ways. Due to the competence specialization and cooperation among the business partners in a logistics chain, the business in a focal company is inevitably

  18. Different Approaches for Ensuring Performance/Reliability of Plastic Encapsulated Microcircuits (PEMs) in Space Applications

    Science.gov (United States)

    Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.

    2000-01-01

    Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecraft due to their lower cost, lower weight and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable, but state-of-the-art devices, has become a significant portion of the job for the parts engineer. Assembling a reliable high performance electronic system, which includes COTS components, requires that the end user assume a risk. To minimize the risk involved, companies have developed methodologies by which they use accelerated stress testing to assess the product and reduce the risk involved to the total system. Currently, there are no industry standard procedures for accomplishing this risk mitigation. This paper will present the approaches for reducing the risk of using PEM devices in space flight systems as developed by two independent Laboratories. The JPL procedure involves primarily a tailored screening with accelerated stress philosophy while the APL procedure is primarily a lot qualification procedure. Both Laboratories have successfully reduced the risk of using the particular devices for their respective systems and mission requirements.

  19. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    Science.gov (United States)

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining of the…

  20. The reliability, validity, and applicability of an English language version of the Mini-ICF-APP.

    Science.gov (United States)

    Molodynski, Andrew; Linden, Michael; Juckel, George; Yeeles, Ksenija; Anderson, Catriona; Vazquez-Montes, Maria; Burns, Tom

    2013-08-01

    This study aimed at establishing the validity and reliability of an English language version of the Mini-ICF-APP. One hundred and five patients under the care of secondary mental health care services were assessed using the Mini-ICF-APP and several well-established measures of functioning and symptom severity. 47 (45 %) patients were interviewed on two occasions to ascertain test-retest reliability and 50 (48 %) were interviewed by two researchers simultaneously to determine the instrument's inter-rater reliability. Occupational and sick leave status were also recorded to assess construct validity. The Mini-ICF-APP was found to have substantial internal consistency (Cronbach's α 0.869-0.912) and all 13 items correlated highly with the total score. Analysis also showed that the Mini-ICF-APP had good test-retest (ICC 0.832) and inter-rater (ICC 0.886) reliability. No statistically significant association with length of sick leave was found, but the unemployed scored higher on the Mini-ICF-APP than those in employment (mean 18.4, SD 9.1 vs. 9.4, SD 6.4). The Mini-ICF-APP correlated highly with the other measures of illness severity and functioning considered in the study. The English version of the Mini-ICF-APP is a reliable and valid measure of disorders of capacity as defined by the International Classification of Functioning. Further work is necessary to establish whether the scale could be divided into subscales which would allow the instrument to more sensitively measure an individual's specific impairments.

  1. Reliable real-time applications - and how to use tests to model and understand

    DEFF Research Database (Denmark)

    Jensen, Peter Krogsgaard

    Test and analysis of real-time applications, where temporal properties are inspected, analyzed, and verified in a model developed from timed traces originating from measured test results on a running application...

  2. High reliability - low noise radionuclide signature identification algorithms for border security applications

    Science.gov (United States)

    Lee, Sangkyu

    Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered one of the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging due to their weak signals and easy shielding. Great efforts worldwide are focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thus better securing the borders against nuclear threats. In general, radiation portal monitors enable detection of gamma and neutron emitting radioisotopes. Passive or active interrogation techniques, existing and/or under development, are all aimed at increasing accuracy and reliability, and at shortening interrogation time and reducing equipment cost. Equally important efforts are aimed at advancing algorithms to process the imaging data in an efficient manner, providing reliable "readings" of the interiors of examined volumes of various sizes, ranging from cargos to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis of a passive radiation detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography. One algorithm consists of gamma spectroscopy and cosmic muon tomography, and the other algorithm is based on gamma spectroscopy and gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both heavy-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy, and shielding materials can be detected using muon tomography or gamma radiography. These combined algorithms are created and analyzed based on numerically generated images of various cargo sizes and materials. In summary, the three detection

  3. A Stochastic Reliability Model for Application in a Multidisciplinary Optimization of a Low Pressure Turbine Blade Made of Titanium Aluminide

    Directory of Open Access Journals (Sweden)

    Christian Dresbach

    Currently, there are many research activities dealing with gamma titanium aluminide (γ-TiAl) alloys as new materials for low pressure turbine (LPT) blades. Even though the scatter in mechanical properties of such intermetallic alloys is more pronounced than in conventional metallic alloys, stochastic investigations on γ-TiAl alloys are very rare. For this reason, we analyzed the scatter in static and dynamic mechanical properties of the cast alloy Ti-48Al-2Cr-2Nb. It was found that this alloy shows a size effect in strength which is less pronounced than the size effect of brittle materials. A weakest-link approach is enhanced for describing a scalable size effect under multiaxial stress states and implemented in a post-processing tool for reliability analysis of real components. The presented approach is a first applicable reliability model for semi-brittle materials. The developed reliability tool was integrated into a multidisciplinary optimization of the geometry of an LPT blade. Some processes of the optimization were distributed in a wide area network, so that specialized tools for each discipline could be employed. The optimization results show that it is possible to increase the aerodynamic efficiency and the structural mechanics reliability at the same time, while ensuring the blade can be manufactured in an investment casting process.
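
    For orientation, the sketch below shows the classical Weibull weakest-link size effect on which such an approach builds: failure probability grows with stressed volume and the characteristic strength scales as (V0/V)^(1/m). The modulus and strength values are illustrative, not measured γ-TiAl data, and the paper's multiaxial extension is not represented.

```python
# Classical Weibull weakest-link size effect for uniform uniaxial stress.
# Parameters are illustrative, not measured gamma-TiAl values.
import numpy as np

m = 20.0            # Weibull modulus (assumed)
sigma_0 = 450.0     # characteristic strength of the reference volume [MPa] (assumed)
V_0 = 1.0           # reference volume

def failure_probability(sigma, V):
    """Weakest-link failure probability for uniform stress sigma over volume V."""
    return 1.0 - np.exp(-(V / V_0) * (sigma / sigma_0) ** m)

for V in (1.0, 10.0, 100.0):
    sigma_char = sigma_0 * (V_0 / V) ** (1.0 / m)   # strength at 63.2% failure probability
    print(f"V = {V:6.1f}: characteristic strength ~ {sigma_char:6.1f} MPa, "
          f"Pf(400 MPa) = {failure_probability(400.0, V):.3f}")
```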

  4. Reliability and effectiveness of early warning systems for natural hazards: Concept and application to debris flow warning

    International Nuclear Information System (INIS)

    Sättele, Martina; Bründl, Michael; Straub, Daniel

    2015-01-01

    Early Warning Systems (EWS) are increasingly applied to mitigate the risks posed by natural hazards. To compare the effect of EWS with alternative risk reduction measures and to optimize their design and operation, their reliability and effectiveness must be quantified. In the present contribution, a framework approach to the evaluation of threshold-based EWS for natural hazards is presented. The system reliability is classically represented by the Probability of Detection (POD) and Probability of False Alarms (PFA). We demonstrate how the EWS effectiveness, which is a measure of risk reduction, can be formulated as a function of POD and PFA. To model the EWS and compute the reliability, we develop a framework based on Bayesian Networks, which is further extended to a decision graph, facilitating the optimization of the warning system. In a case study, the framework is applied to the assessment of an existing debris flow EWS. The application demonstrates the potential of the framework for identifying the important factors influencing the effectiveness of the EWS and determining optimal warning strategies and system configurations. - Highlights: • Warning systems are increasingly applied measures to reduce natural hazard risks. • Bayesian Networks (BN) are powerful tools to quantify warning system reliability. • The effectiveness is defined to assess the optimality of warning systems. • By extending BNs to decision graphs, the optimal warning strategy is identified. • Sensor positioning significantly influences the effectiveness of warning systems
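
    As a highly simplified illustration of formulating effectiveness from POD and PFA, the sketch below compares expected annual consequences with and without a threshold-based warning system. The event frequency, cost figures and the crude "cry wolf" compliance penalty are invented assumptions, not the Bayesian Network model of the paper.

```python
# Toy effectiveness calculation for an early warning system from POD and the
# false-alarm rate. All frequencies, costs and the compliance penalty are assumed.
event_rate = 0.2           # debris flows per year (assumed)
damage_no_warning = 1.0e6  # consequence per undetected event (assumed, currency units)
damage_if_warned = 0.2e6   # residual consequence when a correct warning is issued (assumed)
false_alarm_cost = 2.0e4   # cost per false alarm (assumed)

pod = 0.90                 # probability of detection
pfa_per_year = 1.5         # expected number of false alarms per year
compliance = max(0.0, 1.0 - 0.1 * pfa_per_year)   # crude "cry wolf" penalty (assumed)

risk_without = event_rate * damage_no_warning
risk_with = (event_rate * (pod * compliance * damage_if_warned
                           + (1.0 - pod * compliance) * damage_no_warning)
             + pfa_per_year * false_alarm_cost)

effectiveness = (risk_without - risk_with) / risk_without   # fraction of risk removed
print(f"risk without EWS: {risk_without:,.0f}/yr, with EWS: {risk_with:,.0f}/yr, "
      f"effectiveness ~ {effectiveness:.2f}")
```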

  5. Improved Reliability of SiC Pressure Sensors for Long Term High Temperature Applications

    Science.gov (United States)

    Okojie, R. S.; Nguyen, V.; Savrun, E.; Lukco, D.

    2011-01-01

    We report advancement in the reliability of silicon carbide pressure sensors operating at 600 °C for extended periods. The large temporal drifts in zero pressure offset voltage at 600 °C observed previously were significantly suppressed to allow more reliable operation. This improvement was the result of further enhancement of the electrical and mechanical integrity of the bondpad/contact metallization, and the introduction of studded bump bonding on the pad. The stud bump contact promoted strong adhesion between the Au bond pad and the Au die-attach. The changes in the zero offset voltage and bridge resistance over time at temperature were explained by the microstructure and phase changes within the contact metallization, which were analyzed with Auger electron spectroscopy (AES) and field emission scanning electron microscopy (FE-SEM).

  6. Stochastic quasi-gradient based optimization algorithms for dynamic reliability applications

    International Nuclear Information System (INIS)

    Bourgeois, F.; Labeau, P.E.

    2001-01-01

    On one hand, PSA results are increasingly used in decision making, system management and optimization of system design. On the other hand, when severe accidental transients are considered, dynamic reliability appears appropriate to account for the complex interaction among transitions between hardware configurations, operator behavior and the dynamic evolution of the system. This paper presents an exploratory work in which the estimation of the system unreliability in a dynamic context is coupled with an optimization algorithm to determine the 'best' safety policy. Because some reliability parameters are likely to be distributed, the cost function to be minimized turns out to be a random variable. Stochastic programming techniques are therefore envisioned to determine an optimal strategy. Monte Carlo simulation is used at all stages of the computations, from the estimation of the system unreliability to that of the stochastic quasi-gradient. The optimization algorithm is illustrated on an HNO3 supply system
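
    The toy sketch below illustrates a stochastic quasi-gradient iteration of the kind referred to above: the objective (design cost plus an expected failure penalty) is only available through Monte Carlo estimates, so each step uses a noisy central-difference gradient with common random numbers and a diminishing step size. The cost model and all numbers are invented and are not the paper's safety-policy problem.

```python
# Stochastic quasi-gradient sketch: minimize an MC-estimated expected cost.
# Cost model (design cost + failure penalty under random load) is invented.
import numpy as np

rng = np.random.default_rng(42)
c_design, c_fail = 2.0, 1000.0          # cost per unit capacity / per failure (assumed)

def noisy_cost(x, load_sample):
    """MC estimate of expected cost for design capacity x under a random load sample."""
    return c_design * x + c_fail * np.mean(load_sample > x)

x = 25.0                                 # initial design capacity (assumed)
for k in range(1, 301):
    loads = rng.lognormal(mean=np.log(10.0), sigma=0.25, size=8000)  # common random numbers
    h, step = 0.5, 2.0 / k               # finite-difference width, diminishing step size
    grad = (noisy_cost(x + h, loads) - noisy_cost(x - h, loads)) / (2 * h)
    x = float(np.clip(x - step * grad, 5.0, 40.0))   # projected quasi-gradient step

print(f"approximate optimal design capacity ~ {x:.1f}")
```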

  7. Practical applications of probabilistic structural reliability analyses to primary pressure systems of nuclear power plants

    International Nuclear Information System (INIS)

    Witt, F.J.

    1980-01-01

    Primary pressure systems of nuclear power plants are built to exacting codes and standards with provisions for inservice inspection and repair if necessary. Analyses and experiments have demonstrated by deterministic means that very large margins exist on safety impacting failures under normal operating and upset conditions. Probabilistic structural reliability analyses provide additional support that failures of significance are very, very remote. They may range in degree of sophistication from very simple calculations to very complex computer analyses involving highly developed mathematical techniques. The end result however should be consistent with the desired usage. In this paper a probabilistic structural reliability analysis is performed as a supplement to in-depth deterministic evaluations with the primary objective to demonstrate an acceptably low probability of failure for the conditions considered. (author)

  8. ERP application of real-time vdc-enabled last planner system for planning reliability improvement

    DEFF Research Database (Denmark)

    Cho, S.; Sørensen, Kristian Birch; Fischer, M.

    2009-01-01

    The Last Planner System (LPS) has since its introduction in 1994 become a widely used method of AEC practitioners for improvement of planning reliability and tracking and monitoring of project progress. However, the observations presented in this paper indicate that the last planners...... and coordinators are in need of a new system that integrates the existing LPS with Virtual Design and Construction (VDC), Enterprise Resource Planning (ERP) systems, and automatic object identification by means of Radio Frequency Identification (RFID) technology. This is because current practice of the LPS...... implementations is guesswork-driven, textual report-generated, hand-updated, and even interpersonal trust-oriented, resulting in less accurate and reliable plans. This research introduces a prototype development of the VREL (VDC + RFID + ERP + LPS) integration to generate a real-time updated cost + physical...

  9. Handbook of human-reliability analysis with emphasis on nuclear power plant applications. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Swain, A D; Guttmann, H E

    1983-08-01

    The primary purpose of the Handbook is to present methods, models, and estimated human error probabilities (HEPs) to enable qualified analysts to make quantitative or qualitative assessments of occurrences of human errors in nuclear power plants (NPPs) that affect the availability or operational reliability of engineered safety features and components. The Handbook is intended to provide much of the modeling and information necessary for the performance of human reliability analysis (HRA) as a part of probabilistic risk assessment (PRA) of NPPs. Although not a design guide, a second purpose of the Handbook is to enable the user to recognize error-likely equipment design, plant policies and practices, written procedures, and other human factors problems so that improvements can be considered. The Handbook provides the methodology to identify and quantify the potential for human error in NPP tasks.

  10. Handbook of human-reliability analysis with emphasis on nuclear power plant applications. Final report

    International Nuclear Information System (INIS)

    Swain, A.D.; Guttmann, H.E.

    1983-08-01

    The primary purpose of the Handbook is to present methods, models, and estimated human error probabilities (HEPs) to enable qualified analysts to make quantitative or qualitative assessments of occurrences of human errors in nuclear power plants (NPPs) that affect the availability or operational reliability of engineered safety features and components. The Handbook is intended to provide much of the modeling and information necessary for the performance of human reliability analysis (HRA) as a part of probabilistic risk assessment (PRA) of NPPs. Although not a design guide, a second purpose of the Handbook is to enable the user to recognize error-likely equipment design, plant policies and practices, written procedures, and other human factors problems so that improvements can be considered. The Handbook provides the methodology to identify and quantify the potential for human error in NPP tasks

  11. Making literature reviews more reliable through application of lessons from systematic reviews.

    Science.gov (United States)

    Haddaway, N R; Woodcock, P; Macura, B; Collins, A

    2015-12-01

    Review articles can provide valuable summaries of the ever-increasing volume of primary research in conservation biology. Where findings may influence important resource-allocation decisions in policy or practice, there is a need for a high degree of reliability when reviewing evidence. However, traditional literature reviews are susceptible to a number of biases during the identification, selection, and synthesis of included studies (e.g., publication bias, selection bias, and vote counting). Systematic reviews, pioneered in medicine and translated into conservation in 2006, address these issues through a strict methodology that aims to maximize transparency, objectivity, and repeatability. Systematic reviews will always be the gold standard for reliable synthesis of evidence. However, traditional literature reviews remain popular and will continue to be valuable where systematic reviews are not feasible. Where traditional reviews are used, lessons can be taken from systematic reviews and applied to traditional reviews in order to increase their reliability. Certain key aspects of systematic review methods that can be used in a context-specific manner in traditional reviews include focusing on mitigating bias; increasing transparency, consistency, and objectivity, and critically appraising the evidence and avoiding vote counting. In situations where conducting a full systematic review is not feasible, the proposed approach to reviewing evidence in a more systematic way can substantially improve the reliability of review findings, providing a time- and resource-efficient means of maximizing the value of traditional reviews. These methods are aimed particularly at those conducting literature reviews where systematic review is not feasible, for example, for graduate students, single reviewers, or small organizations. © 2015 Society for Conservation Biology.

  12. Application of Cold Chain Logistics Safety Reliability in Fresh Food Distribution Optimization

    OpenAIRE

    Zou Yifeng; Xie Ruhe

    2013-01-01

    In view of the continuous decrease in the safety of fresh food during the distribution process, this study applied the safety reliability of cold chain logistics to establish a fresh food distribution routing optimization model with time windows, and solved the model using the MAX-MIN Ant System (MMAS), with a case analysis. The results show that the proposed model and algorithm can effectively solve the problem of fresh food distribution routing optimization with time windows.

  13. How simulation of failure risk can improve structural reliability - application to pressurized components and pipes

    OpenAIRE

    Cioclov, Dimitru Dragos

    2013-01-01

    Probabilistic methods for failure risk assessment are introduced, with reference to load-carrying structures such as pressure vessels (PV) and components of pipe systems. The definition of the failure risk associated with structural integrity is made in the context of the general approach to structural reliability. Sources of risk are briefly outlined, with emphasis on the variability and uncertainties (V&U) which might be encountered in the analysis. To highlight the problem, in its practical...

  14. Reliability characteristics and applicability of a repeated sprint ability test in male young soccer players

    DEFF Research Database (Denmark)

    Castagna, Carlo; Francini, Lorenzo; Krustrup, Peter

    2018-01-01

    The aim of this study was to examine the usefulness and reliability characteristics of a repeated sprint ability test considering 5 line sprints of 30-m interspersed with 30-s of active recovery in non-elite outfield young male soccer players. Twenty-six (age 14.9±1.2 years, height 1.72±0.12 m......, body mass 62.2±5.1 kg) players were tested 48 hours and 7 days apart for 5x30-m performance over 5 trials (T1-T5). Short- (T1-T2) and long-term reliability (T1-T3-T4-T5) were assessed with Intraclass Correlation Coefficient (ICC) and with typical error for measurement (TEM). Short- and long...... study revealed that the 5x30-m sprint test is a reliable field test in the short and long-term when the sum of sprint times and the best sprint performance are considered as outcome variables. Sprint performance decrement variables showed large variability across trials....

  15. Reliability Centered Maintenance (RCM) Methodology and Application to the Shutdown Cooling System for APR-1400 Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Faragalla, Mohamed M.; Emmanuel, Efenji; Alhammadi, Ibrahim; Awwal, Arigi M.; Lee, Yong Kwan [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2016-10-15

    Shutdown Cooling System (SCS) is a safety-related system that is used in conjunction with the Main Steam and Main or Auxiliary Feedwater Systems to reduce the temperature of the Reactor Coolant System (RCS) in post-shutdown periods from the hot shutdown operating temperature to the refueling temperature. In this paper, the RCM methodology is applied to the SCS. The RCM analysis is performed based on evaluation of Failure Modes, Effects and Criticality Analysis (FME and CA) at the component, system and plant levels. The Logic Tree Analysis (LTA) is used to determine the optimum maintenance tasks. The main objectives of RCM are to ensure safety, preserve system function, provide cost-effective maintenance of the plant components, and increase reliability and availability. The RCM methodology is useful for improving equipment reliability by strengthening the management of equipment condition, and it leads to a significant decrease in the amount of periodic maintenance, an extended maintenance cycle, a longer useful life of equipment, and a decrease in overall maintenance cost. It also focuses on the safety of the system by assigning a criticality index to the various components and further selecting maintenance activities based on the risk of failure involved. Therefore, it can be said that RCM introduces a maintenance plan designed for maximum safety in an economical manner while making the system more reliable. For the SCP, increasing the number of condition monitoring tasks will improve its availability. It is recommended to reduce the number of periodic maintenance activities.

  16. Reliability analysis of prestressed concrete containment structures

    International Nuclear Information System (INIS)

    Jiang, J.; Zhao, Y.; Sun, J.

    1993-01-01

    The reliability analysis of prestressed concrete containment structures subjected to combinations of static and dynamic loads with consideration of uncertainties of structural and load parameters is presented. Limit state probabilities for given parameters are calculated using the procedure developed at BNL, while those with consideration of parameter uncertainties are calculated by a fast integration method for time-variant structural reliability. The limit state surface of the prestressed concrete containment is constructed with the prestress incorporated directly. The sensitivities of the Cholesky decomposition matrix and the natural vibration characteristics are calculated by simplified procedures. (author)

  17. Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly.

    Science.gov (United States)

    Brouillette, Robert M; Foil, Heather; Fontenot, Stephanie; Correro, Anthony; Allen, Ray; Martin, Corby K; Bruce-Keller, Annadora J; Keller, Jeffrey N

    2013-01-01

    While considerable knowledge has been gained through the use of established cognitive and motor assessment tools, there is a considerable interest and need for the development of a battery of reliable and validated assessment tools that provide real-time and remote analysis of cognitive and motor function in the elderly. Smartphones appear to be an obvious choice for the development of these "next-generation" assessment tools for geriatric research, although to date no studies have reported on the use of smartphone-based applications for the study of cognition in the elderly. The primary focus of the current study was to assess the feasibility, reliability, and validity of a smartphone-based application for the assessment of cognitive function in the elderly. A total of 57 non-demented elderly individuals were administered a newly developed smartphone application-based Color-Shape Test (CST) in order to determine its utility in measuring cognitive processing speed in the elderly. Validity of this novel cognitive task was assessed by correlating performance on the CST with scores on widely accepted assessments of cognitive function. Scores on the CST were significantly correlated with global cognition (Mini-Mental State Exam: r = 0.515, p<0.0001) and multiple measures of processing speed and attention (Digit Span: r = 0.427, p<0.0001; Trail Making Test: r = -0.651, p<0.00001; Digit Symbol Test: r = 0.508, p<0.0001). The CST was not correlated with naming and verbal fluency tasks (Boston Naming Test, Vegetable/Animal Naming) or memory tasks (Logical Memory Test). Test re-test reliability was observed to be significant (r = 0.726; p = 0.02). Together, these data are the first to demonstrate the feasibility, reliability, and validity of using a smartphone-based application for the purpose of assessing cognitive function in the elderly. The importance of these findings for the establishment of smartphone-based assessment batteries of cognitive and motor function in the elderly is discussed.

  18. Validation and Reliability of a Smartphone Application for the International Prostate Symptom Score Questionnaire: A Randomized Repeated Measures Crossover Study

    Science.gov (United States)

    Shim, Sung Ryul; Sun, Hwa Yeon; Ko, Young Myoung; Chun, Dong-Il; Yang, Won Jae

    2014-01-01

    Background Smartphone-based assessment may be a useful diagnostic and monitoring tool for patients. There have been many attempts to create a smartphone diagnostic tool for clinical use in various medical fields but few have demonstrated scientific validity. Objective The purpose of this study was to develop a smartphone application of the International Prostate Symptom Score (IPSS) and to demonstrate its validity and reliability. Methods From June 2012 to May 2013, a total of 1581 male participants (≥40 years old), with or without lower urinary tract symptoms (LUTS), visited our urology clinic via the health improvement center at Soonchunhyang University Hospital (Republic of Korea) and were enrolled in this study. A randomized repeated measures crossover design was employed using a smartphone application of the IPSS and the conventional paper form of the IPSS. Paired t test under a hypothesis of non-inferior trial was conducted. For the reliability test, the intraclass correlation coefficient (ICC) was measured. Results The total score of the IPSS (P=.289) and each item of the IPSS (P=.157-1.000) showed no differences between the paper version and the smartphone version of the IPSS. The mild, moderate, and severe LUTS groups showed no differences between the two versions of the IPSS. A significant correlation was noted in the total group (ICC=.935, Psmartphones could participate. Conclusions The validity and reliability of the smartphone application version were comparable to the conventional paper version of the IPSS. The smartphone application of the IPSS could be an effective method for measuring lower urinary tract symptoms. PMID:24513507

  19. Validity and intra-rater reliability of an android phone application to measure cervical range-of-motion.

    Science.gov (United States)

    Quek, June; Brauer, Sandra G; Treleaven, Julia; Pua, Yong-Hao; Mentiplay, Benjamin; Clark, Ross Allan

    2014-04-17

    Concurrent validity and intra-rater reliability of a customized Android phone application to measure cervical-spine range-of-motion (ROM) have not been previously established against a gold-standard three-dimensional motion analysis (3DMA) system. Twenty-one healthy individuals (age:31 ± 9.1 years, male:11) participated, with 16 re-examined for intra-rater reliability 1-7 days later. An Android phone was fixed on a helmet, which was then securely fastened on the participant's head. Cervical-spine ROM in flexion, extension, lateral flexion and rotation were performed in sitting with concurrent measurements obtained from both a 3DMA system and the phone. The phone demonstrated moderate to excellent (ICC = 0.53-0.98, Spearman ρ = 0.52-0.98) concurrent validity for ROM measurements in cervical flexion, extension, lateral-flexion and rotation. However, cervical rotation demonstrated both proportional and fixed bias. Excellent intra-rater reliability was demonstrated for cervical flexion, extension and lateral flexion (ICC = 0.82-0.90), but poor for right- and left-rotation (ICC = 0.05-0.33) using the phone. Possible reasons for the outcome are that flexion, extension and lateral-flexion measurements are detected by gravity-dependent accelerometers while rotation measurements are detected by the magnetometer, which can be adversely affected by surrounding magnetic fields. The results of this study demonstrate that the tested Android phone application is valid and reliable to measure ROM of the cervical-spine in flexion, extension and lateral-flexion but not in rotation, likely due to magnetic interference. The clinical implication of this study is that therapists should be mindful of the plane of measurement when using the Android phone to measure ROM of the cervical-spine.

  20. On the q-Weibull distribution for reliability applications: An adaptive hybrid artificial bee colony algorithm for parameter estimation

    International Nuclear Information System (INIS)

    Xu, Meng; Droguett, Enrique López; Lins, Isis Didier; Chagas Moura, Márcio das

    2017-01-01

    The q-Weibull model is based on the Tsallis non-extensive entropy and is able to model various behaviors of the hazard rate function, including bathtub curves, by using a single set of parameters. Despite its flexibility, the q-Weibull has not been widely used in reliability applications partly because of the complicated parameters estimation. In this work, the parameters of the q-Weibull are estimated by the maximum likelihood (ML) method. Due to the intricate system of nonlinear equations, derivative-based optimization methods may fail to converge. Thus, the heuristic optimization method of artificial bee colony (ABC) is used instead. To deal with the slow convergence of ABC, an adaptive hybrid ABC (AHABC) algorithm is proposed that dynamically combines the Nelder-Mead simplex search method with ABC for the ML estimation of the q-Weibull parameters. Interval estimates for the q-Weibull parameters, including confidence intervals based on the ML asymptotic theory and on bootstrap methods, are also developed. The AHABC is validated via numerical experiments involving the q-Weibull ML for reliability applications and results show that it produces faster and more accurate convergence when compared to ABC and similar approaches. The estimation procedure is applied to real reliability failure data characterized by a bathtub-shaped hazard rate. - Highlights: • Development of an Adaptive Hybrid ABC (AHABC) algorithm for q-Weibull distribution. • AHABC combines local Nelder-Mead simplex method with ABC to enhance local search. • AHABC efficiently finds the optimal solution for the q-Weibull ML problem. • AHABC outperforms ABC and self-adaptive hybrid ABC in accuracy and convergence speed. • Useful model for reliability data with non-monotonic hazard rate.
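
    Editorial note: the following is a minimal sketch, not the authors' AHABC algorithm. It fits a q-Weibull model to synthetic failure times by maximum likelihood using SciPy's plain Nelder-Mead simplex search; the density form shown in the comments is the commonly cited q-Weibull expression and, together with the synthetic data and starting values, is an assumption to be checked against the source.

```python
# Hedged sketch: ML fit of a q-Weibull model with SciPy's Nelder-Mead simplex search.
# Assumed density (commonly cited form, not taken from the paper):
#   f(t) = (2 - q) * (beta / eta) * (t / eta)**(beta - 1) * expq(-(t / eta)**beta)
import numpy as np
from scipy.optimize import minimize

def expq(x, q):
    """q-exponential: [1 + (1 - q) x]^(1/(1 - q)) where the bracket is positive, else 0."""
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0.0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

def neg_log_lik(params, t):
    beta, eta, q = params
    if beta <= 0.0 or eta <= 0.0 or q >= 2.0 or abs(q - 1.0) < 1e-6:
        return 1e12                                   # penalty outside the admissible region
    f = (2.0 - q) * (beta / eta) * (t / eta) ** (beta - 1.0) * expq(-(t / eta) ** beta, q)
    if np.any(f <= 0.0):
        return 1e12
    return -np.sum(np.log(f))

# Illustrative synthetic data (assumption): ordinary Weibull failure times.
rng = np.random.default_rng(0)
t_obs = 100.0 * rng.weibull(1.5, size=200)

res = minimize(neg_log_lik, x0=[1.0, float(np.mean(t_obs)), 1.1],
               args=(t_obs,), method="Nelder-Mead")
print("beta, eta, q =", res.x, " -logL =", res.fun)
```

    In the paper the simplex step is hybridized with an artificial bee colony search to escape poor local optima; the sketch above shows only the likelihood and the local search.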

  1. Mechanical Integrity Issues at MCM-Cs for High Reliability Applications

    International Nuclear Information System (INIS)

    Morgenstern, H.A.; Tarbutton, T.J.; Becka, G.A.; Uribe, F.; Monroe, S.; Burchett, S.

    1998-01-01

    During the qualification of a new high reliability low-temperature cofired ceramic (LTCC) multichip module (MCM), two issues relating to the electrical and mechanical integrity of the LTCC network were encountered while performing qualification testing. One was electrical opens after aging tests that were caused by cracks in the solder joints. The other was fracturing of the LTCC networks during mechanical testing. Through failure analysis, computer modeling, bend testing, and test samples, changes were identified. Upon implementation of all these changes, the modules passed testing, and the MCM was placed into production

  2. Application of Artificial Intelligence technology to the analysis and synthesis of reliable software systems

    Science.gov (United States)

    Wild, Christian; Eckhardt, Dave

    1987-01-01

    The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtably involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of the formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction and machine learning through the use of examples. On-going research into the role of knowledge representation and problem solving in the process of developing software is also discussed.

  3. The application of two recently developed human reliability techniques to cognitive error analysis

    International Nuclear Information System (INIS)

    Gall, W.

    1990-01-01

    Cognitive error can lead to catastrophic consequences for manned systems, including those whose design renders them immune to the effects of physical slips made by operators. Four such events, pressurized water and boiling water reactor accidents which occurred recently, were analysed. The analysis identifies the factors which contributed to the errors and suggests practical strategies for error recovery or prevention. Two types of analysis were conducted: an unstructured analysis based on the analyst's knowledge of psychological theory, and a structured analysis using two recently-developed human reliability analysis techniques. In general, the structured techniques required less effort to produce results and these were comparable to those of the unstructured analysis. (author)

  4. An application of possibilistic reliability theory to a subsystem of a nuclear power plant

    International Nuclear Information System (INIS)

    Capelle, B.; Kerre, E.

    1994-01-01

    Since the late seventies, multi-state structure functions have been introduced to overcome the shortcomings of the classical, binary approach to the structural aspects of systems and their components. The uncertainty about the state of a system and its components is classically described by probability theory and possibility theory. The spread and success of these models, however, highly depend upon the development of computer tools that allow the reliability engineer to apply these new methods rather easily. In this paper, the CARA tool, which is able to represent and study the multi-state aspects of systems and their components when only incomplete information is available, is presented

  5. Application of a methodology for the development and validation of reliable process control software

    International Nuclear Information System (INIS)

    Ramamoorthy, C.V.; Mok, Y.R.; Bastani, F.B.; Chin, G.

    1980-01-01

    The necessity of a good methodology for the development of reliable software, especially with respect to the final software validation and testing activities, is discussed. A formal specification development and validation methodology is proposed. This methodology has been applied to the development and validation of a pilot software, incorporating typical features of critical software for nuclear power plants safety protection. The main features of the approach include the use of a formal specification language and the independent development of two sets of specifications. 1 ref

  6. Application of Kaplan-Meier analysis in reliability evaluation of products cast from aluminium alloys

    OpenAIRE

    J. Szymszal; A. Gierek; J. Kliś

    2010-01-01

    The article evaluates the reliability of AlSi17CuNiMg alloys using a Kaplan-Meier-based technique, very popular as a survival estimation tool in medical science. The main object of survival analysis is a group (or groups) of units for which the time to occurrence of an event (failure), taking place after some waiting time, is estimated. For example, in medicine, the failure can be a patient’s death. In this study, the failure was the specimen fracture during a periodic fatigue test, while the ...

  7. Reliable tool life measurements in turning - an application to cutting fluid efficiency evaluation

    DEFF Research Database (Denmark)

    Axinte, Dragos A.; Belluco, Walter; De Chiffre, Leonardo

    2001-01-01

    The paper proposes a method to obtain reliable measurements of tool life in turning, discussing some aspects related to experimental procedure and measurement accuracy. The method (i) allows an experimental determination of the extended Taylor's equation with a limited set of experiments and (ii) ... provides efficiency evaluation. Six cutting oils, five of which were formulated from vegetable basestock, were evaluated in turning. Experiments were run over a range of cutting parameters, according to a 2^(3-1) factorial design, machining AISI 316L stainless steel with coated carbide tools. Tool life...

  8. Matrix-based system reliability method and applications to bridge networks

    International Nuclear Information System (INIS)

    Kang, W.-H.; Song Junho; Gardoni, Paolo

    2008-01-01

    Using a matrix-based system reliability (MSR) method, one can estimate the probabilities of complex system events by simple matrix calculations. Unlike existing system reliability methods whose complexity depends highly on that of the system event, the MSR method describes any general system event in a simple matrix form and therefore provides a more convenient way of handling the system event and estimating its probability. Even in the case where one has incomplete information on the component probabilities and/or the statistical dependence thereof, the matrix-based framework enables us to estimate the narrowest bounds on the system failure probability by linear programming. This paper presents the MSR method and applies it to a transportation network consisting of bridge structures. The seismic failure probabilities of bridges are estimated by use of the predictive fragility curves developed by a Bayesian methodology based on experimental data and existing deterministic models of the seismic capacity and demand. Using the MSR method, the probability of disconnection between each city/county and a critical facility is estimated. The probability mass function of the number of failed bridges is computed as well. In order to quantify the relative importance of bridges, the MSR method is used to compute the conditional probabilities of bridge failures given that there is at least one city disconnected from the critical facility. The bounds on the probability of disconnection are also obtained for cases with incomplete information
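
    Editorial note: a minimal sketch of the matrix idea described above, not the authors' implementation. It enumerates the mutually exclusive component-state vectors of a small bridge network, builds the system-event vector c and the probability vector p, and evaluates the system event probability as c^T p; the five-link layout and the link failure probabilities are assumptions.

```python
# Hedged sketch of the matrix-based system reliability (MSR) idea: with independent
# components, enumerate the mutually exclusive component states, form the event vector
# c (1 where "source disconnected from sink" holds) and the state-probability vector p,
# and evaluate P(system event) = c . p.  Layout and probabilities are illustrative.
from itertools import product
import numpy as np

# Bridge network: nodes s, a, b, t; links (s,a), (s,b), (a,b), (a,t), (b,t)
links = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
p_fail = np.array([0.1, 0.1, 0.05, 0.1, 0.1])    # assumed link failure probabilities

def connected(up):
    """True if s and t are connected using only the surviving links."""
    reach = {"s"}
    changed = True
    while changed:
        changed = False
        for ok, (u, v) in zip(up, links):
            if ok and ((u in reach) ^ (v in reach)):
                reach |= {u, v}
                changed = True
    return "t" in reach

states = list(product([1, 0], repeat=len(links)))             # MECE sample space (2^5 states)
p = np.array([np.prod([1 - pf if s else pf for s, pf in zip(st, p_fail)])
              for st in states])                               # state probabilities
c = np.array([0 if connected(st) else 1 for st in states])     # disconnection event vector

print("P(s-t disconnected) =", c @ p)                          # matrix/vector form c^T p
```

    For incomplete information on component probabilities, the paper bounds the system probability by linear programming over the same vectors; the sketch only covers the complete-information case.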

  9. Application of systems engineering techniques (reliability, availability, maintainability, and dollars) to the Gas Centrifuge Enrichment Plant

    International Nuclear Information System (INIS)

    Boylan, J.G.; DeLozier, R.C.

    1982-01-01

    The systems engineering function for the Gas Centrifuge Enrichment Plant (GCEP) covers system requirements definition, analyses, verification, technical reviews, and other system efforts necessary to assure good balance of performance, safety, cost, and scheduling. The systems engineering function will support the design, installation, start-up, and operational phases of GCEP. The principal objectives of the systems engineering function are to: assure that the system requirements of the GCEP process are adequately specified and documented and that due consideration and emphasis are given to all aspects of the project; provide system analyses of the designs as they progress to assure that system requirements are met and that GCEP interfaces are compatible; assist in the definition of programs for the necessary and sufficient verification of GCEP systems; and integrate reliability, maintainability, logistics, safety, producibility, and other related specialties into a total system effort. This paper addresses the GCEP reliability, availability, maintainability, and dollars (RAM dollars) analyses which are the primary systems engineering tools for the development and implementation of trade-off studies. These studies are basic to reaching cost-effective project decisions. The steps necessary to achieve optimum cost-effective design are shown

  10. Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly.

    Directory of Open Access Journals (Sweden)

    Robert M Brouillette

    Full Text Available While considerable knowledge has been gained through the use of established cognitive and motor assessment tools, there is a considerable interest and need for the development of a battery of reliable and validated assessment tools that provide real-time and remote analysis of cognitive and motor function in the elderly. Smartphones appear to be an obvious choice for the development of these "next-generation" assessment tools for geriatric research, although to date no studies have reported on the use of smartphone-based applications for the study of cognition in the elderly. The primary focus of the current study was to assess the feasibility, reliability, and validity of a smartphone-based application for the assessment of cognitive function in the elderly. A total of 57 non-demented elderly individuals were administered a newly developed smartphone application-based Color-Shape Test (CST in order to determine its utility in measuring cognitive processing speed in the elderly. Validity of this novel cognitive task was assessed by correlating performance on the CST with scores on widely accepted assessments of cognitive function. Scores on the CST were significantly correlated with global cognition (Mini-Mental State Exam: r = 0.515, p<0.0001 and multiple measures of processing speed and attention (Digit Span: r = 0.427, p<0.0001; Trail Making Test: r = -0.651, p<0.00001; Digit Symbol Test: r = 0.508, p<0.0001. The CST was not correlated with naming and verbal fluency tasks (Boston Naming Test, Vegetable/Animal Naming or memory tasks (Logical Memory Test. Test re-test reliability was observed to be significant (r = 0.726; p = 0.02. Together, these data are the first to demonstrate the feasibility, reliability, and validity of using a smartphone-based application for the purpose of assessing cognitive function in the elderly. The importance of these findings for the establishment of smartphone-based assessment batteries

  11. A perspective on Human Reliability Analysis (HRA) and studies on the application of HRA to Indian Pressurised Heavy Water Reactors

    International Nuclear Information System (INIS)

    Subramaniam, K.; Saraf, R.K.; Sanyasi Rao, V.V.S.; Venkat Raj, V.; Venkatraman, R.

    2000-05-01

    Probabilistic studies of risks show that the human factor contributes significantly to overall risk. The potential for and mechanisms of human error to affect plant risk and safety are evaluated by Human Reliability Analysis (HRA). HRA has quantitative and qualitative aspects, both of which are useful for Human Factors Engineering (HFE), which aims at designing operator interfaces that will minimise operator error and provide for error detection and recovery capability. HRA therefore has to be conducted as an integrated activity in support of PSA and HFE design. The objectives of HRA, therefore, are to assure that potential effects on plant safety and reliability are analysed and that human actions that are important to plant risk are identified so that they can be addressed in both PSA and plant design. This report is in two parts. The first part presents a comprehensive overview of HRA. It attempts to provide an understanding of how human failures are incorporated into PSA models and how HRA is performed. The focus is on the HRA process, frameworks, techniques and models. The second part begins with a discussion on the application of HRA to IPHWRs and then continues with the presentation of three specific HRA case studies. This work was carried out by the working group on HRA constituted by AERB. Part of the work was done under the aegis of the IAEA Coordinated Research Programme (CRP) on collection and classification of human reliability data and use in PSA - Research contract No. 8239/RB. (author)

  12. Application of Bayesian Belief networks to the human reliability analysis of an oil tanker operation focusing on collision accidents

    International Nuclear Information System (INIS)

    Martins, Marcelo Ramos; Maturana, Marcos Coelho

    2013-01-01

    During the last three decades, several techniques have been developed for the quantitative study of human reliability. In the 1980s, techniques were developed to model systems by means of binary trees, which did not allow for the representation of the context in which human actions occur. Thus, these techniques cannot model the representation of individuals, their interrelationships, and the dynamics of a system. These issues make the improvement of methods for Human Reliability Analysis (HRA) a pressing need. To eliminate or at least attenuate these limitations, some authors have proposed modeling systems using Bayesian Belief Networks (BBNs). The application of these tools is expected to address many of the deficiencies in current approaches to modeling human actions with binary trees. This paper presents a methodology based on BBN for analyzing human reliability and applies this method to the operation of an oil tanker, focusing on the risk of collision accidents. The obtained model was used to determine the most likely sequence of hazardous events and thus isolate critical activities in the operation of the ship to study Internal Factors (IFs), Skills, and Management and Organizational Factors (MOFs) that should receive more attention for risk reduction.
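
    Editorial note: a minimal sketch of the Bayesian Belief Network idea the record describes, computed by direct marginalization in plain Python. The node names, structure, and probability values are illustrative assumptions, not the authors' oil-tanker model.

```python
# Hedged sketch of a tiny BBN fragment: a management/organizational factor (MOF) and an
# internal factor (IF) influence an operator error, which in turn influences a collision.
# All node names and probabilities are illustrative assumptions.
from itertools import product

p_mof = {True: 0.2, False: 0.8}      # P(degraded management and organizational factor)
p_if  = {True: 0.3, False: 0.7}      # P(degraded internal factor, e.g. fatigue)

# Conditional probability table: P(operator error = True | MOF, IF)
p_err = {(True, True): 0.30, (True, False): 0.10,
         (False, True): 0.08, (False, False): 0.01}

# Conditional probability table: P(collision = True | operator error)
p_col = {True: 0.05, False: 0.001}

# Exact inference by brute-force marginalization over all parent configurations.
p_collision = 0.0
for mof, int_f, err in product([True, False], repeat=3):
    p_err_given_parents = p_err[(mof, int_f)] if err else 1.0 - p_err[(mof, int_f)]
    p_collision += p_mof[mof] * p_if[int_f] * p_err_given_parents * p_col[err]

print(f"P(collision) = {p_collision:.5f}")
```

    A full model of the kind described in the record would have many more nodes and would typically be evaluated with a dedicated BBN engine rather than by explicit enumeration, but the marginalization step is the same.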

  13. Reliability database of IEA-R1 Brazilian research reactor: Applications to the improvement of installation safety

    International Nuclear Information System (INIS)

    Oliveira, P.S.P.; Tondin, J.B.M.; Martins, M.O.; Yovanovich, M.; Ricci Filho, W.

    2010-01-01

    In this paper, the main features of the reliability database being developed at Ipen-Cnen/SP for the IEA-R1 reactor are briefly described. In addition, the process for collecting and updating data regarding operation, failure and maintenance of IEA-R1 reactor components is presented. These activities have been conducted by the reactor personnel under the supervision of specialists in Probabilistic Safety Analysis (PSA). The compilation of data and subsequent calculation are based on the procedures defined during an IAEA Coordinated Research Project in which Brazil took part from 2001 to 2004. In addition to component reliability data, the database stores data on accident initiating events and human errors. Furthermore, this work discusses the experience acquired through the development of the reliability database, covering aspects like improvements in the reactor records as well as the application of the results to the optimization of operation and maintenance procedures and to the PSA carried out for the IEA-R1 reactor. (author)

  14. Application of high efficiency and reliable 3D-designed integral shrouded blades to nuclear turbines

    International Nuclear Information System (INIS)

    Watanabe, Eiichiro; Ohyama, Hiroharu; Tashiro, Hikaru; Sugitani, Toshiro; Kurosawa, Masaru

    1998-01-01

    Mitsubishi Heavy Industries, Ltd. has recently developed new blades for nuclear turbines, in order to achieve higher efficiency and higher reliability. The 3D aerodynamic design for 41 inch and 46 inch blades, their one piece structural design (integral-shrouded blades: ISB), and the verification test results using a model steam turbine are described in this paper. The predicted efficiency and lower vibratory stress have been verified. Based on these 60Hz ISB, 50Hz ISB series are under development using 'the law of similarity' without changing their thermodynamic performance and mechanical stress levels. Our 3D-designed reaction blades which are used for the high pressure and low pressure upstream stages, are also briefly mentioned. (author)

  15. Computing interval-valued reliability measures: application of optimal control methods

    DEFF Research Database (Denmark)

    Kozin, Igor; Krymsky, Victor

    2017-01-01

    The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular, Pontryagin’s principle of maximum to solve the non-linear optimisation problem and derive...... the probabilistic interval-valued quantities of interest. It is proven that the optimisation problem can be translated into another problem statement that can be solved on the class of piecewise continuous probability density functions (pdfs). This class often consists of piecewise exponential pdfs which appear...... as soon as among the constraints there are bounds on a failure rate of a component under consideration. Finding the number of switching points of the piecewise continuous pdfs and their values becomes the focus of the approach described in the paper. Examples are provided....

  16. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    Science.gov (United States)

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior employs two position-updating strategies, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
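
    Editorial note: a minimal sketch of the objective such a meta-heuristic optimizes, not the NAFSA algorithm itself: a series system of parallel (redundant) subsystems whose reliability is maximized under a cost budget. Component reliabilities, costs, and the budget are illustrative assumptions.

```python
# Hedged sketch of the reliability-redundancy allocation objective: a series system of
# subsystems, each hardened by placing n_i identical components in parallel.  Only the
# objective and a toy brute-force search are shown; data are illustrative assumptions.
import numpy as np

r = np.array([0.80, 0.85, 0.90, 0.75])      # assumed single-component reliabilities
cost = np.array([3.0, 4.0, 2.0, 5.0])       # assumed per-component costs
budget = 40.0

def system_reliability(n):
    """Series system of parallel groups: R = prod_i (1 - (1 - r_i)**n_i)."""
    return float(np.prod(1.0 - (1.0 - r) ** n))

def feasible(n):
    return float(np.dot(cost, n)) <= budget

# Brute-force search over small redundancy levels, just to illustrate the objective a
# swarm/evolutionary method would explore on a much larger scale.
best = None
for idx in np.ndindex(4, 4, 4, 4):
    n = np.array(idx) + 1                   # at least one component per subsystem
    if feasible(n) and (best is None or system_reliability(n) > best[1]):
        best = (n.copy(), system_reliability(n))

print("best allocation:", best[0], "system reliability:", round(best[1], 4))
```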

  17. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES, PART TWO: APPLICABILITY OF CURRENT METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman

    2012-10-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no U.S. nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  18. Some improvements on adaptive genetic algorithms for reliability-related applications

    International Nuclear Information System (INIS)

    Ye Zhisheng; Li Zhizhong; Xie Min

    2010-01-01

    Adaptive genetic algorithms (GAs) have been shown to be able to improve GA performance in reliability-related optimization studies. However, there are different ways to implement adaptive GAs, some of which are even in conflict with each other. In this study, a simple parameter-adjusting method using mean and variance of each generation is introduced. This method is used to compare two of such conflicting adaptive GA methods: GAs with increasing mutation rate and decreasing crossover rate and GAs with decreasing mutation rate and increasing crossover rate. The illustrative examples indicate that adaptive GAs with decreasing mutation rate and increasing crossover rate finally yield better results. Furthermore, a population disturbance method is proposed to avoid local optimum solutions. This idea is similar to exotic migration to a tribal society. To solve the problem of large solution space, a variable roughening method is also embedded into GA. Two case studies are presented to demonstrate the effectiveness of the proposed method.
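
    Editorial note: a minimal sketch of a per-generation parameter-adjustment rule of the kind the record describes, driven by the mean and variance of the generation's fitness. The specific mapping below (mutation falls and crossover rises as the population converges, matching the scheme the study found better) is an illustrative assumption, not the authors' exact formula.

```python
# Hedged sketch: adapt GA mutation/crossover rates from the fitness mean and spread of
# the current generation.  The mapping is an illustrative assumption.
import numpy as np

def adapt_rates(fitness, pm_bounds=(0.01, 0.30), pc_bounds=(0.60, 0.95)):
    """Return (mutation rate, crossover rate) for the next generation.

    Diverse generations (large relative spread) get a high mutation rate and a low
    crossover rate; as the spread shrinks the rates move the other way, i.e. the
    'decreasing mutation / increasing crossover' scheme the study favours.
    """
    mean, std = np.mean(fitness), np.std(fitness)
    spread = float(np.clip(std / (abs(mean) + 1e-12), 0.0, 1.0))  # crude diversity measure
    pm = pm_bounds[0] + (pm_bounds[1] - pm_bounds[0]) * spread
    pc = pc_bounds[1] - (pc_bounds[1] - pc_bounds[0]) * spread
    return pm, pc

# Example: a nearly converged population yields low mutation and high crossover.
print(adapt_rates(np.array([0.97, 0.98, 0.98, 0.99])))
```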

  19. Some improvements on adaptive genetic algorithms for reliability-related applications

    Energy Technology Data Exchange (ETDEWEB)

    Ye Zhisheng, E-mail: yez@nus.edu.s [Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119 260 (Singapore); Li Zhizhong [Department of Industrial Engineering, Tsinghua University, beijing 100084 (China); Xie Min [Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119 260 (Singapore)

    2010-02-15

    Adaptive genetic algorithms (GAs) have been shown to be able to improve GA performance in reliability-related optimization studies. However, there are different ways to implement adaptive GAs, some of which are even in conflict with each other. In this study, a simple parameter-adjusting method using mean and variance of each generation is introduced. This method is used to compare two of such conflicting adaptive GA methods: GAs with increasing mutation rate and decreasing crossover rate and GAs with decreasing mutation rate and increasing crossover rate. The illustrative examples indicate that adaptive GAs with decreasing mutation rate and increasing crossover rate finally yield better results. Furthermore, a population disturbance method is proposed to avoid local optimum solutions. This idea is similar to exotic migration to a tribal society. To solve the problem of large solution space, a variable roughening method is also embedded into GA. Two case studies are presented to demonstrate the effectiveness of the proposed method.

  20. A Study on the Thermomechanical Reliability Risks of Through-Silicon-Vias in Sensor Applications

    Directory of Open Access Journals (Sweden)

    Shuai Shao

    2017-02-01

    Full Text Available Reliability risks for two different types of through-silicon-vias (TSVs) are discussed in this paper. The first is a partially-filled copper TSV, in which the copper layer covers the side walls and bottom. A polymer is used to fill the rest of the cavity. Stresses in risk sites are studied and ranked for this TSV structure by FEA modeling. Parametric studies for material properties (modulus and thermal expansion) of the TSV polymer are performed. The second type is a high aspect ratio TSV filled by polycrystalline silicon (poly Si). Potential risks of the voids in the poly Si due to filling defects are studied. Fracture mechanics methods are utilized to evaluate the risk for two different assembly conditions: package assembled to printed circuit board (PCB) and package assembled to flexible substrate. The effect of board/substrate/die thickness and the size and location of the void are discussed.

  1. Commercial Off-The-Shelf (COTS) Parts Risk and Reliability User and Application Guide

    Science.gov (United States)

    White, Mark

    2017-01-01

    All COTS parts are not created equal. Because they are not created equal, the notion that one can force the commercial industry to follow a set of military specifications and standards, along with the certifications, audits and qualification commitments that go with them, is unrealistic for the sale of a few parts. The part technologies that are Defense Logistics Agency (DLA) certified or Military Specification (MS) qualified, are several generations behind the state-of-the-art high-performance parts that are required for the compact, higher performing systems for the next generation of spacecraft and instruments. The majority of the part suppliers are focused on the portion of the market that is producing high-tech commercial products and systems. To that end, in order to compete in the high performance and leading edge advanced technological systems, an alternative approach to risk assessment and reliability prediction must be considered.

  2. Can simple mobile phone applications provide reliable counts of respiratory rates in sick infants and children? An initial evaluation of three new applications.

    Science.gov (United States)

    Black, James; Gerdtz, Marie; Nicholson, Pat; Crellin, Dianne; Browning, Laura; Simpson, Julie; Bell, Lauren; Santamaria, Nick

    2015-05-01

    Respiratory rate is an important sign that is commonly either not recorded or recorded incorrectly. Mobile phone ownership is increasing even in resource-poor settings. Phone applications may improve the accuracy and ease of counting of respiratory rates. The study assessed the reliability and initial users' impressions of four mobile phone respiratory timer approaches, compared to a 60-second count by the same participants. Three mobile applications (applying four different counting approaches plus a standard 60-second count) were created using the Java Mobile Edition and tested on Nokia C1-01 phones. Apart from the 60-second timer application, the others included a counter based on the time for ten breaths, and three based on the time interval between breaths ('Once-per-Breath', in which the user presses for each breath and the application calculates the rate after 10 or 20 breaths, or after 60s). Nursing and physiotherapy students used the applications to count respiratory rates in a set of brief video recordings of children with different respiratory illnesses. Limits of agreement (compared to the same participant's standard 60-second count), intra-class correlation coefficients and standard errors of measurement were calculated to compare the reliability of the four approaches, and a usability questionnaire was completed by the participants. There was considerable variation in the counts, with large components of the variation related to the participants and the videos, as well as the methods. None of the methods was entirely reliable, with no limits of agreement better than -10 to +9 breaths/min. Some of the methods were superior to the others, with ICCs from 0.24 to 0.92. By ICC the Once-per-Breath 60-second count and the Once-per-Breath 20-breath count were the most consistent, better even than the 60-second count by the participants. The 10-breath approaches performed least well. Users' initial impressions were positive, with little difference between the
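
    Editorial note: a minimal sketch of the two counting approaches described above (a rate from the time taken for a fixed number of breaths, and a "Once-per-Breath" rate from tap intervals), plus a Bland-Altman limits-of-agreement helper. Function names and the example data are illustrative, not the apps' actual code.

```python
# Hedged sketch of the rate calculations described in the record; all names and numbers
# are illustrative assumptions.
import numpy as np

def rate_from_n_breaths(n_breaths, elapsed_seconds):
    """E.g. the 'time for ten breaths' approach: breaths/min = 60 * n / t."""
    return 60.0 * n_breaths / elapsed_seconds

def rate_from_taps(tap_times_s):
    """'Once-per-Breath': one tap per breath; rate from the mean inter-tap interval."""
    intervals = np.diff(np.asarray(tap_times_s, dtype=float))
    return 60.0 / intervals.mean()

def limits_of_agreement(method_a, method_b):
    """Bland-Altman 95% limits of agreement between two sets of counts."""
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

# Example: taps roughly every 1.5 s -> about 40 breaths/min.
taps = np.cumsum(np.full(21, 1.5))
print(round(rate_from_taps(taps), 1), "breaths/min")
print(rate_from_n_breaths(10, 15.0), "breaths/min from a 10-breath timing")
```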

  3. Time-variant coherence between heart rate variability and EEG activity in epileptic patients: an advanced coupling analysis between physiological networks

    International Nuclear Information System (INIS)

    Piper, D; Schiecke, K; Pester, B; Witte, H; Benninger, F; Feucht, M

    2014-01-01

    Time-variant coherence analysis between the heart rate variability (HRV) and the channel-related envelopes of adaptively selected EEG components was used as an indicator for the occurrence of (correlative) couplings between the central autonomic network (CAN) and the epileptic network before, during and after epileptic seizures. Two groups of patients were investigated, a group with left and a group with right hemispheric temporal lobe epilepsy. The individual EEG components were extracted by a signal-adaptive approach, the multivariate empirical mode decomposition, and the envelopes of each resulting intrinsic mode function (IMF) were computed by using Hilbert transform. Two IMFs, whose envelopes were strongly correlated with the HRV’s low-frequency oscillation (HRV-LF; ≈0.1 Hz) before and after the seizure were identified. The frequency ranges of these IMFs correspond to the EEG delta-band. The time-variant coherence was statistically quantified and tensor decomposition of the time-frequency coherence maps was applied to explore the topography-time-frequency characteristics of the coherence analysis. Results allow the hypothesis that couplings between the CAN, which controls the cardiovascular-cardiorespiratory system, and the ‘epileptic neural network’ exist. Additionally, our results confirm the hypothesis of a right hemispheric lateralization of sympathetic cardiac control of the HRV-LF. (paper)
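
    Editorial note: a minimal sketch of two steps in the pipeline described above: extracting the amplitude envelope of an oscillatory component with the Hilbert transform and computing its spectral coherence with a 0.1 Hz HRV-LF oscillation. The signals are synthetic, no empirical mode decomposition is performed, and a plain Welch-type coherence is used instead of the paper's time-variant estimator.

```python
# Hedged sketch: Hilbert envelope of a component and its coherence with an HRV-LF signal.
# Synthetic data and a stationary (Welch) coherence are assumptions for illustration only.
import numpy as np
from scipy.signal import hilbert, coherence

rng = np.random.default_rng(0)
fs = 10.0                                   # Hz (assumed common resampling rate)
t = np.arange(0, 600, 1 / fs)               # 10 minutes of data

lf = 0.1                                    # HRV low-frequency oscillation (~0.1 Hz)
hrv = np.sin(2 * np.pi * lf * t) + 0.3 * rng.standard_normal(t.size)

# A delta-band-like component whose amplitude is modulated at ~0.1 Hz.
carrier = np.sin(2 * np.pi * 2.0 * t)       # 2 Hz carrier
eeg_comp = (1 + 0.5 * np.sin(2 * np.pi * lf * t)) * carrier + 0.3 * rng.standard_normal(t.size)

envelope = np.abs(hilbert(eeg_comp))        # Hilbert amplitude envelope of the component

f, Cxy = coherence(hrv, envelope, fs=fs, nperseg=1024)
print("coherence near 0.1 Hz:", float(Cxy[np.argmin(np.abs(f - lf))].round(2)))
```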

  4. Interference Cancellation Using Replica Signal for HTRCI-MIMO/OFDM in Time-Variant Large Delay Spread Longer Than Guard Interval

    Directory of Open Access Journals (Sweden)

    Yuta Ida

    2012-01-01

    Full Text Available Orthogonal frequency division multiplexing (OFDM) and multiple-input multiple-output (MIMO) are generally known as effective techniques for high data rate services. In MIMO/OFDM systems, the channel estimation (CE) is very important to obtain accurate channel state information (CSI). However, since the orthogonal pilot-based CE requires a large number of pilot symbols, the total transmission rate is degraded. To mitigate this problem, a high time resolution carrier interferometry (HTRCI) for MIMO/OFDM has been proposed. In wireless communication systems, if the maximum delay spread is longer than the guard interval (GI), the system performance is significantly degraded due to the intersymbol interference (ISI) and intercarrier interference (ICI). However, the conventional HTRCI-MIMO/OFDM does not consider the case with the time-variant large delay spread longer than the GI. In this paper, we propose the ISI and ICI compensation methods for a HTRCI-MIMO/OFDM in the time-variant large delay spread longer than the GI.

  5. Application of simple approximate system analysis methods for reliability and availability improvement of reactor WWER-1000

    International Nuclear Information System (INIS)

    Manchev, B.; Marinova, B.; Nenkova, B.

    2001-01-01

    The method described in this report provides a set of simple, easily understood 'approximate' models applicable to a large class of system architectures. The approximation models are developed by constructing a Markov model of each redundant subsystem and then replacing it with a pseudo-component. Of equal importance, the models can be easily understood even by non-experts, including managers, high-level decision-makers and unsophisticated consumers. A necessary requirement for their application is that the systems be repairable and that the mean time to repair be much smaller than the mean time to failure. This is the case most often met in practice. Results of applying the 'approximate' models to a technological system of Kozloduy NPP are also presented; they compare quite favorably with results obtained using the SAPHIRE software
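
    Editorial note: a minimal sketch of the step described above: model a 1-out-of-2 repairable subsystem as a small Markov chain, solve for the steady state, and summarise it as a pseudo-component with an equivalent unavailability. The failure and repair rates are illustrative assumptions; as the record notes, the approximation presumes the mean time to repair is much smaller than the mean time to failure.

```python
# Hedged sketch: steady-state availability of a two-component active-redundant repairable
# subsystem (states = number of failed components), then its pseudo-component summary.
import numpy as np

lam = 1.0e-3     # per-component failure rate [1/h] (assumed)
mu  = 1.0e-1     # per-component repair rate  [1/h] (assumed; MTTR = 10 h << MTTF = 1000 h)

# Generator matrix Q for states 0, 1, 2 failed components (independent repair teams).
Q = np.array([[-2 * lam,       2 * lam,     0.0],
              [      mu, -(mu + lam),       lam],
              [     0.0,       2 * mu,  -2 * mu]])

# Steady state: pi Q = 0 with sum(pi) = 1 -> solve the augmented linear system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

unavailability = pi[2]                        # both components down
print("pseudo-component unavailability ~", f"{unavailability:.3e}")
print("quick check, (lam/mu)^2       ~", f"{(lam / mu) ** 2:.3e}")
```

    The closing comparison shows why the MTTR << MTTF condition matters: the exact steady-state unavailability is then close to the simple (lam/mu)^2 approximation that the pseudo-component carries into the system-level model.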

  6. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It covers the definition and importance of reliability, the development of reliability engineering, the failure rate and failure probability density function and their types, CFR and the exponential distribution, IFR and the normal and Weibull distributions, maintainability and movability, reliability testing and reliability estimation for the exponential, normal and Weibull distribution types, reliability sampling tests, system reliability, design for reliability, and functional failure analysis by FTA.
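
    Editorial note: the topics listed above map onto a few standard relationships; the sketch below evaluates the reliability function, failure density, and hazard rate for the constant-failure-rate (exponential) and Weibull models. Parameter values are illustrative assumptions.

```python
# Hedged sketch of standard reliability relationships: R(t), f(t) and h(t) = f(t)/R(t)
# for the exponential (CFR) and Weibull models.  Parameter values are illustrative.
import numpy as np

def exponential(t, lam):
    R = np.exp(-lam * t)                      # CFR model: hazard is constant, h(t) = lam
    f = lam * R
    return R, f, f / R

def weibull(t, beta, eta):
    R = np.exp(-(t / eta) ** beta)            # beta > 1 -> increasing failure rate (wear-out)
    f = (beta / eta) * (t / eta) ** (beta - 1) * R
    return R, f, f / R

t = np.array([10.0, 100.0, 500.0])
print("exponential R, h:", exponential(t, lam=1e-3)[0].round(3), exponential(t, lam=1e-3)[2])
print("Weibull     R, h:", weibull(t, beta=2.0, eta=300.0)[0].round(3),
      weibull(t, beta=2.0, eta=300.0)[2].round(5))
```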

  7. Multidisciplinary framework for human reliability analysis with an application to errors of commission and dependencies

    International Nuclear Information System (INIS)

    Barriere, M.T.; Luckas, W.J.; Wreathall, J.; Cooper, S.E.; Bley, D.C.; Ramey-Smith, A.

    1995-08-01

    Since the early 1970s, human reliability analysis (HRA) has been considered to be an integral part of probabilistic risk assessments (PRAs). Nuclear power plant (NPP) events, from Three Mile Island through the mid-1980s, showed the importance of human performance to NPP risk. Recent events demonstrate that human performance continues to be a dominant source of risk. In light of these observations, the current limitations of existing HRA approaches become apparent when the role of humans is examined explicitly in the context of real NPP events. The development of new or improved HRA methodologies to more realistically represent human performance is recognized by the Nuclear Regulatory Commission (NRC) as a necessary means to increase the utility of PRAS. To accomplish this objective, an Improved HRA Project, sponsored by the NRC's Office of Nuclear Regulatory Research (RES), was initiated in late February, 1992, at Brookhaven National Laboratory (BNL) to develop an improved method for HRA that more realistically assesses the human contribution to plant risk and can be fully integrated with PRA. This report describes the research efforts including the development of a multidisciplinary HRA framework, the characterization and representation of errors of commission, and an approach for addressing human dependencies. The implications of the research and necessary requirements for further development also are discussed

  8. Tumor Heterogeneity: Mechanisms and Bases for a Reliable Application of Molecular Marker Design

    Science.gov (United States)

    Diaz-Cano, Salvador J.

    2012-01-01

    Tumor heterogeneity is a confusing finding in the assessment of neoplasms, potentially resulting in inaccurate diagnostic, prognostic and predictive tests. This tumor heterogeneity is not always a random and unpredictable phenomenon, whose knowledge helps designing better tests. The biologic reasons for this intratumoral heterogeneity would then be important to understand both the natural history of neoplasms and the selection of test samples for reliable analysis. The main factors contributing to intratumoral heterogeneity inducing gene abnormalities or modifying its expression include: the gradient ischemic level within neoplasms, the action of tumor microenvironment (bidirectional interaction between tumor cells and stroma), mechanisms of intercellular transference of genetic information (exosomes), and differential mechanisms of sequence-independent modifications of genetic material and proteins. The intratumoral heterogeneity is at the origin of tumor progression and it is also the byproduct of the selection process during progression. Any analysis of heterogeneity mechanisms must be integrated within the process of segregation of genetic changes in tumor cells during the clonal expansion and progression of neoplasms. The evaluation of these mechanisms must also consider the redundancy and pleiotropism of molecular pathways, for which appropriate surrogate markers would support the presence or not of heterogeneous genetics and the main mechanisms responsible. This knowledge would constitute a solid scientific background for future therapeutic planning. PMID:22408433

  9. Research on application of technique for analyzing system reliability, GO-FLOW

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Fukuto, Junji; Sugasawa, Shinobu; Mitomo, Nobuo; Miyazaki, Keiko; Hirao, Yoshihiro; Kobayashi, Michiyuki

    1997-01-01

    Probabilistic safety assessment (PSA) has been introduced as an evaluation method in the nuclear power field and has begun to play an important role in plant design and safety examination. At the Ship Research Institute, research on developing the GO-FLOW technique, a system reliability analysis technique that forms the main part of PSA and has various advanced functions, has been carried out. In this research, the functions of the GO-FLOW technique are improved: a function for analyzing the dynamic behavior of systems, an analysis function for combinations of the physical behavior of systems and changes in probabilistic events, and a function for extracting the main accident sequences using the GO-FLOW technique are developed. For the analysis of dynamic behavior, a sample problem on a hold-up tank was investigated. For the extraction of the main accident sequences, the fundamental part of the event tree analysis function was consolidated, and a function for setting branching probabilities was provided. For the indication of plant behavior, a simulator for the improved marine reactor MRX was developed. (K.I.)

  10. Nordic perspectives on safety management in high reliability organizations: Theory and applications

    International Nuclear Information System (INIS)

    Svenson, Ola; Salo, I.; Sjerve, A.B.; Reiman, T.; Oedewald, P.

    2006-04-01

    The chapters in this volume are written on a stand-alone basis, meaning that the chapters can be read in any order. The first 4 chapters focus on theory and method in general with some applied examples illustrating the methods and theories. Chapters 5 and 6 are about safety management in the aviation industry with some additional information about incident reporting in the aviation industry and the health care sector. Chapters 7 through 9 cover safety management with applied examples from the nuclear power industry and with considerable validity for safety management in any industry. Chapters 10 through 12 cover generic safety issues with examples from the oil industry and chapter 13 presents issues related to organizations with different internal organizational structures. Although many of the chapters use a specific industry to illustrate safety management, the messages in all the chapters are of importance for safety management in any high reliability industry or risky activity. The interested reader is also referred to, e.g., a document by an international NEA group (SEGHOF), which is about to publish a state of the art report on Systematic Approaches to Safety Management (cf., CSNI/NEA/SEGHOF, home page: www.nea.fr). (au)

  11. Nordic perspectives on safety management in high reliability organizations: Theory and applications

    Energy Technology Data Exchange (ETDEWEB)

    Svenson, Ola; Salo, I; Sjerve, A B; Reiman, T; Oedewald, P [Stockholm Univ. (Sweden)

    2006-04-15

    The chapters in this volume are written on a stand-alone basis, meaning that the chapters can be read in any order. The first 4 chapters focus on theory and method in general with some applied examples illustrating the methods and theories. Chapters 5 and 6 are about safety management in the aviation industry with some additional information about incident reporting in the aviation industry and the health care sector. Chapters 7 through 9 cover safety management with applied examples from the nuclear power industry and with considerable validity for safety management in any industry. Chapters 10 through 12 cover generic safety issues with examples from the oil industry and chapter 13 presents issues related to organizations with different internal organizational structures. Although many of the chapters use a specific industry to illustrate safety management, the messages in all the chapters are of importance for safety management in any high reliability industry or risky activity. The interested reader is also referred to, e.g., a document by an international NEA group (SEGHOF), which is about to publish a state of the art report on Systematic Approaches to Safety Management (cf., CSNI/NEA/SEGHOF, home page: www.nea.fr). (au)

  12. Practical solutions for multi-objective optimization: An application to system reliability design problems

    International Nuclear Information System (INIS)

    Taboada, Heidi A.; Baheranwala, Fatema; Coit, David W.; Wattanapongsakorn, Naruemon

    2007-01-01

    For multiple-objective optimization problems, a common solution methodology is to determine a Pareto optimal set. Unfortunately, these sets are often large and can become difficult to comprehend and consider. Two methods are presented as practical approaches to reduce the size of the Pareto optimal set for multiple-objective system reliability design problems. The first method is a pseudo-ranking scheme that helps the decision maker select solutions that reflect his/her objective function priorities. In the second approach, we used data mining clustering techniques to group the data by using the k-means algorithm to find clusters of similar solutions. This provides the decision maker with just k general solutions to choose from. With this second method, from the clustered Pareto optimal set, we attempted to find solutions which are likely to be more relevant to the decision maker. These are solutions where a small improvement in one objective would lead to a large deterioration in at least one other objective. To demonstrate how these methods work, the well-known redundancy allocation problem was solved as a multiple objective problem by using the NSGA genetic algorithm to initially find the Pareto optimal solutions, and then, the two proposed methods are applied to prune the Pareto set
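
    Editorial note: a minimal sketch of the second pruning method described above: cluster a Pareto-optimal set with k-means and present one representative solution per cluster. The synthetic two-objective front, the choice of k, and the min-max normalisation are illustrative assumptions, not the paper's data.

```python
# Hedged sketch: prune a (synthetic) Pareto set by k-means clustering and report one
# representative per cluster.  Data, k and normalisation are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
reliability = np.sort(rng.uniform(0.90, 0.999, 60))        # objective 1 (maximise)
cost = 50 + 4000 * (reliability - 0.90) ** 2                # objective 2 (minimise), trade-off
pareto = np.column_stack([reliability, cost])

# Normalise objectives so neither dominates the Euclidean distance used by k-means.
z = (pareto - pareto.min(axis=0)) / (pareto.max(axis=0) - pareto.min(axis=0))

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(z)

# One representative per cluster: the member closest to its cluster centre.
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    rep = members[np.argmin(np.linalg.norm(z[members] - km.cluster_centers_[c], axis=1))]
    print(f"cluster {c}: reliability={pareto[rep, 0]:.3f}, cost={pareto[rep, 1]:.1f}")
```

    The decision maker then inspects k candidate designs instead of the full front; the paper's first method (pseudo-ranking by objective priorities) could be applied to the same set before or after this clustering step.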

  13. Application of Vibration and Oil Analysis for Reliability Information on Helicopter Main Rotor Gearbox

    Science.gov (United States)

    Murrad, Muhamad; Leong, M. Salman

    Based on the experiences of the Malaysian Armed Forces (MAF), failure of the main rotor gearbox (MRGB) was one of the major contributing factors to helicopter breakdowns. Even though vibration and oil analysis are effective techniques for monitoring the health of helicopter components, these two techniques were rarely combined to form an effective assessment tool in the MAF. Results of the oil analysis were often used only for the oil changing schedule, while assessments of MRGB condition were mainly based on overall vibration readings. A study group was formed and given a mandate to improve the maintenance strategy of the S61-A4 helicopter fleet in the MAF. The improvement consisted of a structured approach to the reassessment/redefinition of suitable maintenance actions that should be taken for the MRGB. Basic and enhanced tools for condition monitoring (CM) are investigated to address the predominant failures of the MRGB. Quantitative accelerated life testing (QALT) was considered in this work with an intent to obtain the required reliability information in a shorter time than tests under normal stress conditions. These tests, when performed correctly, can provide valuable information about MRGB performance under normal operating conditions, which enables maintenance personnel to make decisions more quickly, accurately and economically. The time-to-failure and probability of failure information of the MRGB were generated by applying QALT analysis principles. This study is anticipated to make a dramatic change in the approach to CM, bringing significant savings and various benefits to the MAF.

  14. Multidisciplinary framework for human reliability analysis with an application to errors of commission and dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Barriere, M.T.; Luckas, W.J. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [Wreathall (John) and Co., Dublin, OH (United States); Cooper, S.E. [Science Applications International Corp., Reston, VA (United States); Bley, D.C. [PLG, Inc., Newport Beach, CA (United States); Ramey-Smith, A. [Nuclear Regulatory Commission, Washington, DC (United States). Div. of Systems Technology

    1995-08-01

    Since the early 1970s, human reliability analysis (HRA) has been considered to be an integral part of probabilistic risk assessments (PRAs). Nuclear power plant (NPP) events, from Three Mile Island through the mid-1980s, showed the importance of human performance to NPP risk. Recent events demonstrate that human performance continues to be a dominant source of risk. In light of these observations, the current limitations of existing HRA approaches become apparent when the role of humans is examined explicitly in the context of real NPP events. The development of new or improved HRA methodologies to more realistically represent human performance is recognized by the Nuclear Regulatory Commission (NRC) as a necessary means to increase the utility of PRAS. To accomplish this objective, an Improved HRA Project, sponsored by the NRC's Office of Nuclear Regulatory Research (RES), was initiated in late February, 1992, at Brookhaven National Laboratory (BNL) to develop an improved method for HRA that more realistically assesses the human contribution to plant risk and can be fully integrated with PRA. This report describes the research efforts including the development of a multidisciplinary HRA framework, the characterization and representation of errors of commission, and an approach for addressing human dependencies. The implications of the research and necessary requirements for further development also are discussed.

  15. Tumor Heterogeneity: Mechanisms and Bases for a Reliable Application of Molecular Marker Design

    Directory of Open Access Journals (Sweden)

    Salvador J. Diaz-Cano

    2012-02-01

    Full Text Available Tumor heterogeneity is a confusing finding in the assessment of neoplasms, potentially resulting in inaccurate diagnostic, prognostic and predictive tests. Tumor heterogeneity is not, however, always a random and unpredictable phenomenon, and knowledge of its mechanisms helps in designing better tests. The biologic reasons for this intratumoral heterogeneity are therefore important for understanding both the natural history of neoplasms and the selection of test samples for reliable analysis. The main factors contributing to intratumoral heterogeneity by inducing gene abnormalities or modifying gene expression include: the ischemic gradient within neoplasms, the action of the tumor microenvironment (bidirectional interaction between tumor cells and stroma), mechanisms of intercellular transfer of genetic information (exosomes), and differential mechanisms of sequence-independent modification of genetic material and proteins. Intratumoral heterogeneity is at the origin of tumor progression and is also a byproduct of the selection process during progression. Any analysis of heterogeneity mechanisms must be integrated within the process of segregation of genetic changes in tumor cells during the clonal expansion and progression of neoplasms. The evaluation of these mechanisms must also consider the redundancy and pleiotropism of molecular pathways, for which appropriate surrogate markers would indicate whether heterogeneous genetics are present and which main mechanisms are responsible. This knowledge would constitute a solid scientific background for future therapeutic planning.

  16. Materials technology and the energy problem : application to the reliability and safety of nuclear pressure vessels

    International Nuclear Information System (INIS)

    Garrett, G.G.

    1975-01-01

    In the U.S.A. over the past few months, widespread plant shutdowns because of cracking problems have produced considerable public pressure for a reappraisal of the reliability and safety of nuclear reactors. The awareness of such problems, and their solution, is particularly relevant to South Africa at this time. Some materials problems related to nuclear plant failure are examined in this paper. Since catastrophic failure (without prior warning from slow leakage) is in principle possible for light water (pressurised) reactors under operating conditions, it is essential to maintain rigorous manufacturing and quality control procedures, in conjunction with thorough and frequent examination by non-destructive testing methods. Although tests currently in progress in the U.S.A. on large-scale model reactors suggest that mathematical stress and failure analyses, for simple geometries at least, are sound, current in situ surveillance programmes aimed at categorizing the effects of irradiation are inadequate. In addition, the combined effects of irradiation and thermal shock (arising from the injection of emergency cooling water during a loss-of-coolant accident) on materials properties and subsequent fracture resistance are unknown. The problem of stress corrosion cracking in stainless steel pipelines is considerable, and at present virtually impossible to predict. Much of the available laboratory data is inapplicable in that it cannot account for the complex interactions of stress state, temperature, material variations and segregation effects, and water chemistry, especially in conjunction with irradiation effects, that are experienced in an operating environment.

  17. Application of material databases for improved reliability of reactor pressure vessels

    International Nuclear Information System (INIS)

    Griesbach, T.J.; Server, W.L.; Beaudoin, B.F.; Burgos, B.N.

    1994-01-01

    A reactor vessel Life Cycle Management program must begin with an accurate characterization of the vessel material properties. Uncertainties in vessel material properties or the use of bounding values may result in unnecessary conservatism in vessel integrity calculations. These conservatisms may be eliminated through a better understanding of the material properties in reactor vessels, in both the unirradiated and irradiated conditions. Reactor vessel material databases are available for quantifying the chemistry and Charpy shift behavior of individual heats of reactor vessel materials. Application of the databases for vessels with embrittlement concerns has proven to be an effective embrittlement management tool. This paper presents details of database development and applications which demonstrate the value of using material databases for improving material chemistry and for maximizing the data from integrated material surveillance programs.

  18. GOC-TX: A Reliable Ticket Synchronization Application for the Open Science Grid

    Science.gov (United States)

    Hayashi, Soichi; Gopu, Arvind; Quick, Robert

    2011-12-01

    One of the major operational issues faced by large multi-institutional collaborations is permitting its users and support staff to use their native ticket tracking environment while also exchanging these tickets with collaborators. After several failed attempts at email-parser based ticket exchanges, the OSG Operations Group has designed a comprehensive ticket synchronizing application. The GOC-TX application uses web-service interfaces offered by various commercial, open source and other homegrown ticketing systems, to synchronize tickets between two or more of these systems. GOC-TX operates independently from any ticketing system. It can be triggered by one ticketing system via email, active messaging, or a web-services call to check for current sync-status, pull applicable recent updates since prior synchronizations to the source ticket, and apply the updates to a destination ticket. The currently deployed production version of GOC-TX is able to synchronize tickets between the Numara Footprints ticketing system used by the OSG and the following systems: European Grid Initiative's system Global Grid User Support (GGUS) and the Request Tracker (RT) system used by Brookhaven. Additional interfaces to the BMC Remedy system used by Fermilab, and to other instances of RT used by other OSG partners, are expected to be completed in summer 2010. A fully configurable open source version is expected to be made available by early autumn 2010. This paper will cover the structure of the GOC-TX application, its evolution, and the problems encountered by OSG Operations group with ticket exchange within the OSG Collaboration.
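
    The abstract describes the synchronization step at a high level (check the sync status, pull source-ticket updates made since the last synchronization, apply them to the destination ticket). The sketch below mimics that step with deliberately hypothetical placeholder classes; it is not the GOC-TX code or the actual Footprints/GGUS/RT web-service interfaces.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Update:
    timestamp: datetime
    author: str
    text: str

class TicketSystem:
    """Stand-in for a web-service client (e.g. to Footprints, GGUS or RT)."""
    def __init__(self):
        self._tickets = {}                       # ticket_id -> list of Update
    def updates_since(self, ticket_id: str, since: datetime) -> List[Update]:
        return [u for u in self._tickets.get(ticket_id, []) if u.timestamp > since]
    def append_update(self, ticket_id: str, update: Update) -> None:
        self._tickets.setdefault(ticket_id, []).append(update)

def synchronize(source: TicketSystem, src_id: str,
                dest: TicketSystem, dst_id: str,
                last_sync: datetime) -> datetime:
    """Copy every source update newer than last_sync to the destination ticket
    and return the new sync status to persist for the next trigger."""
    for update in source.updates_since(src_id, last_sync):
        dest.append_update(dst_id, update)
        last_sync = max(last_sync, update.timestamp)
    return last_sync
```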

  19. GOC-TX: A Reliable Ticket Synchronization Application for the Open Science Grid

    International Nuclear Information System (INIS)

    Hayashi, Soichi; Gopu, Arvind; Quick, Robert

    2011-01-01

    One of the major operational issues faced by large multi-institutional collaborations is permitting its users and support staff to use their native ticket tracking environment while also exchanging these tickets with collaborators. After several failed attempts at email-parser based ticket exchanges, the OSG Operations Group has designed a comprehensive ticket synchronizing application. The GOC-TX application uses web-service interfaces offered by various commercial, open source and other homegrown ticketing systems, to synchronize tickets between two or more of these systems. GOC-TX operates independently from any ticketing system. It can be triggered by one ticketing system via email, active messaging, or a web-services call to check for current sync-status, pull applicable recent updates since prior synchronizations to the source ticket, and apply the updates to a destination ticket. The currently deployed production version of GOC-TX is able to synchronize tickets between the Numara Footprints ticketing system used by the OSG and the following systems: European Grid Initiative's system Global Grid User Support (GGUS) and the Request Tracker (RT) system used by Brookhaven. Additional interfaces to the BMC Remedy system used by Fermilab, and to other instances of RT used by other OSG partners, are expected to be completed in summer 2010. A fully configurable open source version is expected to be made available by early autumn 2010. This paper will cover the structure of the GOC-TX application, its evolution, and the problems encountered by OSG Operations group with ticket exchange within the OSG Collaboration.

  20. Study on application of a high-speed trigger-type SFCL (TSFCL) for interconnection of power systems with different reliabilities

    International Nuclear Information System (INIS)

    Kim, Hye Ji; Yoon, Yong Tae

    2016-01-01

    Highlights: • Application of a TSFCL to interconnect systems with different reliabilities is proposed. • The TSFCL protects a grid by preventing detrimental effects from being delivered through the interconnection line. • A high-speed TSFCL with high impedance for transmission systems needs to be developed. - Abstract: Interconnection of power systems is one effective way to improve power supply reliability. However, differences in the reliability of the individual power systems are an obstacle to stable interconnection, since after interconnection a high-reliability system is affected by frequent faults on the low-reliability side. Several power system interconnection methods, such as the back-to-back method and the installation of either transformers or series reactors, have been investigated to counteract the damage caused by faults in neighboring systems. However, these methods are uneconomical and require complex operational management plans. In this work, a high-speed trigger-type superconducting fault current limiter (TSFCL) with large impedance is proposed as a solution to maintain reliability and power quality when a high-reliability power system is interconnected with a low-reliability power system. Through analysis of the reliability index for numerical examples obtained from a PSCAD/EMTDC simulator, a high-speed TSFCL with large impedance is confirmed to be effective for the interconnection of power systems with different reliabilities.

  1. Reliability Analysis of a Steel Frame

    Directory of Open Access Journals (Sweden)

    M. Sýkora

    2002-01-01

    Full Text Available A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee for Structural Safety (JCSS) are used for basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-profile IPE 330 designed according to Eurocodes seems to be adequate. It appears that the time invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time variant analysis.
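
    For readers less familiar with the reliability index, the short sketch below shows the standard relation between β and the failure probability, plus a crude Monte Carlo check on a hypothetical linear limit state g = R - E; the distributions and numbers are illustrative only and unrelated to the frame analysed in the paper.

```python
import numpy as np
from scipy.stats import norm

# Relation between reliability index and failure probability used in such studies.
for beta in (3.95, 5.56):                      # range reported in the abstract
    print(f"beta = {beta}:  Pf = {norm.cdf(-beta):.2e}")

# Crude Monte Carlo check on a hypothetical linear limit state g = R - E
# (resistance R and action effect E both normal; the numbers are made up).
rng = np.random.default_rng(1)
n = 5_000_000
R = rng.normal(420.0, 30.0, n)                 # resistance, e.g. bending capacity
E = rng.normal(250.0, 25.0, n)                 # action effect (self-weight, snow, wind)
pf = np.mean(R - E < 0.0)
beta_mc = -norm.ppf(max(pf, 1.0 / n))          # guard against zero observed failures
print(f"Monte Carlo: Pf ~ {pf:.1e}, beta ~ {beta_mc:.2f}")
```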

  2. Reliability and validity of a smartphone pulse rate application for the assessment of resting and elevated pulse rate.

    Science.gov (United States)

    Mitchell, Katy; Graff, Megan; Hedt, Corbin; Simmons, James

    2016-08-01

    Purpose/hypothesis: This study was designed to investigate the test-retest reliability, concurrent validity, and the standard error of measurement (SEm) of a pulse rate assessment application (Azumio®'s Instant Heart Rate) on both Android® and iOS® (iPhone operating system) smartphones as compared to a FT7 Polar® Heart Rate monitor. Number of subjects: 111. Resting (sitting) pulse rate was assessed twice, and then the participants were asked to complete a 1-min standing step test and were immediately re-assessed. The smartphone assessors were blinded to their measurements. Test-retest reliability (intraclass correlation coefficient [ICC 2,1] and 95% confidence interval) for the three tools at rest (time 1/time 2): iOS® (0.76 [0.67-0.83]); Polar® (0.84 [0.78-0.89]); and Android® (0.82 [0.75-0.88]). Concurrent validity at rest time 2 (ICC 2,1) with the Polar® device: iOS® (0.92 [0.88-0.94]) and Android® (0.95 [0.92-0.96]). Concurrent validity post-exercise (time 3) (ICC) with the Polar® device: iOS® (0.90 [0.86-0.93]) and Android® (0.94 [0.91-0.96]). The SEm values for the three devices at rest: iOS® (5.77 beats per minute [BPM]), Polar® (4.56 BPM) and Android® (4.96 BPM). The Android®, iOS®, and Polar® devices showed acceptable test-retest reliability at rest and post-exercise. Both smartphone platforms demonstrated concurrent validity with the Polar® at rest and post-exercise. The Azumio® Instant Heart Rate application, on either platform, appears to be a reliable and valid tool for assessing pulse rate in healthy individuals.
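
    As a minimal illustration of the statistic reported above, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measurement) and an SEm from a small made-up test-retest data set; the data are not from the study.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    ratings has shape (n_subjects, k_trials_or_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)          # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)          # trials/raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Made-up test-retest data: resting pulse (BPM) measured twice per subject.
pulse = np.array([[72, 75], [64, 66], [88, 84], [59, 61], [77, 80],
                  [70, 69], [95, 91], [66, 68], [81, 83], [74, 71]], float)
icc = icc_2_1(pulse)
sem = pulse.std(ddof=1) * np.sqrt(1 - icc)     # one common convention for SEm
print(f"ICC(2,1) = {icc:.2f},  SEm ~ {sem:.1f} BPM")
```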

  3. Reliability of Smartphone-Based Instant Messaging Application for Diagnosis, Classification, and Decision-making in Pediatric Orthopedic Trauma.

    Science.gov (United States)

    Stahl, Ido; Katsman, Alexander; Zaidman, Michael; Keshet, Doron; Sigal, Amit; Eidelman, Mark

    2017-07-11

    Smartphones have the ability to capture and send images, and their use has become common in the emergency setting for transmitting radiographic images with the intent to consult an off-site specialist. Our objective was to evaluate the reliability of smartphone-based instant messaging applications for the evaluation of various pediatric limb traumas, as compared with the standard method of viewing images on a workstation-based picture archiving and communication system (PACS). X-ray images of 73 representative cases of pediatric limb trauma were captured and transmitted to 5 pediatric orthopedic surgeons by the WhatsApp instant messaging application on an iPhone 6 smartphone. Evaluators were asked to diagnose, classify, and determine the course of treatment for each case on their personal smartphones. Following a 4-week interval, re-evaluation was conducted using the PACS. Intraobserver agreement was calculated for overall agreement and per fracture site. The overall results indicate "near perfect agreement" between interpretations of the radiographs on smartphones compared with computer-based PACS, with κ of 0.84, 0.82, and 0.89 for diagnosis, classification, and treatment planning, respectively. Looking at the results per fracture site, we also found substantial to near perfect agreement. Smartphone-based instant messaging applications are reliable for the evaluation of a wide range of pediatric limb fractures. This method of obtaining an expert opinion from an off-site specialist is immediately accessible and inexpensive, making smartphones a powerful tool for doctors in the emergency department, primary care clinics, or remote medical centers, enabling timely and appropriate treatment for the injured child. This method is not a substitute for evaluation of the images in the standard manner on a computer-based PACS, which should be performed before final decision-making.
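
    The agreement statistic quoted above is Cohen's kappa; a minimal computation on made-up smartphone-versus-PACS treatment decisions is sketched below (the categories and readings are hypothetical, not the study's cases).

```python
import numpy as np

def cohens_kappa(a, b, categories):
    """Agreement between two reading sessions of the same rater, corrected for chance."""
    idx = {c: i for i, c in enumerate(categories)}
    confusion = np.zeros((len(categories), len(categories)))
    for x, y in zip(a, b):
        confusion[idx[x], idx[y]] += 1
    confusion /= confusion.sum()
    p_observed = np.trace(confusion)
    p_expected = confusion.sum(axis=1) @ confusion.sum(axis=0)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical treatment decisions for 10 cases read on a smartphone and on PACS.
smartphone = ["cast", "cast", "surgery", "cast", "reduction",
              "cast", "surgery", "reduction", "cast", "cast"]
pacs       = ["cast", "cast", "surgery", "cast", "cast",
              "cast", "surgery", "reduction", "cast", "cast"]
print(f"kappa = {cohens_kappa(smartphone, pacs, ['cast', 'reduction', 'surgery']):.2f}")
```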

  4. The engine maintenance scheduling by using reliability centered maintenance method and the identification of 5S application in PT. XYZ

    Science.gov (United States)

    Sembiring, N.; Panjaitan, N.; Saragih, A. F.

    2018-02-01

    PT. XYZ is a manufacturing company that processes fresh fruit bunches (FFB) into Crude Palm Oil (CPO) and Palm Kernel Oil (PKO). PT. XYZ consists of six work stations: receipt station, sterilizing station, threshing station, pressing station, clarification station, and kernel station. So far, the company still applies a corrective maintenance system to its production machines, in which a machine is repaired only after damage occurs. The problem at PT. XYZ is the absence of planned maintenance scheduling, which results in machines often being damaged and disrupting production. Another problem addressed in this research is the kernel station environment, which has become inconvenient for operators: unused machines and equipment are left in the production area, the floor is slippery and muddy, fibers are scattered, the use of PPE is incomplete, and employee discipline is lacking. The most frequently damaged machine is in the seed processing station (kernel station), namely the cake breaker conveyor (CBC) machine. The solution proposed for this problem is a maintenance schedule for the machine derived with the reliability centered maintenance method, together with the application of 5S. The application of the reliability centered maintenance method yields four components that must be maintained on a schedule (time directed): the bearing component every 37 days, the gearbox component every 97 days, the CBC pin component every 35 days and the conveyor pedal component every 32 days. After the 5S assessment, workplace improvement measures are proposed in accordance with the 5S principles: unused goods are removed from the production area, goods are grouped by use, procedures for cleaning the production area are defined, the use of PPE is inspected, and 5S slogans are posted.
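
    One common way such time-directed intervals are derived under RCM is to schedule replacement at the age where the fitted Weibull reliability of a component falls below a target level. The sketch below illustrates this rule with hypothetical Weibull parameters chosen only to give intervals of roughly the same magnitude as those reported; they are not the parameters estimated in the study.

```python
import numpy as np

def time_directed_interval(beta: float, eta_days: float, r_target: float = 0.8) -> float:
    """Age at which Weibull reliability falls to r_target:
    R(t) = exp(-(t/eta)**beta)  =>  t = eta * (-ln r_target)**(1/beta)."""
    return eta_days * (-np.log(r_target)) ** (1.0 / beta)

# Hypothetical Weibull parameters (shape, characteristic life in days) for the
# cake breaker conveyor components; the real study fits plant failure records.
components = {
    "bearing":        (2.1, 75.0),
    "gearbox":        (1.8, 220.0),
    "CBC pin":        (2.4, 68.0),
    "conveyor pedal": (2.0, 65.0),
}
for name, (beta, eta) in components.items():
    print(f"{name:>14}: replace every ~{time_directed_interval(beta, eta):.0f} days")
```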

  5. Application of RFID to High-Reliability Nuclear Power Plant Construction

    International Nuclear Information System (INIS)

    Kenji Akagi; Masayuki Ishiwata; Kenji Araki; Jun-ichi Kawahata

    2006-01-01

    In nuclear power plant construction, a vast variety of parts, products, and jigs, numbering more than one million items, is handled during construction. Furthermore, strict traceability of the material, manufacturing, and installation history is required for all products from the start to the finish of construction, which demands a large workforce and high costs on every project. In addition, improving operational efficiency is essential for effective construction and for reducing the initial investment. As one solution, RFID (Radio Frequency Identification) application technology, one of the fundamental technologies for realizing a ubiquitous society, is currently expanding its functionality and general versatility at an accelerating pace in mass-production industries. Hitachi believes RFID technology can also be one of the key solutions for the issues faced by non-mass-production industries. Under these circumstances, Hitachi initiated the development of a next-generation plant concept (ubiquitous plant construction technology) which utilizes information and RFID technologies. In this paper, our plans for applying RFID technology to nuclear power plant construction are described. (authors)

  6. Developing reliable safeguards seals for application verification and removal by State operators

    Energy Technology Data Exchange (ETDEWEB)

    Finch, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smartt, Heidi A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Haddal, Risa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-10-01

    Once a geological repository has begun operations, the encapsulation and disposal of spent fuel will be performed as a continuous, industrial-scale series of processes, during which safeguards seals will be applied to transportation casks before shipment from an encapsulation plant, and then verified and removed following receipt at the repository. These operations will occur approximately daily during several decades of Sweden's repository operation; however, requiring safeguards inspectors to perform the application, verification, and removal of every seal would place an onerous burden on the International Atomic Energy Agency's (IAEA's) resources. Current IAEA practice allows operators to either apply seals or remove them, but not both, so the daily task of either applying or verifying and removing would still require the continuous presence of IAEA inspectors at at least one site. Of special importance is the inability to re-verify casks or canisters once the seals have been removed and the canisters emplaced underground. Successfully designing seals that can be applied, verified and removed by an operator with IAEA approval could benefit not only repository shipments but other applications as well, potentially reducing inspector burdens for a wide range of such duties.

  7. Application of RFID to High-Reliability Nuclear Power Plant Construction

    Energy Technology Data Exchange (ETDEWEB)

    Akagi, Kenji; Ishiwata, Masayuki; Araki, Kenji; Kawahata, Jun-ichi [Hitachi, Ltd. (Japan)

    2006-07-01

    In nuclear power plant construction, a vast variety of parts, products, and jigs, numbering more than one million items, is handled during construction. Furthermore, strict traceability of the material, manufacturing, and installation history is required for all products from the start to the finish of construction, which demands a large workforce and high costs on every project. In addition, improving operational efficiency is essential for effective construction and for reducing the initial investment. As one solution, RFID (Radio Frequency Identification) application technology, one of the fundamental technologies for realizing a ubiquitous society, is currently expanding its functionality and general versatility at an accelerating pace in mass-production industries. Hitachi believes RFID technology can also be one of the key solutions for the issues faced by non-mass-production industries. Under these circumstances, Hitachi initiated the development of a next-generation plant concept (ubiquitous plant construction technology) which utilizes information and RFID technologies. In this paper, our plans for applying RFID technology to nuclear power plant construction are described. (authors)

  8. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    Science.gov (United States)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this
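
    The abstract is cut off in the source record. As a minimal illustration of the Pareto-optimality concept it centres on, the sketch below filters a set of candidate designs down to the non-dominated front for two objectives to be minimized; the candidate values are hypothetical and this is not the TSEA algorithm itself.

```python
from typing import List, Tuple

def non_dominated(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the non-dominated subset for two objectives to be minimized
    (e.g. retrofit cost and probability of fatigue failure)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (cost, failure probability) candidates for a retrofit design.
candidates = [(10.0, 2e-3), (12.0, 1e-3), (15.0, 9e-4), (11.0, 3e-3),
              (14.0, 1.1e-3), (20.0, 5e-4)]
print(sorted(non_dominated(candidates)))
```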

  9. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.

  10. A Collaborative Reasoning Maintenance System for a Reliable Application of Legislations

    Science.gov (United States)

    Tamisier, Thomas; Didry, Yoann; Parisot, Olivier; Feltz, Fernand

    Decision support systems are nowadays used to disentangle all kinds of intricate situations and perform sophisticated analysis. Moreover, they are applied in areas where the knowledge can be heterogeneous, partially un-formalized, implicit, or diffuse. The representation and management of this knowledge become the key point to ensure the proper functioning of the system and keep an intuitive view upon its expected behavior. This paper presents a generic architecture for implementing knowledge-base systems used in collaborative business, where the knowledge is organized into different databases, according to the usage, persistence and quality of the information. This approach is illustrated with Cadral, a customizable automated tool built on this architecture and used for processing family benefits applications at the National Family Benefits Fund of the Grand-Duchy of Luxembourg.

  11. An approach to evaluating system well-being in engineering reliability applications

    International Nuclear Information System (INIS)

    Billinton, Roy; Fotuhi-Firuzabad, Mahmud; Aboreshaid, Saleh

    1995-01-01

    This paper presents an approach to evaluating the degree of system well-being of an engineering system. The functionality of the system is identified by healthy, marginal and risk states. The state definitions permit the inclusion of deterministic considerations in the probabilistic indices used to monitor the system well-being. A technique is developed to determine the three operating state probabilities based on minimal path concepts. The identified indices provide system engineers with additional information on the degree of system well-being in the form of system health and margin state probabilities. A basic planning objective should be to design a system such that the probabilities of the health and risk states are acceptable. The application of the technique is illustrated in this paper using a relatively simple network

  12. Reliability evaluation of the power supply of an electrical power net for safety-relevant applications

    International Nuclear Information System (INIS)

    Dominguez-Garcia, Alejandro D.; Kassakian, John G.; Schindall, Joel E.

    2006-01-01

    In this paper, we introduce a methodology for the dependability analysis of new automotive safety-relevant systems. With the introduction of safety-relevant electronic systems in cars, it is necessary to carry out a thorough dependability analysis of those systems to fully understand and quantify the failure mechanisms in order to improve the design. Several system-level FMEAs are used to identify the different failure modes of the system, and a Markov model is constructed to quantify their probability of occurrence. A new power net architecture with application to new safety-relevant automotive systems, such as Steer-by-Wire or Brake-by-Wire, is used as a case study. For these safety-relevant loads, loss of electric power supply means loss of control of the vehicle. It is therefore necessary and critical to develop a highly dependable power net to ensure power to these loads under all circumstances.
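
    As an illustration of the kind of Markov model mentioned above, the sketch below solves a small continuous-time Markov chain for the probability of total loss of power to a by-wire load over a mission time; the three-state structure and the failure rate are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# States of a hypothetical dual-fed power net for a by-wire load:
# 0 = both sources healthy, 1 = one source failed, 2 = total loss of power (absorbing).
lam = 1e-5      # failure rate of a single source [1/h]  (illustrative)
mu = 0.0        # no repair during a driving mission

Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [mu, -(lam + mu), lam],
              [0.0, 0.0, 0.0]])        # generator matrix (rows sum to zero)

p0 = np.array([1.0, 0.0, 0.0])         # start with both sources available
for t in (1.0, 100.0, 10_000.0):       # mission times in hours
    p_t = p0 @ expm(Q * t)             # transient state probabilities at time t
    print(f"t = {t:8.0f} h  P(loss of power) = {p_t[2]:.3e}")
```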

  13. Hydrogen-oxygen steam generator applications for increasing the efficiency, maneuverability and reliability of power production

    Science.gov (United States)

    Schastlivtsev, A. I.; Borzenko, V. I.

    2017-11-01

    A comparative feasibility study of energy storage technologies showed good applicability of energy storage systems based on hydrogen-oxygen steam generators (HOSGs) with large-scale hydrogen production. The scheme solutions developed for the use of HOSGs at thermal power plants (TPPs) and nuclear power plants (NPPs), together with the feasibility analysis that has been carried out, show that their use makes it possible to increase the maneuverability of steam turbines and to provide backup power supply in the event of failure of the main steam-generating equipment. The main design solutions for the integration of hydrogen-oxygen steam generators into the main power equipment of TPPs and NPPs, as well as their optimal operation modes, are considered.

  14. Pred-Skin: A Fast and Reliable Web Application to Assess Skin Sensitization Effect of Chemicals.

    Science.gov (United States)

    Braga, Rodolpho C; Alves, Vinicius M; Muratov, Eugene N; Strickland, Judy; Kleinstreuer, Nicole; Tropsha, Alexander; Andrade, Carolina Horta

    2017-05-22

    Chemically induced skin sensitization is a complex immunological disease with a profound impact on quality of life and working ability. Despite some progress in developing alternative methods for assessing the skin sensitization potential of chemical substances, there is no in vitro test that correlates well with human data. Computational QSAR models provide a rapid screening approach and contribute valuable information for the assessment of chemical toxicity. We describe the development of a freely accessible web-based and mobile application for the identification of potential skin sensitizers. The application is based on previously developed binary QSAR models of skin sensitization potential from human (109 compounds) and murine local lymph node assay (LLNA, 515 compounds) data with good external correct classification rate (0.70-0.81 and 0.72-0.84, respectively). We also included a multiclass skin sensitization potency model based on LLNA data (accuracy ranging between 0.73 and 0.76). When a user evaluates a compound in the web app, the outputs are (i) binary predictions of human and murine skin sensitization potential; (ii) multiclass prediction of murine skin sensitization; and (iii) probability maps illustrating the predicted contribution of chemical fragments. The app is the first tool available that incorporates quantitative structure-activity relationship (QSAR) models based on human data as well as multiclass models for LLNA. The Pred-Skin web app version 1.0 is freely available for the web, iOS, and Android (in development) at the LabMol web portal ( http://labmol.com.br/predskin/ ), in the Apple Store, and on Google Play, respectively. We will continuously update the app as new skin sensitization data and respective models become available.

  15. Life Prediction/Reliability Data of Glass-Ceramic Material Determined for Radome Applications

    Science.gov (United States)

    Choi, Sung R.; Gyekenyesi, John P.

    2002-01-01

    Brittle ceramic materials are candidates for a variety of structural applications over a wide range of temperatures. However, the process of slow crack growth, occurring in any loading configuration, limits the service life of structural components. Therefore, it is important to accurately determine the slow crack growth parameters required for component life prediction using an appropriate test methodology. This test methodology should also be useful in determining the influence of component processing and composition variables on the slow crack growth behavior of newly developed or existing materials, thereby allowing the component processing and composition to be tailored and optimized to specific needs. Through the American Society for Testing and Materials (ASTM), the authors recently developed two test methods to determine the life prediction parameters of ceramics. The two test standards, ASTM C 1368 for room temperature and ASTM C 1465 for elevated temperatures, were published in the 2001 Annual Book of ASTM Standards, Vol. 15.01. Briefly, the test method employs constant stress-rate (or dynamic fatigue) testing to determine flexural strength as a function of the applied stress rate. The merit of this test method lies in its simplicity: strengths are measured in a routine manner in flexure at four or more applied stress rates with an appropriate number of test specimens at each applied stress rate. The slow crack growth parameters necessary for life prediction are then determined from a simple relationship between the strength and the applied stress rate. Extensive life prediction testing was conducted at the NASA Glenn Research Center using the developed ASTM C 1368 test method to determine the life prediction parameters of a glass-ceramic material that the Navy will use for radome applications.
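
    The constant stress-rate analysis reduces to a log-log regression of strength against applied stress rate. A minimal sketch, with illustrative numbers rather than the measured radome glass-ceramic data, is shown below.

```python
import numpy as np

# Constant stress-rate (dynamic fatigue) analysis: flexural strength is measured
# at several applied stress rates and the slow-crack-growth exponent n follows
# from the slope of log(strength) vs log(stress rate):
#   sigma_f = D * (d(sigma)/dt)**(1/(n+1))
# Stress rates [MPa/s] and mean strengths [MPa] below are illustrative only.
stress_rate = np.array([0.03, 0.3, 3.0, 30.0])
strength    = np.array([118.0, 127.0, 136.0, 146.0])

slope, logD = np.polyfit(np.log10(stress_rate), np.log10(strength), 1)
n = 1.0 / slope - 1.0
D = 10.0 ** logD
print(f"slow-crack-growth exponent n ~ {n:.1f}, D ~ {D:.1f} MPa")
```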

  16. Application of Distribution-free Methods of Study for Identifying the Degree of Reliability of Ukrainian Banks

    Directory of Open Access Journals (Sweden)

    Burkina Natalia V.

    2014-03-01

    Full Text Available Bank ratings are integral elements of the information infrastructure that ensures sound development of the banking business. One of the key issues that clients of banking institutions worry about is identifying the degree of reliability of, and trust in, a bank. As of now there are no generally accepted methods of bank rating, and the issue of assessing bank reliability remains problematic. The article considers the DEA method of economic and mathematical analysis, which is a popular instrument for assessing the quality of services of different entities and has become very popular in foreign econometric studies. The article demonstrates the application of data envelopment analysis (DEA) for developing new methods of constructing bank ratings and identifies the input and output indicators for building a DEA model as applied to the Ukrainian banking system. The authors also discuss some methodological problems that might appear when applying component indicators for ranking the entities and offer methods for their elimination.
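
    A minimal sketch of the input-oriented CCR multiplier model, the basic linear program behind DEA, is given below for a handful of hypothetical banks; the input/output indicators and figures are invented for illustration and are not those selected in the article.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR efficiency (multiplier form) for each decision-making unit.
# A real study might use, e.g., deposits and operating costs as inputs and
# loans and income as outputs; the numbers here are purely illustrative.
inputs  = np.array([[120.0, 15.0], [200.0, 30.0], [90.0, 10.0], [150.0, 22.0]])
outputs = np.array([[100.0,  8.0], [160.0, 20.0], [85.0,  9.0], [110.0, 12.0]])
n_units, n_in = inputs.shape
n_out = outputs.shape[1]

def ccr_efficiency(k: int) -> float:
    # decision variables: output weights u (n_out), then input weights v (n_in)
    c = np.concatenate([-outputs[k], np.zeros(n_in)])            # maximize u.y_k
    A_eq = [np.concatenate([np.zeros(n_out), inputs[k]])]        # v.x_k = 1
    b_eq = [1.0]
    A_ub = np.hstack([outputs, -inputs])                         # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n_units)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_out + n_in), method="highs")
    return -res.fun

for k in range(n_units):
    print(f"bank {k + 1}: efficiency = {ccr_efficiency(k):.3f}")
```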

  17. Simple and rapid preparation of [11C]DASB with high quality and reliability for routine applications

    International Nuclear Information System (INIS)

    Haeusler, D.; Mien, L.-K.; Nics, L.; Ungersboeck, J.; Philippe, C.; Lanzenberger, R.R.; Kletter, K.; Dudczak, R.; Mitterhauser, M.; Wadsak, W.

    2009-01-01

    [11C]DASB combines all major prerequisites for a successful SERT ligand, providing excellent biological properties and in-vivo behaviour. Thus, we aimed to establish a fully automated procedure for the synthesis and purification of [11C]DASB with a high degree of reliability, reducing the overall synthesis time while conserving high yields and purity. The optimized [11C]DASB synthesis was applied in more than 60 applications with a very low failure rate (3.2%). We obtained yields up to 8.9 GBq (average 5.3±1.6 GBq). Radiochemical yields based on [11C]CH3I (corrected for decay) were 66.3±6.9% with a specific radioactivity (As) of 86.8±24.3 GBq/μmol (both at the end of synthesis, EOS). Time consumption was kept to a minimum, resulting in 43 min from end of bombardment to release of the product after quality control. From our data, it is evident that the presented method can be implemented for routine preparations of [11C]DASB with high reliability.

  18. The transient M/G/1/0 queue: some bounds and approximations for light traffic with application to reliability

    Directory of Open Access Journals (Sweden)

    J. Ben Atkinson

    1995-01-01

    Full Text Available We consider the transient analysis of the M/G/1/0 queue, for which Pn(t) denotes the probability that there are no customers in the system at time t, given that there are n (n=0,1) customers in the system at time 0. The analysis, which is based upon coupling theory, leads to simple bounds on Pn(t) for the M/G/1/0 and M/PH/1/0 queues and improved bounds for the special case M/Er/1/0. Numerical results are presented for various values of the mean arrival rate λ to demonstrate the increasing accuracy of approximations based upon the above bounds in light traffic, i.e., as λ→0. An important area of application for the M/G/1/0 queue is as a reliability model for a single repairable component. Since most practical reliability problems have λ values that are small relative to the mean service rate, the approximations are potentially useful in that context. A duality relation between the M/G/1/0 and GI/M/1/0 queues is also described.
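
    For the exponential-service special case M/M/1/0 the transient probabilities are available in closed form from the two-state Markov chain, which makes a convenient reference point when checking light-traffic approximations of the kind discussed above. A short sketch with illustrative rates follows.

```python
import numpy as np

# Exact transient probabilities of an empty system for the M/M/1/0 special case:
# P_n(t) = probability the system is empty at time t given n customers at time 0.
def p0_start_empty(lam, mu, t):      # start empty (n = 0)
    r = lam + mu
    return mu / r + (lam / r) * np.exp(-r * t)

def p0_start_busy(lam, mu, t):       # start with one customer in service (n = 1)
    r = lam + mu
    return (mu / r) * (1.0 - np.exp(-r * t))

mu = 1.0                             # mean service (repair) rate
t = 2.0
for lam in (0.5, 0.1, 0.01):         # decreasing arrival (failure) rate: light traffic
    print(f"lambda = {lam:5.2f}:  P0(t) = {p0_start_empty(lam, mu, t):.4f}, "
          f"P1(t) = {p0_start_busy(lam, mu, t):.4f}")
```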

  19. Simple and rapid preparation of [11C]DASB with high quality and reliability for routine applications

    Energy Technology Data Exchange (ETDEWEB)

    Haeusler, D.; Mien, L.-K. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Pharmaceutical Technology and Biopharmaceutics, University of Vienna, A-1090 Vienna (Austria); Nics, L. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Nutritional Sciences, University of Vienna, A-1090 Vienna (Austria); Ungersboeck, J. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Inorganic Chemistry, University of Vienna, A-1090 Vienna (Austria); Philippe, C. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Pharmaceutical Technology and Biopharmaceutics, University of Vienna, A-1090 Vienna (Austria); Lanzenberger, R.R. [Department of Psychiatry and Psychotherapy, Medical University of Vienna, A-1090 Vienna (Austria); Kletter, K.; Dudczak, R. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Mitterhauser, M. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Pharmaceutical Technology and Biopharmaceutics, University of Vienna, A-1090 Vienna (Austria); Hospital Pharmacy of the General Hospital of Vienna, A-1090 Vienna (Austria); Wadsak, W. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Inorganic Chemistry, University of Vienna, A-1090 Vienna (Austria)], E-mail: wolfgang.wadsak@meduniwien.ac.at

    2009-09-15

    [11C]DASB combines all major prerequisites for a successful SERT ligand, providing excellent biological properties and in-vivo behaviour. Thus, we aimed to establish a fully automated procedure for the synthesis and purification of [11C]DASB with a high degree of reliability, reducing the overall synthesis time while conserving high yields and purity. The optimized [11C]DASB synthesis was applied in more than 60 applications with a very low failure rate (3.2%). We obtained yields up to 8.9 GBq (average 5.3±1.6 GBq). Radiochemical yields based on [11C]CH3I (corrected for decay) were 66.3±6.9% with a specific radioactivity (As) of 86.8±24.3 GBq/μmol (both at the end of synthesis, EOS). Time consumption was kept to a minimum, resulting in 43 min from end of bombardment to release of the product after quality control. From our data, it is evident that the presented method can be implemented for routine preparations of [11C]DASB with high reliability.

  20. Development of a test rig and its application for validation and reliability testing of safety-critical software

    Energy Technology Data Exchange (ETDEWEB)

    Thai, N D; McDonald, A M [Atomic Energy of Canada Ltd., Mississauga, ON (Canada)

    1996-12-31

    This paper describes a versatile test rig developed by AECL for functional testing of safety-critical software used in the process trip computers of the Wolsong CANDU stations. The description covers the hardware and software aspects of the test rig, the test language and its interpreter, and other major testing software utilities such as the test oracle, sampler and profiler. The paper also discusses the application of the rig in the final stages of testing of the process trip computer software, namely validation and reliability tests. It shows how random test cases are generated, test scripts prepared and automatically run on the test rig. The versatility of the rig is further demonstrated in other types of testing such as sub-system tests, verification of the test oracle, testing of newly-developed test script, self-test and calibration. (author). 5 tabs., 10 figs.

  1. Development of a test rig and its application for validation and reliability testing of safety-critical software

    International Nuclear Information System (INIS)

    Thai, N.D.; McDonald, A.M.

    1995-01-01

    This paper describes a versatile test rig developed by AECL for functional testing of safety-critical software used in the process trip computers of the Wolsong CANDU stations. The description covers the hardware and software aspects of the test rig, the test language and its interpreter, and other major testing software utilities such as the test oracle, sampler and profiler. The paper also discusses the application of the rig in the final stages of testing of the process trip computer software, namely validation and reliability tests. It shows how random test cases are generated, test scripts prepared and automatically run on the test rig. The versatility of the rig is further demonstrated in other types of testing such as sub-system tests, verification of the test oracle, testing of newly-developed test script, self-test and calibration. (author). 5 tabs., 10 figs

  2. Application of space and aviation technology to improve the safety and reliability of nuclear power plant operations. Final report

    International Nuclear Information System (INIS)

    1980-04-01

    This report investigates various technologies that have been developed and utilized by the aerospace community, particularly the National Aeronautics and Space Administration (NASA) and the aviation industry, that would appear to have some potential for contributing to improved operational safety and reliability at commercial nuclear power plants of the type being built and operated in the United States today. The main initiator for this study, as well as many others, was the accident at the Three Mile Island (TMI) nuclear power plant in March 1979. Transfer and application of technology developed by NASA, as well as other public and private institutions, may well help to decrease the likelihood of similar incidents in the future

  3. Comparing the treatment of uncertainty in Bayesian networks and fuzzy expert systems used for a human reliability analysis application

    International Nuclear Information System (INIS)

    Baraldi, Piero; Podofillini, Luca; Mkrtchyan, Lusine; Zio, Enrico; Dang, Vinh N.

    2015-01-01

    The use of expert systems can be helpful to improve the transparency and repeatability of assessments in areas of risk analysis with limited data available. In this field, human reliability analysis (HRA) is no exception, and, in particular, dependence analysis is an HRA task strongly based on analyst judgement. The analysis of dependence among Human Failure Events refers to the assessment of the effect of an earlier human failure on the probability of the subsequent ones. This paper analyses and compares two expert systems, based on Bayesian Belief Networks and Fuzzy Logic (a Fuzzy Expert System, FES), respectively. The comparison shows that a BBN approach should be preferred in all the cases characterized by quantifiable uncertainty in the input (i.e. when probability distributions can be assigned to describe the input parameters uncertainty), since it provides a satisfactory representation of the uncertainty and its output is directly interpretable for use within PSA. On the other hand, in cases characterized by very limited knowledge, an analyst may feel constrained by the probabilistic framework, which requires assigning probability distributions for describing uncertainty. In these cases, the FES seems to lead to a more transparent representation of the input and output uncertainty. - Highlights: • We analyse treatment of uncertainty in two expert systems. • We compare a Bayesian Belief Network (BBN) and a Fuzzy Expert System (FES). • We focus on the input assessment, inference engines and output assessment. • We focus on an application problem of interest for human reliability analysis. • We emphasize the application rather than math to reach non-BBN or FES specialists

  4. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    Science.gov (United States)

    Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald

    2017-09-01

    In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering sciences such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been achieved. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much more prompt support to nuclear projects dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.

  5. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    Directory of Open Access Journals (Sweden)

    Chapoutier Nicolas

    2017-01-01

    Full Text Available In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering sciences such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been achieved. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much more prompt support to nuclear projects dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.

  6. Application of probabilistic fracture mechanics to the reliability analysis of pressure-bearing reactor components

    International Nuclear Information System (INIS)

    Schmitt, W.; Roehrich, E.; Wellein, R.

    1977-01-01

    Since no failures in the primary reactor components have been reported so far, it is impossible to estimate the failure probability of those components by means of statistics alone. Therefore the approach of probabilistic fracture mechanics has been proposed. Here the material properties, the loads and the crack distributions are treated as statistical variables with certain distributions. From the distributions of these data, probability density functions can be established for the loading of a component (e.g. the stress intensity factor) as well as for the resistance of this component (e.g. the fracture toughness). From these functions the failure probability for a given failure mode (e.g. brittle fracture) is easily obtained, either by the application of direct integration procedures, which are briefly reviewed here, or by the use of Monte Carlo techniques. The most important part of the concept is the collection of a sufficiently large amount of raw data from different sources (departments within the company or external). These data need to be processed so that they can be transformed into probability density functions. The method of data collection and processing in terms of histograms, plots of probability density functions, etc., is described. The choice of the various types of distribution functions is discussed. As an example, the derivation of the probability density function for cracks of a given size in a component is presented. (Auth.)
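
    The core quantity of the approach is the failure probability obtained by convolving the load and resistance probability density functions. The sketch below evaluates Pf = ∫ F_R(s) f_S(s) ds for a hypothetical stress-intensity-factor load and fracture-toughness resistance, both by direct integration and by Monte Carlo; the distributions are illustrative, not plant data.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Failure probability from load/resistance probability density functions:
#   Pf = integral over s of F_R(s) * f_S(s) ds
# S is a stress intensity factor, R the fracture toughness; parameters are made up.
S = stats.norm(loc=40.0, scale=8.0)          # load: K_I   [MPa*sqrt(m)]
R = stats.lognorm(s=0.15, scale=90.0)        # resistance: K_Ic [MPa*sqrt(m)]

# Direct integration over the relevant load range
pf_int, _ = quad(lambda s: R.cdf(s) * S.pdf(s), 0.0, 120.0)

# Monte Carlo check
rng = np.random.default_rng(0)
n = 2_000_000
pf_mc = np.mean(R.rvs(size=n, random_state=rng) < S.rvs(size=n, random_state=rng))

print(f"direct integration: Pf = {pf_int:.2e},  Monte Carlo: Pf = {pf_mc:.2e}")
```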

  7. JUPITER: Joint Universal Parameter IdenTification and Evaluation of Reliability - An Application Programming Interface (API) for Model Analysis

    Science.gov (United States)

    Banta, Edward R.; Poeter, Eileen P.; Doherty, John E.; Hill, Mary C.

    2006-01-01

    The Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER API) improves the computer programming resources available to those developing applications (computer programs) for model analysis. The JUPITER API consists of eleven Fortran-90 modules that provide for encapsulation of data and operations on that data. Each module contains one or more entities: data, data types, subroutines, functions, and generic interfaces. The modules do not constitute computer programs themselves; instead, they are used to construct computer programs. Such computer programs are called applications of the API. The API provides common modeling operations for use by a variety of computer applications. The models being analyzed are referred to here as process models, and may, for example, represent the physics, chemistry, and(or) biology of a field or laboratory system. Process models commonly are constructed using published models such as MODFLOW (Harbaugh et al., 2000; Harbaugh, 2005), MT3DMS (Zheng and Wang, 1996), HSPF (Bicknell et al., 1997), PRMS (Leavesley and Stannard, 1995), and many others. The process model may be accessed by a JUPITER API application as an external program, or it may be implemented as a subroutine within a JUPITER API application. In either case, execution of the model takes place in a framework designed by the application programmer. This framework can be designed to take advantage of any parallel-processing capabilities possessed by the process model, as well as the parallel-processing capabilities of the JUPITER API. Model analyses for which the JUPITER API could be useful include, for example: comparing model results to observed values to determine how well the model reproduces system processes and characteristics; using sensitivity analysis to determine the information provided by observations about parameters and predictions of interest; and determining the additional data needed to improve selected model

  8. Application of probabilistic fracture mechanics to the reliability analysis of pressure-bearing reactor components

    International Nuclear Information System (INIS)

    Schmitt, W.; Roehrich, E.; Wellein, R.

    1977-01-01

    Since no failures in the primary reactor components have been reported so far, it is impossible to estimate the failure probability of those components by means of statistics alone. Therefore the approach of probabilistic fracture mechanics has been proposed. Here the material properties, the loads and the crack distributions are treated as statistical variables with certain distributions. From the distributions of these data, probability density functions can be established for the loading of a component as well as for the resistance of this component. From these functions the failure probability for a given failure mode is easily obtained, either by the application of direct integration procedures, which are briefly reviewed here, or by the use of Monte Carlo techniques. The most important part of the concept is the collection of a sufficiently large amount of raw data from different sources. These data need to be processed so that they can be transformed into probability density functions. The method of data collection and processing in terms of histograms, plots of probability density functions, etc., is described. The choice of the various types of distribution functions is discussed. As an example, the derivation of the probability density function for cracks of a given size in a component is presented. Here the raw data, i.e. the ultrasonic results, are transformed into real crack sizes by means of a conservative conversion rule. The true distribution of the indications is obtained by taking into account a detection probability function. The final probability density function is influenced by the fact that indications exceeding certain values need to be re

  9. Method to ensure the reliability of power semiconductors depending on the application; Verfahren zur anwendungsspezifischen Sicherstellung der Zuverlaessigkeit von Leistungshalbleiter-Bauelementen

    Energy Technology Data Exchange (ETDEWEB)

    Grieger, Folkhart; Lindemann, Andreas [Magdeburg Univ. (Germany). Inst. fuer Elektrische Energiesysteme

    2011-07-01

    Load-dependent conduction and switching losses during operation heat up power semiconductor devices. In this way they age; lifetime can be limited, e.g., by bond wire lift-off or solder fatigue. Components thus need to be dimensioned such that they can be expected to reach sufficient reliability over the system lifetime. Electromobility and new applications in electric transmission and distribution are demanding in this respect because of high reliability requirements and long operating times. (orig.)
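
    A common application-specific lifetime check of the kind alluded to above combines a Coffin-Manson type cycles-to-failure curve with Miner's rule over the mission's temperature-cycle profile. The sketch below uses hypothetical fitting constants and a hypothetical mission profile, not data for any particular device.

```python
import numpy as np

# Coffin-Manson type lifetime model for power-module temperature cycling:
#   N_f(dT) = A * dT**(-k)
# combined with Miner's rule over a yearly mission profile. A and k are
# hypothetical fitting constants, not values for a specific device.
A, k = 3.0e14, 5.0

def cycles_to_failure(delta_T: float) -> float:
    return A * delta_T ** (-k)

# Yearly mission profile: (temperature swing in K, number of cycles per year)
profile = [(30.0, 200_000), (60.0, 20_000), (100.0, 500)]

damage_per_year = sum(n / cycles_to_failure(dT) for dT, n in profile)
print(f"consumed life per year: {damage_per_year:.2%}, "
      f"expected lifetime ~ {1.0 / damage_per_year:.0f} years")
```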

  10. Methodological issues concerning the application of reliable laser particle sizing in soils

    Science.gov (United States)

    de Mascellis, R.; Impagliazzo, A.; Basile, A.; Minieri, L.; Orefice, N.; Terribile, F.

    2009-04-01

    During the past decade, the evolution of technology has enabled laser diffraction (LD) to become a widespread means of measuring particle size distribution (PSD), replacing sedimentation and sieve analysis in many scientific fields, mainly due to its advantages of versatility, fast measurement and high reproducibility. Despite these developments, the soil science community has been rather reluctant to replace the well-established sedimentation techniques (ST), possibly because of (i) the large complexity of the soil matrix, which induces different types of artefacts (aggregates, deflocculating dynamics, etc.), (ii) the difficulties in relating LD results with results obtained through sedimentation techniques, and (iii) the limited size range of most LD equipment. More recently, LD granulometry has slowly been gaining appreciation in soil science, also because of some innovations, including an enlarged dynamic size range (0.01-2000 μm) and the ability to implement more powerful algorithms (e.g. Mie theory). Furthermore, LD PSD can be used successfully in the application of physically based pedo-transfer functions (i.e., the Arya and Paris model) for investigating soil hydraulic properties, due to the direct determination of PSD in terms of volume percentage rather than mass percentage, thus eliminating the need to adopt the rough approximation of a single value for soil particle density in the prediction process. Most of the recent LD work performed in soil science deals with the comparison with sedimentation techniques and shows a general overestimation of the silt fraction together with a general underestimation of the clay fraction; these well-known results must be related to the different physical principles behind the two techniques. Despite these efforts, it is surprising that little if any work is devoted to more basic methodological issues related to the high sensitivity of LD to the quantity and quality of the soil samples. Our work aims to

  11. The ageing males' symptoms scale for Chinese men: reliability, validation and applicability of the Chinese version.

    Science.gov (United States)

    Kong, X-b; Guan, H-t; Li, H-g; Zhou, Y; Xiong, C-l

    2014-11-01

    In this study, the ageing males' symptoms (AMS) scale was translated into Chinese following methodological recommendations for linguistic and cultural adaptation. This study aimed to confirm the reliability, validation and applicability of the simplified Chinese version of the scale (CN-AMS) in older Chinese men. To this end, a free health screening for men older than 40 years was conducted. All participants completed a health questionnaire, which consisted of personal health information, the AMS scale, the generic quality of life (QoL) instrument SF36 and the Beck Depression Inventory (BDI). Fasting blood samples of participants were collected on the day of completing the health questionnaire. Serum total testosterone (TT), albumin and sex hormone-binding globulin levels were measured and the level of free testosterone was calculated (calculated free testosterone, CFT). A total of 244 men (mean age: 52 ± 7.3 years, range: 40-79 years) were involved in the investigation and provided informed consent before their participation. The reliability of the CN-AMS was analysed as internal consistency reliability (Cronbach's alpha was 0.91) as well as 4-week-interval test-retest stability (Pearson's correlation was 0.83) and found to be good. The validity of the CN-AMS was assessed through internal structure analysis (Pearson's correlation between total score and each item score r = 0.48-0.75), total-domain correlation (among the three domains r = 0.47-0.68, p < 0.01; domains with the total score r = 0.81-0.88, p < 0.01), and cross-validation with other scales (with SF36 r = -0.59, p < 0.01; with BDI r = 0.50, p < 0.01). Androgen deficiency (AD) was defined as the presence of three sexual symptoms (decreased frequency of morning erections, sexual thoughts and erectile dysfunction) in combination with TT < 11 nmol/L and CFT < 220 pmol/L, and the sensitivity and specificity for the CN-AMS were 68.8 and 6.8%, respectively. The CN-AMS had sufficient sensitivity in

  12. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Science.gov (United States)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
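
    The variance-reduction idea behind such importance sampling can be illustrated with a much simpler, generic example (not the Girsanov-control schemes of the paper): shift the sampling density toward the failure region, as suggested by the FORM design point, and re-weight each sample by the likelihood ratio. The limit state and shift below are assumptions for illustration only.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    beta = 3.5                      # limit state g(u) = beta - u, with u ~ N(0, 1) (assumed)
    n = 20_000

    # Crude Monte Carlo
    u = rng.standard_normal(n)
    pf_mc = np.mean(u > beta)

    # Importance sampling: sample from N(beta, 1), i.e. a design-point shift
    v = rng.normal(loc=beta, scale=1.0, size=n)
    w = stats.norm.pdf(v) / stats.norm.pdf(v, loc=beta, scale=1.0)   # likelihood ratio
    pf_is = np.mean((v > beta) * w)

    print(f"exact      : {stats.norm.sf(beta):.3e}")
    print(f"crude MC   : {pf_mc:.3e}")
    print(f"importance : {pf_is:.3e}")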

  13. On the applicability of probabilistic analyses to assess the structural reliability of materials and components for solid-oxide fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Lara-Curzio, Edgar [ORNL]; Radovic, Miladin [Texas A&M University]; Luttrell, Claire R [ORNL]

    2016-01-01

    The applicability of probabilistic analyses to assess the structural reliability of materials and components for solid-oxide fuel cells (SOFC) is investigated by measuring the failure rate of Ni-YSZ when subjected to a temperature gradient and comparing it with that predicted using the Ceramics Analysis and Reliability Evaluation of Structures (CARES) code. The use of a temperature gradient to induce stresses was chosen because temperature gradients resulting from gas flow patterns generate stresses during SOFC operation that are likely to control the structural reliability of cell components. The magnitude of the predicted failure rate was found to be comparable to that determined experimentally, which suggests that such probabilistic analyses are appropriate for predicting the structural reliability of materials and components for SOFCs. Considerations for performing more comprehensive studies are discussed.

  14. Theoretical basis, application, reliability, and sample size estimates of a Meridian Energy Analysis Device for Traditional Chinese Medicine Research

    Directory of Open Access Journals (Sweden)

    Ming-Yen Tsai

    Full Text Available OBJECTIVES: The Meridian Energy Analysis Device is currently a popular tool in the scientific research of meridian electrophysiology. In this field, it is generally believed that measuring the electrical conductivity of meridians provides information about the balance of bioenergy or Qi-blood in the body. METHODS AND RESULTS: This review is based on original articles from the PubMed database (1956 to 2014) and the author's clinical experience. In this short communication, we provide clinical examples of Meridian Energy Analysis Device application, especially in the field of traditional Chinese medicine, discuss the reliability of the measurements, and put the values obtained into context by considering items of considerable variability and by estimating sample size. CONCLUSION: The Meridian Energy Analysis Device is making a valuable contribution to the diagnosis of Qi-blood dysfunction. It can be assessed from short-term and long-term meridian bioenergy recordings. It is one of the few methods that allow outpatient traditional Chinese medicine diagnosis, monitoring of progress, therapeutic effect and evaluation of patient prognosis. The holistic approaches underlying the practice of traditional Chinese medicine and new trends in modern medicine toward the use of objective instruments require in-depth knowledge of the mechanisms of meridian energy, and the Meridian Energy Analysis Device can feasibly be used for understanding and interpreting traditional Chinese medicine theory, especially in view of its expansion in Western countries.

  15. Establishment of a protein frequency library and its application in the reliable identification of specific protein interaction partners.

    Science.gov (United States)

    Boulon, Séverine; Ahmad, Yasmeen; Trinkle-Mulcahy, Laura; Verheggen, Céline; Cobley, Andy; Gregor, Peter; Bertrand, Edouard; Whitehorn, Mark; Lamond, Angus I

    2010-05-01

    The reliable identification of protein interaction partners and how such interactions change in response to physiological or pathological perturbations is a key goal in most areas of cell biology. Stable isotope labeling with amino acids in cell culture (SILAC)-based mass spectrometry has been shown to provide a powerful strategy for characterizing protein complexes and identifying specific interactions. Here, we show how SILAC can be combined with computational methods drawn from the business intelligence field for multidimensional data analysis to improve the discrimination between specific and nonspecific protein associations and to analyze dynamic protein complexes. A strategy is shown for developing a protein frequency library (PFL) that improves on previous use of static "bead proteomes." The PFL annotates the frequency of detection in co-immunoprecipitation and pulldown experiments for all proteins in the human proteome. It can provide a flexible and objective filter for discriminating between contaminants and specifically bound proteins and can be used to normalize data values and facilitate comparisons between data obtained in separate experiments. The PFL is a dynamic tool that can be filtered for specific experimental parameters to generate a customized library. It will be continuously updated as data from each new experiment are added to the library, thereby progressively enhancing its utility. The application of the PFL to pulldown experiments is especially helpful in identifying either lower abundance or less tightly bound specific components of protein complexes that are otherwise lost among the large, nonspecific background.

  16. Questionnaire-based study showed that neonatal chest radiographs could be reliably interpreted using the WhatsApp messaging application.

    Science.gov (United States)

    Gross, Itai; Langer, Yshia; Pasternak, Yehonatan; Abu Ahmad, Wiessam; Eventov-Friedman, Smadar; Koplewitz, Benjamin Z

    2018-06-11

    We surveyed whether clinicians used the WhatsApp messaging application to view neonatal chest radiographs and asked a sub-sample to compare them with computer screen viewings. The study was conducted at three university-affiliated medical centres in Israel from June-December 2016. Questionnaires on using smartphones for professional purposes were completed by 68/71 paediatric residents and 20/28 neonatologists. In addition, 11 neonatologists viewed 20 chest radiographs on a computer screen followed by a smartphone and 10 viewed the same radiographs in the opposite order, separated by a washout period of two months. After another two months, five from each group viewed the same radiographs on a computer screen. Different interpretations between viewing modes were assessed. Most respondents used WhatsApp to send chest radiographs for consultation: 82% of the paediatric residents and 80% of the neonatologists. The mean number of inconsistencies in diagnosis was 3.7/20 between two computer views and 2.9/20 between computer and smartphone views (p=0.88) and the disease severity means were 3.7/20 and 2.85/20, respectively (p=0.94). Neonatologists using WhatsApp only determined umbilical line placement in 80% of cases. WhatsApp was reliable for preliminary interpretation of neonatal chest radiographs, but caution was needed when assessing umbilical lines. This article is protected by copyright. All rights reserved.

  17. Peer-review study of the draft handbook for human-reliability analysis with emphasis on nuclear-power-plant applications, NUREG/CR-1278

    Energy Technology Data Exchange (ETDEWEB)

    Brune, R. L.; Weinstein, M.; Fitzwater, M. E.

    1983-01-01

    This report describes a peer review of the draft Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications, NUREG/CR-1278. The purpose of the study was to determine to what extent peers agree with the human behavior models and estimates of human error probabilities (HEPs) contained in the Handbook. Twenty-nine human factors experts participated in the study. Twenty of the participants were Americans; nine were from other countries. The peers performed human reliability analyses of a variety of human performance scenarios describing operator activities in nuclear power plant settings. They also answered questionnaires pertaining to the contents and application of the Handbook. An analysis of peer solutions to the human reliability analysis problems and peer responses to the questionnaire was performed. Recommendations regarding the format and contents of the Handbook were developed from the study findings.

  18. Validity and reliability of an application review process using dedicated reviewers in one stage of a multi-stage admissions model.

    Science.gov (United States)

    Zeeman, Jacqueline M; McLaughlin, Jacqueline E; Cox, Wendy C

    2017-11-01

    With increased emphasis placed on non-academic skills in the workplace, a need exists to identify an admissions process that evaluates these skills. This study assessed the validity and reliability of an application review process involving three dedicated application reviewers in a multi-stage admissions model. A multi-stage admissions model was utilized during the 2014-2015 admissions cycle. After advancing through the academic review, each application was independently reviewed by two dedicated application reviewers utilizing a six-construct rubric (written communication, extracurricular and community service activities, leadership experience, pharmacy career appreciation, research experience, and resiliency). Rubric scores were extrapolated to a three-tier ranking to select candidates for on-site interviews. Kappa statistics were used to assess interrater reliability. A three-facet Many-Facet Rasch Model (MFRM) determined reviewer severity, candidate suitability, and rubric construct difficulty. The kappa statistic for candidates' tier rank score (n = 388 candidates) was 0.692 with a perfect agreement frequency of 84.3%. There was substantial interrater reliability between reviewers for the tier ranking (kappa: 0.654-0.710). Highest construct agreement occurred in written communication (kappa: 0.924-0.984). A three-facet MFRM analysis explained 36.9% of variance in the ratings, with 0.06% reflecting application reviewer scoring patterns (i.e., severity or leniency), 22.8% reflecting candidate suitability, and 14.1% reflecting construct difficulty. Utilization of dedicated application reviewers and a defined tiered rubric provided a valid and reliable method to effectively evaluate candidates during the application review process. These analyses provide insight into opportunities for improving the application review process among schools and colleges of pharmacy. Copyright © 2017 Elsevier Inc. All rights reserved.
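
    For readers unfamiliar with the agreement statistic used above, the following sketch shows how an unweighted Cohen's kappa for two reviewers' tier assignments could be computed; the rating data are fabricated for illustration and are not the study data.

    from collections import Counter

    # Hedged sketch: unweighted Cohen's kappa for two raters' tier assignments.
    rater1 = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2, 2, 1, 3, 2, 1]
    rater2 = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2, 2, 1, 3, 2, 1]

    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n

    # Expected agreement under independence of the two marginal distributions
    c1, c2 = Counter(rater1), Counter(rater2)
    p_expected = sum(c1[k] / n * c2[k] / n for k in set(c1) | set(c2))

    kappa = (p_observed - p_expected) / (1.0 - p_expected)
    print(f"observed agreement = {p_observed:.3f}, kappa = {kappa:.3f}")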

  19. The reliability and concurrent validity of measurements used to quantify lumbar spine mobility: an analysis of an iPhone® application and gravity-based inclinometry.

    Science.gov (United States)

    Kolber, Morey J; Pizzini, Matias; Robinson, Ashley; Yanez, Dania; Hanney, William J

    2013-04-01

    PURPOSE/AIM: The purpose of this study was to investigate the reliability, minimal detectable change (MDC), and concurrent validity of active spinal mobility measurements using a gravity-based bubble inclinometer and an iPhone® application. MATERIALS/METHODS: Two investigators each used a bubble inclinometer and an iPhone® with an inclinometer application to measure total thoracolumbo-pelvic flexion, isolated lumbar flexion, total thoracolumbo-pelvic extension, and thoracolumbar lateral flexion in 30 asymptomatic participants using a blinded repeated measures design. The procedures used in this investigation for measuring spinal mobility yielded good intrarater and interrater reliability, with Intraclass Correlation Coefficients (ICC) for bubble inclinometry ≥ 0.81 and for the iPhone® ≥ 0.80. The MDC90 for the interrater analysis ranged from 4° to 9°. The concurrent validity between bubble inclinometry and the iPhone® application was good, with ICC values of ≥ 0.86. The 95% level of agreement indicates that, although these measuring instruments are equivalent, individual differences of up to 18° may exist when using these devices interchangeably. The bubble inclinometer and iPhone® possess good intrarater and interrater reliability as well as concurrent validity when strict measurement procedures are adhered to. This study provides preliminary evidence to suggest that smart phone applications may offer clinical utility comparable to inclinometry for quantifying spinal mobility. Clinicians should be aware of the potential disagreement when using these devices interchangeably. Level of evidence: 2b (Observational study of reliability).

  20. Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

    International Nuclear Information System (INIS)

    Burgazzi, Luciano; Pierini, Paolo

    2007-01-01

    The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven Systems (ADS) purpose and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. Reliability Block Diagrams (RBD) approach has been considered for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failure (MTBF) and Mean Time To Repair (MTTR), reliability and availability figures are derived, applying the current reliability algorithms. The lack of a well-established component database has been pointed out as the main issue related to the accelerator reliability assessment. The results, affected by the conservative character of the study, show a high margin for the improvement in terms of accelerator reliability and availability figures prediction. The paper outlines the viable path towards the accelerator reliability and availability enhancement process and delineates the most proper strategies. The improvement in the reliability characteristics along this path is shown as well

  1. Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

    Energy Technology Data Exchange (ETDEWEB)

    Burgazzi, Luciano [ENEA-Centro Ricerche 'Ezio Clementel', Advanced Physics Technology Division, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy)]. E-mail: burgazzi@bologna.enea.it; Pierini, Paolo [INFN-Sezione di Milano, Laboratorio Acceleratori e Superconduttivita Applicata, Via Fratelli Cervi 201, I-20090 Segrate (MI) (Italy)]

    2007-04-15

    The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven Systems (ADS) purpose and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. Reliability Block Diagrams (RBD) approach has been considered for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failure (MTBF) and Mean Time To Repair (MTTR), reliability and availability figures are derived, applying the current reliability algorithms. The lack of a well-established component database has been pointed out as the main issue related to the accelerator reliability assessment. The results, affected by the conservative character of the study, show a high margin for the improvement in terms of accelerator reliability and availability figures prediction. The paper outlines the viable path towards the accelerator reliability and availability enhancement process and delineates the most proper strategies. The improvement in the reliability characteristics along this path is shown as well.
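
    The RBD/MTBF/MTTR reasoning described in the two records above can be illustrated with a minimal sketch: steady-state availabilities are computed from MTBF and MTTR and combined through series and redundant (parallel) blocks. The block structure and figures below are placeholders, not linac data.

    # Hedged sketch: steady-state availability from MTBF/MTTR figures combined
    # through a simple reliability block diagram. All numbers are illustrative.
    def availability(mtbf_h: float, mttr_h: float) -> float:
        return mtbf_h / (mtbf_h + mttr_h)

    # Illustrative sections of an accelerator: source, RF section (two
    # redundant units, one sufficient), beam transport
    a_source = availability(mtbf_h=2_000.0, mttr_h=8.0)
    a_rf_unit = availability(mtbf_h=1_000.0, mttr_h=24.0)
    a_rf = 1.0 - (1.0 - a_rf_unit) ** 2          # parallel (1-out-of-2) block
    a_transport = availability(mtbf_h=5_000.0, mttr_h=4.0)

    a_system = a_source * a_rf * a_transport      # series combination
    print(f"system availability ~ {a_system:.4f}")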

  2. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. The book draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It provides a history of human reliability analysis and includes examples of the application of the systems approach

  3. LED system reliability

    NARCIS (Netherlands)

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.

    2011-01-01

    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific

  4. Zeta-potential data reliability of gold nanoparticle biomolecular conjugates and its application in sensitive quantification of surface absorbed protein.

    Science.gov (United States)

    Wang, Wenjie; Ding, Xiaofan; Xu, Qing; Wang, Jing; Wang, Lei; Lou, Xinhui

    2016-12-01

    Zeta potentials (ZP) of gold nanoparticle bioconjugates (AuNP-bios) provide important information on surface charge that is critical for many applications including drug delivery, biosensing, and cell imaging. The ZP measurements (ZPMs) are conducted under an alternating electrical field at high frequency under laser irradiation, which may strongly affect the status of the surface coating of AuNP-bios and generate unreliable data. In this study, we systematically evaluated the ZP data reliability (ZPDR) of citrate-, thiolated single-stranded DNA-, and protein-coated AuNPs, mainly according to the consistency of ZPs in repeated ZPMs and the changes of the hydrodynamic size before and after the ZPMs. We found that the ZPDR was highly dependent on both buffer conditions and surface modifications. Overall, higher ionic strength of the buffer and lower affinity of the surface binders were associated with worse ZPDR. The ZPDR of citrate-coated AuNP was good in water, but poor in 10 mM phosphate buffer (PB), showing a substantial decrease of the absolute ZP values after each measurement, probably due to electrical-field-facilitated adsorption of negatively charged phosphate ions on AuNPs. Significant desorption of DNA from AuNPs was observed in PB containing a medium concentration of NaCl, but not in PB alone. The excellent ZPDR of bovine serum albumin (BSA)-coated AuNP was observed at high salt concentrations and low surface coverage, enabling ZPM as an ultra-sensitive tool for protein quantification on the surface of AuNPs with single-molecule resolution. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  6. The importance of the reliability study for the safety operation of chemical plants. Application in heavy water plants

    International Nuclear Information System (INIS)

    Dumitrescu, Maria; Lazar, Roxana Elena; Preda, Irina Aida; Stefanescu, Ioan

    1999-01-01

    Heavy water production in Romania is based on the H2O-H2S isotopic exchange process followed by vacuum isotopic distillation. Heavy water plants are complex chemical systems, characterized by an ensemble of static and dynamic equipment, AMC components and enclosures. Such equipment must have a high degree of reliability, maximum safety in technological operation and a high availability index. Safe, reliable and economical operation of heavy water plants requires maintaining the systems and the components at adequate levels of reliability. The paper is a synthesis of the qualitative and quantitative reliability assessment studies for heavy water plants. The operation analysis on subsystems, each subsystem being a well-defined unit, is required by the plant complexity. For each component the reliability indicators were estimated by parametric and non-parametric methods based on the plant operation data. Also, the qualitative and quantitative reliability assessment was done using the fault tree technique. For the dual-temperature isotopic exchange plants the results indicate an increase of the MTBF after the first years of operation, illustrating both increasing operating experience and improved maintenance. A high degree of availability was also illustrated by the reliability studies of the vacuum distillation plant. The establishment of the reliability characteristics for a heavy water plant represents an important step: a guide for highlighting the elements and processes liable to failure and, at the same time, a planning tool to correlate control times with maintenance operations. This is the way to minimise maintenance, control and costs. The main purpose of the reliability study was to increase the safety of plant operation and to support decision making. (authors)

  7. Reliability determination of aluminium electrolytic capacitors by the mean of various methods application to the protection system of the LHC

    CERN Document Server

    Perisse, F; Rojat, G

    2004-01-01

    The lifetime of power electronic components is often calculated from reliability reports, but this method is debatable. In this article we compare the results of various reliability reports with an accelerated ageing test of the component and introduce the load-strength concept. Large aluminium electrolytic capacitors are taken as an example, in the context of the protection system of the LHC (Large Hadron Collider) at CERN, where the level of reliability is essential. We notice important differences in MTBF (Mean Time Between Failures) according to the reliability report used. Accelerated ageing tests carried out show that a Weibull law is better suited to determining failure rates of components. The load-strength concept associated with accelerated ageing tests can be a solution to determine the lifetime of power electronic components.
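
    Once a Weibull law has been fitted to such accelerated ageing data, the reliability at a given time and the MTBF follow directly from the shape and scale parameters; the sketch below uses assumed parameters, not the capacitor values of the study.

    import math

    # Hedged sketch: reliability and MTBF from a fitted Weibull law.
    # Shape and scale below are assumed, not the study's capacitor values.
    beta, eta_h = 2.5, 200_000.0          # shape, scale in hours

    def reliability(t_h: float) -> float:
        return math.exp(-((t_h / eta_h) ** beta))

    mtbf_h = eta_h * math.gamma(1.0 + 1.0 / beta)
    print(f"R(20 years) = {reliability(20 * 8760):.3f}")
    print(f"MTBF ~ {mtbf_h:,.0f} h")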

  8. Operational safety reliability research

    International Nuclear Information System (INIS)

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant

  9. Concurrent validity and interrater reliability of a new smartphone application to assess 3D active cervical range of motion in patients with neck pain.

    Science.gov (United States)

    Stenneberg, Martijn S; Busstra, Harm; Eskes, Michel; van Trijffel, Emiel; Cattrysse, Erik; Scholten-Peeters, Gwendolijne G M; de Bie, Rob A

    2018-04-01

    There is a lack of valid, reliable, and feasible instruments for measuring planar active cervical range of motion (aCROM) and associated 3D coupling motions in patients with neck pain. Smartphones have advanced sensors and appear to be suitable for these measurements. To estimate the concurrent validity and interrater reliability of a new iPhone application for assessing planar aCROM and associated 3D coupling motions in patients with neck pain, using an electromagnetic tracking device as a reference test. Cross-sectional study. Two samples of neck pain patients were recruited; 30 patients for the validity study and 26 patients for the reliability study. Validity was estimated using intraclass correlation coefficients (ICCs), and by calculating 95% limits of agreement (LoA). To estimate interrater reliability, ICCs were calculated. Cervical 3D coupling motions were analyzed by calculating the cross-correlation coefficients and ratio between the main motions and coupled motions for both instruments. ICCs for concurrent validity and interrater reliability ranged from 0.90 to 0.99. The width of the 95% LoA ranged from about 5° for right lateral bending to 11° for total rotation. No significant differences were found between both devices for associated coupling motion analysis. The iPhone application appears to be a useful discriminative tool for the measurement of planar aCROM and associated coupling motions in patients with neck pain. It fulfills the need for a valid, reliable, and feasible instrument in clinical practice and research. Therapists and researchers should consider measurement error when interpreting scores. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  11. The reliability, minimal detectable change and concurrent validity of a gravity-based bubble inclinometer and iphone application for measuring standing lumbar lordosis.

    Science.gov (United States)

    Salamh, Paul A; Kolber, Morey

    2014-01-01

    To investigate the reliability, minimal detectable change (MDC90) and concurrent validity of a gravity-based bubble inclinometer (inclinometer) and iPhone® application for measuring standing lumbar lordosis. Two investigators used both an inclinometer and an iPhone® with an inclinometer application to measure lumbar lordosis of 30 asymptomatic participants. ICC models 3,k and 2,k were used for the intrarater and interrater analysis, respectively. Good interrater and intrarater reliability was present for the inclinometer, with Intraclass Correlation Coefficients (ICC) of 0.90 and 0.85, respectively, and for the iPhone® application, with ICC values of 0.96 and 0.81. The minimal detectable change (MDC90) indicates that a change greater than or equal to 7° and 6° is needed to exceed the threshold of error using the iPhone® and inclinometer, respectively. The concurrent validity between the two instruments was good, with a Pearson product-moment coefficient of correlation (r) of 0.86 for both raters. Ninety-five percent limits of agreement identified differences ranging from 9° greater for the iPhone® to 8° less for the inclinometer. Both the inclinometer and iPhone® application possess good interrater reliability, intrarater reliability and concurrent validity for measuring standing lumbar lordosis. This investigation provides preliminary evidence to suggest that smart phone applications may offer clinical utility comparable to inclinometry for quantifying standing lumbar lordosis. Clinicians should recognize potential individual differences when using these devices interchangeably.

  12. Reliable allele detection using SNP-based PCR primers containing Locked Nucleic Acid: application in genetic mapping

    Directory of Open Access Journals (Sweden)

    Trognitz Friederike

    2007-02-01

    Full Text Available Abstract Background The diploid, Solanum caripense, a wild relative of potato and tomato, possesses valuable resistance to potato late blight and we are interested in the genetic base of this resistance. Due to extremely low levels of genetic variation within the S. caripense genome it proved impossible to generate a dense genetic map and to assign individual Solanum chromosomes through the use of conventional chromosome-specific SSR, RFLP, AFLP, as well as gene- or locus-specific markers. The ease of detection of DNA polymorphisms depends on both frequency and form of sequence variation. The narrow genetic background of close relatives and inbreds complicates the detection of persisting, reduced polymorphism and is a challenge to the development of reliable molecular markers. Nonetheless, monomorphic DNA fragments representing not directly usable conventional markers can contain considerable variation at the level of single nucleotide polymorphisms (SNPs. This can be used for the design of allele-specific molecular markers. The reproducible detection of allele-specific markers based on SNPs has been a technical challenge. Results We present a fast and cost-effective protocol for the detection of allele-specific SNPs by applying Sequence Polymorphism-Derived (SPD markers. These markers proved highly efficient for fingerprinting of individuals possessing a homogeneous genetic background. SPD markers are obtained from within non-informative, conventional molecular marker fragments that are screened for SNPs to design allele-specific PCR primers. The method makes use of primers containing a single, 3'-terminal Locked Nucleic Acid (LNA base. We demonstrate the applicability of the technique by successful genetic mapping of allele-specific SNP markers derived from monomorphic Conserved Ortholog Set II (COSII markers mapped to Solanum chromosomes, in S. caripense. By using SPD markers it was possible for the first time to map the S. caripense alleles

  13. Reliability and Validity Measurement of Sagittal Lumbosacral Quiet Standing Posture with a Smartphone Application in a Mixed Population of 183 College Students and Personnel

    Directory of Open Access Journals (Sweden)

    George A. Koumantakis

    2016-01-01

    Full Text Available Accurate recording of spinal posture with simple and accessible measurement devices in clinical practice may lead to spinal loading optimization in occupations related to prolonged sitting and standing postures. Therefore, the purpose of this study was to establish the level of reliability of sagittal lumbosacral posture in quiet standing and the validity of the method in differentiating between male and female subjects, establishing in parallel a normative database. 183 participants (83 males and 100 females, with no current low back or pelvic pain) were assessed using the "iHandy Level" smartphone application. Intrarater reliability (3 same-day sequential measurements) was high for both the lumbar curve (ICC2,1: 0.96, SEM: 2.13°, and MDC95%: 5.9°) and the sacral slope (ICC2,1: 0.97, SEM: 1.61°, and MDC95%: 4.46°) sagittal alignment. Data analysis for each gender separately confirmed equally high reliability for both male and female participants. Correlation between lumbar curve and sacral slope was high (Pearson's r=0.86, p<0.001). Between-gender comparisons confirmed the validity of the method to differentiate between male and female lumbar curve and sacral slope angles, with females generally demonstrating greater lumbosacral values (p<0.001). The "iHandy Level" application is a reliable and valid tool in the measurement of lumbosacral quiet standing spinal posture in the sagittal plane.
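
    The MDC95% values quoted above follow from the SEM via MDC95 = 1.96 x SEM x sqrt(2); a quick check of the reported figures:

    import math

    # Check of the reported MDC95% figures from the quoted SEM values:
    # MDC95 = 1.96 * SEM * sqrt(2).
    def mdc95(sem_deg: float) -> float:
        return 1.96 * sem_deg * math.sqrt(2.0)

    print(f"lumbar curve : MDC95 = {mdc95(2.13):.2f} deg  (reported 5.9)")
    print(f"sacral slope : MDC95 = {mdc95(1.61):.2f} deg  (reported 4.46)")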

  14. Reliable Maintenance of Wireless Sensor Networks for Event-detection Applications%事件检测型传感器网络的可靠性维护

    Institute of Scientific and Technical Information of China (English)

    胡四泉; 杨金阳; 王俊峰

    2011-01-01

    The reliability maintenance of a wireless sensor network is key to keeping alarm messages delivered reliably and on time to the monitoring center in an event-detection application. Based on the unreliable links in the wireless sensor network and the network characteristics of an event-detection application, a multiple-path redundant reliability maintenance algorithm, MPRRM, is proposed in this paper. Both analytical and simulation results show that the MPRRM algorithm is superior to previously published solutions in the metrics of reliability, false positive rate, latency and message overhead.

  15. Adjoint sensitivity analysis procedure of Markov chains with applications on reliability of IFMIF accelerator-system facilities

    Energy Technology Data Exchange (ETDEWEB)

    Balan, I.

    2005-05-01

    This work presents the implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for Continuous Time, Discrete Space Markov chains (CTMC), as an alternative to other computationally expensive methods. In order to develop this procedure as an end product in reliability studies, the reliability of the physical systems is analyzed using a coupled Fault-Tree - Markov chain technique, i.e. the abstraction of the physical system is performed using the fault tree as the high-level interface, which is afterwards automatically converted into a Markov chain. The resulting differential equations based on the Markov chain model are solved in order to evaluate the system reliability. Further sensitivity analyses using ASAP applied to the CTMC equations are performed to study the influence of uncertainties in input data on the reliability measures and to gain confidence in the final reliability results. The methods to generate the Markov chain and the ASAP for the Markov chain equations have been implemented into the new computer code system QUEFT/MARKOMAGS/MCADJSEN for reliability and sensitivity analysis of physical systems. The validation of this code system has been carried out by using simple problems for which analytical solutions can be obtained. Typical sensitivity results show that the numerical solution using ASAP is robust, stable and accurate. The method and the code system developed during this work can be used further as an efficient and flexible tool to evaluate the sensitivities of reliability measures for any physical system analyzed using the Markov chain. Reliability and sensitivity analyses using these methods have been performed during this work for the IFMIF Accelerator System Facilities. The reliability studies using Markov chains have been concentrated around the availability of the main subsystems of this complex physical system for a typical mission time. The sensitivity studies for two typical responses using ASAP have been
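
    As a generic illustration of the kind of Markov-chain reliability equations referred to above (not the QUEFT/MARKOMAGS/MCADJSEN code itself), the sketch below solves dP/dt = P·Q for a two-state repairable component with assumed failure and repair rates.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hedged sketch: availability of a two-state repairable component as a CTMC.
    lam, mu = 1e-3, 1e-1          # failure and repair rates [1/h] (assumed)
    Q = np.array([[-lam,  lam],   # state 0 = working, state 1 = failed
                  [  mu,  -mu]])

    def rhs(t, p):
        return p @ Q

    sol = solve_ivp(rhs, t_span=(0.0, 200.0), y0=[1.0, 0.0], dense_output=True)
    p_working = sol.sol(200.0)[0]
    print(f"P(working) after 200 h ~ {p_working:.4f}  "
          f"(steady state {mu / (lam + mu):.4f})")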

  16. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    2017-07-15

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.

  17. Reliability Assessment of Solder Joints in Power Electronic Modules by Crack Damage Model for Wind Turbine Applications

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2011-01-01

    Wind turbine reliability is an important issue for wind energy cost minimization, especially by reduction of operation and maintenance costs for critical components and by increasing wind turbine availability. To develop an optimal operation and maintenance plan for critical components, it is necessary to understand the physics of their failure and be able to develop reliability prediction models. Such a model is proposed in this paper for an IGBT power electronic module. IGBTs are critical components in wind turbine converter systems. These are multilayered devices where layers are soldered to each other and they operate in a thermal-power cycling environment. Temperature loadings affect the reliability of soldered joints by developing cracks and fatigue processes that eventually result in failure. Based on Miner's rule, a linear damage model that incorporates crack development and propagation processes is discussed.

  18. Quantification of submarine groundwater discharge and its short-term dynamics by linking time-variant end-member mixing analysis and isotope mass balancing (222-Rn)

    Science.gov (United States)

    Petermann, Eric; Knöller, Kay; Stollberg, Reiner; Scholten, Jan; Rocha, Carlos; Weiß, Holger; Schubert, Michael

    2017-04-01

    Submarine groundwater discharge (SGD) plays a crucial role for the water quality of coastal waters due to associated fluxes of nutrients, organic compounds and/or heavy metals. Thus, the quantification of SGD is essential for evaluating the vulnerability of coastal water bodies with regard to groundwater pollution as well as for understanding the matter cycles of the connected water bodies. Here, we present a scientific approach for quantifying discharge of fresh groundwater (GWf) and recirculated seawater (SWrec), including its short-term temporal dynamics, into the tide-affected Knysna estuary, South Africa. For a time-variant end-member mixing analysis we conducted time-series observations of radon (222Rn) and salinity within the estuary over two tidal cycles in combination with estimates of the related end-members for seawater, river water, GWf and SWrec. The mixing analysis was treated as a constrained optimization problem for finding an end-member mixing ratio that simultaneously fits the observed data for radon and salinity best for every time step. Uncertainty of each mixing ratio was quantified by Monte Carlo simulations of the optimization procedure considering uncertainty in end-member characterization. Results reveal the highest GWf and SWrec fractions in the estuary during peak low tide, with averages of 0.8% and 1.4%, respectively. Further, we calculated a radon mass balance that revealed a daily radon flux of 4.8 × 10⁸ Bq into the estuary, equivalent to a GWf discharge of 29,000 m³/d (9,000-59,000 m³/d for the 25th-75th percentile range) and a SWrec discharge of 80,000 m³/d (45,000-130,000 m³/d for the 25th-75th percentile range). The uncertainty of SGD reflects the end-member uncertainty, i.e. the spatial heterogeneity of groundwater composition. The presented approach allows the calculation of mixing ratios of multiple uncertain end-members for time-series measurements of multiple parameters. Linking these results with a tracer mass balance allows conversion
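
    The per-time-step mixing problem described above can be posed as a small constrained least-squares fit: find non-negative end-member fractions summing to one that best reproduce the observed radon and salinity. The sketch below uses fabricated end-member values and observations, not the Knysna data.

    import numpy as np
    from scipy.optimize import minimize

    # Hedged sketch of the per-time-step mixing problem. All values are
    # fabricated for illustration.
    # Columns: seawater, river water, fresh groundwater, recirculated seawater
    radon    = np.array([20.0, 150.0, 30_000.0, 9_000.0])   # Bq/m3 (assumed)
    salinity = np.array([35.0,   0.1,      0.5,    34.0])   # PSU (assumed)

    obs_radon, obs_salinity = 400.0, 33.5                   # one time step (assumed)

    def misfit(x):
        r = (radon @ x - obs_radon) / obs_radon
        s = (salinity @ x - obs_salinity) / obs_salinity
        return r * r + s * s

    res = minimize(
        misfit,
        x0=np.full(4, 0.25),
        bounds=[(0.0, 1.0)] * 4,
        constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}],
    )
    labels = ["seawater", "river", "fresh GW", "recirc. SW"]
    print({k: round(float(v), 4) for k, v in zip(labels, res.x)})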

  19. Application of SAW method for multiple-criteria comparative analysis of the reliability of heat supply organizations

    Science.gov (United States)

    Akhmetova, I. G.; Chichirova, N. D.

    2016-12-01

    Heat supply is the most energy-consuming sector of the economy. Approximately 30% of all primary fuel-and-energy resources used is spent on municipal heat-supply needs. One of the key indicators of the activity of heat-supply organizations is the reliability of an energy facility. The reliability index of a heat-supply organization is of interest to potential investors for assessing risks when investing in projects. The reliability indices established by federal legislation are actually reduced to a single numerical factor, which depends on the number of heat-supply outages caused by disturbances in the operation of heat networks and the volume of their resource recovery in the calculation year. This factor is rather subjective and may vary over a wide range from year to year. A technique is proposed for evaluating the reliability of heat-supply organizations with the use of the simple additive weighting (SAW) method. The technique for determining the integrated index satisfies the following conditions: the reliability level of the evaluated heat-supply system is represented as fully and objectively as possible, and the information used for the reliability-index evaluation is easily available (it is published on the Internet in accordance with data-disclosure standards). For the reliability estimation of heat-supply organizations, the following indicators were selected: the wear of equipment of thermal energy sources, the wear of heat networks, the number of outages of thermal energy (heat carrier) supply due to technological disturbances on heat networks per 1 km of heat networks, the number of outages of thermal energy (heat carrier) supply due to technological disturbances at thermal energy sources per 1 Gcal/h of installed power, the share of expenditures in the cost of thermal energy aimed at recovery of the resource (renewal of fixed assets), the coefficient of renewal of fixed assets, and the coefficient of fixed asset retirement. A versatile program is developed
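
    The SAW idea itself is straightforward: normalize each indicator, weight it, and sum. The sketch below illustrates this for three hypothetical organizations; the indicator values and weights are placeholders, not the ones derived in the paper.

    import numpy as np

    # Hedged sketch of simple additive weighting (SAW). Rows: organizations
    # A, B, C. Columns: equipment wear [%], network wear [%], outages per km,
    # renewal share [%], renewal coefficient (last two are "higher is better").
    X = np.array([
        [55.0, 60.0, 0.8, 4.0, 0.05],
        [40.0, 72.0, 1.2, 6.0, 0.08],
        [65.0, 50.0, 0.5, 3.0, 0.03],
    ])
    benefit = np.array([False, False, False, True, True])
    w = np.array([0.25, 0.25, 0.2, 0.15, 0.15])          # assumed weights

    # Linear min-max normalization, inverted for cost-type criteria
    norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

    scores = norm @ w
    for name, s in zip("ABC", scores):
        print(f"organization {name}: SAW score {s:.3f}")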

  20. Application of nonhomogeneous Poisson process to reliability analysis of repairable systems of a nuclear power plant with rates of occurrence of failures time-dependent

    International Nuclear Information System (INIS)

    Saldanha, Pedro L.C.; Simone, Elaine A. de; Melo, Paulo Fernando F.F. e

    1996-01-01

    Aging is used to mean the continuous process by which the physical characteristics of a system, a structure or a piece of equipment change with time or use. Its effects are increases in the failure probabilities of a system, a structure or a piece of equipment, and they are calculated using time-dependent failure rate models. The purpose of this paper is to present an application of the nonhomogeneous Poisson process as a model to study rates of occurrence of failures when they are time-dependent. For this application, a reliability analysis of the service water pumps of a typical nuclear power plant is performed, since the pumps are effectively repaired components. (author)
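
    A common concrete choice for such a time-dependent rate of occurrence of failures is the power-law (Crow-AMSAA) nonhomogeneous Poisson process; the sketch below is illustrative and its parameters are not estimates for the service water pumps.

    # Hedged sketch: power-law (Crow-AMSAA) NHPP with a time-dependent rate
    # of occurrence of failures. Parameters a and b are illustrative only.
    a, b = 1.0e-6, 1.6        # scale and shape; b > 1 means a deteriorating system

    def rocof(t_h: float) -> float:
        """Instantaneous rate of occurrence of failures w(t) = a*b*t^(b-1)."""
        return a * b * t_h ** (b - 1.0)

    def expected_failures(t_h: float) -> float:
        """Cumulative expected number of failures W(t) = a*t^b."""
        return a * t_h ** b

    for t in (1_000.0, 10_000.0, 50_000.0):
        print(f"t = {t:>8.0f} h   w(t) = {rocof(t):.2e}/h   E[N(t)] = {expected_failures(t):.1f}")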

  1. Lz-transform and inverse Lz-transform application to dynamic reliability assessment for multi-state system

    DEFF Research Database (Denmark)

    Lisnianski, A.; Ding, Y.

    2014-01-01

    The paper presents a new method for reliability assessment for complex multi-state system. The system and its components can have different performance levels ranging from perfect functioning to complete failure. Straightforward Markov method applied to solve the problem will require building of ...

  2. Towards achieving a reliable leakage detection and localization algorithm for application in water piping networks: an overview

    CSIR Research Space (South Africa)

    Adedeji, KB

    2017-09-01

    Full Text Available Leakage detection and localization in pipelines has become an important aspect of water management systems. Since monitoring leakage in large-scale water distribution networks (WDNs) is a challenging task, the need to develop a reliable and robust...

  3. Interactive reliability assessment using an integrated reliability data bank

    International Nuclear Information System (INIS)

    Allan, R.N.; Whitehead, A.M.

    1986-01-01

    The logical structure, techniques and practical application of a computer-aided technique based on a microcomputer using floppy disc Random Access Files is described. This interactive computational technique is efficient if the reliability prediction program is coupled directly to a relevant source of data to create an integrated reliability assessment/reliability data bank system. (DG)

  4. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  5. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  6. Material and structural mechanical modelling and reliability of thin-walled bellows at cryogenic temperatures. Application to LHC compensation system

    CERN Document Server

    Garion, Cédric; Skoczen, Blazej

    The present thesis is dedicated to the behaviour of austenitic stainless steels at cryogenic temperatures. The plastic-strain-induced martensitic transformation and ductile damage are taken into account in an elastic-plastic material model. The kinetic law of the γ→α' transformation and the evolution laws of mixed kinematic/isotropic hardening are established. The damage issue is analysed in different ways: a mesoscopic isotropic or orthotropic model and a microscopic approach. The material parameters are measured from 316L fine gauge sheet at three levels of temperature: 293 K, 77 K and 4.2 K. The model is applied to the thin-walled corrugated shell used in the LHC interconnections. The influence of the material properties on the stability is studied by a modal analysis. The reliability of the components, defined by the Weibull distribution law, is analysed from fatigue tests. The impact on reliability of geometrical imperfections and thermo-mechanical loads is also analysed.

  7. A method and application study on holistic decision tree for human reliability analysis in nuclear power plant

    International Nuclear Information System (INIS)

    Sun Feng; Zhong Shan; Wu Zhiyu

    2008-01-01

    The paper introduces the Holistic Decision Tree (HDT) method, a human reliability analysis method mainly used in nuclear power plant safety assessment, and how to apply it. The focus is primarily on providing the basic framework and some background of the HDT method and the steps to perform it. Influence factors and quality descriptors were obtained from interviews with operators at the Qinshan Nuclear Power Plant, and HDT analyses were performed for SGTR and SLOCA based on this information. The HDT model uses a graphic tree structure to express the error rate as a function of the influence factors. The HDT method is capable of dealing with the uncertainty in HRA, and it is reliable and practical. (authors)

  8. Strength and Reliability of Wood for the Components of Low-cost Wind Turbines: Computational and Experimental Analysis and Applications

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Freere, Peter; Sharma, Ranjan

    2009-01-01

    This paper reports the latest results of the comprehensive program of experimental and computational analysis of strength and reliability of wooden parts of low cost wind turbines. The possibilities of prediction of strength and reliability of different types of wood are studied in the series of experiments and computational investigations. Low cost testing machines have been designed, and employed for the systematic analysis of different sorts of Nepali wood, to be used for the wind turbine construction. At the same time, computational micromechanical models of deformation and strength of wood are developed, which should provide the basis for microstructure-based correlating of observable and service properties of wood. Some correlations between microstructure, strength and service properties of wood have been established.

  9. Reliability of IGBT-based power devices in the viewpoint of applications in future power supply systems

    International Nuclear Information System (INIS)

    Lutz, J.

    2011-01-01

    IGBT-based high-voltage power devices will be key components of the future renewable energy base of society. Wind turbines in the range up to 10 MW use converters with IGBTs. HVDC systems with IGBT-based voltage source converters have the advantage of a lower level of harmonics, less effort for filters and more possibilities for control. The power devices need a lifetime expectation of several tens of years. The lifetime is determined by the reliability of the packaging technology. IGBTs are offered packaged in presspacks and modules. The presentation focuses on IGBT high-power modules. Accelerated power cycling tests to determine the end-of-life at given conditions and their results are shown, together with models to calculate the lifetime and current research work on systems with increased reliability.

  10. A study of digital hardware architectures for nuclear reactors protection systems applications - reliability and safety analysis methods

    International Nuclear Information System (INIS)

    Benko, Pedro Luiz

    1997-01-01

    A study of digital hardware architectures, including experience in many countries, topologies and solutions for interface circuits for protection systems of nuclear reactors, is presented. Methods for developing digital system architectures based on fault-tolerance and safety requirements are proposed. Guidelines for assessing such conditions are suggested. Techniques and the most common tools employed in reliability and safety evaluation and in the modeling of hardware architectures are also presented. Markov chain modeling is used to evaluate the reliability of redundant architectures. In order to estimate software quality, several mechanisms to be used in design, specification, and validation and verification (V and V) procedures are suggested. A digital protection system architecture has been analyzed as a case study. (author)

  11. High-reliability logic system evaluation of a programmed multiprocessor solution. Application in the nuclear reactor safety field

    International Nuclear Information System (INIS)

    Lallement, Dominique.

    1979-01-01

    Nuclear reactors are monitored by several systems acting in combination. The hydraulic and mechanical limitations on the equipment and the heat transfer requirements in the core define a reliable working range for the boiler with certain safety margins. The control system tends to keep the power plant within this working range. The protection system covers all the electrical and mechanical equipment needed to safeguard the boiler in the event of abnormal transients or accidents accounted for in the design of the plant. On units in service, protection is handled by hard-wired automatic systems. For better reliability and safety of operation, greater flexibility of use (modularity, adaptability) and improved start-up criteria through data processing, the tendency is to use digital programmed systems. Computers are already present in control systems, but their introduction into protection systems meets with some reticence on the part of the nuclear safety authorities. A study on the replacement of conventional protection systems by digital ones is presented. Starting from choices partly made on the principles which should govern the hardware and software of a protection system, the reliability of different structures and elements was examined and an experimental model was built, together with its simulator and test facilities. A prototype based on these options and studies is being built and is to be set up on one of the CEN-G reactors for tests. [fr]

  12. Reliability Assessment of Solder Joints in Power Electronic Modules by Crack Damage Model for Wind Turbine Applications

    Directory of Open Access Journals (Sweden)

    John D. Sørensen

    2011-12-01

    Full Text Available Wind turbine reliability is an important issue for wind energy cost minimization, especially by reduction of operation and maintenance costs for critical components and by increasing wind turbine availability. To develop an optimal operation and maintenance plan for critical components, it is necessary to understand the physics of their failure and be able to develop reliability prediction models. Such a model is proposed in this paper for an IGBT power electronic module. IGBTs are critical components in wind turbine converter systems. These are multilayered devices in which layers are soldered to each other and which operate in a thermal-power cycling environment. Temperature loading affects the reliability of soldered joints by driving crack development and fatigue processes that eventually result in failure. Based on Miner’s rule, a linear damage model that incorporates crack development and propagation processes is discussed. A statistical analysis is performed for appropriate model parameter selection. Based on the proposed model, a layout for component life prediction with crack movement is described in detail.
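
    As a rough illustration of the linear damage accumulation idea referenced above (not the paper's calibrated model), the sketch below sums per-cycle damage fractions over a few thermal cycling conditions; the cycle counts and cycles-to-failure values are hypothetical.

```python
# Minimal sketch of Miner's linear damage rule with hypothetical loading data.
# Each entry: (number of applied cycles n_i, cycles to failure N_i at that load level).
loading = [
    (2.0e4, 1.0e6),   # mild thermal swings
    (5.0e3, 2.0e5),   # moderate swings
    (1.0e3, 5.0e4),   # severe swings (e.g., start/stop events)
]

damage = sum(n / N for n, N in loading)   # Miner's rule: D = sum(n_i / N_i)
print(f"accumulated damage D = {damage:.3f}")
print("predicted failure" if damage >= 1.0 else
      f"remaining life fraction = {1.0 - damage:.3f}")
```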

  13. Highly-reliable operation of 638-nm broad stripe laser diode with high wall-plug efficiency for display applications

    Science.gov (United States)

    Yagi, Tetsuya; Shimada, Naoyuki; Nishida, Takehiro; Mitsuyama, Hiroshi; Miyashita, Motoharu

    2013-03-01

    Laser-based displays, from pico to cinema laser projectors, have gathered much attention because of their wide gamut, low power consumption, and so on. Laser light sources for displays are operated mainly in CW mode, and heat management is one of the big issues; therefore, highly efficient operation is required. The light sources for displays are also required to be highly reliable. A 638 nm broad-stripe laser diode (LD) was newly developed for high-efficiency and highly reliable operation. An AlGaInP/GaAs red LD suffers from low wall-plug efficiency (WPE) due to electron overflow from the active layer to the p-cladding layer. A large optical confinement factor (Γ) design with AlInP cladding layers is adopted to improve the WPE. The design has a disadvantage for reliable operation because the large Γ causes a high optical density and can lead to catastrophic optical degradation (COD) at the front facet. To overcome this disadvantage, a window-mirror structure is also adopted in the LD. The LD shows a WPE of 35% at 25°C, the highest reported to date, and highly stable operation at 35°C and 550 mW up to 8,000 hours without any catastrophic optical degradation.

  14. Reliability and mechanical design

    International Nuclear Information System (INIS)

    Lemaire, Maurice

    1997-01-01

    Many results in mechanical design are obtained from a model of physical reality and from a numerical solution which leads to the evaluation of needs and resources. The goal of the reliability analysis is to evaluate the confidence which can be granted to the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: sensitivity analysis and reliability analysis. Approximate methods are applicable to problems related to reliability, availability, maintainability and safety (RAMS)

  15. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology, whereas this has yet to be fully achieved for large-scale structures. Structural loading variations over the lifetime of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions which enter this problem are considered. The rare-event situation is briefly mentioned, together with aspects of proof testing and normal and upset loading conditions. (orig.)

  16. OSS reliability measurement and assessment

    CERN Document Server

    Yamada, Shigeru

    2016-01-01

    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
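
    One of the NHPP models referred to above can be sketched in a few lines. The example below assumes the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) and evaluates the conditional reliability R(x | t) = exp(-(m(t + x) - m(t))); the parameter values are illustrative, not taken from the book.

```python
import math

def mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative faults by time t."""
    return a * (1.0 - math.exp(-b * t))

def conditional_reliability(x, t, a, b):
    """Probability of no failure in (t, t + x] under the NHPP assumption."""
    return math.exp(-(mean_value(t + x, a, b) - mean_value(t, a, b)))

# Illustrative parameters: a = total expected faults, b = fault detection rate per day.
a, b = 120.0, 0.03
for t in (10, 50, 100):                        # days of testing already spent
    r = conditional_reliability(7.0, t, a, b)  # reliability over the next week
    print(f"after {t:3d} days: m(t) = {mean_value(t, a, b):6.1f}, R over next 7 days = {r:.3f}")
```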

  17. Hawaii Electric System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  18. Hawaii electric system reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  19. Reliability analysis techniques in power plant design

    International Nuclear Information System (INIS)

    Chang, N.E.

    1981-01-01

    An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)

  20. Inter- and intrarater reliability of two proprioception tests using clinical applicable measurement tools in subjects with and without knee osteoarthritis.

    Science.gov (United States)

    Baert, Isabel A C; Lluch, Enrique; Struyf, Thomas; Peeters, Greta; Van Oosterwijck, Sophie; Tuynman, Joanna; Rufai, Salim; Struyf, Filip

    2018-06-01

    The therapeutic value of proprioceptive-based exercises in knee osteoarthritis (KOA) management warrants investigation of proprioceptive testing methods easily accessible in clinical practice. To estimate inter- and intrarater reliability of the knee joint position sense (KJPS) test and knee force sense (KFS) test in subjects with and without KOA. Cross-sectional test-retest design. Two blinded raters independently performed repeated measures of the KJPS and KFS tests, using an analogue inclinometer and a handheld dynamometer, respectively, in eight KOA patients (12 symptomatic knees) and 26 healthy controls (52 asymptomatic knees). Intraclass correlation coefficients (ICCs; model 2,1), standard error of measurement (SEM) and minimal detectable change with 95% confidence bounds (MDC95) were calculated. For KJPS, results showed good to excellent test-retest agreement (ICCs 0.70-0.95 in KOA patients; ICCs 0.65-0.85 in healthy controls). A 2° measurement error (SEM 1°) was reported when measuring KJPS in multiple test positions and calculating the mean repositioning error. When testing KOA patients pre- and post-therapy, a change in repositioning error larger than 4° (MDC95) is needed to be considered a true change. Measuring KFS using handheld dynamometry showed poor to fair interrater and poor to excellent intrarater reliability in subjects with and without KOA. Measuring KJPS in multiple test positions using an analogue inclinometer and calculating the mean repositioning error is reliable and can be used in clinical practice. We do not recommend the use of the KFS test to clinicians. Further research is required to establish the diagnostic accuracy and validity of our KJPS test in larger knee pain populations. Copyright © 2017 Elsevier Ltd. All rights reserved.
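
    The SEM and MDC95 quantities quoted above follow from the ICC and the sample standard deviation through standard formulas; the sketch below shows that arithmetic with hypothetical numbers chosen only for illustration, not the study's data.

```python
import math

def sem_and_mdc95(sd, icc):
    """Standard error of measurement and minimal detectable change (95% confidence)."""
    sem = sd * math.sqrt(1.0 - icc)        # SEM = SD * sqrt(1 - ICC)
    mdc95 = 1.96 * math.sqrt(2.0) * sem    # MDC95 = 1.96 * sqrt(2) * SEM
    return sem, mdc95

# Hypothetical values: between-subject SD of repositioning error (degrees), test-retest ICC.
sem, mdc95 = sem_and_mdc95(sd=3.0, icc=0.85)
print(f"SEM   = {sem:.1f} deg")
print(f"MDC95 = {mdc95:.1f} deg  (smallest change considered a real change)")
```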

  1. Reliable Detection and Smart Deletion of Malassez Counting Chamber Grid in Microscopic White Light Images for Microbiological Applications.

    Science.gov (United States)

    Denimal, Emmanuel; Marin, Ambroise; Guyot, Stéphane; Journaux, Ludovic; Molin, Paul

    2015-08-01

    In biology, hemocytometers such as Malassez slides are widely used and are effective tools for counting cells manually. In a previous work, a robust algorithm was developed for grid extraction in Malassez slide images. This algorithm was evaluated on a set of 135 images and grids were accurately detected in most cases, but there remained failures for the most difficult images. In this work, we present an optimization of this algorithm that allows for 100% grid detection and a 25% improvement in grid positioning accuracy. These improvements make the algorithm fully reliable for grid detection. This optimization also allows complete erasing of the grid without altering the cells, which eases their segmentation.

  2. Human reliability

    International Nuclear Information System (INIS)

    Bubb, H.

    1992-01-01

    This book resulted from the activity of Task Force 4.2 - 'Human Reliability'. This group was established on February 27th, 1986, at the plenary meeting of the Technical Reliability Committee of VDI, within the framework of the joint committee of VDI on industrial systems technology - GIS. It is composed of representatives of industry, representatives of research institutes, of technical control boards and universities, whose job it is to study how man fits into the technical side of the world of work and to optimize this interaction. In a total of 17 sessions, information from the part of ergonomy dealing with human reliability in using technical systems at work was exchanged, and different methods for its evaluation were examined and analyzed. The outcome of this work was systematized and compiled in this book. (orig.) [de

  3. Microelectronics Reliability

    Science.gov (United States)

    2017-01-17

    [Extraction residue from the report's figures and table of contents; only fragments are recoverable: a figure of inverters connected in a chain, a graph of frequency versus the square root of an unspecified quantity, and text on developing an experimental reliability-estimating methodology intended to illuminate the lifetime reliability of advanced devices and circuits and to yield an accurate estimate of device lifetime, and hence the failure-in-time (FIT) rate of the device.]

  4. A Type-2 fuzzy data fusion approach for building reliable weighted protein interaction networks with application in protein complex detection.

    Science.gov (United States)

    Mehranfar, Adele; Ghadiri, Nasser; Kouhsar, Morteza; Golshani, Ashkan

    2017-09-01

    Detecting protein complexes is an important task in analyzing protein interaction networks. Although many algorithms predict protein complexes in different ways, surveys of interaction networks indicate that about 50% of detected interactions are false positives. Consequently, the accuracy of existing methods needs to be improved. In this paper we propose a novel algorithm to detect protein complexes in 'noisy' protein interaction data. First, we integrate several biological data sources to determine the reliability of each interaction and derive more accurate weights for the interactions. A data fusion component is used for this step, based on the interval type-2 fuzzy voter, which provides an efficient combination of the information sources. This fusion component detects the errors and diminishes their effect on the detection of protein complexes. So in the first step, reliability scores are assigned to every interaction in the network. In the second step, we propose a general protein complex detection algorithm by exploiting and adapting the strong points of other algorithms and existing hypotheses regarding real complexes. Finally, the proposed method is applied to the yeast interaction datasets for predicting the interactions. The results show that our framework has a better performance regarding precision and F-measure than the existing approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A two-stage approach for multi-objective decision making with applications to system reliability optimization

    International Nuclear Information System (INIS)

    Li Zhaojun; Liao Haitao; Coit, David W.

    2009-01-01

    This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify those Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, the data envelopment analysis (DEA) is performed, by comparing the relative efficiency of those solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be easily determined in a more systematic and meaningful way.

  6. Combining Generalized Renewal Processes with Non-Extensive Entropy-Based q-Distributions for Reliability Applications

    Directory of Open Access Journals (Sweden)

    Isis Didier Lins

    2018-03-01

    Full Text Available The Generalized Renewal Process (GRP is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from the Tsallis’ non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and Weibull distributions can model decreasing, constant or increasing failure intensity functions. However, the power law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when adjusting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative for the Weibull-GRP model and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used for their parameters’ estimation by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for the sake of validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems and the obtained results suggest GRP plus q-distributions are promising techniques for the analyses of repairable systems.
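
    For readers unfamiliar with the q-distributions named above, the sketch below evaluates the q-exponential density and survival function using the Tsallis q-exponential e_q(x) = [1 + (1 - q)x]^(1/(1-q)). The parameterization follows a common convention and may differ in detail from the paper's, and the parameter values are purely illustrative.

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1 - q) x]_+^(1/(1-q)), with e_1(x) = exp(x)."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_exponential_pdf(t, q, lam):
    """Density of the q-exponential distribution (q < 2): f(t) = (2 - q) * lam * e_q(-lam t)."""
    return (2.0 - q) * lam * q_exp(-lam * t, q)

def q_exponential_sf(t, q, lam):
    """Survival function: S(t) = [e_q(-lam t)]^(2 - q)."""
    return q_exp(-lam * t, q) ** (2.0 - q)

# Illustrative parameters: q > 1 gives a heavy (power-law) tail, useful for extreme values.
t = np.array([0.0, 1.0, 5.0, 20.0])
for q in (1.0, 1.3):
    pdf = q_exponential_pdf(t, q, lam=0.5)
    sf = q_exponential_sf(t, q, lam=0.5)
    print(f"q = {q}: pdf = {pdf.round(4)}, survival = {sf.round(4)}")
```

    Setting q = 1 recovers the ordinary exponential distribution, which is one way to see that the q-models generalize the classical ones used in GRP analyses.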

  7. Reproducibility and interoperator reliability of obtaining images and measurements of the cervix and uterus with brachytherapy treatment applicators in situ using transabdominal ultrasound.

    Science.gov (United States)

    van Dyk, Sylvia; Garth, Margaret; Oates, Amanda; Kondalsamy-Chennakesavan, Srinivas; Schneider, Michal; Bernshaw, David; Narayan, Kailash

    2016-01-01

    To validate interoperator reliability of brachytherapy radiation therapists (RTs) in obtaining an ultrasound image and measuring the cervix and uterine dimensions using transabdominal ultrasound. Patients who underwent MRI with applicators in situ after the first insertion were included in the study. Imaging was performed by three RTs (RT1, RT2, and RT3) with varying degrees of ultrasound experience. All RTs were required to obtain a longitudinal planning image depicting the applicator in the uterine canal and measure the cervix and uterus. The MRI scan, taken 1 hour after the ultrasound, was used as the reference standard against which all measurements were compared. Measurements were analyzed with intraclass correlation coefficient and Bland-Altman plots. All RTs were able to obtain a suitable longitudinal image for each patient in the study. Mean differences (SD) between MRI and ultrasound measurements obtained by RTs ranged from 3.5 (3.6) to 4.4 (4.23) mm and 0 (3.0) to 0.9 (2.5) mm on the anterior and posterior surface of the cervix, respectively. Intraclass correlation coefficient for absolute agreement between MRI and RTs was >0.9 for all posterior measurement points in the cervix and ranged from 0.41 to 0.92 on the anterior surface. Measurements were not statistically different between RTs at any measurement point. RTs with variable training attained high levels of interoperator reliability when using transabdominal ultrasound to obtain images and measurements of the uterus and cervix with brachytherapy applicators in situ. Access to training and use of a well-defined protocol assist in achieving these high levels of reliability. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  8. Development of a diagnostic test set to assess agreement in breast pathology: practical application of the Guidelines for Reporting Reliability and Agreement Studies (GRRAS).

    Science.gov (United States)

    Oster, Natalia V; Carney, Patricia A; Allison, Kimberly H; Weaver, Donald L; Reisch, Lisa M; Longton, Gary; Onega, Tracy; Pepe, Margaret; Geller, Berta M; Nelson, Heidi D; Ross, Tyler R; Tosteson, Aanna N A; Elmore, Joann G

    2013-02-05

    Diagnostic test sets are a valuable research tool that contributes importantly to the validity and reliability of studies that assess agreement in breast pathology. In order to fully understand the strengths and weaknesses of any agreement and reliability study, however, the methods should be fully reported. In this paper we provide a step-by-step description of the methods used to create four complex test sets for a study of diagnostic agreement among pathologists interpreting breast biopsy specimens. We use the newly developed Guidelines for Reporting Reliability and Agreement Studies (GRRAS) as a basis to report these methods. Breast tissue biopsies were selected from the National Cancer Institute-funded Breast Cancer Surveillance Consortium sites. We used a random sampling stratified according to woman's age (40-49 vs. ≥50), parenchymal breast density (low vs. high) and interpretation of the original pathologist. A 3-member panel of expert breast pathologists first independently interpreted each case using five primary diagnostic categories (non-proliferative changes, proliferative changes without atypia, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma). When the experts did not unanimously agree on a case diagnosis a modified Delphi method was used to determine the reference standard consensus diagnosis. The final test cases were stratified and randomly assigned into one of four unique test sets. We found GRRAS recommendations to be very useful in reporting diagnostic test set development and recommend inclusion of two additional criteria: 1) characterizing the study population and 2) describing the methods for reference diagnosis, when applicable.

  9. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP) and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to inspection and maintenance of physical systems are foreseen. The paper includes a complete numerical example of the application of the model to a software reliability analysis

  10. On the application of nonhomogeneous Poisson process to the reliability analysis of service water pumps of nuclear power plants

    International Nuclear Information System (INIS)

    Cruz Saldanha, Pedro Luiz da.

    1995-12-01

    The purpose of this study is to evaluate the nonhomogeneous Poisson process as a model for the rate of occurrence of failures when it is not constant and the times between failures are neither independent nor identically distributed. For this evaluation, a reliability analysis of the service water pumps of a typical nuclear power plant is made considering the model discussed above, since the pumps are effectively repairable components. Standard statistical techniques, such as maximum likelihood and linear regression, are applied to estimate the parameters of the nonhomogeneous Poisson process model. As a conclusion of the study, the nonhomogeneous Poisson process is adequate to model a rate of occurrence of failures that is a function of time, and can be used where aging mechanisms are present in the operation of repairable systems. (author). 72 refs., 45 figs., 21 tabs
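
    A common concrete choice for the nonhomogeneous Poisson process described above is the power-law intensity u(t) = (beta/theta)(t/theta)^(beta-1). The sketch below computes the closed-form maximum-likelihood estimates of beta and theta from failure-truncated event times; the failure times are hypothetical stand-ins, not the pump data from the thesis.

```python
import math

def power_law_nhpp_mle(times):
    """MLE of power-law NHPP (Crow-AMSAA) parameters from failure-truncated data.

    Intensity: u(t) = (beta / theta) * (t / theta)**(beta - 1);
    expected number of failures by time T: (T / theta)**beta.
    """
    n, T = len(times), times[-1]
    beta = n / sum(math.log(T / t) for t in times[:-1])
    theta = T / n ** (1.0 / beta)
    return beta, theta

# Hypothetical cumulative failure times (operating hours) of one repairable pump.
failure_times = [320.0, 910.0, 1650.0, 2100.0, 2870.0, 3300.0, 3560.0, 3740.0]

beta, theta = power_law_nhpp_mle(failure_times)
print(f"beta  = {beta:.2f}  (> 1 indicates an increasing rate of occurrence of failures)")
print(f"theta = {theta:.0f} h")
```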

  11. Application to nuclear turbines of high-efficiency and reliable 3D-designed integral shrouded blades

    International Nuclear Information System (INIS)

    Watanabe, Eiichiro; Ohyama, Hiroharu; Tashiro, Hikaru; Sugitani, Toshio; Kurosawa, Masaru

    1999-01-01

    Mitsubishi Heavy Industries, Ltd. (MHI) has recently developed new blades for nuclear turbines, in order to achieve higher efficiency and higher reliability. The three-dimensional aerodynamic design for 41-inch and 46-inch blades, their one piece structural design (integral shrouded blades: ISB), and the verification test results using a model steam turbine are described in this paper. The predicted efficiency and lower vibratory stress have been verified. On the basis of these 60 Hz ISB, 50 Hz ISB series are under development using 'the law of similarity' without changing their thermodynamic performance and mechanical stress levels. Our 3D-designed reaction blades which are used for the high pressure and low pressure upstream stages, are also briefly mentioned. (author)

  12. Applications of a fracture mechanics model of structural reliability to the effects of seismic events on reactor piping

    International Nuclear Information System (INIS)

    Harris, D.O.; Lim, E.Y.

    1982-01-01

    A fracture mechanics model of structural reliability is described. The model assumes that failure occurs due to the subcritical and catastrophic growth of as-fabricated defects. The material properties, stress history, number and dimensions of the initial cracks are treated as random variables. Crack growth is calculated using fracture mechanics principles. The model has been used to estimate the influence of earthquakes on the integrity of circumferential girth butt welds in the large (diameter greater than 30 in.) primary coolant system pipes of a commercial pressurized water reactor. In the absence of earthquakes, the probability of leaks and catastrophic double-ended guillotine breaks is estimated to be 10⁻⁶ and 10⁻¹² per plant lifetime, respectively. These probabilities were only slightly increased by the occurrence of earthquakes. (author)
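
    The probabilistic fracture mechanics idea summarized above can be approximated by a Monte Carlo simulation in which an as-fabricated defect grows according to the Paris law da/dN = C (ΔK)^m until it reaches a critical size. The sketch below does this with hypothetical distributions and constants chosen only for illustration; it is not the model or data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: initial crack depth a0 (m), Paris-law constants C and m,
# stress range (MPa), load cycles over the lifetime, critical crack depth (m).
n_sim, cycles = 100_000, 2.0e5
a0 = rng.lognormal(mean=np.log(1.0e-3), sigma=0.5, size=n_sim)
C = rng.lognormal(mean=np.log(2.0e-11), sigma=0.4, size=n_sim)
m, d_sigma, a_crit = 3.0, 100.0, 0.02

a = a0.copy()
steps = 200
dN = cycles / steps                      # integrate crack growth in coarse blocks
for _ in range(steps):
    dK = d_sigma * np.sqrt(np.pi * a)    # simple stress-intensity range for a surface crack
    a += C * dK**m * dN                  # Paris law: da/dN = C * (dK)^m
    a = np.minimum(a, a_crit)            # cap once the critical size is reached

p_fail = np.mean(a >= a_crit)
print(f"estimated leak probability over the lifetime: {p_fail:.2e}")
```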

  13. Reliability of nine programs of topological predictions and their application to integral membrane channel and carrier proteins.

    Science.gov (United States)

    Reddy, Abhinay; Cho, Jaehoon; Ling, Sam; Reddy, Vamsee; Shlykov, Maksim; Saier, Milton H

    2014-01-01

    We evaluated topological predictions for nine different programs, HMMTOP, TMHMM, SVMTOP, DAS, SOSUI, TOPCONS, PHOBIUS, MEMSAT-SVM (hereinafter referred to as MEMSAT), and SPOCTOPUS. These programs were first evaluated using four large topologically well-defined families of secondary transporters, and the three best programs were further evaluated using topologically more diverse families of channels and carriers. In the initial studies, the order of accuracy was: SPOCTOPUS > MEMSAT > HMMTOP > TOPCONS > PHOBIUS > TMHMM > SVMTOP > DAS > SOSUI. Some families, such as the Sugar Porter Family (2.A.1.1) of the Major Facilitator Superfamily (MFS; TC #2.A.1) and the Amino Acid/Polyamine/Organocation (APC) Family (TC #2.A.3), were correctly predicted with high accuracy while others, such as the Mitochondrial Carrier (MC) (TC #2.A.29) and the K(+) transporter (Trk) families (TC #2.A.38), were predicted with much lower accuracy. For small, topologically homogeneous families, SPOCTOPUS and MEMSAT were generally most reliable, while with large, more diverse superfamilies, HMMTOP often proved to have the greatest prediction accuracy. We next developed a novel program, TM-STATS, that tabulates HMMTOP, SPOCTOPUS or MEMSAT-based topological predictions for any subdivision (class, subclass, superfamily, family, subfamily, or any combination of these) of the Transporter Classification Database (TCDB; www.tcdb.org) and examined the following subclasses: α-type channel proteins (TC subclasses 1.A and 1.E), secreted pore-forming toxins (TC subclass 1.C) and secondary carriers (subclass 2.A). Histograms were generated for each of these subclasses, and the results were analyzed according to subclass, family and protein. The results provide an update of topological predictions for integral membrane transport proteins as well as guides for the development of more reliable topological prediction programs, taking family-specific characteristics into account. © 2014 S. Karger AG, Basel.

  14. Approach to reliability assessment

    International Nuclear Information System (INIS)

    Green, A.E.; Bourne, A.J.

    1975-01-01

    Experience has shown that reliability assessments can play an important role in the early design and subsequent operation of technological systems where reliability is at a premium. The approaches to and techniques for such assessments, which are outlined in the paper, have been successfully applied in a variety of applications ranging from individual equipment to large and complex systems. The general approach involves the logical and systematic establishment of the purpose, performance requirements and reliability criteria of systems. This is followed by an appraisal of likely system achievement based on the understanding of different types of variational behavior. A fundamental reliability model emerges from the correlation between the appropriate Q and H functions for performance requirement and achievement. This model may cover the complete spectrum of performance behavior in all the system dimensions

  15. Medical application and clinical validation for reliable and trustworthy physiological monitoring using functional textiles: experience from the HeartCycle and MyHeart project.

    Science.gov (United States)

    Reiter, Harald; Muehlsteff, Jens; Sipilä, Auli

    2011-01-01

    Functional textiles are seen as a promising technology to enable healthcare services and medical care outside hospitals, due to their ability to integrate textile-based sensing and monitoring technologies into daily life. In the past, much effort has been spent on basic functional textile research, already showing that reliable monitoring solutions can be realized. The challenge remains to find and develop suitable medical applications and to fulfil the boundary conditions for medical endorsement and exploitation. The HeartCycle vest described in this abstract serves as an example of a functional textile carefully developed according to the requirements of a specific medical application, its clinical validation, the related certification aspects and the next improvement steps towards exploitation.

  16. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where... the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena... identification. Application of the proposed method can be found in many real world systems...

  17. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  18. Validation of a standard forensic anthropology examination protocol by measurement of applicability and reliability on exhumed and archive samples of known biological attribution.

    Science.gov (United States)

    Francisco, Raffaela Arrabaça; Evison, Martin Paul; Costa Junior, Moacyr Lobo da; Silveira, Teresa Cristina Pantozzi; Secchieri, José Marcelo; Guimarães, Marco Aurelio

    2017-10-01

    Forensic anthropology makes an important contribution to human identification and to assessment of the causes and mechanisms of death and body disposal in criminal and civil investigations, including those related to atrocity, disaster and trafficking victim identification. The methods used are comparative, relying on assignment of questioned material to categories observed in standard reference material of known attribution. Reference collections typically originate in Europe and North America, and are not necessarily representative of contemporary global populations. Methods based on them must be validated when applied to novel populations. This study describes the validation of a standardized forensic anthropology examination protocol by application to two contemporary Brazilian skeletal samples of known attribution. One sample (n=90) was collected from exhumations after 7-35 years of burial and the second (n=30) was collected following successful investigations in routine casework. The study presents measurement of (1) the applicability of each of the methods used and (2) the reliability with which the biographic parameters were assigned in each case. The results are discussed with reference to published assessments of methodological reliability regarding sex, age and, in particular, ancestry estimation. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Redefining reliability

    International Nuclear Information System (INIS)

    Paulson, S.L.

    1995-01-01

    Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need, and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas

  20. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications Their Use in Reliability and DNA Analysis

    CERN Document Server

    Barbu, Vlad

    2008-01-01

    Semi-Markov processes are much more general and better adapted to applications than Markov processes because sojourn times in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn times in the Markov case. This book is concerned with the estimation of discrete-time semi-Markov and hidden semi-Markov processes

  1. Human reliability analysis—Taxonomy and praxes of human entropy boundary conditions for marine and offshore applications

    International Nuclear Information System (INIS)

    El-Ladan, S.B.; Turan, O.

    2012-01-01

    This is the first stage towards the development of a human reliability model called human entropy (HENT). The paper presents qualitative and quantitative taxonomies and praxes of performance shaping factors (PSF) for marine and offshore operations. Three structured and guided expert elicitation methods were used in this study. The experts interrogated accident reports and databases, from which the generic root causes of failures/accidents in operations were determined. The elicitations led to the development of 9 qualitative and quantitative human influencing factors, called Human Entropy Boundary Conditions (HEBC). Further explication of the 9 HEBC yielded 137 quantifiable explanatory variables, called hypothetical constructs (HyC). The HyCs are used to identify potential risks due to shrinkages in safety standards. Human entropy is a departure from traditional human error and was adopted because of the tripartite human failure modes: error, local rationality and extraneous acts, all of which signify disorderliness and are seemingly inevitable in maritime operations. The praxes and scaling of the HEBC were developed as guidance towards a practice-oriented HRA and provide inputs for measuring human disorderliness in maritime operations.

  2. Evaluation and Reliability Assessment of GaN-on-Si MIS-HEMT for Power Switching Applications

    Directory of Open Access Journals (Sweden)

    Po-Chien Chou

    2017-02-01

    Full Text Available This paper reports an extensive analysis of the physical mechanisms responsible for the failure of GaN-based metal–insulator–semiconductor (MIS) high electron mobility transistors (HEMTs). When stressed under high applied electric fields, the traps at the dielectric/III-N barrier interface and inside the III-N barrier cause an increase in dynamic on-resistance and a shift of threshold voltage, which might affect the long-term stability of these devices. More detailed investigations are needed to identify epitaxy- or process-related degradation mechanisms and to understand their impact on electrical properties. The present paper proposes a suitable methodology to characterize the degradation and failure mechanisms of GaN MIS-HEMTs subjected to stress under various off-state conditions. There are three major stress conditions: VDS = 0 V, off-state, and off-state (cascode-connection). Changes of direct current (DC) figures of merit in voltage step-stress experiments are measured, statistics are studied, and correlations are investigated. Hot-electron stress produces permanent change which can be attributed to charge trapping phenomena and the generation of deep levels or interface states. The simultaneous generation of interface (and/or bulk) and buffer traps can account for the observed degradation modes and mechanisms. These findings provide several critical characteristics to evaluate the electrical reliability of GaN MIS-HEMTs, which are borne out by step-stress experiments.

  3. Driver behavior in car-to-pedestrian incidents: An application of the Driving Reliability and Error Analysis Method (DREAM).

    Science.gov (United States)

    Habibovic, Azra; Tivesten, Emma; Uchida, Nobuyuki; Bärgman, Jonas; Ljung Aust, Mikael

    2013-01-01

    To develop relevant road safety countermeasures, it is necessary to first obtain an in-depth understanding of how and why safety-critical situations such as incidents, near-crashes, and crashes occur. Video-recordings from naturalistic driving studies provide detailed information on events and circumstances prior to such situations that is difficult to obtain from traditional crash investigations, at least when it comes to the observable driver behavior. This study analyzed causation in 90 video-recordings of car-to-pedestrian incidents captured by onboard cameras in a naturalistic driving study in Japan. The Driving Reliability and Error Analysis Method (DREAM) was modified and used to identify contributing factors and causation patterns in these incidents. Two main causation patterns were found. In intersections, drivers failed to recognize the presence of the conflict pedestrian due to visual obstructions and/or because their attention was allocated towards something other than the conflict pedestrian. In incidents away from intersections, this pattern reoccurred along with another pattern showing that pedestrians often behaved in unexpected ways. These patterns indicate that an interactive advanced driver assistance system (ADAS) able to redirect the driver's attention could have averted many of the intersection incidents, while autonomous systems may be needed away from intersections. Cooperative ADAS may be needed to address issues raised by visual obstructions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Application of reliability-centered maintenance to boiling water reactor emergency core cooling systems fault-tree analysis

    International Nuclear Information System (INIS)

    Choi, Y.A.; Feltus, M.A.

    1995-01-01

    Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. RCM is a system-function-based technique for improving a preventive maintenance (PM) program, which is applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on component condition-directed maintenance tasks. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine if MOV stroke time testing was useful as a condition-directed PM task; (b) to determine and compare the plant-specific MOV failure data from a broad RCM philosophy time period with those from a PM period and also with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor of imminent failure and should be considered a go/no-go test. The failure data from the broad RCM philosophy showed an improvement over the PM-period failure rates in the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates

  5. Application of reliability techniques to prioritize BWR [boiling water reactor] recirculation loop welds for in-service inspection

    International Nuclear Information System (INIS)

    Holman, G.S.

    1989-12-01

    In January 1988 the US Nuclear Regulatory Commission issued Generic Letter 88-01 together with NUREG-0313, Revision 2, "Technical Report on Material Selection and Processing Guidelines for BWR Coolant Pressure Boundary Piping," to implement NRC long-range plans for addressing the problem of stress corrosion cracking in boiling water reactor piping. NUREG-0313 presents guidelines for categorizing BWR pipe welds according to their SCC condition (e.g., presence of known cracks, implementation of measures for mitigating SCC) as well as recommended inspection schedules (e.g., percentage of welds inspected, inspection frequency) for each weld category. NUREG-0313 does not, however, specify individual welds to be inspected. To address this issue, the Lawrence Livermore National Laboratory developed two recommended inspection samples for welds in a typical BWR recirculation loop. Using a probabilistic fracture mechanics model, LLNL prioritized loop welds on the basis of estimated leak probabilities. The results of this evaluation indicate that riser welds and bypass welds should be given priority attention over other welds. Larger-diameter welds as a group can be considered of secondary importance compared to riser and bypass welds. A "blind" comparison between the probability-based inspection samples and data from actual field inspections indicated that the probabilistic analysis generally captured the welds which the field inspections identified as warranting repair or replacement. Discrepancies between the field data and the analytic results can likely be attributed to simplifying assumptions made in the analysis. The overall agreement between analysis and field experience suggests that reliability techniques, when combined with historical experience, represent a sound technical basis on which to define meaningful weld inspection programs. 13 refs., 8 figs., 5 tabs

  6. Response surface methodology approach for structural reliability analysis: An outline of typical applications performed at CEC-JRC, Ispra

    International Nuclear Information System (INIS)

    Lucia, A.C.

    1982-01-01

    The paper presents the main results of the work carried out at JRC-Ispra on the specific problems posed by the application of the response surface methodology to the exploration of structural and nuclear reactor safety codes. Several relevant studies have been completed: assessment of structural behaviour in the case of seismic occurrences; determination of the probability of coherent blockage in LWR fuel elements due to a LOCA occurrence; analysis of ATWS consequences in PWR reactors by means of the ALMOD code; and analysis of the first wall of an experimental fusion reactor by means of the Bersafe code. (orig.)

  7. A practical approach for calculating reliable cost estimates from observational data: application to cost analyses in maternal and child health.

    Science.gov (United States)

    Salemi, Jason L; Comins, Meg M; Chandler, Kristen; Mogos, Mulubrhan F; Salihu, Hamisu M

    2013-08-01

    Comparative effectiveness research (CER) and cost-effectiveness analysis are valuable tools for informing health policy and clinical care decisions. Despite the increased availability of rich observational databases with economic measures, few researchers have the skills needed to conduct valid and reliable cost analyses for CER. The objectives of this paper are to (i) describe a practical approach for calculating cost estimates from hospital charges in discharge data using publicly available hospital cost reports, and (ii) assess the impact of using different methods for cost estimation in maternal and child health (MCH) studies by conducting economic analyses on gestational diabetes (GDM) and pre-pregnancy overweight/obesity. In Florida, we have constructed a clinically enhanced, longitudinal, encounter-level MCH database covering over 2.3 million infants (and their mothers) born alive from 1998 to 2009. Using this as a template, we describe a detailed methodology to use publicly available data to calculate hospital-wide and department-specific cost-to-charge ratios (CCRs), link them to the master database, and convert reported hospital charges to refined cost estimates. We then conduct an economic analysis as a case study on women by GDM and pre-pregnancy body mass index (BMI) status to compare the impact of using different methods on cost estimation. Over 60 % of inpatient charges for birth hospitalizations came from the nursery/labor/delivery units, which have very different cost-to-charge markups (CCR = 0.70) than the commonly substituted hospital average (CCR = 0.29). Using estimated mean, per-person maternal hospitalization costs for women with GDM as an example, unadjusted charges ($US14,696) grossly overestimated actual cost, compared with hospital-wide ($US3,498) and department-level ($US4,986) CCR adjustments. However, the refined cost estimation method, although more accurate, did not alter our conclusions that infant/maternal hospitalization costs
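
    The cost-to-charge adjustment described above is simple arithmetic once the CCRs are linked to each encounter. The sketch below contrasts a hospital-wide CCR with department-specific CCRs for one itemized charge record; all dollar amounts and ratio values are hypothetical stand-ins, not figures from the study.

```python
# Hypothetical itemized charges for one birth hospitalization, by revenue department.
charges = {"nursery": 6000.0, "labor_delivery": 5000.0, "pharmacy": 2000.0, "imaging": 1700.0}

# Hypothetical cost-to-charge ratios derived from public hospital cost reports.
dept_ccr = {"nursery": 0.70, "labor_delivery": 0.70, "pharmacy": 0.25, "imaging": 0.30}
hospital_wide_ccr = 0.29

dept_cost = sum(amount * dept_ccr[dept] for dept, amount in charges.items())
crude_cost = sum(charges.values()) * hospital_wide_ccr

print(f"total charges            : ${sum(charges.values()):>9,.0f}")
print(f"hospital-wide CCR cost   : ${crude_cost:>9,.0f}")
print(f"department-level CCR cost: ${dept_cost:>9,.0f}")
```

    The gap between the last two figures illustrates why the paper argues for department-specific ratios when high-markup units such as nursery and labor/delivery dominate the bill.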

  8. Effect of brief training on reliability and applicability of Global Assessment of functioning scale by Psychiatric clinical officers in Uganda.

    Science.gov (United States)

    Abbo, C; Okello, E S; Nakku, J

    2013-03-01

    The Global Assessment of Functioning (GAF) is the standard method and an essential tool for representing a clinician's judgment of a patient's overall level of psychological, social and occupational functioning. As such, it is probably the single most widely used method for assessing impairment among patients with psychiatric illnesses. To assess the effects of a one-hour training on the application of the GAF by Psychiatric Clinical Officers in a Ugandan setting. Five psychiatrists and five Psychiatric Clinical Officers (PCOs), or Assistant Medical Officers who hold a 2-year diploma in Clinical Psychiatry, were randomly selected to independently rate a video-recorded psychiatric interview according to the DSM IV-TR. The PCOs were then offered a one-hour training on how to rate the GAF scale and asked to rate the video case interview again. All ratings were assigned on the basis of functioning over the past one year, at admission and currently. Intraclass correlations (ICC) were computed using two-way mixed models. The ICCs between the psychiatrists and the PCOs before training for the past one year, at admission and current functioning were +0.48, +0.51 and +0.59, respectively. After training, the ICC coefficients were +0.60, +0.82 and +0.83. Brief training given to PCOs improved their GAF ratings to acceptable levels. There is a need for formal training in the use of the GAF for this cadre of psychiatric practitioners.

  9. Reliability in mechanics: the application of experience feedback; La fiabilite en mecanique: mise en pratique du retour d'experience

    Energy Technology Data Exchange (ETDEWEB)

    Coudray, R.

    1994-12-31

    After a short overview of the available methods for statistical multi-dimensional studies, an application of these methods is described using experience feedback from French nuclear reactors. The equipment studied is the RCV (chemical and volumetric control system) pump of the 900 MW PWR-type reactors, for which the data used in the study are explained. The aim of the study is to show the pertinence of the failure rate as an indicator of equipment aging. This aging is illustrated by the most significant characteristics, with an indication of their significance level. The method used combines, in several steps or evolutions, the results from a mixed classification with those from a multiple correspondence analysis. (J.S.). 8 refs., 6 figs., 3 tabs.

  10. Simple, Rapid and Reliable Preparation of [11C]-(+)-α-DTBZ of High Quality for Routine Applications

    Directory of Open Access Journals (Sweden)

    Jiahe Tian

    2012-06-01

    Full Text Available [11C]-(+)-α-DTBZ has been used as a marker of dopaminergic terminal densities in the human striatum and of its target expressed in islet beta cells in the pancreas. We aimed to establish a fully automated and simple procedure for the synthesis of [11C]-(+)-α-DTBZ for routine applications. [11C]-(+)-α-DTBZ was synthesized from a 9-hydroxy precursor in acetone and potassium hydroxide with [11C]-methyl triflate and was purified by solid phase extraction using a Vac tC-18 cartridge. Radiochemical yields based on [11C]-methyl triflate (corrected for decay) were 82.3% ± 3.6%, with a specific radioactivity of 60 GBq/mmol. The time elapsed from end of bombardment to release of the product for quality control was less than 20 min.

  11. NOAA Operational Model Archive Distribution System (NOMADS): High Availability Applications for Reliable Real Time Access to Operational Model Data

    Science.gov (United States)

    Alpert, J. C.; Wang, J.

    2009-12-01

    To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert, preferred-partner environmental prediction services and represent a critical national resource for operational and research communities affected by climate, weather and water. NOMADS is now delivering high-availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The user (client) executes what is efficient to execute on the client, and the server efficiently provides format-independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. In this way the paradigm lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions, so that collections of data in disparate places can be accessed simultaneously, with results processed on servers and clients to produce a needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real-time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks including

  12. Development and application of a cost-benefit framework for energy reliability. Using probabilistic methods in network planning and regulation to enhance social welfare. The N-1 rule

    International Nuclear Information System (INIS)

    Nooij, Michiel de; Baarsma, Barbara; Bloemhof, Gabriel; Dijk, Harold; Slootweg, Han

    2010-01-01

    Although electricity is crucial to many activities in developed societies, guaranteeing a maximum reliability of supply to end-users is extremely costly. This situation gives rise to a trade-off between the costs and benefits of reliability. The Dutch government has responded to this trade-off by changing the rule stipulating that electricity networks must be able to maintain supply even if one component fails (known as the N-1 rule), even in maintenance situations. This rule was changed by adding the phrase 'unless the costs exceed the benefits.' We have developed a cost-benefit framework for the implementation and application of this new rule. The framework requires input on failure probability, the cost of supply interruptions to end-users and the cost of investments. A case study of the Dutch grid shows that the method is indeed practicable and that it is highly unlikely that N-1 during maintenance will enhance welfare in the Netherlands. Therefore, including the limitation 'unless the costs exceed the benefits' in the rule has been a sensible policy for the Netherlands, and would also be a sensible policy for other countries. (author)
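
    The cost-benefit test implied by the amended rule reduces to comparing the annualized cost of the extra network investment with the expected annual interruption cost it avoids. The sketch below performs that comparison with purely hypothetical inputs (failure probability during maintenance, energy not supplied, value of lost load, investment cost); it is not the framework's calibrated Dutch data.

```python
# Minimal sketch of the cost-benefit test for maintaining N-1 during maintenance.
# All inputs are hypothetical illustrations, not figures from the study.
p_overlap = 0.002           # probability per year that a failure occurs during maintenance
energy_not_supplied = 50e3  # kWh lost if that overlap occurs
voll = 8.0                  # value of lost load, euro per kWh
investment = 3.0e6          # extra grid investment to keep N-1 during maintenance, euro
lifetime, rate = 40, 0.05   # asset lifetime (years) and discount rate

# Annuity factor converts the one-off investment into an equivalent annual cost.
annuity = rate / (1.0 - (1.0 + rate) ** -lifetime)
annual_cost = investment * annuity
annual_benefit = p_overlap * energy_not_supplied * voll

print(f"annual cost of redundancy : {annual_cost:10,.0f} euro")
print(f"annual avoided outage cost: {annual_benefit:10,.0f} euro")
print("welfare-enhancing" if annual_benefit > annual_cost else "costs exceed benefits")
```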

  13. Development of improved processing and evaluation methods for high reliability structural ceramics for advanced heat engine applications Phase II. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Pujari, V.J.; Tracey, D.M.; Foley, M.R. [and others]

    1996-02-01

    The research program had as goals the development and demonstration of significant improvements in processing methods, process controls, and nondestructive evaluation (NDE) which can be commercially implemented to produce high reliability silicon nitride components for advanced heat engine applications at temperatures to 1370{degrees}C. In Phase I of the program a process was developed that resulted in a silicon nitride - 4 w% yttria HIP`ed material (NCX 5102) that displayed unprecedented strength and reliability. An average tensile strength of 1 GPa and a strength distribution following a 3-parameter Weibull distribution were demonstrated by testing several hundred buttonhead tensile specimens. The Phase II program focused on the development of methodology for colloidal consolidation producing green microstructure which minimizes downstream process problems such as drying, shrinkage, cracking, and part distortion during densification. Furthermore, the program focused on the extension of the process to gas pressure sinterable (GPS) compositions. Excellent results were obtained for the HIP composition processed for minimal density gradients, both with respect to room-temperature strength and high-temperature creep resistance. Complex component fabricability of this material was demonstrated by producing engine-vane prototypes. Strength data for the GPS material (NCX-5400) suggest that it ranks very high relative to other silicon nitride materials in terms of tensile/flexure strength ratio, a measure of volume quality. This high quality was derived from the closed-loop colloidal process employed in the program.
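
    The 3-parameter Weibull characterization of tensile strength mentioned above can be sketched with a short fit; the strength values below are synthetic and the fit is only illustrative, not the NCX-5102 database:

    ```python
    # Hedged illustration of fitting a 3-parameter Weibull (shape, threshold, scale)
    # to tensile-strength data; the sample values are synthetic.
    from scipy import stats

    # synthetic strengths (MPa) clustered around ~1 GPa, as in the abstract
    strengths = 750 + stats.weibull_min.rvs(c=4.0, scale=300, size=200, random_state=0)

    shape, threshold, scale = stats.weibull_min.fit(strengths)
    print(f"Weibull modulus m = {shape:.1f}, threshold = {threshold:.0f} MPa, "
          f"scale = {scale:.0f} MPa")

    # Probability that strength exceeds an applied stress sigma (a simple reliability statement)
    sigma = 800.0
    reliability = stats.weibull_min.sf(sigma, shape, loc=threshold, scale=scale)
    print(f"P(strength > {sigma:.0f} MPa) = {reliability:.3f}")
    ```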

  14. Mathematical reliability an expository perspective

    CERN Document Server

    Mazzuchi, Thomas; Singpurwalla, Nozer

    2004-01-01

    In this volume consideration was given to more advanced theoretical approaches and novel applications of reliability to ensure that topics having a futuristic impact were specifically included. Topics like finance, forensics, information, and orthopedics, as well as the more traditional reliability topics were purposefully undertaken to make this collection different from the existing books in reliability. The entries have been categorized into seven parts, each emphasizing a theme that seems poised for the future development of reliability as an academic discipline with relevance. The seven parts are networks and systems; recurrent events; information and design; failure rate function and burn-in; software reliability and random environments; reliability in composites and orthopedics, and reliability in finance and forensics. Embedded within the above are some of the other currently active topics such as causality, cascading, exchangeability, expert testimony, hierarchical modeling, optimization and survival...

  15. Medical device reliability and associated areas

    National Research Council Canada - National Science Library

    Dhillon, Balbir S

    2000-01-01

    .... Although the history of reliability engineering can be traced back to World War II, the application of reliability engineering concepts to medical devices is a fairly recent idea that goes back to the latter part of the 1960s when many publications on medical device reliability emerged. Today, a large number of books on general reliability have been...

  16. SU-F-T-433: Evaluation of a New Dose Mimicking Application for Clinical Flexibility and Reliability

    International Nuclear Information System (INIS)

    Hoffman, D; Nair, C Kumaran; Wright, C; Yamamoto, T; Mayadev, J; Valicenti, R; Benedict, S; Rong, Y; Markham, J

    2016-01-01

    Purpose: Clinical workflow and machine down time occasionally require patients to be temporarily treated on a system other than the initial treatment machine. A new commercial dose mimicking application provides automated cross-platform treatment planning to expedite this clinical flexibility. The aim of this work is to evaluate the feasibility of automatic plan creation and establish a robust clinical workflow for prostate and pelvis patients. Methods: Five prostate and five pelvis patients treated with helical plans were selected for re-planning with the dose mimicking application, covering both simple and complex scenarios. Two-arc VMAT and 7- and 9-field IMRT plans were generated for each case, with the objective function of achieving similar dose volume histogram from the initial helical plans. Dosimetric comparisons include target volumes and organs at risk (OARs) (rectum, bladder, small bowel, femoral heads, etc.). Dose mimicked plans were evaluated by a radiation oncologist, and patient-specific QAs were performed to validate delivery. Results: Overall plan generation and transfer required around 30 minutes of dosimetrist’s time once the dose-mimicking protocol is setup for each site. The resulting VMAT and 7- and 9-field IMRT plans achieved equivalent PTV coverage and homogeneity (D99/DRx = 97.3%, 97.2%, 97.2% and HI = 6.0, 5.8, and 5.9, respectively), compared to helical plans (97.6% and 4.6). The OAR dose discrepancies were up to 6% in rectum Dmean, but generally lower in bladder, femoral heads, bowel and penile bulb. In the context of 1–5 fractions, the radiation oncologist evaluated the dosimetric changes as not clinically significant. All delivery QAs achieved >90% pass with a 3%/3mm gamma criteria. Conclusion: The automated dose-mimicking workflow offers a strategy to avoid missing treatment fractions due to machine down time with non-clinically significant changes in dosimetry. Future work will further optimize dose mimicking plans and

  17. SU-F-T-433: Evaluation of a New Dose Mimicking Application for Clinical Flexibility and Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, D; Nair, C Kumaran; Wright, C; Yamamoto, T; Mayadev, J; Valicenti, R; Benedict, S; Rong, Y [University of California Davis Medical Center, Sacramento, CA (United States); Markham, J [Raysearch Laboratories, Garden City, NY (United States)

    2016-06-15

    Purpose: Clinical workflow and machine down time occasionally require patients to be temporarily treated on a system other than the initial treatment machine. A new commercial dose mimicking application provides automated cross-platform treatment planning to expedite this clinical flexibility. The aim of this work is to evaluate the feasibility of automatic plan creation and establish a robust clinical workflow for prostate and pelvis patients. Methods: Five prostate and five pelvis patients treated with helical plans were selected for re-planning with the dose mimicking application, covering both simple and complex scenarios. Two-arc VMAT and 7- and 9-field IMRT plans were generated for each case, with the objective function of achieving similar dose volume histogram from the initial helical plans. Dosimetric comparisons include target volumes and organs at risk (OARs) (rectum, bladder, small bowel, femoral heads, etc.). Dose mimicked plans were evaluated by a radiation oncologist, and patient-specific QAs were performed to validate delivery. Results: Overall plan generation and transfer required around 30 minutes of dosimetrist’s time once the dose-mimicking protocol is setup for each site. The resulting VMAT and 7- and 9-field IMRT plans achieved equivalent PTV coverage and homogeneity (D99/DRx = 97.3%, 97.2%, 97.2% and HI = 6.0, 5.8, and 5.9, respectively), compared to helical plans (97.6% and 4.6). The OAR dose discrepancies were up to 6% in rectum Dmean, but generally lower in bladder, femoral heads, bowel and penile bulb. In the context of 1–5 fractions, the radiation oncologist evaluated the dosimetric changes as not clinically significant. All delivery QAs achieved >90% pass with a 3%/3mm gamma criteria. Conclusion: The automated dose-mimicking workflow offers a strategy to avoid missing treatment fractions due to machine down time with non-clinically significant changes in dosimetry. Future work will further optimize dose mimicking plans and

  18. Acoustic feedwater heater leak detection: Industry application of low & high frequency detection increases response and reliability

    International Nuclear Information System (INIS)

    Woyshner, W.S.; Bryson, T.; Robertson, M.O.

    1993-01-01

    The Electric Power Research Institute has sponsored research associated with acoustic Feedwater Heater Leak Detection since the early 1980s. Results indicate that this technology is economically beneficial and dependable. Recent research work has employed acoustic sensors and signal conditioning with wider frequency range response and background noise elimination techniques to provide increased accuracy and dependability. Dual frequency sensors have been applied at a few facilities to provide information on this application of dual frequency response. Sensor mounting methods and attenuation due to various mounting configurations are more conclusively understood. These are depicted and discussed in detail. The significance of trending certain plant parameters such as heat cycle flows, heater vent and drain valve position, proper relief valve operation, etc. is also addressed. Test data were collected at various facilities to monitor the effect of varying several related operational parameters. A group of FWHLD Users have been involved from the inception of the project and reports on their latest successes and failures, along with various data depicting early detection of FWHLD tube leaks, will be included. 3 refs., 12 figs., 1 tab

  19. A reliable and controllable graphene doping method compatible with current CMOS technology and the demonstration of its device applications

    Science.gov (United States)

    Kim, Seonyeong; Shin, Somyeong; Kim, Taekwang; Du, Hyewon; Song, Minho; Kim, Ki Soo; Cho, Seungmin; Lee, Sang Wook; Seo, Sunae

    2017-04-01

    The modulation of charge carrier concentration allows us to tune the Fermi level (E_F) of graphene thanks to the low electronic density of states near the E_F. The introduced metal oxide thin films as well as the modified transfer process can elaborately maneuver the amounts of charge carrier concentration in graphene. The self-encapsulation provides a solution to overcome the stability issues of metal oxide hole dopants. We have manipulated systematic graphene p-n junction structures for electronic or photonic application-compatible doping methods with current semiconducting process technology. We have demonstrated the anticipated transport properties on the designed heterojunction devices with non-destructive doping methods. This mitigates the device architecture limitation imposed in previously known doping methods. Furthermore, we employed E_F-modulated graphene source/drain (S/D) electrodes in a low-dimensional transition metal dichalcogenide field effect transistor (TMDFET). We have succeeded in fulfilling n-type, ambipolar, or p-type field effect transistors (FETs) by moving around only the graphene work function. Besides, the graphene/transition metal dichalcogenide (TMD) junction in both p- and n-type transistors reveals linear voltage dependence with the enhanced contact resistance. We accomplished the complete conversion of p-/n-channel transistors with S/D tunable electrodes. The E_F modulation using metal oxide enables graphene to access state-of-the-art complementary metal-oxide-semiconductor (CMOS) technology.

  20. Reliability engineering theory and practice

    CERN Document Server

    Birolini, Alessandro

    2010-01-01

    Presenting a solid overview of reliability engineering, this volume enables readers to build and evaluate the reliability of various components, equipment and systems. Current applications are presented, and the text itself is based on the author's 30 years of experience in the field.

  1. Reliable TLDA-microvolume UV spectroscopy with applications in chemistry and biosciences for microlitre analysis and rapid pipette calibration

    Science.gov (United States)

    McMillan, Norman; O'Neill, Martina; Smith, Stephen; Hammond, John; Riedel, Sven; Arthure, Kevin; Smith, S.

    2009-05-01

    A TLDA-microvolume (transmitted light drop analyser) accessory for use with a standard UV-visible fibre spectrophotometer is described. The physics of the elegantly simple optical design is described along with the experimental testing of this accessory. The modelling of the arrangement is fully explored to investigate the performance of the drop spectrophotometer. The design optimizes the focusing to deliver the highest quality spectra, rapid and simple sample handling and, importantly, no detectable carryover on the single quartz drophead. Results of spectral measurements in a laboratory providing NIST standards show the closest correlation between modelled pathlength and experimental measurement for different drop volumes in the range 0.7-3 µl. This instrument accessory delivers remarkably accurate and reproducible results that are good enough to allow the accessory to be used for rapid pipette calibration to avoid the laborious weighing methods currently employed. Measurements on DNA standards and proteins are given to illustrate the main application area of biochemistry for this accessory. The accessory has a measurement range of at least 0-60 A units without sample dilution and, since there exists an accurate volume-pathlength relationship, the drop volume used in any specific measurement or assay should be optimized to minimize the photometric error. Studies demonstrate that the cleaning of the drophead with lab wipes results in no measurable carryover. This important practical result is confirmed from direct reading of the accessory and an analytical balance which was used to perform carryover studies. For further information on the TLDA please contact: Drop Technology, Unit 2, Tallaght Business Park, Whitestown, Dublin 24, Republic of Ireland. email: info@droptechnology.com.

  2. An Introduction To Reliability

    International Nuclear Information System (INIS)

    Park, Kyoung Su

    1993-08-01

    This book introduces reliability, covering the definition of and requirements for reliability, the system life cycle, and reliability and failure rate, including reliability characteristics, chance failures, time-varying failure rates, failure modes and replacement. It also addresses reliability in engineering design, reliability testing under failure-rate assumptions, plotting of reliability data, prediction of system reliability, system maintenance, and failure topics including failure relays and the analysis of system safety.

  3. Nonparametric predictive inference in reliability

    International Nuclear Information System (INIS)

    Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.

    2002-01-01

    We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on the introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere
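
    For the uncensored case, the NPI bounds for the survival of a future observation can be written down directly from Hill's A(n) assumption; a minimal sketch, with hypothetical failure times, is given below (right-censoring requires the more involved treatment discussed in the paper):

    ```python
    # Minimal sketch of NPI-style lower/upper survival probabilities for a future
    # observation, based on n fully observed (uncensored) failure times and
    # Hill's A_(n) assumption. Standard uncensored-case formulas are assumed.
    import numpy as np

    def npi_survival_bounds(times, t):
        """Lower/upper probability that the next failure time exceeds t."""
        times = np.sort(np.asarray(times, dtype=float))
        n = len(times)
        exceed = np.sum(times > t)          # observations strictly beyond t
        lower = exceed / (n + 1)
        upper = min((exceed + 1) / (n + 1), 1.0)
        return lower, upper

    failure_times = [120, 340, 410, 555, 780, 990]   # hypothetical data
    print(npi_survival_bounds(failure_times, t=500))  # e.g. (3/7, 4/7)
    ```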

  4. A point of application study to determine the accuracy, precision and reliability of a low-cost balance plate for center of pressure measurement.

    Science.gov (United States)

    Goble, Daniel J; Khan, Ehran; Baweja, Harsimran S; O'Connor, Shawn M

    2018-04-11

    Changes in postural sway measured via force plate center of pressure have been associated with many aspects of human motor ability. A previous study validated the accuracy and precision of a relatively new, low-cost and portable force plate called the Balance Tracking System (BTrackS). This work compared a laboratory-grade force plate versus BTrackS during human-like dynamic sway conditions generated by an inverted pendulum device. The present study sought to extend previous validation attempts for BTrackS using a more traditional point of application (POA) approach. Computer numerical control (CNC) guided application of ∼155 N of force was applied five times to each of 21 points on five different BTrackS Balance Plate (BBP) devices with a hex-nose plunger. Results showed excellent agreement (ICC > 0.999) between the POAs and measured COP by the BBP devices, as well as high accuracy ( 0.999) providing evidence of almost perfect inter-device reliability. Taken together, these results provide an important, static corollary to the previously obtained dynamic COP results from inverted pendulum testing of the BBP. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Adaptive management of energy consumption, reliability and delay of wireless sensor node: Application to IEEE 802.15.4 wireless sensor node.

    Science.gov (United States)

    Kone, Cheick Tidjane; Mathias, Jean-Denis; De Sousa, Gil

    2017-01-01

    Designing a Wireless Sensor Network (WSN) to achieve a high Quality of Service (QoS) (network performance and durability) is a challenging problem. We address it by focusing on the performance of the 802.15.4 communication protocol, because the IEEE 802.15.4 Standard is considered one of the reference technologies in WSNs. In this paper, we propose to control the sustainable use of resources (i.e., energy consumption, reliability and timely packet transmission) of a wireless sensor node equipped with photovoltaic cells by an adaptive tuning not only of the MAC (Medium Access Control) parameters but also of the sampling frequency of the node. To do this, we use one of the existing control approaches, namely viability theory, which aims to preserve the functions and controls of a dynamic system within a set of desirable states. An analytical model describing the evolution of nodal resources over time is derived and used by a viability algorithm for the adaptive tuning of the IEEE 802.15.4 MAC protocol. The simulation analysis shows that our solution ensures indefinitely, in the absence of hardware failure, the operations (lifetime duration, reliability and timely packet transmission) of an 802.15.4 WSN, and that the sampling frequency of the node can temporarily be increased beyond the regular sampling rate. The latter brings advantages for agricultural and environmental applications such as precision agriculture, flood or fire prevention. The main results show that the approach enables the node to send more information when critical events occur without running out of energy. Finally, we argue that our approach is generic and can be applied to other types of WSN.
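
    A crude, hypothetical sketch of the idea (not the authors' analytical model or viability algorithm): a discrete-time energy balance for a solar-powered node whose sampling frequency is adapted so that the stored energy stays inside a desirable set:

    ```python
    # Illustrative sketch: a naive controller raises the sampling frequency when
    # stored energy allows it and lowers it near the boundary of the "viable" set.
    # All constants are hypothetical.
    E_MIN, E_MAX = 2.0, 10.0          # Joules defining the desirable (viable) set
    harvest = 0.05                    # J gained per time step from photovoltaics
    cost_per_sample = 0.02            # J spent per transmitted sample

    def step(energy, f_sample):
        """One time step of the energy dynamics (capped at battery capacity)."""
        return min(E_MAX, energy + harvest - cost_per_sample * f_sample)

    energy, f_sample = 6.0, 2         # initial state and samples per step
    for t in range(100):
        # adapt the control: back off near the viability boundary, speed up otherwise
        if energy < E_MIN + 1.0:
            f_sample = max(1, f_sample - 1)
        elif energy > 0.8 * E_MAX:
            f_sample += 1
        energy = step(energy, f_sample)

    print(f"final energy {energy:.2f} J, sampling frequency {f_sample}")
    ```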

  6. Seizure-Onset Mapping Based on Time-Variant Multivariate Functional Connectivity Analysis of High-Dimensional Intracranial EEG: A Kalman Filter Approach.

    Science.gov (United States)

    Lie, Octavian V; van Mierlo, Pieter

    2017-01-01

    The visual interpretation of intracranial EEG (iEEG) is the standard method used in complex epilepsy surgery cases to map the regions of seizure onset targeted for resection. Still, visual iEEG analysis is labor-intensive and biased due to interpreter dependency. Multivariate parametric functional connectivity measures using adaptive autoregressive (AR) modeling of the iEEG signals based on the Kalman filter algorithm have been used successfully to localize the electrographic seizure onsets. Due to their high computational cost, however, these methods have been applied to only a limited number of iEEG time-series. Here, two Kalman filter implementations, a well-known multivariate adaptive AR model (Arnold et al. 1998) and a simplified, computationally efficient derivation of it, were evaluated for their potential application to connectivity analysis of high-dimensional (up to 192 channels) iEEG data. When used on simulated seizures together with a multivariate connectivity estimator, the partial directed coherence, the two AR models were compared for their ability to reconstitute the designed seizure signal connections from noisy data. Next, focal seizures from iEEG recordings (73-113 channels) in three patients rendered seizure-free after surgery were mapped with the outdegree, a graph-theory index of outward directed connectivity. Simulation results indicated high levels of mapping accuracy for the two models in the presence of low-to-moderate noise cross-correlation. Accordingly, both AR models correctly mapped the real seizure onset to the resection volume. This study supports the possibility of conducting fully data-driven multivariate connectivity estimations on high-dimensional iEEG datasets using the Kalman filter approach.
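
    A much simplified, single-channel illustration of the underlying technique, tracking time-varying AR coefficients with a Kalman filter under a random-walk state model; the full method is multivariate and feeds such estimates into the partial directed coherence, which is not reproduced here:

    ```python
    # Sketch: Kalman-filter tracking of time-varying AR(2) coefficients.
    import numpy as np

    rng = np.random.default_rng(1)

    # simulate a signal whose AR(2) dynamics change halfway through
    n = 1000
    x = np.zeros(n)
    for t in range(2, n):
        a1 = 0.5 if t < n // 2 else 0.9          # coefficient jump to be tracked
        x[t] = a1 * x[t - 1] - 0.2 * x[t - 2] + rng.standard_normal()

    p = 2                                        # AR order
    a_hat = np.zeros(p)                          # state: AR coefficients
    P = np.eye(p)                                # state covariance
    Q = 1e-4 * np.eye(p)                         # random-walk (adaptation) noise
    R = 1.0                                      # observation noise variance
    track = np.zeros((n, p))

    for t in range(p, n):
        h = x[t - p:t][::-1]                     # regressors [x_{t-1}, x_{t-2}]
        P = P + Q                                # predict (random-walk state model)
        S = h @ P @ h + R                        # innovation variance
        K = P @ h / S                            # Kalman gain
        a_hat = a_hat + K * (x[t] - h @ a_hat)   # update coefficient estimates
        P = P - np.outer(K, h @ P)
        track[t] = a_hat

    print("a1 estimate before/after the change:",
          round(track[n // 2 - 10, 0], 2), round(track[-1, 0], 2))
    ```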

  7. Human Reliability Analysis. Applicability of the HRA-concept in maintenance shutdown; Analys av maensklig tillfoerlitlighet. HRA-begreppets tillaempbarhet vid revisionsavstaellning

    Energy Technology Data Exchange (ETDEWEB)

    Obenius, Aino (MTO Psykologi AB, Stockholm (SE))

    2007-08-15

    monotonous work tasks. Errors and mistakes during this plant operating state may have severe consequences, both for the immediate work and for future power production. The human influence on the technical system is of great importance when analysing the LPSD condition. This should also affect the basis and performance of the analysis, to make it as realistic as possible. When analysing human operation during LPSD, a holistic perspective should be used, and a way to take human abilities and performance variability into consideration is important. The study of performed analyses of human reliability for the LPSD condition shows that the normative and/or descriptive approach and the linear cause-effect model are used. The main objective of HRAs performed within SPSAs is the quantification of human interaction and error frequency. Modelling of human behaviour in complex, sociotechnical systems differs in theory and practice. A reason may be that models such as the one for functional resonance are not yet applicable for practising analysts, due to a lack of well-tried methods and the fact that analysis of the LPSD condition is performed within the PSA concept, which defines the type of results sought from the HRA, i.e. probabilities for human error. LPSD analysis methods need to be further evaluated, validated and developed. The basis for the analysis should, instead of PSA, be a holistic analysis of how Man, Technology and Organization affect the system and plant safety. To achieve this, further activities could be to perform an in-depth study of existing analyses of the LPSD condition, to develop requirement specifications for LPSD analysis, to further validate the HRA work process, and to further develop practically applicable methods for human performance and variability analysis in sociotechnical systems

  8. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  9. Reliability and radiation effects in compound semiconductors

    CERN Document Server

    Johnston, Allan

    2010-01-01

    This book discusses reliability and radiation effects in compound semiconductors, which have evolved rapidly during the last 15 years. Johnston's perspective in the book focuses on high-reliability applications in space, but his discussion of reliability is applicable to high reliability terrestrial applications as well. The book is important because there are new reliability mechanisms present in compound semiconductors that have produced a great deal of confusion. They are complex, and appear to be major stumbling blocks in the application of these types of devices. Many of the reliability problems that were prominent research topics five to ten years ago have been solved, and the reliability of many of these devices has been improved to the level where they can be used for ten years or more with low failure rates. There is also considerable confusion about the way that space radiation affects compound semiconductors. Some optoelectronic devices are so sensitive to damage in space that they are very difficu...

  10. Accelerator reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, L; Duru, Ph; Koch, J M; Revol, J L; Van Vaerenbergh, P; Volpe, A M; Clugnet, K; Dely, A; Goodhew, D

    2002-07-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number-one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned with reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  11. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage sides of a half-bridge IGBT separately in every fundamental ... is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation...
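
    The on-state collector-emitter voltage is used here as a temperature-sensitive electrical parameter; a minimal sketch of how an offline calibration at a fixed current could be inverted online is shown below, with hypothetical coefficients (an illustration of the general idea, not the authors' measurement circuit):

    ```python
    # Hedged sketch of the V_CE-based TSEP idea: at a fixed reference current the
    # on-state voltage varies roughly linearly with junction temperature, so an
    # offline calibration can be inverted online. Coefficients are hypothetical.
    import numpy as np

    # offline calibration points at a fixed reference current, e.g. 100 A
    cal_temp_C = np.array([25.0, 75.0, 125.0])
    cal_vce_V = np.array([1.45, 1.52, 1.59])

    # linear fit V_CE = a*T + b, then invert to estimate T from a measured V_CE
    a, b = np.polyfit(cal_temp_C, cal_vce_V, deg=1)

    def junction_temperature(vce_measured_V):
        return (vce_measured_V - b) / a

    print(f"estimated Tj = {junction_temperature(1.55):.1f} degC")
    ```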

  12. Accelerator reliability workshop

    International Nuclear Information System (INIS)

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D.

    2002-01-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number-one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned with reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  13. Reliability issues at the LHC

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit; Gillies, James D

    2002-01-01

    The Lectures on reliability issues at the LHC will be focused on five main Modules on five days. Module 1: Basic Elements in Reliability Engineering Some basic terms, definitions and methods, from components up to the system and the plant, common cause failures and human factor issues. Module 2: Interrelations of Reliability & Safety (R&S) Reliability and risk informed approach, living models, risk monitoring. Module 3: The ideal R&S Process for Large Scale Systems From R&S goals via the implementation into the system to the proof of the compliance. Module 4: Some Applications of R&S on LHC Master logic, anatomy of risk, cause - consequence diagram, decomposition and aggregation of the system. Module 5: Lessons learned from R&S Application in various Technologies Success stories, pitfalls, constrains in data and methods, limitations per se, experienced in aviation, space, process, nuclear, offshore and transport systems and plants. The Lectures will reflect in summary the compromise in...

  14. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g., for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policies, shock models, spares, group maintenance and periodic inspection), analysis of common cause failures, and models of repair effect.
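
    As a small worked example of the kind of estimation covered in such a text, assuming a constant failure rate (exponential lifetimes) and complete failure data:

    ```python
    # MLE of the failure rate from complete failure data, the implied MTBF, and
    # the reliability over a mission time. Failure times are hypothetical.
    import numpy as np

    failure_times_h = np.array([1200., 860., 1540., 990., 2010.])   # hypothetical
    lam = len(failure_times_h) / failure_times_h.sum()              # MLE of lambda
    mtbf = 1.0 / lam

    mission_h = 500.0
    reliability = np.exp(-lam * mission_h)
    print(f"lambda = {lam:.2e} /h, MTBF = {mtbf:.0f} h, "
          f"R({mission_h:.0f} h) = {reliability:.3f}")
    ```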

  15. Reliability evaluation of power systems

    CERN Document Server

    Billinton, Roy

    1996-01-01

    The Second Edition of this well-received textbook presents over a decade of new research in power system reliability-while maintaining the general concept, structure, and style of the original volume. This edition features new chapters on the growing areas of Monte Carlo simulation and reliability economics. In addition, chapters cover the latest developments in techniques and their application to real problems. The text also explores the progress occurring in the structure, planning, and operation of real power systems due to changing ownership, regulation, and access. This work serves as a companion volume to Reliability Evaluation of Engineering Systems: Second Edition (1992).

  16. Aerospace reliability applied to biomedicine.

    Science.gov (United States)

    Lalli, V. R.; Vargo, D. J.

    1972-01-01

    An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.

  17. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical connections, and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy by background applications residing on alternative routes is also considered. Details are provided for the operation of RRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research.

  18. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    2003-01-01

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical connections, and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy by background applications residing on alternative routes is also considered. Details are provided for the operation of RRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research.

  19. Development of improved processing and evaluation methods for high reliability structural ceramics for advanced heat engine applications, Phase 1. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Pujari, V.K.; Tracey, D.M.; Foley, M.R.; Paille, N.I.; Pelletier, P.J.; Sales, L.C.; Wilkens, C.A.; Yeckley, R.L. [Norton Co., Northboro, MA (United States)]

    1993-08-01

    The program goals were to develop and demonstrate significant improvements in processing methods, process controls and non-destructive evaluation (NDE) which can be commercially implemented to produce high reliability silicon nitride components for advanced heat engine applications at temperatures to 1,370{degrees}C. The program focused on a Si{sub 3}N{sub 4}-4% Y{sub 2}O{sub 3} high temperature ceramic composition and hot-isostatic-pressing as the method of densification. Stage I had as major objectives: (1) comparing injection molding and colloidal consolidation process routes, and selecting one route for subsequent optimization, (2) comparing the performance of water milled and alcohol milled powder and selecting one on the basis of performance data, and (3) adapting several NDE methods to the needs of ceramic processing. The NDE methods considered were microfocus X-ray radiography, computed tomography, ultrasonics, NMR imaging, NMR spectroscopy, fluorescent liquid dye penetrant and X-ray diffraction residual stress analysis. The colloidal consolidation process route was selected and approved as the forming technique for the remainder of the program. The material produced by the final Stage II optimized process has been given the designation NCX 5102 silicon nitride. According to plan, a large number of specimens were produced and tested during Stage III to establish a statistically robust room temperature tensile strength database for this material. Highlights of the Stage III process demonstration and resultant database are included in the main text of the report, along with a synopsis of the NCX-5102 aqueous based colloidal process. The R and D accomplishments for Stage I are discussed in Appendices 1--4, while the tensile strength-fractography database for the Stage III NCX-5102 process demonstration is provided in Appendix 5. 4 refs., 108 figs., 23 tabs.

  20. Tutorial on use of intraclass correlation coefficients for assessing intertest reliability and its application in functional near-infrared spectroscopy-based brain imaging.

    Science.gov (United States)

    Li, Lin; Zeng, Li; Lin, Zi-Jing; Cazzell, Mary; Liu, Hanli

    2015-05-01

    Test-retest reliability of neuroimaging measurements is an important concern in the investigation of cognitive functions in the human brain. To date, intraclass correlation coefficients (ICCs), originally used in interrater reliability studies in behavioral sciences, have become commonly used metrics in reliability studies on neuroimaging and functional near-infrared spectroscopy (fNIRS). However, as there are six popular forms of ICC, the adequateness of the comprehensive understanding of ICCs will affect how one may appropriately select, use, and interpret ICCs toward a reliability study. We first offer a brief review and tutorial on the statistical rationale of ICCs, including their underlying analysis of variance models and technical definitions, in the context of assessment on intertest reliability. Second, we provide general guidelines on the selection and interpretation of ICCs. Third, we illustrate the proposed approach by using an actual research study to assess intertest reliability of fNIRS-based, volumetric diffuse optical tomography of brain activities stimulated by a risk decision-making protocol. Last, special issues that may arise in reliability assessment using ICCs are discussed and solutions are suggested.
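
    As a numerical companion to the tutorial, one common form, ICC(2,1) (two-way random effects, absolute agreement, single measurement), can be computed directly from the two-way ANOVA mean squares; the test-retest scores below are synthetic:

    ```python
    # ICC(2,1) from two-way ANOVA mean squares (Shrout & Fleiss convention).
    import numpy as np

    # rows = subjects, columns = sessions (here: test and retest), synthetic data
    Y = np.array([[7.0, 8.0],
                  [5.0, 5.5],
                  [9.0, 8.5],
                  [6.0, 6.5],
                  [8.0, 7.5]])
    n, k = Y.shape

    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)

    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((Y - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square

    icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    print(f"ICC(2,1) = {icc_2_1:.3f}")
    ```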

  1. Tutorial on use of intraclass correlation coefficients for assessing intertest reliability and its application in functional near-infrared spectroscopy-based brain imaging

    Science.gov (United States)

    Li, Lin; Zeng, Li; Lin, Zi-Jing; Cazzell, Mary; Liu, Hanli

    2015-05-01

    Test-retest reliability of neuroimaging measurements is an important concern in the investigation of cognitive functions in the human brain. To date, intraclass correlation coefficients (ICCs), originally used in inter-rater reliability studies in behavioral sciences, have become commonly used metrics in reliability studies on neuroimaging and functional near-infrared spectroscopy (fNIRS). However, as there are six popular forms of ICC, the adequateness of the comprehensive understanding of ICCs will affect how one may appropriately select, use, and interpret ICCs toward a reliability study. We first offer a brief review and tutorial on the statistical rationale of ICCs, including their underlying analysis of variance models and technical definitions, in the context of assessment on intertest reliability. Second, we provide general guidelines on the selection and interpretation of ICCs. Third, we illustrate the proposed approach by using an actual research study to assess intertest reliability of fNIRS-based, volumetric diffuse optical tomography of brain activities stimulated by a risk decision-making protocol. Last, special issues that may arise in reliability assessment using ICCs are discussed and solutions are suggested.

  2. Reliable design of electronic equipment an engineering guide

    CERN Document Server

    Natarajan, Dhanasekharan

    2014-01-01

    This book explains reliability techniques with examples from electronics design for the benefit of engineers. It presents the application of de-rating, FMEA, overstress analyses and reliability improvement tests for designing reliable electronic equipment. Adequate information is provided for designing a computerized reliability database system to support the application of the techniques by designers. Pedantic terms and the associated mathematics of the reliability engineering discipline are excluded for the benefit of comprehension and practical application. This book offers excellent support

  3. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples, grouped by self-report, other report, and metaperception assessments). The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale specificity for 1% to 28%, respectively. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise worked effectively. Contemporary circumplex evaluations such as Tracey’s RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  4. Set-up of a System to Reliably Measure the Startle Response in Marmoset Monkeys; Application in Animal Models of Anxiety and Psychosis

    National Research Council Canada - National Science Library

    Meichers, B.P

    1998-01-01

    .... In addition, the startle response is increased during periods of anxiety. In this study, a system is described by which the acoustic startle response in marmoset monkeys may be recorded in a reliable way...

  5. Analysis of human reliability in the APS of fire. Application of NUREG-1921; Analisis de Fiabilidad Humana en el APS de Incendios. Aplicacion del NUREG-1921

    Energy Technology Data Exchange (ETDEWEB)

    Perez Torres, J. L.; Celaya Meler, M. A.

    2014-07-01

    An analysis of human reliability in a probabilistic safety analysis (APS) of fire aims to identify, describe, analyze and quantify, in a traceable manner, the human actions that can affect the mitigation of an initiating event produced by a fire. (Author)

  6. Business of reliability

    Science.gov (United States)

    Engel, Pierre

    1999-12-01

    The presentation is organized around three themes: (1) The decrease of reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved to a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for the return on investment does conflict with traditional low-volume data use for most applications. Constant access to data sources supposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low-cost equipment is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value-added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

  7. AMSAA Reliability Growth Guide

    National Research Council Canada - National Science Library

    Broemm, William

    2000-01-01

    ... has developed reliability growth methodology for all phases of the process, from planning to tracking to projection. The report presents this methodology and associated reliability growth concepts.

  8. Calculating system reliability with SRFYDO

    Energy Technology Data Exchange (ETDEWEB)

    Morzinski, Jerome [Los Alamos National Laboratory; Anderson - Cook, Christine M [Los Alamos National Laboratory; Klamann, Richard M [Los Alamos National Laboratory

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
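
    An illustrative sketch in the same spirit (not SRFYDO itself): Bayesian reliability of a series system from pass/fail component test data, with conjugate Beta posteriors propagated to the system level by Monte Carlo; component names and counts are hypothetical:

    ```python
    # Series-system reliability with uncertainty, from component pass/fail tests.
    import numpy as np

    rng = np.random.default_rng(42)

    # (successes, trials) per component, with a uniform Beta(1, 1) prior
    component_tests = {"igniter": (48, 50), "valve": (97, 100), "sensor": (29, 30)}

    draws = 20_000
    system_samples = np.ones(draws)
    for name, (s, n) in component_tests.items():
        post = rng.beta(1 + s, 1 + n - s, size=draws)   # posterior reliability draws
        system_samples *= post                           # series system: product

    lo, med, hi = np.percentile(system_samples, [5, 50, 95])
    print(f"system reliability ~ {med:.3f} (90% interval {lo:.3f}-{hi:.3f})")
    ```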

  9. Thermal cycling reliability of Cu/SnAg double-bump flip chip assemblies for 100 μm pitch applications

    Science.gov (United States)

    Son, Ho-Young; Kim, Ilho; Lee, Soon-Bok; Jung, Gi-Jo; Park, Byung-Jin; Paik, Kyung-Wook

    2009-01-01

    A thick Cu column based double-bump flip chip structure is one of the promising alternatives for fine pitch flip chip applications. In this study, the thermal cycling (T/C) reliability of Cu/SnAg double-bump flip chip assemblies was investigated, and the failure mechanism was analyzed through the correlation of T/C test and the finite element analysis (FEA) results. After 1000 thermal cycles, T/C failures occurred at some Cu/SnAg bumps located at the edge and corner of chips. Scanning acoustic microscope analysis and scanning electron microscope observations indicated that the failure site was the Cu column/Si chip interface. It was identified by a FEA where the maximum stress concentration was located during T/C. During T/C, the Al pad between the Si chip and a Cu column bump was displaced due to thermomechanical stress. Based on the low cycle fatigue model, the accumulation of equivalent plastic strain resulted in thermal fatigue deformation of the Cu column bumps and ultimately reduced the thermal cycling lifetime. The maximum equivalent plastic strains of some bumps at the chip edge increased with an increased number of thermal cycles. However, equivalent plastic strains of the inner bumps did not increase regardless of the number of thermal cycles. In addition, the z-directional normal plastic strain ɛ22 was determined to be compressive and was a dominant component causing the plastic deformation of Cu/SnAg double bumps. As the number of thermal cycles increased, normal plastic strains in the perpendicular direction to the Si chip and shear strains were accumulated on the Cu column bumps at the chip edge at low temperature region. Thus it was found that the Al pad at the Si chip/Cu column interface underwent thermal fatigue deformation by compressive normal strain and the contact loss by displacement failure of the Al pad, the main T/C failure mode of the Cu/SnAg flip chip assembly, then occurred at the Si chip/Cu column interface shear strain deformation
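
    The low cycle fatigue relation referred to above is of the Coffin-Manson type, with cycles-to-failure a power law in the plastic strain range; a hedged illustration with generic placeholder coefficients (not values fitted to the Cu/SnAg bumps) follows:

    ```python
    # Illustrative Coffin-Manson-type relation: larger accumulated plastic strain
    # per cycle means fewer cycles to failure. Coefficients are placeholders.
    def cycles_to_failure(delta_eps_plastic, C=0.5, exponent=2.0):
        """Inverse power law: N_f = (C / delta_eps_p) ** exponent (illustrative form)."""
        return (C / delta_eps_plastic) ** exponent

    for d_eps in (0.005, 0.01, 0.02):
        print(f"plastic strain range {d_eps:.3f} -> ~{cycles_to_failure(d_eps):,.0f} cycles")
    ```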

  10. As reliable as the sun

    Science.gov (United States)

    Leijtens, J. A. P.

    2017-11-01

    Fortunately there is almost nothing as reliable as the sun, which can consequently be utilized as a very reliable source of spacecraft power. In order to harvest this power, the solar panels have to be pointed towards the sun as accurately and reliably as possible. To this end, sun sensors are available on almost every satellite to support vital sun-pointing capability throughout the mission, even in the deployment and safe mode phases of the satellite's life. Given the criticality of the application one would expect that after more than 50 years of sun sensor utilisation, such sensors would be fully matured and optimised. In actual fact though, the majority of sunsensors employed are still coarse sunsensors, which have proven extremely reliable but present major issues regarding albedo sensitivity and pointing accuracy.

  11. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)

  12. Reliability and concurrent validity of the iPhone® Compass application to measure thoracic rotation range of motion (ROM) in healthy participants

    Science.gov (United States)

    Schram, Ben; Cox, Alistair J.; Anderson, Sarah L.; Keogh, Justin

    2018-01-01

    Background Several water-based sports (swimming, surfing and stand up paddle boarding) require adequate thoracic mobility (specifically rotation) in order to perform the appropriate activity requirements. The measurement of thoracic spine rotation is problematic for clinicians due to a lack of convenient and reliable measurement techniques. More recently, smartphones have been used to quantify movement in various joints in the body; however, there appears to be a paucity of research using smartphones to assess thoracic spine movement. Therefore, the aim of this study is to determine the reliability (intra- and inter-rater) and validity of the iPhone® app (Compass) when assessing thoracic spine rotation ROM in healthy individuals. Methods A total of thirty participants were recruited for this study. Thoracic spine rotation ROM was measured using both the current clinical gold standard, a universal goniometer (UG), and the Smart Phone Compass app. Intra-rater and inter-rater reliability was determined with an Intraclass Correlation Coefficient (ICC) and associated 95% confidence intervals (CI). Validation of the Compass app in comparison to the UG was measured using Pearson’s correlation coefficient and levels of agreement were identified with Bland–Altman plots and 95% limits of agreement. Results Both the UG and Compass app measurements had excellent reproducibility for intra-rater (ICC 0.94–0.98) and inter-rater reliability (ICC 0.72–0.89). However, the Compass app measurements had higher intra-rater reliability (ICC = 0.96 − 0.98; 95% CI [0.93–0.99]; vs. ICC = 0.94 − 0.98; 95% CI [0.88–0.99]) and inter-rater reliability (ICC = 0.87 − 0.89; 95% CI [0.74–0.95] vs. ICC = 0.72 − 0.82; 95% CI [0.21–0.94]). A strong and significant correlation was found between the UG and the Compass app, demonstrating good concurrent validity (r = 0.835, p reliable tool for measuring thoracic spine rotation which produces greater
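
    A minimal sketch of the agreement statistics named in the abstract, Pearson correlation plus Bland-Altman bias and 95% limits of agreement between paired goniometer and Compass-app readings; the values are synthetic:

    ```python
    # Pearson correlation and Bland-Altman limits of agreement for paired readings.
    import numpy as np

    goniometer = np.array([52., 48., 61., 45., 58., 50., 63., 47.])   # synthetic
    compass_app = np.array([54., 47., 60., 46., 60., 49., 65., 45.])  # synthetic

    r = np.corrcoef(goniometer, compass_app)[0, 1]

    diff = compass_app - goniometer
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)

    print(f"Pearson r = {r:.2f}")
    print(f"Bland-Altman bias = {bias:.1f} deg, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f}) deg")
    ```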

  13. Improving the Reliability and Modal Stability of High Power 870 nm AlGaAs CSP Laser Diodes for Applications to Free Space Communication Systems

    Science.gov (United States)

    Connolly, J. C.; Alphonse, G. A.; Carlin, D. B.; Ettenberg, M.

    1991-01-01

    The operating characteristics (power-current, beam divergence, etc.) and reliability assessment of high-power CSP lasers is discussed. The emission wavelength of these lasers was optimized at 860 to 880 nm. The operational characteristics of a new laser, the inverse channel substrate planar (ICSP) laser, grown by metalorganic chemical vapor deposition (MOCVD), is discussed and the reliability assessment of this laser is reported. The highlights of this study include a reduction in the threshold current value for the laser to 15 mA and a degradation rate of less than 2 kW/hr for the lasers operating at 60 mW of peak output power.

  14. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  15. Microprocessor hardware reliability

    Energy Technology Data Exchange (ETDEWEB)

    Wright, R I

    1982-01-01

    Microprocessor-based technology has had an impact in nearly every area of industrial electronics and many applications have important safety implications. Microprocessors are being used for the monitoring and control of hazardous processes in the chemical, oil and power generation industries, for the control and instrumentation of aircraft and other transport systems and for the control of industrial machinery. Even in the field of nuclear reactor protection, where designers are particularly conservative, microprocessors are used to implement certain safety functions and may play increasingly important roles in protection systems in the future. Where microprocessors are simply replacing conventional hard-wired control and instrumentation systems no new hazards are created by their use. In the field of robotics, however, the microprocessor has opened up a totally new technology and with it has created possible new and as yet unknown hazards. The paper discusses some of the design and manufacturing techniques which may be used to enhance the reliability of microprocessor based systems and examines the available reliability data on lsi/vlsi microcircuits. 12 references.

  16. Post Curing as an Effective Means of Ensuring the Long-term Reliability of PDMS Thin Films for Dielectric Elastomer Applications

    DEFF Research Database (Denmark)

    Zakaria, Shamsul Bin; Madsen, Frederikke Bahrt; Skov, Anne Ladegaard

    2017-01-01

    ... Young's moduli at 5% strain increase with post curing. Furthermore, the dielectric breakdown parameters determined from Weibull analyses showed that greater electrical stability and reliability could be achieved by post curing the PDMS films before usage, and this method therefore paves a way toward more...

  17. Validity and Reliability of the Hebrew Version of the SpREUK Questionnaire for Religiosity, Spirituality and Health: An Application for Oral Diseases

    Directory of Open Access Journals (Sweden)

    Harold D. Sgan-Cohen

    2010-12-01

    Background: Research has examined the connection between religiosity, spirituality (SpR) and health, and the potential of these variables to prevent, heal and cope with disease. Research indicated that participation in religious meetings or services was associated with a lower risk of developing oral disease. We intended to test a Hebrew version of the SpREUK 1.1 questionnaire, which is reported to be a reliable and valid measure of distinctive issues of SpR, and to test its relevance in the context of oral illness among a Jewish population. Methods: In order to validate the SpREUK-Hebrew instrument, minor translational and cultural/religious adaptations were applied. Reliability and factor analyses were performed, using standard procedures, among 134 Jewish Israeli subjects (mean age 38.4 years). Results: Analysis of reliability for internal consistency demonstrated an intra-class correlation of Cronbach's alpha = 0.90 for the intrinsic religiosity/spiritual and the appraisal scales, and of 0.90 for the support through spirituality/religiosity scales. Inter reliability agreement by kappa ranged between 0.7 and 0.9. We were able to confirm the previously described factorial structure, albeit with some unique characteristics in the Jewish population. Individuals' time spent on spiritual activity correlated with the SpREUK scales. The instrument discriminated well between religious subgroups (i.e., ultra-Orthodox, conventional religious and less-religious). Preliminary results indicate an association between measures of spirituality and oral health. Conclusions: The traditional and cultural adaptation of the tool was found to be appropriate. SpREUK-Hebrew was reliable and valid among a Jewish population. This method could therefore be employed in comparative studies among different cultural and religious backgrounds.

  18. Reliability data banks

    International Nuclear Information System (INIS)

    Cannon, A.G.; Bendell, A.

    1991-01-01

    Following an introductory chapter on reliability - what it is, why it is needed, and how it is achieved and measured - the principles of reliability data bases and analysis methodologies are the subject of the next two chapters. Achievements due to the development of data banks are mentioned for different industries in the next chapter. FACTS, a comprehensive information system for industrial safety and reliability data collection in process plants, is covered next. CREDO, the Central Reliability Data Organization, is described in the next chapter and is indexed separately, as is the chapter on DANTE, the fabrication reliability data analysis system. Reliability data banks at Electricite de France and the IAEA's experience in compiling a generic component reliability data base are also separately indexed. The European reliability data system, ERDS, and the development of a large data bank come next. The last three chapters look at 'Reliability data banks - friend, foe or a waste of time?' and future developments. (UK)

  19. Suncor maintenance and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Little, S. [Suncor Energy, Calgary, AB (Canada)

    2006-07-01

    Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.

  20. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

    High reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its use in the community.

  1. Metrological Reliability of Medical Devices

    Science.gov (United States)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

    The prominent development of health technologies in the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of biomeasurement results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with an analysis of their contributions to guaranteeing the compliance of innovative health technologies with the main ethical pillars of Bioethics.

  2. Photovoltaic power system reliability considerations

    Science.gov (United States)

    Lalli, V. R.

    1980-01-01

    This paper describes an example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems. This particular application was for a solar cell power system demonstration project in Tangaye, Upper Volta, Africa. The techniques involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of a fail-safe and planned spare parts engineering philosophy.

  3. Reliability and concurrent validity of the iPhone® Compass application to measure thoracic rotation range of motion (ROM) in healthy participants

    Directory of Open Access Journals (Sweden)

    James Furness

    2018-03-01

    Background: Several water-based sports (swimming, surfing and stand-up paddle boarding) require adequate thoracic mobility (specifically rotation) in order to perform the appropriate activity requirements. The measurement of thoracic spine rotation is problematic for clinicians due to a lack of convenient and reliable measurement techniques. More recently, smartphones have been used to quantify movement in various joints in the body; however, there appears to be a paucity of research using smartphones to assess thoracic spine movement. Therefore, the aim of this study is to determine the reliability (intra- and inter-rater) and validity of the iPhone® app (Compass) when assessing thoracic spine rotation ROM in healthy individuals. Methods: A total of thirty participants were recruited for this study. Thoracic spine rotation ROM was measured using both the current clinical gold standard, a universal goniometer (UG), and the smartphone Compass app. Intra-rater and inter-rater reliability was determined with an Intraclass Correlation Coefficient (ICC) and associated 95% confidence intervals (CI). Validation of the Compass app in comparison to the UG was measured using Pearson's correlation coefficient, and levels of agreement were identified with Bland–Altman plots and 95% limits of agreement. Results: Both the UG and Compass app measurements had excellent reproducibility for intra-rater (ICC 0.94–0.98) and inter-rater reliability (ICC 0.72–0.89). However, the Compass app measurements had higher intra-rater reliability (ICC = 0.96–0.98; 95% CI [0.93–0.99] vs. ICC = 0.94–0.98; 95% CI [0.88–0.99]) and inter-rater reliability (ICC = 0.87–0.89; 95% CI [0.74–0.95] vs. ICC = 0.72–0.82; 95% CI [0.21–0.94]). A strong and significant correlation was found between the UG and the Compass app, demonstrating good concurrent validity (r = 0.835, p < 0.001). Levels of agreement between the two devices were 24.8° (LoA –9
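
    The agreement statistics reported in this record (Bland–Altman 95% limits of agreement) are straightforward to compute from paired readings. The following sketch uses hypothetical thoracic-rotation values, not the study's measurements.

```python
import numpy as np

def bland_altman_limits(method_a: np.ndarray, method_b: np.ndarray):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()                    # systematic difference between methods
    half_width = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement
    return bias, bias - half_width, bias + half_width

# Hypothetical paired readings in degrees (goniometer vs. Compass app)
goniometer = np.array([42.0, 38.5, 55.0, 47.5, 50.0, 36.0])
compass_app = np.array([40.5, 39.0, 52.0, 49.0, 48.5, 37.5])
bias, lower, upper = bland_altman_limits(goniometer, compass_app)
print(f"bias = {bias:.1f} deg, 95% LoA = [{lower:.1f}, {upper:.1f}] deg")
```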

  4. The Use of Questionnaires in Safety Culture Studies in High Reliability Organizations. Literature Review and an Application in the Spanish Nuclear Sector

    International Nuclear Information System (INIS)

    German, S.; Navajas, J.; Silla, I.

    2014-01-01

    This report examines two aspects related to the use of questionnaires in safety culture research conducted in high reliability organizations. First, a literature review of recent studies that address safety culture through questionnaires is presented. The literature review showed that most studies used only questionnaires as a research technique, were cross-sectional, applied paper-based questionnaires, and were conducted in one type of high reliability organization. Second, a research project on safety culture that used electronic surveys in a sample of experts on safety culture is discussed. This project, developed by the CISOT-CIEMAT research institute, was carried out in the Spanish nuclear sector and illustrates relevant aspects of the methodological design and administration processes that must be considered to encourage participation in the study. (Author)

  5. EMG normalization method based on grade 3 of manual muscle testing: Within- and between-day reliability of normalization tasks and application to gait analysis.

    Science.gov (United States)

    Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane

    2018-02-01

    Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA), and is generally interpreted with timing of activation. EMG amplitude comparisons between individuals, muscles or days need normalization. There is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. The normalization method inspired by the isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalization method during gait compared with the conventional MVIC method. Lower limb muscle EMG (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) was recorded bilaterally in nine healthy participants (five males, aged 29.7 ± 6.2 years, BMI 22.7 ± 3.3 kg m-2), giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with EMG recording. EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of the normalization tasks was similar for the isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method requiring no special equipment and will support CGA interpretation. The next step will be to evaluate this method in pathological populations. Copyright © 2017 Elsevier B.V. All rights reserved.
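
    The normalization itself is a simple scaling: each gait EMG sample is expressed as a percentage of the amplitude recorded during the reference contraction (MVIC or isoMMT3). A minimal sketch with hypothetical envelope values rather than the study's recordings:

```python
import numpy as np

def normalize_emg(gait_envelope: np.ndarray, reference_envelope: np.ndarray) -> np.ndarray:
    """Express a gait EMG envelope as a percentage of a reference contraction.

    The reference amplitude is taken here as the mean of the rectified reference
    signal (an MVIC trial or an isometric MMT grade-3 hold).
    """
    reference_amplitude = np.mean(np.abs(reference_envelope))
    return 100.0 * np.abs(gait_envelope) / reference_amplitude

# Hypothetical envelopes in millivolts (illustrative values only)
gait = np.array([0.05, 0.12, 0.30, 0.22, 0.08])
isommt3_hold = np.array([0.40, 0.42, 0.38, 0.41])
print(normalize_emg(gait, isommt3_hold))  # gait amplitude as % of the reference
```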

  6. Structural Reliability Analysis of Wind Turbines: A Review

    Directory of Open Access Journals (Sweden)

    Zhiyu Jiang

    2017-12-01

    The paper presents a detailed review of the state-of-the-art research activities on structural reliability analysis of wind turbines between the 1990s and 2017. We describe the reliability methods, including the first- and second-order reliability methods and the simulation reliability methods, and show the procedure for and application areas of structural reliability analysis of wind turbines. Further, we critically review the various structural reliability studies on rotor blades, bottom-fixed support structures, floating systems and mechanical and electrical components. Finally, future applications of structural reliability methods to wind turbine designs are discussed.
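
    For readers unfamiliar with the first-order reliability method (FORM) mentioned above, the sketch below works the textbook special case of a linear limit state g = R - S with independent normal resistance R and load effect S, where FORM is exact. The blade-root numbers are purely illustrative assumptions.

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def form_linear(mu_r: float, sigma_r: float, mu_s: float, sigma_s: float):
    """Reliability index beta and failure probability Pf for g = R - S,
    with R and S independent and normally distributed."""
    beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
    return beta, norm_cdf(-beta)

# Hypothetical blade-root resistance and load effect, both in MPa
beta, pf = form_linear(mu_r=400.0, sigma_r=40.0, mu_s=250.0, sigma_s=50.0)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```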

  7. Reliability tasks from prediction to field use

    International Nuclear Information System (INIS)

    Guyot, Christian.

    1975-01-01

    This tutorial paper is part of a series intended to raise awareness of reliability problems. Reliability, a probabilistic concept, is an important parameter of availability. Reliability prediction is an estimation process for evaluating design progress. It is only by the application of a reliability program that reliability objectives can be attained through the different stages of work: conception, fabrication, field use. The user is mainly interested in operational reliability. Indications are given on the support and the treatment of data in the case of electronic equipment at the C.E.A. Reliability engineering requires a special state of mind which must be formed and developed in a company in the same way as it may be done, for example, for safety. [fr]

  8. Reliability and optimization of structural systems

    International Nuclear Information System (INIS)

    Thoft-Christensen, P.

    1987-01-01

    The proceedings contain 28 papers presented at the 1st working conference. The working conference was organized by the IFIP Working Group 7.5. The proceedings also include 4 papers which were submitted, but for various reasons not presented at the working conference. The working conference was attended by 50 participants from 18 countries. The conference was the first scientific meeting of the new IFIP Working Group 7.5 on 'Reliability and Optimization of Structural Systems'. The purpose of the Working Group 7.5 is to promote modern structural system optimization and reliability theory, to advance international cooperation in the field, to stimulate research, development and application of structural system optimization and reliability theory, to further the dissemination and exchange of information on the subject, and to encourage education in structural system optimization and reliability theory. (orig./HP)

  9. Human Reliability Program Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  10. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
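
    One way to make the fault-tree derivation described above concrete is to combine component reliabilities through series blocks (every component must work) and parallel blocks (redundancy). The converter structure and numbers below are hypothetical, not taken from the report.

```python
from functools import reduce

def series(*reliabilities: float) -> float:
    """All components must work: R = product of the R_i."""
    return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

def parallel(*reliabilities: float) -> float:
    """Any one component suffices: R = 1 - product of the (1 - R_i)."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

# Hypothetical converter: a controller in series with two redundant power stages
controller, stage_a, stage_b = 0.98, 0.95, 0.95
system_reliability = series(controller, parallel(stage_a, stage_b))
print(f"system reliability = {system_reliability:.4f}")  # top-event survival probability
```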

  11. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

    Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long-term operating behavior. (HP) [de]

  12. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays operate with more than one cloud provider. By spreading cloud deployment across multiple service providers, an enterprise creates space for competitive prices that minimize the burden on its spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and the results are combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
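
    A simple way to picture the layered assessment described above is to give each layer its own reliability model and, assuming independent layer failures, multiply the results. The failure rates and mission time below are illustrative assumptions only, not figures from the paper.

```python
import math

def layer_reliability(failure_rate_per_hour: float, mission_hours: float) -> float:
    """Exponential reliability model for one layer: R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate_per_hour * mission_hours)

# Hypothetical constant failure rates for the three abstraction layers (per hour)
layers = {"application": 2e-5, "virtualization": 1e-5, "server": 5e-6}
mission = 720.0  # one month of continuous operation, in hours

combined = 1.0
for name, rate in layers.items():
    r = layer_reliability(rate, mission)
    combined *= r  # layers assumed independent, so reliabilities multiply
    print(f"{name:15s} R({mission:.0f} h) = {r:.4f}")
print(f"multi-cloud application R = {combined:.4f}")
```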

  13. Reliable Design Versus Trust

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests FPGA internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  14. Pocket Handbook on Reliability

    Science.gov (United States)

    1975-09-01

    exponential distributions, Weibull distribution, estimating reliability, confidence intervals, reliability growth, OC curves, Bayesian analysis. ... An introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. ... includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future...
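
    The Weibull distribution listed among the handbook topics is the standard model for estimating reliability over time; its survival function is R(t) = exp(-(t/eta)^beta). A minimal sketch with assumed shape and scale parameters (not values from the handbook):

```python
import math

def weibull_reliability(t: float, shape: float, scale: float) -> float:
    """Weibull survival function R(t) = exp(-(t / scale) ** shape)."""
    return math.exp(-((t / scale) ** shape))

# Assumed wear-out behaviour: shape beta > 1, characteristic life eta = 10,000 h
beta, eta = 2.0, 10_000.0
for hours in (1_000, 5_000, 10_000):
    print(f"R({hours} h) = {weibull_reliability(hours, beta, eta):.3f}")
```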

  15. Reliability in engineering '87

    International Nuclear Information System (INIS)

    Tuma, M.

    1987-01-01

    The participants heard 51 papers dealing with the reliability of engineering products. Two of the papers were incorporated in INIS, namely ''Reliability comparison of two designs of low pressure regeneration of the 1000 MW unit at the Temelin nuclear power plant'' and ''Use of probability analysis of reliability in designing nuclear power facilities.''(J.B.)

  16. Reliability Based Ship Structural Design

    DEFF Research Database (Denmark)

    Dogliani, M.; Østergaard, C.; Parmentier, G.

    1996-01-01

    This paper deals with the development of different methods that allow the reliability-based design of ship structures to be transferred from the area of research to systematic application in current design. It summarises the achievements of a three-year collaborative research project dealing...... with developments of models of load effects and of structural collapse adopted in reliability formulations which aim at calibrating partial safety factors for ship structural design. New probabilistic models of still-water load effects are developed both for tankers and for containerships. New results are presented...... structure of several tankers and containerships. The results of the reliability analysis were the basis for the definition of a target safety level which was used to assess the partial safety factors suitable for use in a new design rules format to be adopted in modern ship structural design. Finally...

  17. Component reliability for electronic systems

    CERN Document Server

    Bajenescu, Titu-Marius I

    2010-01-01

    The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.

  18. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom, otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.
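
    As a concrete instance of the interval-analysis-based family reviewed above, one widely used formulation is the non-probabilistic reliability index: the midpoint of the limit-state interval divided by its radius, with values above 1 meaning the whole interval lies on the safe side. The sketch below is a generic illustration with assumed bounds, not code from the paper.

```python
def interval_reliability_index(g_lower: float, g_upper: float) -> float:
    """Non-probabilistic reliability index for a limit state g bounded in
    [g_lower, g_upper]: interval midpoint divided by interval radius."""
    midpoint = 0.5 * (g_upper + g_lower)
    radius = 0.5 * (g_upper - g_lower)
    return midpoint / radius

# Assumed bounds on a safety margin g = R - S from an interval analysis
print(interval_reliability_index(g_lower=5.0, g_upper=45.0))   # 1.25 -> reliable
print(interval_reliability_index(g_lower=-5.0, g_upper=35.0))  # 0.75 -> not assured
```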

  19. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  20. Human factor reliability program

    International Nuclear Information System (INIS)

    Knoblochova, L.

    2017-01-01

    The human factor reliability program was introduced at Slovenske elektrarne, a.s. (SE) nuclear power plants as one of the components of the Excellent Performance initiatives in 2011. The initiative's goal was to increase the reliability of both people and facilities, in response to three major areas of improvement: the need to improve results, troubleshooting support, and supporting the achievement of the company's goals. In practice, the human factor reliability program includes: tools to prevent human error; managerial observation and coaching; human factor analysis; quick information about events involving the human factor; a human reliability timeline and performance indicators; and basic, periodic and extraordinary training in human factor reliability. (authors)