WorldWideScience

Sample records for faults robustness evaluation

  1. Robust Parametric Fault Estimation in a Hopper System

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2012-01-01

    The ability to diagnose possible faults is a necessity for satellite launch vehicles during their missions. In this paper, a structural analysis method is employed to divide the complex propulsion system into simpler subsystems for fault diagnosis filter design. A robust fault diagnosis me...

  2. Optimal Robust Fault Detection for Linear Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Nike Liu

    2008-01-01

    This paper considers robust fault-detection problems for linear discrete-time systems. It is shown that the optimal robust detection filters for several well-recognized robust fault-detection problems, such as the ℋ−/ℋ∞, ℋ2/ℋ∞, and ℋ∞/ℋ∞ problems, are the same and can be obtained by solving a standard algebraic Riccati equation. Optimal filters are also derived for many other optimization criteria, and it is shown that some well-studied and seemingly sensible optimization criteria for fault-detection filter design can lead to (optimal) but useless fault-detection filters.
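The reduction to a standard algebraic Riccati equation is easy to illustrate numerically. The sketch below, assuming a toy two-state discrete-time system (the matrices are illustrative, not taken from the cited paper), solves a filtering-type discrete ARE with SciPy and forms the steady-state filter gain from which such detection filters are built.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time system x[k+1] = A x[k] + w, y[k] = C x[k] + v.
# All matrices below are toy values, not from the cited paper.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2) * 0.01   # process-noise covariance (assumed)
R = np.array([[0.1]])  # measurement-noise covariance (assumed)

# SciPy states the ARE in control form; passing A.T and C.T converts it
# to the filtering ARE:  P = A P A' - A P C'(C P C' + R)^{-1} C P A' + Q.
P = solve_discrete_are(A.T, C.T, Q, R)

# Steady-state filter gain L = A P C' (C P C' + R)^{-1}.
L = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
print(L)
```

The transposition trick works because the filtering ARE is the dual of the control ARE; the residual generator of a detection filter is then driven by the innovation `y - C x_hat`.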

  3. Robust Fault Diagnosis Design for Linear Multiagent Systems with Incipient Faults

    Directory of Open Access Journals (Sweden)

    Jingping Xia

    2015-01-01

    The design of a robust fault estimation observer is studied for linear multiagent systems subject to incipient faults. Considering the fact that incipient faults lie in the low-frequency domain, fault estimation of such faults is proposed for discrete-time multiagent systems based on a finite-frequency technique. Moreover, using a decomposition design, an equivalent conclusion is given. Simulation results for a numerical example are presented to demonstrate the effectiveness of the proposed techniques.

  4. Robustness to Faults Promotes Evolvability: Insights from Evolving Digital Circuits.

    Science.gov (United States)

    Milano, Nicola; Nolfi, Stefano

    2016-01-01

    We demonstrate how the need to cope with operational faults enables evolving circuits to find fitter solutions. The analysis of the results obtained under different experimental conditions indicates that, in the absence of faults, evolution tends to select circuits that are small and have low phenotypic variability and evolvability. The need to face operational faults, instead, drives evolution toward the selection of larger circuits that are truly robust with respect to genetic variations and that have a greater level of phenotypic variability and evolvability. Overall, our results indicate that the need to cope with operational faults leads to the selection of circuits that have a greater probability of generating better circuits through genetic variation, relative to a control condition in which circuits are not subjected to faults.

  5. Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.

    Science.gov (United States)

    Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun

    2017-10-03

    This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework. An unknown input is considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. A weighted H∞ performance level is considered to ensure robustness. In addition, a weighted H₋ performance level is introduced, which can increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of this paper.

  6. Robust MPC for Actuator-Fault Tolerance Using Set-Based Passive Fault Detection and Active Fault Isolation

    Directory of Open Access Journals (Sweden)

    Xu Feng

    2017-03-01

    In this paper, a fault-tolerant control (FTC) scheme is proposed for actuator faults, built upon tube-based model predictive control (MPC) as well as set-based fault detection and isolation (FDI). Within the class of MPC techniques, tube-based MPC can effectively deal with system constraints and uncertainties with relatively low computational complexity compared with other robust MPC techniques such as min-max MPC. Set-based FDI, generally considering the worst case of uncertainties, can robustly detect and isolate actuator faults. In the proposed FTC scheme, fault detection (FD) is passive, using invariant sets, while fault isolation (FI) is active, by means of MPC and tubes. The active FI method proposed in this paper is implemented by exploiting the constraint-handling ability of MPC to manipulate the bounds of the inputs.

  7. Robust fault detection in open loop vs. closed loop

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, J.

    1997-01-01

    The robustness aspects of fault detection and isolation (FDI) for uncertain systems are considered. The FDI problem is considered in a standard problem formulation. The FDI design problem is analyzed both in the case where the control input signal is considered as a known external input signal (o...... (open loop) and when the input signal is generated by a feedback controller...

  8. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for evaluation of robustness of structural systems, incl. bridges and buildings. Typically modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, although...... the importance of robustness for structural design is widely recognized the code requirements are not specified in detail, which makes the practical use difficult. This paper describes a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines...

  9. Particle Filter for Fault Diagnosis and Robust Navigation of Underwater Robot

    DEFF Research Database (Denmark)

    Zhao, Bo; Skjetne, Roger; Blanke, Mogens

    2014-01-01

    A particle-filter-based robust navigation scheme with fault diagnosis is designed for an underwater robot, where 10 failure modes of sensors and thrusters are considered. The nominal underwater robot and its anomalies are described by a switching-mode hidden Markov model. By extensively running a particle...... filter on the model, fault diagnosis and robust navigation are achieved. Closed-loop full-scale experimental results show that the proposed method is robust, can diagnose faults effectively, and can provide good state estimation even in cases where multiple faults occur. Comparing with other methods...
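The switching-mode idea in this record can be sketched with a minimal bootstrap particle filter: each particle carries a continuous state plus a discrete mode (nominal vs. a hypothetical sensor-bias fault), and the posterior fraction of fault-mode particles serves as the diagnosis. Everything below (dynamics, noise levels, bias size) is an invented toy example, not the underwater-robot model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy switching-mode model: mode 0 = nominal sensor, mode 1 = bias fault.
N = 2000        # number of particles
T = 50          # time steps
P_FAULT = 0.01  # per-step probability of jumping to the fault mode
BIAS = 2.0      # assumed sensor-bias magnitude in the fault mode

def simulate(T, fault_at=25):
    """True scalar state x[k+1] = 0.95 x[k] + w; measurements gain a bias after fault_at."""
    x, ys = 1.0, []
    for k in range(T):
        x = 0.95 * x + rng.normal(0, 0.1)
        bias = BIAS if k >= fault_at else 0.0
        ys.append(x + bias + rng.normal(0, 0.2))
    return ys

ys = simulate(T)

# Particles: continuous state + discrete mode.
px = rng.normal(1.0, 0.5, N)
pm = np.zeros(N, dtype=int)

modes = []
for y in ys:
    # Mode transition: nominal particles may jump to the fault mode.
    jump = (pm == 0) & (rng.random(N) < P_FAULT)
    pm = np.where(jump, 1, pm)
    # State propagation.
    px = 0.95 * px + rng.normal(0, 0.1, N)
    # Weight by measurement likelihood under each particle's mode.
    pred = px + np.where(pm == 1, BIAS, 0.0)
    w = np.exp(-0.5 * ((y - pred) / 0.2) ** 2)
    w /= w.sum()
    # Multinomial resampling.
    idx = rng.choice(N, size=N, p=w)
    px, pm = px[idx], pm[idx]
    # Diagnosis: posterior probability of the fault mode.
    modes.append(pm.mean())

print("fault probability before/after:", modes[20], modes[-1])
```

Before the injected fault the fault-mode particles are out-weighted and die in resampling; after it they dominate, so the mode fraction itself is the fault indicator.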

  10. Study of the intelligent control robustness with respect to radiations induced faults

    International Nuclear Information System (INIS)

    Cheynet, Ph.

    1999-01-01

    So-called intelligent control techniques, such as artificial neural networks and fuzzy logic, are considered potentially robust. Their digital implementation gives compact and powerful solutions to some problems that are difficult to tackle with classical techniques. Such approaches might be used for applications working in harsh environments (nuclear and space). The aim of this thesis is to study the robustness of artificial neural networks and fuzzy logic against Single Event Upset faults, in order to evaluate their viability and efficiency for onboard spacecraft processes. A set of experiments has been performed on a neural network and a fuzzy controller, both implementing real space applications: texture analysis from satellite images and wheel control of a Martian rover. An original method that increases the recognition rate of any artificial neural network has been developed and applied to the studied network. Digital architectures implementing the two techniques studied in this thesis have been flown on board two scientific satellites: one has been in flight for one year, and the other will be launched at the end of 1999. Results obtained from software simulations, hardware fault injection, and particle-accelerator tests show that intelligent control techniques have significant robustness against Single Event Upset faults. Data from the flight experiment confirm these properties, showing that some onboard spacecraft processes can be reliably executed by digital artificial neural networks. (author)

  11. Fault-tolerant architecture: Evaluation methodology

    International Nuclear Information System (INIS)

    Battle, R.E.; Kisner, R.A.

    1992-08-01

    The design and reliability of four fault-tolerant architectures that may be used in nuclear power plant control systems were evaluated. Two architectures are variations of triple-modular-redundant (TMR) systems, and two are variations of dual-redundant systems. The evaluation includes a review of methods of implementing fault-tolerant control, the importance of automatic recovery from failures, methods of self-testing diagnostics, block diagrams of typical fault-tolerant controllers, a review of fault-tolerant controllers operating in nuclear power plants, and fault tree reliability analyses of fault-tolerant systems.
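The triple-modular-redundant (TMR) architectures mentioned in this record rely on majority voting over three redundant channels, so a single failed module is out-voted. A minimal illustrative voter (not taken from the evaluated controllers, which vote in hardware or firmware) looks like:

```python
# Majority voter for a triple-modular-redundant (TMR) channel (illustrative).
def tmr_vote(a, b, c):
    """Return the majority value of three redundant modules."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    # All three disagree: the single-fault assumption no longer holds.
    raise ValueError("no majority: more than one module has failed")

print(tmr_vote(1, 1, 1))   # → 1
print(tmr_vote(1, 0, 1))   # one faulty module is out-voted → 1
```

A dual-redundant system, by contrast, can detect a disagreement but cannot decide which channel is at fault without extra diagnostics, which is why the report pairs duplication with self-testing.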

  12. Robust Fault Detection for a Class of Uncertain Nonlinear Systems Based on Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Bingyong Yan

    2015-01-01

    A robust fault detection scheme for a class of nonlinear systems with uncertainty is proposed. The approach utilizes robust control theory and a parameter optimization algorithm to design the gain matrix of a fault tracking approximator (FTA) for fault detection. The gain matrix of the FTA is designed to minimize the effects of system uncertainty on the residual signals while maximizing the effects of system faults on them. The design takes into account the robustness of the residual signals to system uncertainty and their sensitivity to system faults simultaneously, which leads to a multiobjective optimization problem. Then, the detectability of system faults is rigorously analyzed by investigating the threshold of the residual signals. Finally, simulation results are provided to show the validity and applicability of the proposed approach.

  13. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for evaluation of robustness of structural systems, incl. bridges and buildings. Typically modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, althou...

  14. Development of methods for evaluating active faults

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    The report for long-term evaluation of active faults was published by the Headquarters for Earthquake Research Promotion in November 2010. After the occurrence of the 2011 Tohoku-oki earthquake, the safety review guide with regard to the geology and ground of sites was revised by the Nuclear Safety Commission in March 2012 with scientific knowledge of the earthquake. The Nuclear Regulation Authority, established in September 2012, is newly planning the New Safety Design Standard related to Earthquakes and Tsunamis of Light Water Nuclear Power Reactor Facilities. With respect to those guides and standards, our investigations for developing methods of evaluating active faults are as follows: (1) For better evaluation of offshore fault activity, we proposed a workflow to date marine terraces (indicators of offshore fault activity) over the last 400,000 years. We also developed fault-related fold analysis for evaluating blind faults. (2) To clarify the activities of active faults without superstratum, we carried out color analysis of fault gouge and classified the activities into timescales of thousands of years and tens of thousands of years. (3) To reduce uncertainties in fault activities and earthquake frequency, we compiled the survey data and possible errors. (4) To improve seismic hazard analysis, we compiled the fault activities of the Yunotake and Itozawa faults, induced by the 2011 Tohoku-oki earthquake. (author)

  15. Simultaneous Robust Fault and State Estimation for Linear Discrete-Time Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Feten Gannouni

    2017-01-01

    We consider the problem of robust simultaneous fault and state estimation for linear uncertain discrete-time systems with unknown faults which affect both the state and the observation matrices. Using a transformation of the original system, a new robust proportional-integral filter (RPIF), having an error variance with an optimized guaranteed upper bound for any allowed uncertainty, is proposed to improve robust estimation of unknown time-varying faults and robustness against uncertainties. In this study, the minimization of the upper bound of the estimation error variance is formulated as a convex optimization problem subject to linear matrix inequalities (LMIs) for all admissible uncertainties. The proportional and integral gains are optimally chosen by solving the convex optimization problem. Simulation results are given to illustrate the performance of the proposed filter, in particular for the problem of joint fault and state estimation.

  16. Robust filtering and fault detection of switched delay systems

    CERN Document Server

    Wang, Dong; Wang, Wei

    2013-01-01

    Switched delay systems appear in a wide range of applications, including networked control systems, power systems, and memristive systems. Although a large number of ideas concerning such systems have been generated, a framework focusing on the filter design and fault detection issues relevant to life safety and property loss is still lacking. Beginning with comprehensive coverage of new developments in the analysis and control synthesis of switched delay systems, the monograph not only provides a systematic approach to designing filters and detecting faults in switched delay systems, but also covers model reduction issues. Specific topics covered include: (1) arbitrary switching signals, where delay-independent and delay-dependent conditions are presented by proposing a linearization technique; (2) average dwell time, where a weighted Lyapunov function is proposed to deal with filter design and fault detection issues as well as model reduction problems. The monograph is in...

  17. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural-network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. Part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book has tutorial value and can serve as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach to system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  18. Robust fault-sensitive synchronization of a class of nonlinear systems

    International Nuclear Information System (INIS)

    Xu Shi-Yun; Tang Yong; Sun Hua-Dong; Yang Ying; Liu Xian

    2011-01-01

    Aiming at enhancing the quality as well as the reliability of synchronization, this paper is concerned with the fault detection issue within the synchronization process for a class of nonlinear systems in the presence of external disturbances. To handle such problems, the concept of robust fault-sensitive (RFS) synchronization is proposed, and a method of determining such a kind of synchronization is developed. Under the framework of RFS synchronization, the master and slave systems are robustly synchronized and, at the same time, sensitive to possible faults based on a mixed H−/H∞ performance. The design of the desired output feedback controller is realized by solving a linear matrix inequality, and the fault-sensitivity H− index can be optimized via a convex optimization algorithm. A master-slave configuration composed of identical Chua's circuits is adopted as a numerical example to demonstrate the effectiveness and applicability of the analytical results. (general)

  19. A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.

    Science.gov (United States)

    Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent

    2017-01-01

    In this paper, a new hybrid robust fault-tolerant control scheme is proposed. A robust H∞ control law is used in the non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy-based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on simultaneous estimation of the system states and parameters. To show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used.

  20. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2009-01-01

    Robustness of structural systems has received renewed interest due to the much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure....

  1. Robust fault detection in bond graph framework using interval analysis and Fourier-Motzkin elimination technique

    Science.gov (United States)

    Jha, Mayank Shekhar; Chatti, Nizar; Declerck, Philippe

    2017-09-01

    This paper addresses the fault diagnosis problem of uncertain systems in the context of the bond graph modelling technique. The main objective is to enhance the fault detection step based on interval-valued analytical redundancy relations (I-ARRs) in order to overcome problems related to false alarms, missed alarms, and robustness. The I-ARRs are a set of fault indicators that generate interval bounds called thresholds. A fault is detected once the nominal residuals (the point-valued part of the I-ARRs) exceed the thresholds. However, the existing fault detection method is limited to parametric faults, and it presents various limitations with regard to the estimation of measurement signal derivatives, to which I-ARRs are sensitive. The novelties and scientific interest of the proposed methodology are: (1) to improve the accuracy of measurement derivative estimation by using a dedicated sliding-mode differentiator proposed in this work; (2) to suitably integrate the Fourier-Motzkin elimination (FME) technique within the I-ARR-based diagnosis so that measurement faults can be detected successfully. The latter provides interval bounds over the derivatives, which are included in the thresholds. The proposed methodology is studied under various scenarios (parametric and measurement faults) via simulations of a mechatronic torsion-bar system.
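The I-ARR principle, comparing a residual against interval thresholds induced by parameter uncertainty, can be shown with plain interval arithmetic. The first-order model and parameter boxes below are invented for illustration and are unrelated to the torsion-bar system in the paper.

```python
# Toy I-ARR-style check for a first-order model  C * dy/dt + y/R = u,
# with uncertain parameters C in [C_lo, C_hi] and R in [R_lo, R_hi].
# The residual r = u - C*dy - y/R should contain 0 when no fault is present;
# under uncertainty we check whether 0 lies inside the interval residual.

C_lo, C_hi = 0.9, 1.1
R_lo, R_hi = 1.8, 2.2

def interval_residual(u, y, dy):
    """Interval evaluation of r = u - C*dy - y/R over the parameter box."""
    # Corner evaluation is exact here because r is monotone in each
    # parameter for fixed signs of dy and y.
    vals = [u - c * dy - y / r
            for c in (C_lo, C_hi) for r in (R_lo, R_hi)]
    return min(vals), max(vals)

def fault_detected(u, y, dy):
    lo, hi = interval_residual(u, y, dy)
    return not (lo <= 0.0 <= hi)

# Consistent steady state (u = 1, y = 2, dy = 0): interval contains 0.
print(fault_detected(1.0, 2.0, 0.0))   # → False, no alarm
# Faulty sensor reading y = 4: the residual interval excludes 0.
print(fault_detected(1.0, 4.0, 0.0))   # → True, alarm
```

The interval width is exactly the adaptive threshold: wider parameter boxes give fewer false alarms at the price of missed small faults, which is the robustness trade-off the record describes.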

  2. Robust Fault Tolerant Control for a Class of Time-Delay Systems with Multiple Disturbances

    Directory of Open Access Journals (Sweden)

    Songyin Cao

    2013-01-01

    A robust fault-tolerant control (FTC) approach is addressed for a class of nonlinear systems with time delay, actuator faults, and multiple disturbances. The first part of the multiple disturbances is assumed to be an uncertain modelled disturbance, and the second is a norm-bounded variable. First, a composite observer is designed to estimate the uncertain modelled disturbance and the actuator fault simultaneously. Then, an FTC strategy consisting of disturbance-observer-based control (DOBC), fault accommodation, and a mixed H2/H∞ controller is constructed to reconfigure the considered systems with disturbance rejection and attenuation performance. Finally, simulations of a flight control system are given to show the efficiency of the proposed approach.

  3. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with the fault correction process assumed to be a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown advantages over analytical models in fault number prediction. In this paper, the following approach is explored. First, recurrent neural networks are applied to model the two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made on a real data set

  4. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; čizmar, D.

    2010-01-01

    The present paper outlines results from working group 3 (WG3) in the EU COST Action E55 – ‘Modelling of the performance of timber structures’. The objectives of the project are related to the three main research activities: the identification and modelling of relevant load and environmental...... exposure scenarios, the improvement of knowledge concerning the behaviour of timber structural elements and the development of a generic framework for the assessment of the life-cycle vulnerability and robustness of timber structures....

  5. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....
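The interval outer-approximation idea can be sketched for a scalar linear system: propagate state bounds through the dynamics inflated by the noise bound, flag any measurement that falls outside the predicted output interval, and otherwise intersect the bounds with the measurement band. The toy system below is an assumption for illustration, not the wind-turbine pitch actuator from the paper; a real implementation would use boxes or zonotopes in higher dimension.

```python
# Set-membership fault detection sketch for x[k+1] = a*x[k] + u[k] + w,
# y[k] = x[k] + v, with |w| <= W and |v| <= V (toy scalar system).
a, W, V = 0.8, 0.05, 0.1

def propagate(x_lo, x_hi, u):
    """Interval outer-approximation of the reachable next-state set."""
    return a * x_lo + u - W, a * x_hi + u + W

def consistent(x_lo, x_hi, y):
    """Is measurement y consistent with the predicted output interval?"""
    return x_lo - V <= y <= x_hi + V

x_lo, x_hi = 0.0, 1.0                 # initial state bounds
alarms = []
ys = [0.55, 0.50, 0.48, 2.0]          # last sample simulates a faulty sensor
for y in ys:
    x_lo, x_hi = propagate(x_lo, x_hi, u=0.1)
    alarms.append(not consistent(x_lo, x_hi, y))
    # Measurement update: intersect with [y-V, y+V] when consistent.
    if not alarms[-1]:
        x_lo, x_hi = max(x_lo, y - V), min(x_hi, y + V)

print(alarms)   # → [False, False, False, True]
```

A fault is declared exactly when the consistency test fails, i.e. when no state within the model, noise, and disturbance bounds could have produced the measurement, which is the set-membership detection criterion the abstract describes.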

  6. Development of methods for evaluating active faults

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-08-15

    The HERP report for long-term evaluation of active faults and the NSC safety review guide with regard to the geology and ground of sites were published in November 2010 and December 2010, respectively. With respect to those reports, our investigation is as follows: (1) For assessment of seismic hazard, we estimated seismic sources around NPPs based on information from tectonic geomorphology, earthquake distribution, and subsurface geology. (2) For evaluation of the activity of blind faults, we calculated the slip rate on the 2008 Iwate-Miyagi Nairiku earthquake fault, using information on late Quaternary fluvial terraces. (3) To evaluate the magnitude of earthquakes whose sources are difficult to identify, we proposed a new method for calculating the seismogenic layer thickness. (4) To clarify the activities of active faults without superstratum, we carried out color analysis of fault gouge and classified the activities into timescales of thousands of years and tens of thousands of years. (5) To improve the chronology of sediments, we detected new widespread cryptotephras using mineral chemistry and developed a late Quaternary cryptotephrostratigraphy around NPPs. (author)

  7. Robust fault detection of turbofan engines subject to adaptive controllers via a Total Measurable Fault Information Residual (ToMFIR) technique.

    Science.gov (United States)

    Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping

    2014-09-01

    This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that adaptive controllers can mask fault effects so that the actual system outputs remain at their pre-specified values, making it difficult to detect faults/failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique, with the aid of a system transformation, is adopted to detect faults in turbofan engines with adaptive controllers. This design is a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are summarized. The detailed design process of the ToMFIRs is presented, and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy.

  8. Robust fault detection of wind energy conversion systems based on dynamic neural networks.

    Science.gov (United States)

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    The occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is suggested. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults quickly and has very low false-alarm and missed-alarm rates.
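An adaptive threshold of the kind mentioned in this record is often taken to scale with the model output, so that multiplicative model error at high operating points does not raise false alarms while the band stays tight near zero. The relative and absolute bounds below are assumed values for illustration, not identified from a WECS model.

```python
# Adaptive-threshold residual evaluation (toy sketch).
ALPHA, BETA = 0.05, 0.1   # assumed relative model-error and absolute noise bounds

def alarm(y_measured, y_model):
    """Flag a fault when the residual exceeds an operating-point-dependent threshold."""
    residual = abs(y_measured - y_model)
    threshold = ALPHA * abs(y_model) + BETA
    return residual > threshold

print(alarm(10.3, 10.0))   # residual 0.3 within the 0.05*10+0.1 = 0.6 band → False
print(alarm(12.0, 10.0))   # residual 2.0 exceeds the band → True
```

With a fixed threshold, the same 0.3 residual at a ten-times-larger operating point would either trigger a false alarm or force a threshold so wide that small faults near zero go undetected; scaling the band resolves that tension.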

  9. Structural Robustness Evaluation of Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Giuliani, Luisa; Bontempi, Franco

    2010-01-01

    in the framework of a safe design: it depends on different factors, like exposure, vulnerability and robustness. Particularly, the requirement of structural vulnerability and robustness are discussed in this paper and a numerical application is presented, in order to evaluate the effects of a ship collision...

  10. Nonlinear Robust Observer-Based Fault Detection for Networked Suspension Control System of Maglev Train

    Directory of Open Access Journals (Sweden)

    Yun Li

    2013-01-01

    A fault detection approach based on a nonlinear robust observer is designed for the networked suspension control system of a maglev train with randomly induced time delay. First, considering random bounded time delay and external disturbance, the nonlinear model of the networked suspension control system is established. Then, a nonlinear robust observer is designed using the input of the suspension gap, and the estimation error is proved to be bounded with arbitrary precision by adopting an appropriate parameter. When sensor faults happen, the residual between the real states and the observer outputs indicates which kind of sensor failure has occurred. Finally, simulation results using the actual parameters of the CMS-04 maglev train indicate that the proposed method is effective.

  11. Robust reconfigurable control for parametric and additive faults with FDI uncertainties

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Yang, Zhenyu

    2000-01-01

    From the viewpoint of system recoverability, this paper discusses robust reconfigurable control synthesis for LTI systems and a class of nonlinear control systems with parametric and additive faults, as well as deviations generated by FDI algorithms. By following the model-matching strategy......, an augmented optimal control problem is constructed based on the considered faulty and fictitious nominal systems, such that robust control design techniques, such as H-infinity control and mu synthesis, can be employed for the reconfigurable control design....

  12. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Directory of Open Access Journals (Sweden)

    Emmanuel Mazars

    2008-01-01

    This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI) problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors and the deviation of the fault-to-residual dynamics from a fault-to-residual reference model, using the ℋ∞ norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI) optimization problem. Since the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI) optimization problem. A jet engine example is employed to demonstrate the effectiveness of the proposed approach.

  13. Neural network-based robust actuator fault diagnosis for a non-linear multi-tank system.

    Science.gov (United States)

    Mrugalski, Marcin; Luzar, Marcel; Pazera, Marcin; Witczak, Marcin; Aubrun, Christophe

    2016-03-01

    The paper is devoted to the problem of robust actuator fault diagnosis of dynamic non-linear systems. In the proposed method, it is assumed that the diagnosed system can be modelled by a recurrent neural network, which can be transformed into a linear parameter-varying form. Such a system description allows the design of a robust unknown input observer within the H∞ framework for a class of non-linear systems. The proposed approach is designed in such a way that a prescribed disturbance attenuation level is achieved with respect to the actuator fault estimation error, while guaranteeing the convergence of the observer. The application of the robust unknown input observer enables actuator fault estimation, which allows applying the developed approach to fault-tolerant control tasks. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  14. UNIX-based operating systems robustness evaluation

    Science.gov (United States)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems: Digital Equipment's OSF/1, Hewlett-Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The evaluation methodology tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems; results included the exit status of the exception generator and the system state. Resource management techniques used by the individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradation was observed on the systems under stress.
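
    The exception-handling experiment described above can be sketched at toy scale: a harness spawns child processes that each trigger an OS-level exception and records the exit status or terminating signal. The probe set and harness below are hypothetical stand-ins for the thesis' exception generator, not its actual code.

```python
import subprocess
import sys

# Hypothetical miniature "exception generator": each probe makes a child
# process raise an OS-level exception, and the harness records how the OS
# reports it.  A negative return code means the child was killed by that
# signal (e.g. -11 == SIGSEGV); a positive code is an ordinary exit.
PROBES = {
    "divide_by_zero": "x = 1 // 0",                            # unhandled exception
    "null_dereference": "import ctypes; ctypes.string_at(0)",  # NULL read -> SIGSEGV
}

def run_probes():
    results = {}
    for name, code in PROBES.items():
        proc = subprocess.run([sys.executable, "-c", code],
                              capture_output=True, timeout=30)
        results[name] = proc.returncode
    return results

if __name__ == "__main__":
    for probe, status in run_probes().items():
        kind = f"signal {-status}" if status < 0 else f"exit status {status}"
        print(f"{probe}: {kind}")
```

    A real campaign would also snapshot the system state after each probe, as the thesis does; here only the child's exit status is recorded.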

  15. Robust Fault-Tolerant Control for Satellite Attitude Stabilization Based on Active Disturbance Rejection Approach with Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Song

    2014-01-01

    This paper proposes a robust fault-tolerant control algorithm for satellite attitude stabilization based on an active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurement of each reaction flywheel is used for fault detection. If a reaction flywheel fault is detected, the faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite keeps working safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm so as to improve the performance of the attitude control system. A series of simulation results demonstrates the performance advantages of the proposed robust fault-tolerant control algorithm.
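
    A minimal sketch of the ADRC loop described above, on a hypothetical double-integrator plant standing in for one attitude channel: the extended state observer (ESO) estimates the state plus the lumped disturbance, and the control law cancels the estimate. The observer bandwidth wo (the parameter the paper tunes with the artificial bee colony algorithm) is simply fixed by hand here; all gains and the disturbance are illustrative.

```python
import numpy as np

# ADRC sketch: plant y'' = b0*u + d with unknown disturbance d; a third-
# order ESO tracks [y, y', d], and the controller cancels the estimate.
def simulate_adrc(wo=20.0, wc=4.0, b0=1.0, dt=1e-3, T=5.0):
    b1, b2, b3 = 3 * wo, 3 * wo**2, wo**3        # ESO gains (bandwidth form)
    kp, kd = wc**2, 2 * wc                        # controller gains
    x = np.zeros(2)                               # true plant state [y, y']
    z = np.zeros(3)                               # ESO state [y, y', d estimate]
    r = 1.0                                       # set-point
    for k in range(int(T / dt)):
        y = x[0]
        u = (kp * (r - z[0]) - kd * z[1] - z[2]) / b0   # cancel estimated d
        d = 0.5 * np.sin(2 * np.pi * 0.5 * k * dt)      # unknown disturbance
        x += dt * np.array([x[1], b0 * u + d])          # explicit Euler plant step
        e = y - z[0]                                    # ESO innovation
        z += dt * np.array([z[1] + b1 * e,
                            z[2] + b0 * u + b2 * e,
                            b3 * e])
    return x[0], z[2]

y_final, d_hat = simulate_adrc()
print(f"output ≈ {y_final:.3f}, estimated disturbance ≈ {d_hat:.3f}")
```

    With wo well above the disturbance frequency, the output settles close to the set-point despite the unmodelled sinusoid; the ABC search in the paper automates exactly this bandwidth choice.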

  16. Robust adaptive fault-tolerant control for leader-follower flocking of uncertain multi-agent systems with actuator failure.

    Science.gov (United States)

    Yazdani, Sahar; Haeri, Mohammad

    2017-11-01

    In this work, we study the flocking problem of multi-agent systems with uncertain dynamics subject to actuator failure and external disturbances. Under some standard assumptions, we propose a robust adaptive fault-tolerant protocol that compensates for actuator bias faults, partial loss of actuator effectiveness, model uncertainties, and external disturbances. Under the designed protocol, velocity convergence of the agents to that of the virtual leader is guaranteed, while connectivity preservation of the network and collision avoidance among agents are ensured as well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Dependability evaluation of computing systems - physical faults, design faults, malicious faults

    International Nuclear Information System (INIS)

    Kaaniche, Mohamed

    1999-01-01

    The research summarized in this report focuses on the dependability of computer systems. It addresses several complementary theoretical and experimental issues that are grouped into four topics. The first topic concerns the definition of efficient methods that aim to assist users in the construction and validation of complex dependability analysis and evaluation models. The second topic deals with the modeling of reliability and availability growth that mainly results from the progressive removal of design faults. A method is also defined to support the application of software reliability evaluation studies in an industrial context. The third topic deals with the development and experimentation of a new approach to the quantitative evaluation of operational security. This approach aims to assist system administrators in monitoring operational security when modifications that are likely to introduce new vulnerabilities occur in the system configuration, the applications, user behavior, etc. Finally, the fourth topic addresses: a) the definition of a development model aimed at the production of dependable systems, and b) the development of assessment criteria to obtain justified confidence that a system will achieve, during its operation and up to its decommissioning, its dependability objectives. (author) [fr

  18. Interim reliability evaluation program, Browns Ferry fault trees

    International Nuclear Information System (INIS)

    Stewart, M.E.

    1981-01-01

    An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation Program, simplifying the recording and displaying of events yet maintaining the system of identifying faults. The level of investigation is not changed, and the analytical thought process inherent in the conventional method is not compromised, but the abbreviated method takes less time and the fault modes are much more visible.

  19. Fault diagnosis of locomotive electro-pneumatic brake through uncertain bond graph modeling and robust online monitoring

    Science.gov (United States)

    Niu, Gang; Zhao, Yajun; Defoort, Michael; Pecht, Michael

    2015-01-01

    To improve reliability, safety and efficiency, advanced methods of fault detection and diagnosis are becoming increasingly important in many technical fields, especially for safety-related complex systems such as aircraft, trains, automobiles, power plants and chemical plants. This paper presents a robust fault detection and diagnostic scheme for a multi-energy-domain system that integrates a model-based strategy for system fault modeling with a data-driven approach for online anomaly monitoring. The developed scheme uses an LFT (linear fractional transformation)-based bond graph for modeling physical parameter uncertainty and simulating faults, and employs AAKR (auto-associative kernel regression)-based empirical estimation followed by SPRT (sequential probability ratio test)-based threshold monitoring to improve the accuracy of fault detection. Moreover, pre- and post-denoising processes are applied to eliminate the cumulative influence of parameter uncertainty and measurement uncertainty. The scheme is demonstrated on the main unit of a locomotive electro-pneumatic brake in a simulated experiment. The results show robust fault detection and diagnostic performance.
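
    The SPRT stage of such a scheme can be illustrated in isolation (the bond-graph model and AAKR estimator are omitted, so the residuals below are synthetic): Wald's sequential test accumulates a Gaussian log-likelihood ratio over the residual stream and declares a fault once the upper threshold is crossed. All distributions and parameters are assumptions for the sketch, not values from the paper.

```python
import numpy as np

# Wald's sequential probability ratio test on a residual stream.
# H0: residual ~ N(0, sigma^2) (healthy); H1: residual ~ N(m1, sigma^2) (faulty).
def sprt_monitor(residuals, m1=1.0, sigma=0.5, alpha=1e-3, beta=1e-3):
    upper = np.log((1 - beta) / alpha)      # cross it -> declare a fault
    lower = np.log(beta / (1 - alpha))      # cross it -> accept H0, restart test
    llr = 0.0
    for k, r in enumerate(residuals):
        llr += (m1 / sigma**2) * (r - m1 / 2.0)   # Gaussian LLR increment
        if llr > upper:
            return k                               # sample index of the decision
        if llr < lower:
            llr = 0.0                              # healthy so far; restart
    return None

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 0.5, 200)
faulty = rng.normal(1.0, 0.5, 50)          # a mean shift plays the fault
alarm = sprt_monitor(np.concatenate([healthy, faulty]))
print("fault declared at sample", alarm)
```

    The thresholds follow Wald's approximations from the chosen false-alarm and missed-detection rates, which is what makes the monitoring robust to noisy individual residual samples.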

  20. Quantitative evaluation of fault coverage for digitalized systems in NPPs using simulated fault injection method

    International Nuclear Information System (INIS)

    Kim, Suk Joon

    2004-02-01

    Even though digital systems have numerous advantages, such as precise data processing and enhanced calculation capability over conventional analog systems, there is a strong restriction on the application of digital systems to the safety systems in nuclear power plants (NPPs). This is because we do not fully understand the reliability of digital systems, and therefore cannot guarantee their safety. However, as the need to introduce digital systems into the safety systems of NPPs increases, the need for quantitative analysis of the safety of digital systems is also increasing. NPPs, which are quite conservative in terms of safety, require proof of the reliability of digital systems before applying them. Moreover, digital systems applied to NPPs are required to increase the overall safety of NPPs. However, it is very difficult to evaluate the reliability of digital systems because they include complex fault-processing mechanisms at various levels of the system. Software is another obstacle in the reliability assessment of systems that require ultra-high reliability. In this work, the fault detection coverage for a digital system is evaluated using a simulated fault injection method. The target system is the Local Coincidence Logic (LCL) processor in the Digital Plant Protection System (DPPS). However, as the LCL processor is difficult to model exactly for evaluating the fault detection coverage, the LCL system has to be simplified. The simulations for evaluating the fault detection coverage of components are performed in two cases, and the failure rates of components are evaluated using MIL-HDBK-217F. Using these results, the fault detection coverage of the simplified LCL system is evaluated. In the experiments, heartbeat signals were emitted at regular intervals after executing the logic without a self-checking algorithm. When faults are injected into the simplified system, fault occurrence can be detected by
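
    A toy version of such a fault-injection campaign, assuming a 2-out-of-4 coincidence logic and a hypothetical self-check that compares the output against a duplicated fault-free computation; the measured ratio of detected to injected faults is the fault detection coverage. Nothing below reproduces the actual LCL design.

```python
import random

# Toy fault-injection campaign: stuck-at faults are injected into the
# inputs of a simplified 2-out-of-4 coincidence logic, and a hypothetical
# self-check (a duplicated fault-free computation) flags a fault whenever
# the two outputs disagree.  Coverage = detected / injected.
def lcl(trips):                          # 2-out-of-4 voting logic
    return sum(trips) >= 2

def run_campaign(n_faults=10_000, seed=1):
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_faults):
        trips = [rng.random() < 0.5 for _ in range(4)]   # random channel states
        channel = rng.randrange(4)
        stuck = rng.random() < 0.5                       # stuck-at-0 or stuck-at-1
        faulty = list(trips)
        faulty[channel] = stuck
        if lcl(faulty) != lcl(trips):                    # self-check disagrees
            detected += 1
    return detected / n_faults

print(f"fault detection coverage ≈ {run_campaign():.3f}")
```

    For this toy logic the coverage can be checked analytically: the stuck value actually flips the input half the time, and the flip changes the vote only when the other three channels sum to exactly one (probability 3/8), giving 0.1875.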

  1. Robust Sensor Faults Reconstruction for a Class of Uncertain Linear Systems Using a Sliding Mode Observer: An LMI Approach

    International Nuclear Information System (INIS)

    Iskander, Boulaabi; Faycal, Ben Hmida; Moncef, Gossa; Anis, Sellami

    2009-01-01

    This paper presents a design method for a Sliding Mode Observer (SMO) for robust sensor fault reconstruction in systems with matched uncertainty. This class of uncertainty requires a known upper bound. The basic idea is to use the H∞ concept to design the observer, which minimizes the effect of the uncertainty on the reconstruction of the sensor faults. Specifically, we apply the equivalent output error injection concept from previous work in a Fault Detection and Isolation (FDI) scheme. The design and reconstruction problems can then be expressed and numerically formulated via Linear Matrix Inequality (LMI) optimization. Finally, a numerical example is given to illustrate the validity and applicability of the proposed approach.

  2. Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties

    Science.gov (United States)

    Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui

    2017-10-01

    In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed such that the unmatched uncertainties are compensated by the actuators. On the other hand, for unmatched uncertainties whose projection onto the unmatched space is not zero, additive functions based on a (vector) relative degree condition are designed to compensate for the uncertainties from the output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.

  3. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    International Nuclear Information System (INIS)

    McGowan, S E; Albertini, F; Lomax, A J; Thomas, S J

    2015-01-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols that aid plan assessment. Additionally, an example is given of how to clinically use the defined robustness database, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners with plan robustness aims in these volumes, resulting in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualised treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases, different beam start conditions may improve plan robustness to set-up and range uncertainties. (paper)
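
    On a synthetic 1-D dose profile, the error-bar dose distribution idea can be sketched as follows: the plan is recomputed under a few systematic range shifts and random set-up shifts, and the ebDD records the per-voxel spread between best- and worst-case dose. The Gaussian "dose" model, the crude treatment of range errors as extra positional shifts, and the shift values are all purely illustrative.

```python
import numpy as np

# Synthetic 1-D patient: the nominal plan is a Gaussian dose profile.
def dose(center, width=10.0, n_voxels=60):
    x = np.arange(float(n_voxels))
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def ebdd(center=30.0, setup_shifts=(-3, 0, 3), range_shifts=(-2, 0, 2)):
    # Recompute the dose under every combination of set-up and range error
    # (range errors modelled, crudely, as an additional profile shift).
    scenarios = np.array([dose(center + s + r)
                          for s in setup_shifts for r in range_shifts])
    nominal = dose(center)
    # per-voxel error bar: half the best-to-worst spread over scenarios
    bars = (scenarios.max(axis=0) - scenarios.min(axis=0)) / 2.0
    return nominal, bars

nominal, bars = ebdd()
print(f"max error bar = {bars.max():.3f} (nominal max = {nominal.max():.1f})")
```

    The error bars are largest on the dose gradients and smallest at the profile centre, which is the qualitative behaviour the ebDD metrics in the paper quantify per organ.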

  4. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    Science.gov (United States)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols that aid plan assessment. Additionally, an example is given of how to clinically use the defined robustness database, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners with plan robustness aims in these volumes, resulting in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualised treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases, different beam start conditions may improve plan robustness to set-up and range uncertainties.

  5. Technical Evaluation of Superconducting Fault Current Limiters Used in a Micro-Grid by Considering the Fault Characteristics of Distributed Generation, Energy Storage and Power Loads

    Directory of Open Access Journals (Sweden)

    Lei Chen

    2016-09-01

    Concerning the development of a micro-grid integrated with multiple intermittent renewable energy resources, one of the main issues is the improvement of its robustness against short-circuit faults. In a sense, the superconducting fault current limiter (SFCL) can be regarded as a feasible approach to enhance the transient performance of a micro-grid under fault conditions. In this paper, a fault transient analysis of a micro-grid including distributed generation, energy storage and power loads is conducted, and, regarding the application of one or more flux-coupling-type SFCLs in the micro-grid, an integrated technical evaluation method considering current-limiting performance, bus voltage stability and device cost is proposed. In order to assess the performance of the SFCLs and verify the effectiveness of the evaluation method, different fault cases of a 10-kV micro-grid with photovoltaic (PV) generation, a wind generator and energy storage are simulated in MATLAB. The results show that the efficient use of SFCLs in the micro-grid can contribute to reducing the fault current, improving the voltage sags and suppressing the frequency fluctuations. Moreover, a compromise design is required to take full advantage of the SFCL parameters, so that the transient performance of the micro-grid can be guaranteed.

  6. Fuzzy Uncertainty Evaluation for Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)

    2015-05-15

    The traditional probabilistic approach can calculate relatively accurate results; however, it requires a long time because of the repetitive computation of the MC method. In addition, when data for statistical analysis are insufficient or some events are mainly caused by human error, the probabilistic approach may not be possible, because the uncertainties of these events are difficult to express by probabilistic distributions. In order to reduce the computation time, and to quantify the uncertainties of top events when some basic events cannot be expressed by probabilistic distributions, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident after a large loss of coolant accident (LLOCA). The code is implemented and tested on the fault tree of the radiation release accident. We then apply it to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The results obtained by fuzzy uncertainty propagation can be calculated in a relatively short time, and they cover the results obtained by probabilistic uncertainty propagation.
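
    The alpha-cut style of fuzzy uncertainty propagation can be sketched on a tiny fault tree. Each basic-event probability is a triangular fuzzy number; at each alpha level its interval is pushed through the gates by interval arithmetic (AND and OR are monotone in the event probabilities, so evaluating the endpoints is exact). The tree and the fuzzy numbers below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Alpha-cut of a triangular fuzzy number (low, mode, high).
def alpha_cut(tri, a):
    lo, m, hi = tri
    return lo + a * (m - lo), hi - a * (hi - m)

# Example tree: TOP = e1 AND (e2 OR e3), independent basic events.
def top_event(p1, p2, p3):
    return p1 * (p2 + p3 - p2 * p3)

def fuzzy_top(e1, e2, e3, levels=11):
    cuts = []
    for a in np.linspace(0.0, 1.0, levels):
        (l1, h1), (l2, h2), (l3, h3) = (alpha_cut(e, a) for e in (e1, e2, e3))
        # monotone gates: endpoint evaluation gives the exact interval
        cuts.append((a, top_event(l1, l2, l3), top_event(h1, h2, h3)))
    return cuts

# hypothetical triangular probabilities for the three basic events
cuts = fuzzy_top((1e-3, 2e-3, 4e-3), (0.01, 0.02, 0.05), (0.02, 0.03, 0.06))
a, lo, hi = cuts[-1]             # alpha = 1 collapses to the crisp modal value
print(f"modal top-event probability = {lo:.2e}")
```

    Stacking the intervals over all alpha levels reconstructs the fuzzy membership function of the top-event probability, which is the quantity the developed code propagates.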

  7. New approaches to evaluating fault trees

    International Nuclear Information System (INIS)

    Sinnamon, R.M.; Andrews, J.D.

    1997-01-01

    Fault Tree Analysis is now a widely accepted technique for assessing the probability and frequency of system failure in many industries. For complex systems, an analysis may produce hundreds of thousands of combinations of events which can cause system failure (minimal cut sets). The determination of these cut sets can be a very time-consuming process, even on modern high-speed digital computers. Computerised methods to conduct this analysis, such as bottom-up or top-down approaches, are now so well developed that further refinement is unlikely to result in vast reductions in computer time. It is felt that substantial improvement in computer utilisation will only result from a completely new approach. This paper describes the use of a Binary Decision Diagram for Fault Tree Analysis and some ways in which it can be efficiently implemented on a computer. In particular, attention is given to the production of a minimum form of the Binary Decision Diagram by considering the ordering given to the basic events of the fault tree.
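
    The core idea can be illustrated with an implicit BDD: the fault tree's Boolean function is decomposed by Shannon expansion along a fixed variable ordering, with memoisation standing in for BDD node sharing, and the exact top-event probability falls out without enumerating minimal cut sets. The tree here is a hypothetical example, not one from the paper.

```python
from functools import lru_cache

# Example fault tree: TOP = (a AND b) OR (a AND c); event a is shared.
ORDER = ("a", "b", "c")                 # the variable ordering matters for BDD size
PROB = {"a": 0.1, "b": 0.2, "c": 0.3}

def top(assign):
    return (assign["a"] and assign["b"]) or (assign["a"] and assign["c"])

@lru_cache(maxsize=None)                # memoisation plays the role of node sharing
def bdd_prob(level=0, fixed=()):
    if level == len(ORDER):
        return 1.0 if top(dict(fixed)) else 0.0
    v = ORDER[level]
    p = PROB[v]
    # Shannon decomposition: P = p * P(f | v=1) + (1 - p) * P(f | v=0)
    return (p * bdd_prob(level + 1, fixed + ((v, True),)) +
            (1 - p) * bdd_prob(level + 1, fixed + ((v, False),)))

print(f"top-event probability = {bdd_prob():.4f}")
```

    The exact result, 0.044, differs from the rare-event cut-set sum 0.1·0.2 + 0.1·0.3 = 0.05 precisely because the shared event a is handled exactly by the decomposition, which is one reason the BDD route avoids the cut-set bottleneck.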

  8. Robust Fault Estimation Design for Discrete-Time Nonlinear Systems via A Modified Fuzzy Fault Estimation Observer.

    Science.gov (United States)

    Xie, Xiang-Peng; Yue, Dong; Park, Ju H

    2018-02-01

    The paper provides relaxed designs of a fault estimation observer for nonlinear dynamical plants in Takagi-Sugeno form. Compared with previous theoretical achievements, a modified version of the fuzzy fault estimation observer is implemented with the aid of a maximum-priority-based switching law. For each activated switching status, an appropriate group of designed matrices can be provided so as to exploit certain key properties of the considered plants by introducing a set of matrix-valued variables. Because more abundant information about the considered plants can be updated in due course and effectively exploited at each time instant, the obtained result is less conservative than previous theoretical achievements, and thus the main defect of existing methods can be overcome to some extent in practice. Finally, comparative simulation studies on the classical nonlinear truck-trailer model are given to confirm the benefits of the theoretical results obtained in this study. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Reliability and Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Cizmar, Dean; Sørensen, John Dalsgaard; Kirkegaard, Poul Henning

    In the last few decades there has been intense research concerning the reliability of timber structures. This is primarily because of an increased focus in society on sustainability and environmental aspects. Modern timber as a building material is also competitive compared with concrete and steel. However, reliability models applied to timber were always related to individual components, not systems; as any real structure is a complex system, system behaviour must be of particular interest. In chapter 1 of this document an overview of stochastic models for strength and loads … (deterministic, probabilistic and risk-based) approaches to robustness are given. Chapter 3 deals in more detail with the robustness of timber structures.

  10. Development and Evaluation of Fault-Tolerant Flight Control Systems

    Science.gov (United States)

    Song, Yong D.; Gupta, Kajal (Technical Monitor)

    2004-01-01

    The research is concerned with developing a new approach to enhancing the fault tolerance of flight control systems. The original motivation for fault-tolerant control comes from the need for safe operation of control elements (e.g. actuators) in the event of hardware failures in high-reliability systems. One such example is a modern space vehicle subject to actuator/sensor impairments. A major task in flight control is to revise the control policy to balance impairment detectability and to achieve sufficient robustness. This involves careful selection of the types and parameters of the controllers and the impairment-detecting filters used. It also involves a decision, upon the identification of some failures, on whether and how a control reconfiguration should take place in order to maintain a certain system performance level. In this project a new flight dynamic model under uncertain flight conditions is considered, in which the effects of both ramp and jump faults are reflected. Stabilization algorithms based on neural networks and adaptive methods are derived. The control algorithms are shown to be effective in dealing with uncertain dynamics due to external disturbances and unpredictable faults. The overall strategy is easy to set up and the computation involved is much less than with other strategies. Computer simulation software was developed, and a series of simulation studies has been conducted with varying flight conditions.

  11. Time-dependent methodology for fault tree evaluation

    International Nuclear Information System (INIS)

    Vesely, W.B.

    1976-01-01

    Any fault tree may be evaluated by applying the method called the kinetic theory of fault trees. The basic feature of this method as presented here is that any information on a primary failure, type failure or peak failure is derived from three characteristics: probability of existence, failure intensity and failure density. Determining these three characteristics for a given phenomenon yields the remaining probabilistic information on the individual aspects of the failure, and on their totality, for the whole observed period. The probabilistic characteristics are determined by applying the analysis of phenomenon probability. The total time-dependent information on the peak failure is obtained by using the type failures (critical paths) of the fault tree. By applying this process, the total time-dependent information is obtained for every primary failure and type failure of the fault tree. In the application of the kinetic theory of fault trees, as represented by the PREP and KITT programmes, the type failures are first obtained using the deterministic testing method or Monte Carlo simulation (PREP programme). The respective characteristics are then determined using the kinetic theory of fault trees (KITT programmes). (Oy)
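
    Under the usual constant failure-rate/repair-rate assumptions, the three characteristics have closed forms for a repairable primary failure, and they combine over a type failure (critical path) by the standard product and sum-of-products rules. The rates and times below are illustrative values, not ones from the report.

```python
import math

# Closed forms for a repairable primary failure with constant failure
# rate lam and repair rate mu (standard kinetic-theory-style formulas):
#   q(t)  existence probability (component is in the failed state at t)
#   w(t)  failure intensity, here w(t) = lam * (1 - q(t))
def primary(lam, mu, t):
    q = lam / (lam + mu) * (1.0 - math.exp(-(lam + mu) * t))
    w = lam * (1.0 - q)
    return q, w

# A type failure (critical path) of independent primary failures: its
# existence probability is the product of the component probabilities,
# and its intensity follows from the usual sum-of-products combination.
def type_failure(events):
    qs = [q for q, _ in events]
    ws = [w for _, w in events]
    q_cs = math.prod(qs)
    w_cs = sum(ws[i] * math.prod(qs[:i] + qs[i + 1:]) for i in range(len(events)))
    return q_cs, w_cs

e1 = primary(lam=1e-3, mu=1e-1, t=100.0)   # per-hour rates, t in hours
e2 = primary(lam=2e-3, mu=1e-1, t=100.0)
q, w = type_failure([e1, e2])
print(f"type-failure existence probability = {q:.3e}, intensity = {w:.3e}/h")
```

    Repeating the combination over every type failure of the tree, as KITT does, yields the time-dependent characteristics of the peak failure.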

  12. Creating Robust Evaluation of ATE Projects

    Science.gov (United States)

    Eddy, Pamela L.

    2017-01-01

    Funded grant projects all involve some form of evaluation, and Advanced Technological Education (ATE) grants are no exception. Program evaluation serves as a critical component not only for evaluating if a project has met its intended and desired outcomes, but the evaluation process is also a central feature of the grant application itself.…

  13. Robust and Agile System against Fault and Anomaly Traffic in Software Defined Networks

    Directory of Open Access Journals (Sweden)

    Mihui Kim

    2017-03-01

    The main advantage of software-defined networking (SDN) is that it allows intelligent control and management of networking through programmability in real time. It enables efficient utilization of network resources through traffic engineering, and offers potential attack defense methods when abnormalities arise. However, previous studies have only identified individual solutions for respective problems, instead of finding a more global real-time solution capable of addressing multiple situations in network status. To cover diverse network conditions, this paper presents a comprehensive reactive system for simultaneously monitoring failures, anomalies, and attacks for high availability and reliability. We design three main modules in the SDN controller for a robust and agile defense (RAD) system against network anomalies: a traffic analyzer, a traffic engineer, and a rule manager. RAD provides reactive flow rule generation to control traffic while detecting network failures, anomalies, high traffic volume (elephant flows), and attacks. The traffic analyzer identifies elephant flows, traffic anomalies, and attacks based on attack signatures and network monitoring. The traffic engineer module measures network utilization and delay in order to determine the best path for multi-dimensional routing and load balancing under any circumstances. Finally, the rule manager generates and installs a flow rule for the selected best path to control traffic. We implement the proposed RAD system based on Floodlight, an open-source SDN controller. We evaluate our system using simulation with and without the aforementioned RAD modules. Experimental results show that our approach is both practical and feasible, and can successfully augment an existing SDN controller in terms of agility, robustness, and efficiency, even in the face of link failures, attacks, and elephant flows.

  14. Active Disturbance Rejection Approach for Robust Fault-Tolerant Control via Observer Assisted Sliding Mode Control

    Directory of Open Access Journals (Sweden)

    John Cortés-Romero

    2013-01-01

    This work proposes an active disturbance rejection approach for the establishment of a sliding mode control strategy in fault-tolerant operations. The core of the proposed active disturbance rejection assistance is a Generalized Proportional Integral (GPI) observer, which is in charge of actively estimating lumped nonlinear endogenous and exogenous disturbance inputs related to the creation of local sliding regimes with limited control authority. Possibilities are explored for GPI observer-assisted sliding mode control in fault-tolerant schemes. Convincing improvements are presented with respect to classical sliding mode control strategies. As a collateral advantage, the observer-based control architecture offers the possibility of chattering reduction, given that a significant part of the control signal is of the continuous type. The case study considers a classical DC motor control affected by actuator faults, parametric failures, and perturbations. Experimental results and comparisons with other established sliding mode controller design methodologies, which validate the proposed approach, are provided.

  15. Neural Networks and Fault Probability Evaluation for Diagnosis Issues

    Science.gov (United States)

    Lefebvre, Dimitri; Guersi, Noureddine

    2014-01-01

    This paper presents a new FDI technique for fault detection and isolation in unknown nonlinear systems. The objective of the research is to construct and analyze residuals by means of artificial intelligence and probabilistic methods. Artificial neural networks are first used for modeling: neural network models are designed to learn the fault-free and faulty behaviors of the considered systems. Once the residuals are generated, an evaluation using probabilistic criteria is applied to determine the most likely fault among a set of candidate faults. The study also includes a comparison between the contributions of these tools and their limitations, particularly through the establishment of quantitative indicators to assess their performance. Through the computation of a confidence factor, the proposed method is able to evaluate the reliability of the FDI decision. The approach is applied to detect and isolate 19 fault candidates in the DAMADICS benchmark. The results obtained with the proposed scheme are compared with those obtained using a conventional thresholding method. PMID:25132845
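
    The residual-evaluation step can be sketched without the neural models: given a residual vector (synthetic here), Gaussian likelihoods against hypothetical fault signatures yield a posterior over the candidate faults, and the posterior of the winning candidate doubles as the confidence factor of the FDI decision. Signatures, noise level and residuals are all invented for the sketch.

```python
import numpy as np

# Hypothetical fault signatures: expected residual mean per fault class.
SIGNATURES = {
    "fault-free": np.array([0.0, 0.0]),
    "actuator":   np.array([1.0, 0.2]),
    "sensor":     np.array([0.1, 0.8]),
}
SIGMA = 0.3   # assumed residual noise standard deviation

def diagnose(residual):
    # Gaussian log-likelihood of the residual under each fault hypothesis
    log_lik = {f: -np.sum((residual - m) ** 2) / (2 * SIGMA ** 2)
               for f, m in SIGNATURES.items()}
    mx = max(log_lik.values())
    post = {f: np.exp(v - mx) for f, v in log_lik.items()}
    z = sum(post.values())
    post = {f: p / z for f, p in post.items()}        # posterior (flat prior)
    best = max(post, key=post.get)
    return best, post[best]                           # decision + confidence factor

fault, confidence = diagnose(np.array([0.9, 0.25]))
print(f"most likely: {fault} (confidence {confidence:.2f})")
```

    Unlike plain thresholding, a low confidence factor signals an ambiguous residual, which is the reliability information the paper's probabilistic evaluation adds.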

  16. Object-oriented fault tree evaluation program for quantitative analyses

    Science.gov (United States)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor, which was modified to display and edit the fault trees.
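The original environment was written in LISP Flavors; a minimal sketch of the same object-representation idea, rendered here in Python for illustration (class names and probabilities invented), stores reliability data in event objects and evaluates the tree directly:

```python
class Event:
    """Basic event carrying its own failure probability."""
    def __init__(self, name, prob):
        self.name, self.prob = name, prob
    def probability(self):
        return self.prob

class Gate:
    """AND/OR gate over child nodes, assuming independent basic events."""
    def __init__(self, name, kind, children):
        self.name, self.kind, self.children = name, kind, children
    def probability(self):
        ps = [c.probability() for c in self.children]
        if self.kind == "AND":          # all children must fail
            p = 1.0
            for x in ps:
                p *= x
            return p
        q = 1.0                         # OR: 1 - product of survivals
        for x in ps:
            q *= (1.0 - x)
        return 1.0 - q

# Tiny hypothetical tree: top event occurs if the pump fails OR both
# redundant valves fail.
top = Gate("top", "OR", [
    Event("pump", 0.01),
    Gate("valves", "AND", [Event("v1", 0.1), Event("v2", 0.1)]),
])
top_prob = top.probability()   # 1 - 0.99 * (1 - 0.1 * 0.1) = 0.0199
```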

  17. Degree of Fault Tolerance as a Comprehensive Parameter for Reliability Evaluation of Fault Tolerant Electric Traction Drives

    Directory of Open Access Journals (Sweden)

    Igor Bolvashenkov

    2016-09-01

Full Text Available This paper describes a new approach to, and methodology for, quantitative assessment of the fault tolerance of an electric power drive consisting of a multi-phase traction electric motor and a multilevel electric inverter. It is suggested that such a traction drive be considered as a system with several degraded states. As a comprehensive parameter for evaluating fault tolerance, it is proposed to use the criterion of degree of fault tolerance. To validate the proposed method, the authors carried out research and obtained results from its practical application in evaluating the fault tolerance of the power train of an electric helicopter.

  18. Robustness evaluation of transactional audio watermarking systems

    Science.gov (United States)

    Neubauer, Christian; Steinebach, Martin; Siebenhaar, Frank; Pickel, Joerg

    2003-06-01

Distribution via the Internet is of increasing importance. Easy access, transmission and consumption of digitally represented music is very attractive to the consumer but has also led directly to a growing problem of illegal copying. To cope with this problem, watermarking is a promising concept, since it provides a useful mechanism to track illicit copies by persistently attaching property-rights information to the material. Especially for online music distribution, the use of so-called transaction watermarking, also denoted by the term bitstream watermarking, is beneficial, since it offers the opportunity to embed watermarks directly into perceptually encoded material without the need for full decompression/compression. Besides the concept of bitstream watermarking, former publications presented its complexity, audio quality and detection performance. These results are now extended by an assessment of the robustness of such schemes. The detection performance before and after applying selected attacks is presented for MPEG-1/2 Layer 3 (MP3) and MPEG-2/4 AAC bitstream watermarking, contrasted with the performance of PCM spread-spectrum watermarking.

  19. A Robust Interpretation of Teaching Evaluation Ratings

    Science.gov (United States)

    Bi, Henry H.

    2018-01-01

There are no absolute standards regarding what teaching evaluation ratings are satisfactory. It is also problematic to compare teaching evaluation ratings with the average or with a cutoff number to determine whether they are adequate. In this paper, we use average and standard deviation charts (X̄-S charts), which are based on the theory…
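The X̄-S chart computation can be sketched as follows (the ratings and subgroup size are hypothetical; A3 = 1.427 is the standard control-chart constant for subgroups of five):

```python
import statistics

# Hypothetical per-instructor subgroups of course ratings (n = 5 each).
subgroups = [
    [4.2, 4.5, 4.1, 4.4, 4.3],
    [3.9, 4.0, 4.2, 4.1, 3.8],
    [4.6, 4.4, 4.5, 4.7, 4.6],
]

xbars = [statistics.mean(g) for g in subgroups]
sds = [statistics.stdev(g) for g in subgroups]
xbarbar = statistics.mean(xbars)   # centre line of the X-bar chart
sbar = statistics.mean(sds)        # centre line of the S chart

A3 = 1.427                          # control-chart constant for n = 5
ucl = xbarbar + A3 * sbar           # upper control limit
lcl = xbarbar - A3 * sbar           # lower control limit

# A rating is "in control" (statistically unremarkable) if it falls
# between the limits, rather than being compared to a fixed cutoff.
in_control = [lcl <= x <= ucl for x in xbars]
```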

  20. A Benchmark Evaluation of Fault Tolerant Wind Turbine Control Concepts

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2015-01-01

As the world’s power supply depends to a larger and larger degree on wind turbines, it is increasingly important that these are as reliable and available as possible. Modern fault tolerant control (FTC) could play a substantial part in increasing the reliability of modern wind turbin...... accommodation is handled in software sensor and actuator blocks. This means that the wind turbine controller can continue operation as in the fault-free case. The other two evaluated solutions show some potential but probably need improvements before industrial application....

  1. Robust

    DEFF Research Database (Denmark)

    2017-01-01

‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November of 2017. Researchers and PhD Fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish...

  2. WAMCUT, a computer code for fault tree evaluation. Final report

    International Nuclear Information System (INIS)

    Erdmann, R.C.

    1978-06-01

    WAMCUT is a code in the WAM family which produces the minimum cut sets (MCS) for a given fault tree. The MCS are useful as they provide a qualitative evaluation of a system, as well as providing a means of determining the probability distribution function for the top of the tree. The program is very efficient and will produce all the MCS in a very short computer time span. 22 figures, 4 tables

  3. Fault Length Vs Fault Displacement Evaluation In The Case Of Cerro Prieto Pull-Apart Basin (Baja California, Mexico) Subsidence

    Science.gov (United States)

    Glowacka, E.; Sarychikhina, O.; Nava Pichardo, F. A.; Farfan, F.; Garcia Arthur, M. A.; Orozco, L.; Brassea, J.

    2013-05-01

The Cerro Prieto pull-apart basin is located in the southern part of the San Andreas fault system and is characterized by high seismicity, recent volcanism, tectonic deformation and hydrothermal activity (Lomnitz et al., 1970; Elders et al., 1984; Suárez-Vidal et al., 2008). Since production at the Cerro Prieto geothermal field started in 1973, a significant increase in subsidence has been observed (Glowacka and Nava, 1996; Glowacka et al., 1999), and a relation between fluid extraction rate and subsidence rate has been suggested (op. cit.). Analysis of existing deformation data (Glowacka et al., 1999, 2005; Sarychikhina, 2011) points to the fact that, although extraction changes influence the subsidence rate, the tectonic faults control the spatial extent of the observed subsidence. Tectonic faults act as water barriers in the direction perpendicular to the fault, and/or separate regions with different compaction; as a result, a significant part of the subsidence is released as vertical displacement on the ground surface along fault ruptures. These fault ruptures damage roads and irrigation canals and cause water leakage. Since 1996, a network of geotechnical instruments has operated in the Mexicali Valley for continuous recording of deformation phenomena. To date, the network (REDECVAM: Mexicali Valley Crustal Strain Measurement Array) includes two crackmeters and eight tiltmeters installed on, or very close to, the main faults; all instruments have sampling intervals in the 1 to 20 minute range. Additionally, there are benchmarks for measuring vertical fault displacements, for which readings are recorded every 3 months. Since the crackmeter measures vertical displacement on the fault at one place only, the question arises: can we use the crackmeter data to evaluate the length of the fractured fault, and how quickly it grows, so that we know where to expect fractures in the canals or roads? We used the Wells and Coppersmith (1994) relations between

  4. Robust observer-based fault diagnosis for nonlinear systems using Matlab

    CERN Document Server

    Zhang, Jian; Nguang, Sing Kiong

    2016-01-01

This book introduces several observer-based methods, including: • the sliding-mode observer • the adaptive observer • the unknown-input observer and • the descriptor observer method for the problem of fault detection, isolation and estimation, allowing readers to compare and contrast the different approaches. The authors present basic material on Lyapunov stability theory, H∞ control theory, sliding-mode control theory and linear matrix inequality problems in a self-contained and step-by-step manner. Detailed and rigorous mathematical proofs are provided for all the results developed in the text so that readers can quickly gain a good understanding of the material. MATLAB® and Simulink® codes for all the examples, which can be downloaded from http://extras.springer.com, enable students to follow the methods and illustrative examples easily. The systems used in the examples make the book highly relevant to real-world problems in industrial control engineering and include a seventh-order aircraft mod...

  5. Extreme temperature robust optical sensor designs and fault-tolerant signal processing

    Science.gov (United States)

    Riza, Nabeel Agha [Oviedo, FL; Perez, Frank [Tujunga, CA

    2012-01-17

Silicon Carbide (SiC) probe designs for extreme-temperature and pressure sensing use a single-crystal SiC optical chip encased in a sintered SiC material probe. The SiC chip may be protected for high-temperature-only use or exposed for both temperature and pressure sensing. Hybrid signal processing techniques allow fault-tolerant extreme-temperature sensing. Wavelength peak-to-peak (or null-to-null) collective spectrum spread measurement, combined with wavelength peak/null shift measurement, forms a coarse-fine temperature measurement using broadband spectrum monitoring. The SiC probe frontend acts as a stable-emissivity black-body radiator, and monitoring the shift in the radiation spectrum enables a pyrometer. This application combines all-SiC pyrometry with thick SiC etalon laser interferometry within a free spectral range to form a coarse-fine temperature measurement sensor. RF notch filtering techniques improve the sensitivity of the temperature measurement where fine spectral shift or spectrum measurements are needed to deduce temperature.

  6. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip☆

    Science.gov (United States)

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-01-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290

  7. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip.

    Science.gov (United States)

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-06-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.

  8. A methodology for the quantitative evaluation of NPP fault diagnostic systems' dynamic aspects

    International Nuclear Information System (INIS)

    Kim, J.H.; Seong, P.H.

    2000-01-01

A fault diagnostic system (FDS) is an operator decision support system implemented both to increase NPP efficiency and to reduce the human error and cognitive workload that may cause nuclear power plant (NPP) accidents. Evaluation is an indispensable activity in constructing a reliable FDS. We first define the dynamic aspects of fault diagnostic systems (FDSs) for evaluation in this work. The dynamic aspect is concerned with the way an FDS responds to input. Next, we present a hierarchical structure for the evaluation of the dynamic aspects of FDSs. Dynamic aspects include both what an FDS provides and how an FDS operates. We define the former as content and the latter as behavior. Content and behavior contain two elements and six elements in their lower hierarchies, respectively. Content covers the integrity of an FDS, the problem types the FDS deals with, and the level of information provided. Behavior contains the robustness, understandability, timeliness, transparency, effectiveness, and communicativeness of FDSs. The static aspects, on the other hand, are concerned with the hardware and the software of the system. For quantitative evaluation, the method used to obtain and aggregate the priorities of the criteria in this work is the analytic hierarchy process (AHP). The criteria at the lowest level are quantified through simple numerical expressions and questionnaires developed in this work; these well describe the characteristics of the criteria and appropriately use subjective, empirical, and technical methods. Finally, in order to demonstrate the feasibility of our evaluation method, we have performed a case study for the fault diagnosis module of OASYS™ (On-Line Operator Aid SYStem for Nuclear Power Plant), an operator support system developed at the Korea Advanced Institute of Science and Technology (KAIST)
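The AHP aggregation step can be sketched with the common geometric-mean approximation of the principal eigenvector (the pairwise comparison values below are hypothetical, and the paper may use the exact eigenvector method):

```python
import math

# Hypothetical pairwise comparison matrix for three behavior criteria
# (say robustness, timeliness, transparency) on Saaty's 1-9 scale:
# M[i][j] states how much more important criterion i is than j.
M = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

# Geometric mean of each row approximates the principal eigenvector;
# normalising gives the priority weights used to aggregate criteria.
gm = [math.prod(row) ** (1 / len(row)) for row in M]
weights = [g / sum(gm) for g in gm]
```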

  9. A Probabilistic Approach for Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

A probabilistic based robustness analysis has been performed for a glulam frame structure supporting the roof over the main court in a Norwegian sports centre. The robustness analysis is based on the framework for robustness analysis introduced in the Danish Code of Practice for the Safety of Structures and a probabilistic modelling of the timber material proposed in the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS). Due to the framework in the Danish Code, the timber structure has to be evaluated with respect to the following criteria, where at least one shall...... With respect to criteria a) and b), the timber frame structure has one column with a reliability index a bit lower than an assumed target level. By removing three columns one by one, no significant extensive failure of the entire structure or significant parts of it is obtained. Therefore the structure can be considered......

  10. Evaluation of Robust Estimators Applied to Fluorescence Assays

    Directory of Open Access Journals (Sweden)

    U. Ruotsalainen

    2007-12-01

Full Text Available We evaluated standard robust methods for the estimation of the fluorescence signal in novel assays used for determining biomolecule concentrations. The objective was to obtain an accurate and reliable estimate using as few observations as possible by decreasing the influence of outliers. We assumed the true signals to have a Gaussian distribution, while no assumptions about the outliers were made. The experimental results showed that the arithmetic mean performs poorly even with modest deviations. Furthermore, the robust methods, especially the M-estimators, performed extremely well. The results proved that the use of robust methods is advantageous in estimation problems where noise and deviations are significant, such as in biological and medical applications.
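A minimal sketch of why an M-estimator outperforms the arithmetic mean on such data (hypothetical fluorescence values; the Huber tuning constant k = 1.345 is the conventional choice for roughly 95% Gaussian efficiency):

```python
import statistics

def huber_estimate(data, k=1.345, tol=1e-6, max_iter=100):
    """Iteratively reweighted location estimate with Huber weights:
    observations within k * scale of the estimate get full weight,
    while outliers are down-weighted in proportion to their distance."""
    mu = statistics.median(data)
    # Robust scale from the median absolute deviation (MAD).
    scale = statistics.median([abs(x - mu) for x in data]) / 0.6745 or 1.0
    for _ in range(max_iter):
        w = [1.0 if abs(x - mu) <= k * scale else k * scale / abs(x - mu)
             for x in data]
        new_mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

# Fluorescence-like readings with one gross outlier (hypothetical values).
signal = [100.2, 99.8, 100.5, 99.9, 100.1, 150.0]
robust = huber_estimate(signal)   # stays near the bulk of the data
naive = statistics.mean(signal)   # pulled far off by the single outlier
```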

  11. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological surveys and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m of displacement per event; the latest event occurred from 14000 to 25000 yrs. BP. The seismic survey showed that the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary.

  12. Methods of evaluating segmentation characteristics and segmentation of major faults

    International Nuclear Information System (INIS)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok

    2000-03-01

Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological surveys and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m of displacement per event; the latest event occurred from 14000 to 25000 yrs. BP. The seismic survey showed that the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary

  13. A simulation training evaluation method for distribution network fault based on radar chart

    Directory of Open Access Journals (Sweden)

    Yuhang Xu

    2018-01-01

Full Text Available In order to solve the problem of automatically evaluating dispatcher fault simulation training in distribution networks, a simulation training evaluation method based on a radar chart is proposed for distribution network faults. A fault handling information matrix is established to record the dispatcher's fault handling operation sequence and operation information. The four situations of the dispatcher's fault isolation operation are analyzed. A fault handling anti-misoperation rule set is established to describe the rules prohibiting dispatcher operations. Based on the idea of artificial intelligence reasoning, the feasibility of dispatcher fault handling is described by a feasibility index. The relevant factors and evaluation methods are discussed from three aspects: the feasibility of the fault handling result, the correctness with respect to anti-misoperation rules, and the conciseness of the operation process; the detailed calculation formulas are given. Combining the independence of, and the correlation between, the three evaluation angles, a comprehensive evaluation method for distribution network fault simulation training based on a radar chart is proposed. The method can comprehensively reflect the fault handling process of dispatchers and evaluate it from various angles, which gives it good practical value.
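One common way to fuse several radar-chart axes into a single score is the normalised polygon area; a sketch under that assumption (the paper's exact aggregation formula may differ, and the scores below are hypothetical):

```python
import math

# Hypothetical scores (0..1) on the three evaluation axes described
# above: result feasibility, anti-misoperation correctness, and
# operation-process conciseness.
scores = [0.9, 0.8, 0.7]

# Composite score: area of the radar-chart polygon, normalised so that
# all-ones scores give 1.0. Axes are spaced evenly around the circle,
# and each adjacent pair of axes contributes a triangle of area
# 0.5 * s_i * s_{i+1} * sin(theta).
n = len(scores)
theta = 2 * math.pi / n
area = 0.5 * math.sin(theta) * sum(
    scores[i] * scores[(i + 1) % n] for i in range(n)
)
max_area = 0.5 * math.sin(theta) * n
composite = area / max_area
```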

  14. Performance Evaluation and Robustness Testing of Advanced Oscilloscope Triggering Schemes

    Directory of Open Access Journals (Sweden)

    Shakeb A. KHAN

    2010-01-01

Full Text Available In this paper, the performance and robustness of two advanced oscilloscope triggering schemes are evaluated. The problem of time-period measurement of complex waveforms can be solved using algorithms that utilize an associative-memory-network-based weighted Hamming distance (Whd) and autocorrelation-based techniques. The robustness of both advanced techniques is then evaluated by simulated addition of random noise of different levels to complex test-signal waveforms, and the minimum value of Whd (Whd min) and the peak value of the coefficient of correlation (COC max) are computed over 10000 cycles of the selected test waveforms. The distance between the mean of the second-lowest value of Whd and Whd min, and the distance between the second-highest value of the coefficient of correlation (COC) and COC max, are used as parameters to analyze the robustness of the considered techniques. From the results, it is found that both techniques are capable of producing trigger pulses efficiently, but the correlation-based technique is found to be better from the robustness point of view.

  15. Inelastic response evaluation of steel frame structure subjected to near-fault ground motions

    Energy Technology Data Exchange (ETDEWEB)

    Choi, In Kil; Kim, Hyung Kyu; Choun, Young Sun; Seo, Jeong Moon

    2004-04-01

A survey of some of the Quaternary fault segments near the Korean nuclear power plants is ongoing. It is likely that these faults will be identified as active ones. If the faults are confirmed as active, it will be necessary to reevaluate the seismic safety of nuclear power plants located near them. This study was performed to acquire overall knowledge of near-fault ground motions and to evaluate their inelastic response characteristics. Although the Korean peninsula is not located in a strong-earthquake region, it is necessary to evaluate the seismic safety of NPPs for earthquakes occurring in near-fault areas, whose characteristics differ from those of general far-fault earthquakes, in order to improve the seismic safety of existing NPP structures and equipment. For the seismic safety evaluation of NPP structures and equipment considering near-fault effects, this report therefore provides much valuable information. In order to improve the seismic safety of NPP structures and equipment against near-fault ground motions, it is necessary to consider the inelastic response characteristics of near-fault ground motions in the current design code. In Korea, where these studies are still immature, more work on near-fault earthquakes must be accomplished in the future.

  16. Robustness in NAA evaluated by the Youden and Steiner test

    International Nuclear Information System (INIS)

    Bedregal, P.; Torres, B.; Ubillus, M.; Mendoza, P.; Montoya, E.

    2008-01-01

The chemistry laboratory at the Peruvian Institute of Nuclear Energy (IPEN) has carried out the validation of a method for samples of siliceous composition. At least seven variables affecting the robustness of the results were initially identified, which may interact simultaneously or individually. Conventional evaluation of these would imply a massive number of analyses; a far more effective approach for assessing robustness against these effects was found in the Youden-Steiner test, which provides the necessary information with only eight analyses for each sample type. Three reference materials were used to evaluate the effects of variations in sample mass, irradiation duration, standard mass, neutron flux, decay time, counting time and counting distance. (author)
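The Youden-Steiner scheme is an eight-run, two-level design covering seven factors at once; each factor's effect is the difference between the mean result at its high and low settings. A sketch with hypothetical results in which only factor A (say, sample mass) matters:

```python
# Eight-run, seven-factor, two-level screening design (a 2^(7-4)
# fractional factorial of the kind used by the Youden-Steiner test).
# +1 / -1 encode the high / low setting of factors A..G in each run;
# the seven factors would correspond to sample mass, irradiation
# duration, standard mass, neutron flux, decay time, counting time
# and counting distance.
design = [
    [+1, +1, +1, +1, +1, +1, +1],
    [+1, +1, -1, +1, -1, -1, -1],
    [+1, -1, +1, -1, +1, -1, -1],
    [+1, -1, -1, -1, -1, +1, +1],
    [-1, +1, +1, -1, -1, +1, -1],
    [-1, +1, -1, -1, +1, -1, +1],
    [-1, -1, +1, +1, -1, -1, +1],
    [-1, -1, -1, +1, +1, +1, -1],
]

# Hypothetical measured concentrations for the eight runs: the first
# four runs (factor A high) read 0.2 above the last four (A low).
results = [10.2, 10.2, 10.2, 10.2, 10.0, 10.0, 10.0, 10.0]

# Effect of factor j = mean(high runs) - mean(low runs).
effects = [
    sum(r * row[j] for r, row in zip(results, design)) / 4
    for j in range(7)
]
largest = max(range(7), key=lambda j: abs(effects[j]))  # dominant factor
```

By the orthogonality of the design, the one influential factor shows a 0.2 effect while the other six cancel out exactly.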

  17. Evaluation of influence of splay fault growth on groundwater flow around geological disposal system

    International Nuclear Information System (INIS)

    Takai, Shizuka; Takeda, Seiji; Sakai, Ryutaro; Shimada, Taro; Munakata, Masahiro; Tanaka, Tadao

    2017-01-01

In geological disposal, the direct effect of active faults on geological repositories is avoided at the stage of site characterization; however, uncertainty remains in the avoidance of faults derived from active faults, which are concealed deep underground and are difficult to detect by site investigation. In this research, the influence of the growth of undetected splay faults on a natural barrier in a geological disposal system, due to the future action of faults, was evaluated. We investigated examples of splay faults in Japan and set conditions for the growth of splay faults. Furthermore, we assumed a disposal site composed of sedimentary rock and made a hydrogeological model of the growth of splay faults. We carried out groundwater flow analyses, changing parameters such as the location and depth of the repository and the growth velocity of the splay faults. The results indicate that the main flow path from the repository changes into an upward flow along the splay fault due to its growth, and that the average velocity to the ground surface becomes one or two orders of magnitude higher than before the growth. The results also suggest that splay fault growth leads to the possibility of downward flow of oxidizing groundwater from the ground surface area. (author)

  18. Quantitative evaluation of the fault tolerance of systems important to the safety of atomic power plants

    International Nuclear Information System (INIS)

    Malkin, S.D.; Sivokon, V.P.; Shmatkova, L.V.

    1989-01-01

Fault tolerance is the property of a system to preserve its performance upon failures of its components. In nuclear-reactor technology, however, one has only a qualitative evaluation of fault tolerance, the single-failure criterion, which does not enable one to compare fault-tolerant systems or design them in a goal-directed way; in the field of computer technology, there are no generally accepted evaluations of fault tolerance that could be applied effectively to reactor systems. This paper considers alternative evaluations of fault tolerance and a method for comprehensive automated calculation of the reliability and fault tolerance of complex systems. The authors present quantitative estimates of fault tolerance that develop the single-failure criterion; these have limiting processes that allow simple and graphical standardization. The authors worked out a method and a program for comprehensive calculation of the reliability and fault tolerance of systems of complex structure that are important to the safety of atomic power plants. The quantitative evaluation of the fault tolerance of these systems exhibits their degree of insensitivity to failures and shows to what extent their reliability is determined by a rigorously defined structure, and to what extent by the probabilistic reliability characteristics of the components. To increase safety, one must increase the fault tolerance of the most important systems of atomic power plants

  19. 14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Fuel Tank System Fault Tolerance Evaluation..., SFAR No. 88 Special Federal Aviation Regulation No. 88—Fuel Tank System Fault Tolerance Evaluation... certificates that may affect the airplane fuel tank system, for turbine-powered transport category airplanes...

  20. Robust fault detection and isolation technique for single-input/single-output closed-loop control systems that exhibit actuator and sensor faults

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Alavi, S. M. Mahdi; Hayes, M. J.

    2008-01-01

    An integrated quantitative feedback design and frequency-based fault detection and isolation (FDI) approach is presented for single-input/single-output systems. A novel design methodology, based on shaping the system frequency response, is proposed to generate an appropriate residual signal...

  1. Robust ray-tracing algorithms for interactive dose rate evaluation

    International Nuclear Information System (INIS)

    Perrotte, L.

    2011-01-01

More than ever, it is essential today to develop simulation tools to rapidly evaluate the dose rate received by operators working on nuclear sites. In order to easily study numerous different intervention scenarios, the computation times of the available software all have to be lowered. This mainly implies accelerating the geometrical computations needed for the dose rate evaluation. These computations consist in finding and sorting the whole list of intersections between a big 3D scene and multiple groups of 'radiative' rays meeting at the point where the dose has to be measured. In order to perform all these computations in less than a second, we first propose a GPU algorithm that enables the efficient management of one big group of coherent rays. Then we present a modification of this algorithm that guarantees the robustness of the ray-triangle intersection tests through the elimination of the precision issues due to floating-point arithmetic. This modification does not require the definition of scene-dependent coefficients ('epsilon' style) and only implies a small loss of performance (less than 10%). Finally we propose an efficient strategy to handle multiple ray groups (corresponding to multiple radiative objects) which uses the previous results. Thanks to these improvements, we are able to perform an interactive and robust dose rate evaluation on big 3D scenes: all of the intersections (more than 13 million) between 700 000 triangles and 12 groups of 100 000 rays each are found, sorted along each ray and transferred to the CPU in 470 milliseconds. (author) [fr
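For reference, the standard Möller-Trumbore ray-triangle test looks as follows (a plain textbook version, not the thesis's robust formulation; note that it still needs a small fixed guard against near-zero determinants, which is exactly the kind of tolerance the thesis eliminates):

```python
def ray_triangle(orig, d, v0, v1, v2):
    """Moller-Trumbore intersection: returns the distance t along the
    ray from orig in direction d, or None if there is no hit."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)   # triangle edge vectors
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < 1e-12:                # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) * inv                 # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv                # signed distance along the ray
    return t if t > 0.0 else None

# A ray along +z hits a triangle in the z = 0 plane one unit away;
# the same ray pointed the other way misses.
hit = ray_triangle((0, 0, -1), (0, 0, 1), (-1, -1, 0), (1, -1, 0), (0, 1, 0))
miss = ray_triangle((0, 0, -1), (0, 0, -1), (-1, -1, 0), (1, -1, 0), (0, 1, 0))
```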

  2. Evaluation of Wind Farm Controller based Fault Detection and Isolation

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Shafiei, Seyed Ehsan

    2015-01-01

In the process of lowering the cost of energy of power generated by wind turbines, some focus has been drawn towards fault detection and isolation as well as fault-tolerant control of wind turbines, with the purpose of increasing their reliability and availability. Most modern wind...... detection and isolation and fault tolerant control has previously been proposed. Based on this model, an international competition on wind farm FDI was organized. The contributions were presented at the IFAC World Congress 2014. In this paper the top three contributions to this competition are shortly......

  3. Robust fault detection for the dynamics of high-speed train with multi-source finite frequency interference.

    Science.gov (United States)

    Bai, Weiqi; Dong, Hairong; Yao, Xiuming; Ning, Bin

    2018-04-01

    This paper proposes a composite fault detection scheme for the dynamics of the high-speed train (HST), using an unknown input observer-like (UIO-like) fault detection filter, in the presence of wind gusts and operating noises, which are modeled as a disturbance generated by an exogenous system and an unknown multi-source disturbance within a finite frequency domain. Using system input and system output measurements, the fault detection filter is designed to generate the needed residual signals. In order to decouple the disturbance from the residual signals without truncating the influence of faults, this paper proposes a method to partition the disturbance into two parts. One subset of the disturbance does not appear in the residual dynamics, and the influence of the other subset is constrained by an H ∞ performance index in a finite frequency domain. A set of detection subspaces are defined, and every different fault is assigned to its own detection subspace to guarantee that the residual signals are diagonally and promptly affected by the faults. Simulations are conducted to demonstrate the effectiveness and merits of the proposed method.

  4. Evaluation of digital fault-tolerant architectures for nuclear power plant control systems

    International Nuclear Information System (INIS)

    Battle, R.E.

    1990-01-01

    Four fault-tolerant architectures were evaluated for their potential reliability in service as control systems of nuclear power plants. The reliability analyses showed that human- and software-related common cause failures and single points of failure in the output modules are the dominant contributors to system unreliability. The four architectures are triple-modular-redundant (TMR), both synchronous and asynchronous, and dual, also synchronous and asynchronous. The evaluation includes a review of design features, an analysis of the importance of coverage, and reliability analyses of fault-tolerant systems. An advantage of fault-tolerant controllers over non-fault-tolerant ones is that they continue to function after the occurrence of most single hardware faults. However, most fault-tolerant controllers have single hardware components whose failure will cause system failure, almost all controllers have single points of failure in software, and all are subject to common cause failures. Reliability analyses based on data from several industries that use fault-tolerant controllers were used to estimate the mean time between failures of fault-tolerant controllers and to predict those failure modes that may be important in nuclear power plants. 7 refs., 4 tabs

  5. A knowledge-based approach to the evaluation of fault trees

    International Nuclear Information System (INIS)

    Hwang, Yann-Jong; Chow, Louis R.; Huang, Henry C.

    1996-01-01

    A list of critical components is useful for determining the potential problems of a complex system. However, finding this list by evaluating fault trees is expensive and time consuming. This paper proposes an integrated software program which consists of a fault tree constructor, a knowledge base, and an efficient algorithm for evaluating the minimal cut sets of a large fault tree. The proposed algorithm uses top-down heuristic searching and probability-based truncation. This makes the evaluation of fault trees markedly more efficient and yields the critical components needed to address potential problems in complex systems. Finally, some practical fault trees are included to illustrate the results

  6. Performance Evaluation of Cloud Service Considering Fault Recovery

    Science.gov (United States)

    Yang, Bo; Tan, Feng; Dai, Yuan-Shun; Guo, Suchang

    In cloud computing, cloud service performance is an important issue. To improve cloud service reliability, fault recovery may be used. However, the use of fault recovery could have impact on the performance of cloud service. In this paper, we conduct a preliminary study on this issue. Cloud service performance is quantified by service response time, whose probability density function as well as the mean is derived.
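The abstract above derives the response-time distribution analytically; the same quantity can also be approximated by simulation. Below is a minimal Monte Carlo sketch under assumptions the abstract does not state: exponential service times, a fixed per-request fault probability, and an exponential recovery delay added when a fault occurs. All parameter values are hypothetical.

```python
import random

def simulate_response_times(n, mu=2.0, p_fault=0.1, recovery_mean=1.5, seed=42):
    """Monte Carlo sketch: exponential service time, plus an exponential
    recovery delay added whenever a fault occurs (probability p_fault)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        t = rng.expovariate(mu)                        # nominal service time
        if rng.random() < p_fault:                     # fault during service
            t += rng.expovariate(1.0 / recovery_mean)  # recovery delay
        samples.append(t)
    return samples

times = simulate_response_times(200_000)
empirical_mean = sum(times) / len(times)
analytic_mean = 1 / 2.0 + 0.1 * 1.5   # E[T] = 1/mu + p_fault * E[recovery]
print(round(empirical_mean, 2), round(analytic_mean, 2))
```

Under these assumptions the mean response time decomposes as E[T] = 1/μ + p·E[recovery], which the simulation reproduces; the paper's actual model may differ.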

  7. On-Line Fault Detection in Wind Turbine Transmission System using Adaptive Filter and Robust Statistical Features

    Directory of Open Access Journals (Sweden)

    Mark Frogley

    2013-01-01

    To reduce maintenance cost, avoid catastrophic failure, and improve wind transmission system reliability, an online condition monitoring system is critically important. In real applications, many rotating mechanical faults, such as bearing surface defects, gear tooth cracks, chipped gear teeth and so on, generate impulsive signals. When these types of faults develop inside rotating machinery, an impact force is generated each time the rotating components pass over the damage point. The impact force causes a ringing of the support structure at its structural natural frequency. By effectively detecting those periodic impulse signals, this group of rotating machine faults can be detected and diagnosed. However, in real wind turbine operation, impulsive fault signals are usually weak relative to the background noise and the vibration signals generated by other healthy components, such as the shaft, blades, gears and so on. Moreover, wind turbine transmission systems work under dynamic operating conditions, which further increases the difficulty of fault detection and diagnostics. Therefore, advanced signal processing methods that enhance the impulsive signals are greatly needed. In this paper, an adaptive filtering technique is applied to enhance the signal-to-noise ratio of fault impulses in wind turbine gear transmission systems. Multiple statistical features designed to quantify the impulsive content of the processed signal are extracted for bearing fault detection. The multi-dimensional features are then transformed into a one-dimensional feature. A minimum-error-rate classifier is designed based on the compressed feature to identify gear transmission systems with defects. Real wind turbine vibration signals are used to demonstrate the effectiveness of the presented methodology.
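A minimal sketch of the feature-extraction step described above. The paper does not list its statistical features, so the two below, kurtosis and crest factor, are common choices in this literature and are assumptions here; both respond strongly to periodic impulses buried in Gaussian noise.

```python
import numpy as np

def impulse_features(x):
    """Features that respond to impulsive content: kurtosis and crest factor."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    rms = np.sqrt(np.mean(x**2))
    kurtosis = np.mean(x**4) / np.mean(x**2)**2   # ~3 for pure Gaussian noise
    crest = np.max(np.abs(x)) / rms
    return kurtosis, crest

rng = np.random.default_rng(0)
noise = rng.normal(size=10_000)
faulty = noise.copy()
faulty[::500] += 8.0   # periodic impacts every 500 samples (hypothetical fault)

k_ok, _ = impulse_features(noise)
k_bad, _ = impulse_features(faulty)
print(round(k_ok, 1), round(k_bad, 1))   # kurtosis rises sharply with impulses
```

In practice these features would be computed after the adaptive filtering stage, on the enhanced signal rather than the raw one.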

  8. Risk evaluation method for faults by engineering approach. (1) Nuclear safety for accident scenario and measures for fault movement

    International Nuclear Information System (INIS)

    Narabayashi, Tadashi; Chiba, Go; Okamoto, Koji; Kameda, Hiroyuki; Ebisawa, Katsumi; Yamazaki, Haruo; Konagai, Kazuo; Kamiya, Masanobu; Nagasawa, Kazuyuki

    2016-01-01

    Japan, as a country with frequent earthquakes, has a responsibility to establish efficient measures to enhance nuclear safety, in order to continue utilizing nuclear power, based on risks and importance levels in a scientific and rational manner. This paper describes how to evaluate the risk of fault movement by an engineering approach. An open, fruitful discussion was held by experts in the various areas of earthquake science, geology, geotechnical and civil engineering, and seismic design, as well as other stakeholders such as academic professors, nuclear reactor engineers, regulators, and licensees. The Atomic Energy Society established an Investigation Committee on Development of Activity and Risk Evaluation Method for Faults by Engineering Approach (IC-DAREFEA) on October 1st, 2014. The Investigation Committee applies the most advanced scientific and rational judgement, with continuous discussion and effort in the global field, in order to collect and organize this knowledge and reflect global standards and nuclear regulations, such as risk evaluation methods for fault movements and the prevention of severe accidents, based on databases accumulated worldwide, including from the Chuetsuoki Earthquake, North Nagano Earthquake and Kumamoto Earthquake. (author)

  9. A robust detector for rolling element bearing condition monitoring based on the modulation signal bispectrum and its performance evaluation against the Kurtogram

    Science.gov (United States)

    Tian, Xiange; Xi Gu, James; Rehab, Ibrahim; Abdalla, Gaballa M.; Gu, Fengshou; Ball, A. D.

    2018-02-01

    Envelope analysis is a widely used method for rolling element bearing fault detection. To obtain high detection accuracy, it is critical to determine an optimal frequency narrowband for the envelope demodulation. However, many of the schemes which are used for the narrowband selection, such as the Kurtogram, can produce poor detection results because they are sensitive to random noise and aperiodic impulses which normally occur in practical applications. To achieve the purposes of denoising and frequency band optimisation, this paper proposes a novel modulation signal bispectrum (MSB) based robust detector for bearing fault detection. Because of its inherent noise suppression capability, the MSB allows effective suppression of both stationary random noise and discrete aperiodic noise. The high magnitude features that result from the use of the MSB also enhance the modulation effects of a bearing fault and can be used to provide optimal frequency bands for fault detection. The Kurtogram is generally accepted as a powerful means of selecting the most appropriate frequency band for envelope analysis, and as such it has been used as the benchmark comparator for performance evaluation in this paper. Both simulated and experimental data analysis results show that the proposed method produces more accurate and robust detection results than Kurtogram based approaches for common bearing faults under a range of representative scenarios.
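Envelope analysis, the baseline technique this abstract builds on, can be sketched in a few lines. The example below is a generic illustration, not the paper's MSB detector: it demodulates a synthetic bearing-style signal with the Hilbert transform and reads the fault frequency off the envelope spectrum. All signal parameters (sampling rate, fault and resonance frequencies) are invented.

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000                        # sampling rate, Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)
f_fault, f_res = 100.0, 3_000.0    # assumed fault and resonance frequencies

# Each impact rings the structure at the resonance; impacts repeat at f_fault.
modulation = (np.cos(2 * np.pi * f_fault * t) > 0.95).astype(float)
signal = (modulation * np.sin(2 * np.pi * f_res * t)
          + 0.1 * np.random.default_rng(1).normal(size=t.size))

envelope = np.abs(hilbert(signal))             # demodulate the resonance band
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

peak = freqs[np.argmax(spectrum)]
print(peak)   # strongest envelope line sits at the fault repetition frequency
```

The Kurtogram's job, in this framing, is to pick the best band to filter before the Hilbert step; the MSB detector of the paper replaces that band selection with a bispectral criterion.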

  10. Robustness Evaluation of Timber Structures with Ductile Behaviour

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Cizmar, D.

    2009-01-01

    Robustness of structural systems has received a renewed interest resulting from the more frequent use of advanced types of structures with limited redundancy and serious consequences in the case of failure....

  11. On the Generation of a Robust Residual for Closed-loop Control systems that Exhibit Sensor Faults

    DEFF Research Database (Denmark)

    Alavi, Seyed Mohammad Mahdi; Izadi-Zamanabadi, Roozbeh; Hayes, Martin J.

    2007-01-01

    This paper presents a novel design methodology, based on shaping the system frequency response, for the generation of an appropriate residual signal that is sensitive to sensor faults in the presence of model uncertainty and exogenous unknown (unmeasured) disturbances. An integrated feedback cont...

  12. ANCON: A code for the evaluation of complex fault trees in personal computers

    International Nuclear Information System (INIS)

    Napoles, J.G.; Salomon, J.; Rivero, J.

    1990-01-01

    Performing probabilistic safety analysis has been recognized worldwide as one of the most effective ways of further enhancing the safety of nuclear power plants. The evaluation of fault trees plays a fundamental role in these analyses. Existing limitations in the RAM and execution speed of personal computers (PCs) have so far restricted their use in the analysis of complex fault trees. Starting from new approaches to the data structure, among other possibilities, the ANCON code can evaluate complex fault trees on a PC, allowing the user to perform a more comprehensive analysis of the considered system in reduced computing time
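The core step in evaluating a fault tree of this kind is generating its minimal cut sets. ANCON's internal data structures are not described in the abstract, so the following is a generic top-down (MOCUS-style) expansion sketch on a toy tree; gate names and events are invented.

```python
def minimal_cut_sets(gates, top):
    """Top-down (MOCUS-style) expansion of a fault tree into minimal cut sets.
    `gates` maps a gate name to ('AND'|'OR', [inputs]); any other name is a
    basic event."""
    cut_sets = [frozenset([top])]
    expanded = True
    while expanded:
        expanded = False
        next_sets = []
        for cs in cut_sets:
            gate = next((g for g in cs if g in gates), None)
            if gate is None:              # only basic events left
                next_sets.append(cs)
                continue
            expanded = True
            op, inputs = gates[gate]
            rest = cs - {gate}
            if op == 'AND':               # all inputs join the same cut set
                next_sets.append(rest | set(inputs))
            else:                         # OR: one new cut set per input
                next_sets.extend(rest | {i} for i in inputs)
        cut_sets = next_sets
    # minimize: drop any cut set that strictly contains another
    return {cs for cs in cut_sets if not any(o < cs for o in cut_sets)}

# Hypothetical tree: TOP fails if G1 fails or B fails; G1 needs A and G2; ...
gates = {
    'TOP': ('OR',  ['G1', 'B']),
    'G1':  ('AND', ['A', 'G2']),
    'G2':  ('OR',  ['B', 'C']),
}
mcs = minimal_cut_sets(gates, 'TOP')
print(sorted(sorted(s) for s in mcs))   # → [['A', 'C'], ['B']]
```

Probability-based truncation, as used by the proposed algorithm, would additionally discard partial cut sets whose probability falls below a threshold during the expansion.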

  13. PL-MOD: a computer code for modular fault tree analysis and evaluation

    International Nuclear Information System (INIS)

    Olmos, J.; Wolf, L.

    1978-01-01

    The computer code PL-MOD has been developed to implement the modular methodology for fault tree analysis. In the modular approach, fault tree structures are characterized by recursively relating the top tree event to all basic event inputs through a set of equations, each defining an independent modular event for the tree. The advantages of tree modularization lie in that it is a more compact representation than the minimal cut-set description and that it is well suited to fault tree quantification because of its recursive form. In its present version, PL-MOD modularizes fault trees and evaluates top and intermediate event failure probabilities, as well as basic component and modular event importance measures, in a very efficient way. Its execution time for the modularization and quantification of a reduced fault tree for a PWR High Pressure Injection System was thus 25 times faster than that necessary to generate the equivalent minimal cut-set description using the computer code MOCUS
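Once a fault tree has been reduced to minimal cut sets (or modules), the top event probability can be quantified. The sketch below is generic textbook quantification under the standard assumption of independent basic events, not PL-MOD's recursive modular scheme; it computes the exact probability by inclusion-exclusion and contrasts it with the rare-event approximation. Event names and probabilities are invented.

```python
from itertools import combinations

def cut_set_prob(cs, p):
    """Probability that every basic event in one cut set occurs (independence)."""
    prob = 1.0
    for e in cs:
        prob *= p[e]
    return prob

def top_event_prob(cut_sets, p):
    """Exact top-event probability via inclusion-exclusion over the cut sets.
    Practical only for a handful of cut sets (2^n terms)."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(cut_sets, k):
            union = frozenset().union(*combo)
            total += sign * cut_set_prob(union, p)
    return total

p = {'A': 0.01, 'B': 0.02, 'C': 0.05}
cut_sets = [frozenset({'B'}), frozenset({'A', 'C'})]
exact = top_event_prob(cut_sets, p)
rare = sum(cut_set_prob(cs, p) for cs in cut_sets)  # rare-event approximation
print(exact, rare)   # the approximation slightly overestimates the exact value
```

Modular evaluation, as in PL-MOD, avoids the combinatorial blow-up by quantifying each independent module once and propagating its probability up the tree.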

  14. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    Science.gov (United States)

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed, for both SISO and MIMO, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but results in a tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance.

  15. CPN based fault-tolerance performance evaluation of fieldbus for KNGR NPCS network

    International Nuclear Information System (INIS)

    Jung, Hyun Gi; Seong, Poong Hyun

    1998-01-01

    In contrast with conventional Fieldbus research, which focuses on real-time performance while ignoring fault-tolerance mechanisms, the aim of this work is the real-time performance evaluation of the system including faults. Because the communication network will be applied to a next-generation NPP, maintaining performance in the presence of recoverable faults is important. To guarantee this in the NPP control network, we should investigate the time characteristics of the target system in the case of recoverable faults. If the time characteristics meet the requirements of the system, the faults will be recovered by the Fieldbus recovery mechanisms and the system will be safe. But if the time characteristics cannot meet the requirements, faults in the Fieldbus can propagate to a system failure. For this purpose, we classified the recoverable faults, derived formulas that represent the delays including the recovery mechanisms, and built a simulation model. We applied the simulation model to the KNGR NPCS under some assumptions. The outcome of the simulation is the realistic delays for the classified fault cases. From the outcome of the simulation and the system requirements, we can calculate the failure propagation probability from the Fieldbus to the outer system

  16. Fault evaluation and adaptive threshold detection of helicopter pilot ...

    African Journals Online (AJOL)

    Hitherto, in the field of aerospace science and industry, some acceptable results from control behavior of human operator (pilot), are caught using usual methods. However, very fewer research, has been done based on personal characteristics. The performed investigations, show that many of happened faults (especially in ...

  17. Evaluating the movement of active faults on buried pipelines | Parish ...

    African Journals Online (AJOL)

    During an earthquake, a buried pipeline may experience extreme loading as a result of relatively large displacements of the earth along the pipe. Large ground movements can occur through faulting, liquefaction, lateral spreading, landslides, and slope failures. Since the pipelines are widely spread, and in ...

  18. Design & Evaluation of a Protection Algorithm for a Wind Turbine Generator based on the fault-generated Symmetrical Components

    DEFF Research Database (Denmark)

    Zheng, T. Y.; Cha, Seung-Tae; Lee, B. E.

    2011-01-01

    A protection relay for a wind turbine generator (WTG) based on the fault-generated symmetrical components is proposed in the paper. At stage 1, the relay uses the magnitude of the positive-sequence component in the fault current to distinguish faults on a parallel WTG, connected to the same feeder......, or on an adjacent feeder from those on the connected feeder, on the collection bus, at an inter-tie or at a grid. For the former faults, the relay should remain stable and inoperative whilst the instantaneous or delayed tripping is required for the latter faults. At stage 2, the fault type is first evaluated using...... the relationships of the fault-generated symmetrical components. Then, the magnitude of the positive-sequence component in the fault current is used again to decide on either instantaneous or delayed operation. The operating performance of the relay is then verified using various fault scenarios modelled using...

  19. Fault Severity Evaluation and Improvement Design for Mechanical Systems Using the Fault Injection Technique and Gini Concordance Measure

    Directory of Open Access Journals (Sweden)

    Jianing Wu

    2014-01-01

    A new fault injection and Gini concordance based method has been developed for fault severity analysis of multibody mechanical systems with respect to their dynamic properties. Fault tree analysis (FTA) is employed to roughly identify the faults that need to be considered. According to the constitution of the mechanical system, the dynamic properties can be obtained by solving equations that include many types of faults, which are injected using the fault injection technique. The Gini concordance is then used to measure the correspondence between the performance with faults and that under normal operation, thereby providing a useful severity ranking of subsystems for reliability design. One numerical example and a series of experiments are provided to illustrate the application of the new method. The results indicate that the proposed method can accurately model the faults and recover correct fault-severity information. Some strategies are also proposed for reliability improvement of the spacecraft solar array.

  20. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    Science.gov (United States)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full-suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  1. Evaluation of digital fault-tolerant architectures for nuclear power plant control systems

    International Nuclear Information System (INIS)

    Battle, R.E.

    1990-01-01

    This paper reports on four fault-tolerant architectures that were evaluated for their potential reliability in service as control systems of nuclear power plants. The reliability analyses showed that human- and software-related common cause failures and single points of failure in the output modules are dominant contributors to system unreliability. The four architectures are triple-modular-redundant, both synchronous and asynchronous, and also dual synchronous and asynchronous. The evaluation includes a review of design features, an analysis of the importance of coverage, and reliability analyses of fault-tolerant systems. Reliability analyses based on data from several industries that have fault-tolerant controllers were used to estimate the mean-time-between-failures of fault-tolerant controllers and to predict those failure modes that may be important in nuclear power plants

  2. Fault-tolerance performance evaluation of fieldbus for NPCS network of KNGR

    International Nuclear Information System (INIS)

    Jung, Hyun Gi

    1999-02-01

    In contrast with conventional fieldbus research, which focuses merely on real-time performance, this study aims to evaluate the real-time performance of the communication system including its fault-tolerance mechanisms. Maintaining performance in the presence of recoverable faults is very important because the communication network will be applied to a next-generation NPP (Nuclear Power Plant). In order to guarantee the performance of the NPP communication network, the time characteristics of the target system in the presence of recoverable faults should be investigated. If the time characteristics meet the requirements of the system, the faults will be recovered by the fieldbus recovery mechanisms and the system will be safe. If the time characteristics cannot meet the requirements, faults in the fieldbus can propagate to a system failure. In this study, for the purpose of investigating the time characteristics of the fieldbus, the recoverable faults are classified, and then formulas which represent the delays including the recovery mechanisms, together with a simulation model, are developed. In order to validate the proposed approach, the simulation model is applied to the Korea Next Generation Reactor (KNGR) NSSS Process Control System (NPCS). The results of the simulation provide reasonable delay characteristics for the fault cases with recovery mechanisms. Using the outcome of the simulation and the system requirements, we can also calculate the failure propagation probability from the fieldbus to the outer system

  3. Investigation and evaluation of some prospected fault activities in Western Damascus

    International Nuclear Information System (INIS)

    Abdul-Wahed, M. Kh.; Al-Hilal, M.; Al-Ali, A.; Al-Najjar, H.

    2010-08-01

    The Atomic Energy Commission of Syria is interested in conducting research on the possibility of mitigating seismic hazards, especially in certain areas close to the Dead Sea Fault System (DSFS) in western Damascus. Recent data obtained from drilled wells at the Dobaya and Sojja sites have shown preliminary indications of probable subsurface faults in the concerned area. Radon measurements in soil gas and water, accompanied by seismic data, are recognized as effective methods for providing valuable information for determining the locations of some seismogenic faults and evaluating their activity. This research aims at the mitigation of natural hazards such as earthquakes, which may occur along some active branches of the Dead Sea Fault System in the area, by using the radon monitoring technique and seismic data, in order to face such disasters, which affect not only humans but also national economies (Author)

  4. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    Science.gov (United States)

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short circuit fault. Previous works in this area have suffered from the uncertainties of the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase, the values of G and Lq. For this reason, two open-loop observers and an optimization method based on a particle-swarm are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq besides exhibiting robustness against parameter uncertainties.

  5. Research on evaluation of degree of complexity of mining fault network based on GIS

    Energy Technology Data Exchange (ETDEWEB)

    Hua Zhang; Yun-jia Wang; Chuan-zhi Liu [China University of Mining and Technology, Jiangsu (China). School of Environment Science and Spatial Informatics

    2007-03-15

    A large number of spatial and attribute data are involved in coal resource evaluation. Databases are a relatively advanced data management technology, but their major defect is poor handling of graphic and spatial data, which makes it difficult to realize scientific management of evaluation data with spatial characteristics and of evaluation result maps. On account of these deficiencies, an evaluation of the degree of complexity of a mining fault network based on a geographic information system (GIS), which integrates the management of spatial and attribute data, is proposed. A fractal dimension is an index which can reflect comprehensive information on the faults' number, density, size, composition and dynamics mechanism, and it is used as the quantitative evaluation index. Evaluation software has been developed based on a GIS component (MapX), with which the degree of complexity of the fault network is evaluated quantitatively using fractal dimensions, taking the Liuqiao No.2 coal mine as an example. Results show that the application of GIS technology is effective in acquiring model parameters and enhancing the clarity of data and evaluation results. The fault network is a system with a fractal structure, and its complexity can be described reasonably and accurately by the fractal dimension, which provides an effective method for coal resource evaluation. 9 refs., 6 figs., 2 tabs.
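The box-counting method is one common way to estimate the fractal dimension used here as the complexity index; the abstract does not specify the estimation procedure, so the following is a generic sketch. It is sanity-checked on a synthetic straight "fault trace", whose dimension should come out close to 1.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension of a 2-D point set (e.g. digitized fault
    traces) by counting occupied grid boxes at several box sizes and fitting
    the slope of log(count) against log(1/size)."""
    points = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        # assign each point to a grid box of side s; count distinct boxes
        boxes = {tuple(b) for b in np.floor(points / s).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a synthetic fault trace: a straight line has dimension ~1
t = np.linspace(0, 1, 20_000)
line = np.column_stack([t, 0.3 + 0.4 * t])
sizes = [1/8, 1/16, 1/32, 1/64, 1/128]
d = box_counting_dimension(line, sizes)
print(round(d, 2))   # close to 1 for a line; a dense fault network scores higher
```

For a real fault map, the points would be the digitized fault polylines exported from the GIS layer, and a higher dimension indicates a denser, more intricate fault network.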

  6. An evaluation method of fault-tolerance for digital plant protection system in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Jun Seok; Kim, Man Cheol; Seong, Poong Hyun; Kang, Hyun Gook; Jang, Seung Cheol

    2005-01-01

    In recent years, analog-based nuclear power plant (NPP) safety-related instrumentation and control (I and C) systems have been replaced by modern digital I and C systems. NPP safety-related I and C systems require very high design reliability compared to conventional digital systems, so reliability assessment is very important. In the reliability assessment of a digital system, fault tolerance evaluation is one of the crucial factors. However, the evaluation is very difficult because digital systems in NPPs are very complex. In this paper, a simulation-based fault injection technique on a simplified processor is used to evaluate the fault tolerance of the digital plant protection system (DPPS) with high efficiency and at low cost
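Simulation-based fault injection of the kind described can be illustrated on a toy target: inject single bit flips into one channel of a duplicated computation and measure the fraction of injected faults that the comparison mechanism detects. This is a schematic illustration of the measurement loop only, not the paper's simplified processor model; the target function and parameters are invented.

```python
import random

def protected_add(a, b, flipped_bit=None):
    """Toy fault-tolerant target: a 16-bit add with a redundant recomputation
    and compare. A single bit flip is optionally injected into one channel."""
    r1 = (a + b) & 0xFFFF
    r2 = (a + b) & 0xFFFF          # redundant channel
    if flipped_bit is not None:
        r1 ^= 1 << flipped_bit     # inject the fault into the first channel
    detected = (r1 != r2)          # the comparator flags any mismatch
    return r1, detected

rng = random.Random(7)
trials = 1000
detected = sum(
    protected_add(rng.randrange(1 << 16), rng.randrange(1 << 16),
                  flipped_bit=rng.randrange(16))[1]
    for _ in range(trials)
)
coverage = detected / trials
print(coverage)   # duplicate-and-compare catches every single-bit flip here
```

In a real evaluation campaign the injection targets are registers and memory cells of the simulated processor, and the coverage figure feeds directly into the reliability model of the DPPS.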

  7. Fault diagnosis and performance evaluation for high current LIA based on radial basis function neural network

    International Nuclear Information System (INIS)

    Yang Xinglin; Wang Huacen; Chen Nan; Dai Wenhua; Li Jin

    2006-01-01

    A high current linear induction accelerator (LIA) is a complicated experimental physics device whose performance is difficult to evaluate and predict. This paper presents a method which combines the wavelet packet transform with a radial basis function (RBF) neural network to build fault diagnosis and performance evaluation, in order to improve the reliability of high current LIAs. The signal characteristic vectors, extracted from the energy parameters of the wavelet packet transform, represent the temporal and steady features of the pulsed power signal well and effectively reduce the data dimensions. The fault diagnosis system for the accelerating cell and the trend classification system for the beam current, both based on RBF networks, can perform fault diagnosis and evaluation and provide predictive information for the precise maintenance of a high current LIA. (authors)

  8. Evaluation of Transition Untestable Faults Using a Multi-Cycle Capture Test Generation Method

    OpenAIRE

    Yoshimura, Masayoshi; Ogawa, Hiroshi; Hosokawa, Toshinori; Yamazaki, Koji

    2010-01-01

    Overtesting induces unnecessary yield loss. Untestable faults have no effect on normal functions of circuits. However, in scan testing, untestable faults may be detected through scan chains. Detected untestable faults cause overtesting. Untestable faults consist of uncontrollable faults, unobservable faults, and uncontrollable and unobservable faults. Uncontrollable faults may be detected under invalid states through scan chains by shift-in operations. Unobservable faults cannot be observed ...

  9. Evaluation of the potential for surface faulting at TA-63. Final report

    International Nuclear Information System (INIS)

    Kolbe, T.; Sawyer, J.; Springer, J.; Olig, S.; Hemphill-Haley, M.; Wong, I.; Reneau, S.

    1995-01-01

    This report describes an investigation of the potential for surface faulting at the proposed sites for the Radioactive Liquid Waste Treatment Facility (RLWTF) and the Hazardous Waste Treatment Facility at TA-63 and TA-52 (hereafter TA-63), Los Alamos National Laboratory (LANL). This study was performed by Woodward-Clyde Federal Services (WCFS) at the request of LANL. The projections of both the Guaje Mountain and Rendija Canyon faults are mapped in the vicinity of TA-63. Based on results obtained in the ongoing Seismic Hazard Evaluation Program of the LANL, displacement may have occurred on both the Guaje Mountain and Rendija Canyon faults in the past 11,000 years (Holocene time). Thus, in accordance with US Department of Energy (DOE) Orders and Standards for seismic hazard evaluations and the US Environmental Protection Agency (EPA) Resource Conservation and Recovery Act (RCRA) regulations for seismic standard requirements, a geologic study of the proposed TA-63 site was conducted

  10. Application of Anisotropy of Magnetic Susceptibility to large-scale fault kinematics: an evaluation

    Science.gov (United States)

    Casas, Antonio M.; Roman-Berdiel, Teresa; Marcén, Marcos; Oliva-Urcia, Belen; Soto, Ruth; Garcia-Lasanta, Cristina; Calvin, Pablo; Pocovi, Andres; Gil-Imaz, Andres; Pueyo-Anchuela, Oscar; Izquierdo-Llavall, Esther; Vernet, Eva; Santolaria, Pablo; Osacar, Cinta; Santanach, Pere; Corrado, Sveva; Invernizzi, Chiara; Aldega, Luca; Caricchi, Chiara; Villalain, Juan Jose

    2017-04-01

    be observed within the same fault zone, depending on the proximity to the core zone. The transition between them is usually defined by oblate fabrics, with the long and intermediate axes contained within the main foliation plane in SC-like structures. The faults studied in this work are located in Northeast Iberia; most of them were formed during the Late-Variscan fracturing stage and constitute first-order structures controlling the Mesozoic and Cenozoic evolution of the Iberian plate. They include (i) large-scale (Cameros-Demanda) and plurikilometric (Monroyo, Rastraculos) thrusts resulting from basement thrusting and Mesozoic basin inversion, and (ii) strike-slip to transpressional structures in the Iberian Chain (Río Grío and Daroca faults, Aragonian Branch) and the Catalonian Range (Vallès fault). Application of AMS in combination with structural analysis has given us deeper insight into the kinematics of these fault zones, namely allowing us (i) to accurately define the transport direction of Cenozoic thrusts (NNW to NE-SW for the studied E-W segments) and the flow directions of décollements, and to evaluate the representativeness of small-scale structures linked to thrusting; (ii) to assess the transpressional character of deformation for the main NW-SE and NE-SW Late-Variscan faults in NE Iberia during the Cenozoic (horizontal to intermediate-plunging transport directions); and (iii) to define the strain partitioning between different thrust sheets and strike-slip faults, to finally establish the pattern of displacements in this intra-plate setting.

  11. Highly scalable and robust rule learner: performance evaluation and comparison.

    Science.gov (United States)

    Kurgan, Lukasz A; Cios, Krzysztof J; Dick, Scott

    2006-02-01

    Business intelligence and bioinformatics applications increasingly require the mining of datasets consisting of millions of data points, or the crafting of real-time, enterprise-level decision support systems for large corporations and drug companies. In either case, there must be an underlying data mining system, and that system must be highly scalable. To this end, we describe a new rule learner called DataSqueezer. The learner belongs to the family of inductive supervised rule extraction algorithms. DataSqueezer is a simple, greedy rule builder that generates a set of production rules from labeled input data. In spite of its relative simplicity, DataSqueezer is a very effective learner. The rules generated by the algorithm are compact, comprehensible, and have accuracy comparable to rules generated by other state-of-the-art rule extraction algorithms. The main advantages of DataSqueezer are its very high efficiency and its resistance to missing data. DataSqueezer exhibits log-linear asymptotic complexity with the number of training examples, and it is faster than other state-of-the-art rule learners. The learner is also robust to large quantities of missing data, as verified by extensive experimental comparison with the other learners. DataSqueezer is thus well suited to modern data mining and business intelligence tasks, which commonly involve huge datasets with a large fraction of missing data.
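
    The greedy, sequential-covering style of rule induction that DataSqueezer belongs to can be illustrated with a minimal sketch. This is a generic covering learner, not the DataSqueezer algorithm itself, and the attributes and data are hypothetical:

```python
def learn_rules(rows, label, target):
    """Greedy sequential covering: grow conjunctive rules that separate
    positive examples (row[label] == target) from the rest, then remove
    the covered positives and repeat until none remain."""
    positives = [r for r in rows if r[label] == target]
    negatives = [r for r in rows if r[label] != target]
    rules = []
    while positives:
        rule = {}                     # conjunction of attribute == value tests
        pos, neg = positives, negatives
        while neg:                    # grow until no negatives are covered
            best = None
            for r in pos:
                for attr, val in r.items():
                    if attr == label or attr in rule:
                        continue
                    p = sum(1 for x in pos if x[attr] == val)
                    n = sum(1 for x in neg if x[attr] == val)
                    score = p / (p + n + 1e-9)   # precision of candidate test
                    if best is None or score > best[0]:
                        best = (score, attr, val)
            if best is None:
                break                 # no attribute left to test
            _, attr, val = best
            rule[attr] = val
            pos = [x for x in pos if x[attr] == val]
            neg = [x for x in neg if x[attr] == val]
        rules.append(rule)
        positives = [x for x in positives
                     if not all(x.get(a) == v for a, v in rule.items())]
    return rules

# Toy dataset (hypothetical attributes).
rows = [
    {"sky": "sunny", "wind": "weak", "play": "yes"},
    {"sky": "sunny", "wind": "strong", "play": "yes"},
    {"sky": "rainy", "wind": "weak", "play": "no"},
    {"sky": "rainy", "wind": "strong", "play": "no"},
]
rules = learn_rules(rows, "play", "yes")   # -> [{"sky": "sunny"}]
```

    Each produced dictionary is one production rule ("IF sky = sunny THEN play = yes"); real learners in this family add pruning and missing-value handling on top of this covering loop.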

  12. Fault Transient Analysis and Protection Performance Evaluation within a Large-scale PV Power Plant

    Directory of Open Access Journals (Sweden)

    Wen Jinghua

    2016-01-01

    Full Text Available In this paper, a short-circuit test within a large-scale PV power plant with a total capacity of 850 MWp is discussed. The fault currents supplied by the PV generation units are presented and analysed. Based on the observed fault behaviour, the existing protection coordination principles within the plant are reviewed and their performance is evaluated. These protections are also examined on a simulation platform under different operating conditions. Finally, a simple communication-assisted measure is proposed to address the foreseeable shortcomings of the current protection scheme in the PV power plant.

  13. An Evaluation of Fault Tolerant Wind Turbine Control Schemes applied to a Benchmark Model

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2014-01-01

    Reliability and availability of modern wind turbines increase in importance as their share of the world's power supply grows. This matters both for increasing the energy generated per unit, thereby lowering the cost of energy, and for ensuring the availability of the generated power, which helps...... on this benchmark and is especially good at accommodating sensor faults. The two other evaluated solutions also accommodate sensor faults well, but have some issues that should be worked on before they can be considered a full solution to the benchmark problem....

  14. Fault fracture zone evaluation using borehole geophysical logs; case study at Nojima fault, Awaji island; Kosei butsuri kenso ni yoru danso hasaitai no hyoka

    Energy Technology Data Exchange (ETDEWEB)

    Ikeda, R; Omura, K [National Research Institute for Disaster Prevention, Tsukuba (Japan); Yamamoto, T [Geophysical Surveying and Consulting Co. Ltd., Tokyo (Japan)

    1997-10-22

    Ikeda, et al., in their examination of log data obtained from a borehole (2,000 m deep) drilled at Ashio, Tochigi Prefecture, where micro-earthquakes swarm at very shallow levels, pay special attention to porosity. Using the correlation between porosity and elastic wave velocity/resistivity, the authors endeavor to determine the presence of secondary pores, the dimensions of faults, the composition of water in strata within faults, and differences in matrix between rocks, all for the classification and evaluation of fault fracture zones. In the present report, log data from a borehole (1,800 m deep) drilled to penetrate the Nojima fault (Nojima-Hirabayashi, Awaji island), which emerged during the Great Hanshin-Himeji Earthquake, are analyzed in the same way as the above-named Ashio data, and the results are compared with the Ashio results. Immediately below the Nojima-Hirabayashi fault fracture zone, stress is found to be remarkably reduced and the differential stress quite small. This is interpreted as indicating a state in which clay has already developed well in the fault fracture zone, ready to allow the occurrence of shear fracture, or a state in which shear fracture has already occurred and released the stress. 4 refs., 5 figs.

  15. Southern San Andreas Fault evaluation field activity: approaches to measuring small geomorphic offsets--challenges and recommendations for active fault studies

    Science.gov (United States)

    Scharer, Katherine M.; Salisbury, J. Barrett; Arrowsmith, J. Ramon; Rockwell, Thomas K.

    2014-01-01

    In southern California, where fast slip rates and sparse vegetation contribute to crisp expression of faults and microtopography, field and high‐resolution topographic data (fault, analyze the offset values for concentrations or trends along strike, and infer that the common magnitudes reflect successive surface‐rupturing earthquakes along that fault section. Wallace (1968) introduced the use of such offsets, and the challenges in interpreting their “unique complex history” with offsets on the Carrizo section of the San Andreas fault; these were more fully mapped by Sieh (1978) and followed by similar field studies along other faults (e.g., Lindvall et al., 1989; McGill and Sieh, 1991). Results from such compilations spurred the development of classic fault behavior models, notably the characteristic earthquake and slip‐patch models, and thus constitute an important component of the long‐standing contrast between magnitude–frequency models (Schwartz and Coppersmith, 1984; Sieh, 1996; Hecker et al., 2013). The proliferation of offset datasets has led earthquake geologists to examine the methods and approaches for measuring these offsets, uncertainties associated with measurement of such features, and quality ranking schemes (Arrowsmith and Rockwell, 2012; Salisbury, Arrowsmith, et al., 2012; Gold et al., 2013; Madden et al., 2013). In light of this, the Southern San Andreas Fault Evaluation (SoSAFE) project at the Southern California Earthquake Center (SCEC) organized a combined field activity and workshop (the “Fieldshop”) to measure offsets, compare techniques, and explore differences in interpretation. A thorough analysis of the measurements from the field activity will be provided separately; this paper discusses the complications presented by such offset measurements using two channels from the San Andreas fault as illustrative cases. We conclude with best approaches for future data collection efforts based on input from the Fieldshop.

  16. Evaluation of the location and recency of faulting near prospective surface facilities in Midway Valley, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2002-01-17

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block: the Bow Ridge fault on the west side of Exile Hill and the Paintbrush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North- to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  17. Evaluation of the location and recency of faulting near prospective surface facilities in Midway Valley, Nye County, Nevada

    International Nuclear Information System (INIS)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2002-01-01

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block: the Bow Ridge fault on the west side of Exile Hill and the Paintbrush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North- to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  18. Evaluation of the Location and Recency of Faulting Near Prospective Surface Facilities in Midway Valley, Nye County, Nevada

    Science.gov (United States)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2001-01-01

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block: the Bow Ridge fault on the west side of Exile Hill and the Paintbrush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North- to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  19. Reliability evaluation of nuclear power plants by fault tree analysis

    International Nuclear Information System (INIS)

    Iwao, H.; Otsuka, T.; Fujita, I.

    1993-01-01

    As a work sponsored by the Ministry of International Trade and Industry, the Safety Information Research Center of NUPEC, using reliability data based on the operational experience of domestic LWR plants, has implemented FTA for the standard PWRs and BWRs in Japan, with reactor scram due to system failures as the top event. So far, we have obtained the FT chart and minimal cut sets for each type of system failure for qualitative evaluation, and we have estimated system unavailability, Fussell-Vesely importance, and risk worth of components for quantitative evaluation. As the second stage in this series of reliability evaluation work, another program was started to establish a support system. The aim of this system is to assist foreign and domestic plants in devising countermeasures when incidents occur, by providing them with the necessary information using the above analytical method and its results. (author)

  20. Evaluating failure rate of fault-tolerant multistage interconnection networks using Weibull life distribution

    International Nuclear Information System (INIS)

    Bistouni, Fathollah; Jahanshahi, Mohsen

    2015-01-01

    Fault-tolerant multistage interconnection networks (MINs) play a vital role in the performance of multiprocessor systems, where reliability evaluation becomes one of the main concerns in analyzing these networks properly. In many cases, the primary objective in system reliability analysis is to compute a failure distribution of the entire system according to that of its components. However, since the problem is known to be NP-hard, none of the previous efforts has performed a precise evaluation of the system failure rate. Therefore, our goal is to investigate this parameter for different fault-tolerant MINs using the Weibull life distribution, one of the most commonly used distributions in reliability. In this paper, four important groups of fault-tolerant MINs are examined to find the best fault-tolerance techniques in terms of failure rate: (1) extra-stage MINs, (2) parallel MINs, (3) rearrangeable non-blocking MINs, and (4) replicated MINs. This paper comprehensively analyzes all perspectives of reliability (terminal, broadcast, and network reliability). Moreover, in this study, all reliability equations are calculated for different network sizes. - Highlights: • The failure rate of different MINs is analyzed by using the Weibull life distribution. • This article tries to find the best fault-tolerance technique in the field of MINs. • Complex series-parallel RBDs are used to determine the reliability of the MINs. • All aspects of reliability (i.e. terminal, broadcast, and network) are analyzed. • All reliability equations are calculated for different network sizes N×N.
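
    The series-parallel reduction underlying such RBD-based evaluations can be sketched for a replicated MIN. The Weibull shape and scale values, the mission time, and the three-stage, two-plane topology below are assumptions for illustration, not figures from the paper:

```python
import math

def weibull_r(t, beta, eta):
    """Weibull component reliability R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def series(rs):
    """Series RBD: the structure works only if every block works."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    """Parallel RBD: the structure fails only if every block fails."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# Hypothetical replicated MIN: three switch stages in series, duplicated
# as two parallel planes.  Mission time and Weibull parameters are assumed.
t, beta, eta = 5000.0, 1.5, 20000.0      # hours
stage = weibull_r(t, beta, eta)
single_plane = series([stage] * 3)
replicated = parallel([single_plane] * 2)
```

    The same two reduction rules, applied repeatedly to a complex series-parallel RBD, yield the terminal, broadcast, or network reliability expressions the record refers to.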

  1. Criteria for evaluating protection from single points of failure for partially expanded fault trees

    International Nuclear Information System (INIS)

    Aswani, D.; Badreddine, B.; Malone, M.; Gauthier, G.; Proietty, J.

    2008-01-01

    Fault tree analysis (FTA) is a technique that describes the combinations of events in a system which result in an undesirable outcome. FTA is used as a tool to quantitatively assess a system's probability for an undesirable outcome. Time constraints from concept to production in modern engineering often limit the opportunity for a thorough statistical analysis of a system. Furthermore, when undesirable outcomes are considered such as hazard to human(s), it becomes difficult to identify strict statistical targets for what is acceptable. Consequently, when hazard to human(s) is concerned a common design target is to protect the system from single points of failure (SPOF) which means that no failure mode caused by a single event, concern, or error has a critical consequence on the system. Such a design target is common with 'by-wire' systems. FTA can be used to verify if a system is protected from SPOF. In this paper, sufficient criteria for evaluating protection from SPOF for partially expanded fault trees are proposed along with proof. The proposed criteria consider potential interactions between the lowest drawn events of a partial fault tree expansion which otherwise easily leads to an overly optimistic analysis of protection from SPOF. The analysis is limited to fault trees that are coherent and static
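
    For a fully expanded coherent, static fault tree, protection from SPOF is equivalent to every minimal cut set containing at least two basic events. A minimal sketch of that check follows (a generic cut-set expansion, not the partial-expansion criteria proposed in the paper):

```python
def cut_sets(node):
    """Cut sets (sets of basic events whose joint failure causes the top
    event) for a static, coherent fault tree of AND/OR gates.  A node is
    either a basic-event name or a tuple ('AND'|'OR', child, child, ...)."""
    if isinstance(node, str):
        return {frozenset([node])}
    op, *children = node
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":                      # any child's cut set suffices
        return set().union(*child_sets)
    if op == "AND":                     # combine one cut set from each child
        acc = {frozenset()}
        for cs in child_sets:
            acc = {a | b for a in acc for b in cs}
        return acc
    raise ValueError(op)

def minimal(sets):
    """Discard any cut set that strictly contains another."""
    return {s for s in sets if not any(o < s for o in sets)}

def has_spof(tree):
    """Single point of failure: some minimal cut set has exactly one event."""
    return any(len(s) == 1 for s in minimal(cut_sets(tree)))
```

    For example, ("AND", "A", "B") has no SPOF, while ("OR", "A", ("AND", "B", "C")) does, since {A} alone is a minimal cut set.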

  2. STEM - software test and evaluation methods: fault detection using static analysis techniques

    International Nuclear Information System (INIS)

    Bishop, P.G.; Esp, D.G.

    1988-08-01

    STEM is a software reliability project with the objective of evaluating a number of fault detection and fault estimation methods which can be applied to high-integrity software. This report gives some interim results of applying both manual and computer-based static analysis techniques, in particular SPADE, to an early CERL version of the PODS software containing known faults. The main results of this study are that: the scope for thorough verification is determined by the quality of the design documentation; documentation defects become especially apparent when verification is attempted. For well-defined software, the thoroughness of SPADE-assisted verification for detecting a large class of faults was successfully demonstrated. For imprecisely defined software (not recommended for high-integrity systems) the use of tools such as SPADE is difficult and inappropriate. Analysis and verification tools are helpful because of their reliability and thoroughness. However, they are designed to assist, not replace, a human in validating software. Manual inspection can still reveal errors (such as errors in specification and errors of transcription of system constants) which current tools cannot detect. There is a need for tools to automatically detect typographical errors in system constants, for example by reporting outliers to patterns. To obtain the maximum benefit from advanced tools, they should be applied during software development (when verification problems can be detected and corrected) rather than retrospectively. (author)

  3. A critical evaluation of crustal dehydration as the cause of an overpressured and weak San Andreas Fault

    Science.gov (United States)

    Fulton, P.M.; Saffer, D.M.; Bekins, B.A.

    2009-01-01

    Many plate boundary faults, including the San Andreas Fault, appear to slip at unexpectedly low shear stress. One long-standing explanation for a "weak" San Andreas Fault is that fluid release by dehydration reactions during regional metamorphism generates elevated fluid pressures that are localized within the fault, reducing the effective normal stress. We evaluate this hypothesis by calculating realistic fluid production rates for the San Andreas Fault system, and incorporating them into 2-D fluid flow models. Our results show that for a wide range of permeability distributions, fluid sources from crustal dehydration are too small and short-lived to generate, sustain, or localize fluid pressures in the fault sufficient to explain its apparent mechanical weakness. This suggests that alternative mechanisms, possibly acting locally within the fault zone, such as shear compaction or thermal pressurization, may be necessary to explain a weak San Andreas Fault. More generally, our results demonstrate the difficulty of localizing large fluid pressures generated by regional processes within near-vertical fault zones. © 2009 Elsevier B.V.

  4. Risk evaluation method for faults by engineering approach. (2) Application concept of margin analysis utilizing accident sequences

    International Nuclear Information System (INIS)

    Kamiya, Masanobu; Kanaida, Syuuji; Kamiya, Kouichi; Sato, Kunihiko; Kuroiwa, Katsuya

    2016-01-01

    The influence of fault displacement on the facility should be evaluated not only from the activity of the fault but also from risk information obtained by considering scenarios that include the frequency and the degree of the hazard; this is an appropriate approach for nuclear safety. A concept for applying margin analysis utilizing accident sequences to evaluate the influence of fault displacement is proposed. With this analysis, we can evaluate the safety functions and the margin against core damage, verify the effectiveness of portable equipment, and decide whether additional measures are needed to reduce the risk, using the obtained risk information. (author)

  5. Graphical evaluation of the ridge-type robust regression estimators in mixture experiments.

    Science.gov (United States)

    Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas

    2014-01-01

    In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of the biasing parameter, we use fraction-of-design-space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set.
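
    One simple ridge-type robust estimator combines a ridge penalty with Huber weights computed by iteratively reweighted least squares. The sketch below is illustrative only: the penalty k, the tuning constant c, and the data are assumptions, not the specific estimators evaluated in the paper:

```python
import numpy as np

def huber_ridge(X, y, k=0.5, c=1.345, iters=50):
    """Ridge-type robust regression: iteratively reweighted least squares
    with Huber weights and ridge penalty k (c is the Huber tuning constant)."""
    p = X.shape[1]
    beta = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)      # ridge start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust MAD scale
        u = np.abs(r) / (s * c)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)                      # Huber weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X) + k * np.eye(p),
                               X.T @ (w * y))
    return beta

# Demo: a clean linear trend (slope 2) with one gross outlier.
x = np.arange(10.0)
X = x.reshape(-1, 1)
y = 2.0 * x
y[9] += 50.0
b_robust = huber_ridge(X, y)                 # slope stays near 2
b_ols = np.linalg.solve(X.T @ X, X.T @ y)    # slope dragged toward the outlier
```

    The downweighting of large residuals handles the outlier, while the ridge term stabilizes the solve under multicollinearity.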

  6. Evaluation of fault-normal/fault-parallel directions rotated ground motions for response history analysis of an instrumented six-story building

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2012-01-01

    According to regulatory building codes in United States (for example, 2010 California Building Code), at least two horizontal ground-motion components are required for three-dimensional (3D) response history analysis (RHA) of buildings. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHA analyses should be performed separately (when FN and then FP are aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak responses of engineering demand parameters (EDPs) were obtained for rotation angles ranging from 0° through 180° for evaluating the FN/FP directions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
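
    The angle scan described above can be sketched for a simple intensity measure. Here the peak of the rotated component stands in for the structural EDPs, and the two-component record is synthetic; the point is only that the peak over all nonredundant angles can exceed the peaks on the as-recorded (or any two fixed orthogonal) axes:

```python
import math

def rotate(ax, ay, theta_deg):
    """Component of two orthogonal horizontal histories along an axis
    rotated theta degrees from the as-recorded x-axis."""
    t = math.radians(theta_deg)
    return [x * math.cos(t) + y * math.sin(t) for x, y in zip(ax, ay)]

def peak(series):
    return max(abs(v) for v in series)

def peak_over_angles(ax, ay, step=1):
    """Peak of the rotated component for every nonredundant angle."""
    return {th: peak(rotate(ax, ay, th)) for th in range(0, 180, step)}

# Synthetic two-component record: the direction of strongest motion
# (45 degrees here) coincides with neither recorded axis.
ax = [0.3, -0.5, 0.4, -0.2]
ay = [0.4, -0.5, 0.1, 0.3]
peaks = peak_over_angles(ax, ay)
worst = max(peaks, key=peaks.get)      # angle giving the largest peak
```

    In a full RHA study, `peak` would be replaced by the EDP of interest from a structural model, which is what makes the angle dependence nontrivial.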

  7. Evaluation of MEMS-Based Wireless Accelerometer Sensors in Detecting Gear Tooth Faults in Helicopter Transmissions

    Science.gov (United States)

    Lewicki, David George; Lambert, Nicholas A.; Wagoner, Robert S.

    2015-01-01

    The diagnostics capability of micro-electro-mechanical systems (MEMS) based rotating accelerometer sensors in detecting gear tooth crack failures in helicopter main-rotor transmissions was evaluated. MEMS sensors were installed on a pre-notched OH-58C spiral-bevel pinion gear. Endurance tests were performed and the gear was run to tooth fracture failure. Results from the MEMS sensors were compared to conventional accelerometers mounted on the transmission housing. Most of the four stationary accelerometers mounted on the gearbox housing, and most of the condition indicators (CIs) used, gave indications of failure at the end of the test. The MEMS system performed well and lasted the entire test. All MEMS accelerometers gave an indication of failure at the end of the test. The MEMS systems performed as well as, if not better than, the stationary accelerometers mounted on the gearbox housing with regard to gear tooth fault detection. For both the MEMS sensors and the stationary sensors, the fault detection time was not much sooner than the actual tooth fracture time. The MEMS sensor spectrum data showed large first-order shaft-frequency sidebands due to the rotating frame of reference of the measurement. The method of constructing a pseudo tachometer signal from periodic characteristics of the vibration data succeeded in deriving a time-synchronous averaged (TSA) signal without an actual tachometer, and proved to be an effective way to improve fault detection for the MEMS.

  8. Fault tree handbook

    International Nuclear Information System (INIS)

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
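
    The quantitative evaluation step such handbooks cover, computing the top-event probability from minimal cut sets of independent basic events, can be sketched as follows (the cut sets and probabilities below are hypothetical):

```python
from itertools import combinations

def top_event_probability(min_cut_sets, p):
    """Exact top-event probability by inclusion-exclusion over minimal cut
    sets, assuming statistically independent basic events."""
    total = 0.0
    for k in range(1, len(min_cut_sets) + 1):
        for combo in combinations(min_cut_sets, k):
            prob = 1.0
            for event in set().union(*combo):
                prob *= p[event]
            total += (-1.0) ** (k + 1) * prob
    return total

# Hypothetical example: minimal cut sets {A, B} and {A, C}.
p = {"A": 0.1, "B": 0.2, "C": 0.3}
cuts = [{"A", "B"}, {"A", "C"}]
top = top_event_probability(cuts, p)   # ≈ 0.02 + 0.03 - 0.006 = 0.044
```

    For large trees, production codes replace the exponential inclusion-exclusion with the rare-event approximation (the plain sum of cut-set probabilities), which is an upper bound.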

  9. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Second, Draphys reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  10. An integrated methodology for the dynamic performance and reliability evaluation of fault-tolerant systems

    International Nuclear Information System (INIS)

    Dominguez-Garcia, Alejandro D.; Kassakian, John G.; Schindall, Joel E.; Zinchuk, Jeffrey J.

    2008-01-01

    We propose an integrated methodology for the reliability and dynamic performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers to design the control system, but also incorporates artifacts to model the failure behavior of each component. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. This methodology not only enables an integrated framework for evaluating dynamic performance and reliability of fault-tolerant systems, but also enables a method for guiding the system design process, and further optimization. To illustrate the methodology, we present a case-study of a lateral-directional flight control system for a fighter aircraft
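
    The Markov-chain portion of such a methodology can be sketched as a discrete-time chain whose states are the configurations reached after component failures; only configurations whose dynamic performance meets the requirements count toward reliability. The states and transition probabilities below are hypothetical, not the flight-control case study:

```python
import numpy as np

# States: 0 = nominal, 1 = degraded (one redundant component failed but
# the performance metrics are still met), 2 = failed.  Per-step
# transition probabilities are assumed for illustration.
P = np.array([
    [0.990, 0.009, 0.001],   # nominal  -> nominal / degraded / failed
    [0.000, 0.995, 0.005],   # degraded (no self-repair)
    [0.000, 0.000, 1.000],   # failed is absorbing
])

def state_distribution(p0, P, steps):
    """Distribution over configurations after `steps` transitions."""
    d = np.asarray(p0, dtype=float)
    for _ in range(steps):
        d = d @ P
    return d

d = state_distribution([1.0, 0.0, 0.0], P, 1000)
reliability = d[0] + d[1]   # probability of a configuration meeting requirements
```

    In the full methodology, the per-configuration performance evaluation (accuracy, overshoot, settling time) decides which states are admitted into the `reliability` sum.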

  11. SU-E-T-625: Robustness Evaluation and Robust Optimization of IMPT Plans Based on Per-Voxel Standard Deviation of Dose Distributions.

    Science.gov (United States)

    Liu, W; Mohan, R

    2012-06-01

    Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed - the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness, but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD
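    The per-voxel SD, SVH, and worst-case constructions described above reduce to a few lines. This sketch treats each dose distribution as a flat list of voxel doses; the nine-scenario setup follows the abstract, but the function names and threshold handling are our own:

```python
from statistics import pstdev

def per_voxel_sd(dose_maps):
    """dose_maps: list of equal-length voxel-dose lists (nominal plus
    the eight perturbed scenarios).  Returns the per-voxel SD across
    scenarios."""
    return [pstdev(voxel) for voxel in zip(*dose_maps)]

def svh(sd, thresholds):
    """SD-volume histogram: fraction of the structure's voxels whose
    dose SD is at or above each threshold (the SD analogue of a DVH)."""
    n = len(sd)
    return [sum(1 for s in sd if s >= th) / n for th in thresholds]

def worst_case_ctv(dose_maps):
    """Worst-case CTV dose: the lowest of the scenario doses in each
    voxel (for normal tissue one would take the highest instead)."""
    return [min(voxel) for voxel in zip(*dose_maps)]
```

    The area under the SVH curve, used in the abstract as a scalar robustness measure, is then a simple sum over a fine threshold grid.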

  12. Towards understanding the robustness of energy distribution networks based on macroscopic and microscopic evaluations

    International Nuclear Information System (INIS)

    Liu Jiming; Shi Benyun

    2012-01-01

    Supply disruptions on one node of a distribution network may spread to other nodes and potentially bring various social and economic impacts. To understand the performance of a distribution network in the face of supply disruptions, it would be helpful for policy makers to quantitatively evaluate the robustness of the network, i.e., its ability to maintain a supply–demand balance on individual nodes. In this paper, we first define a notion of network entropy to macroscopically characterize distribution robustness with respect to the dynamics of energy flows. Further, we look into how microscopic evaluation based on a failure spreading model helps us determine the extent to which disruptions on one node may affect the others. We take the natural gas distribution network in the USA as an example to demonstrate the introduced concepts and methods. Specifically, the proposed macroscopic and microscopic evaluations provide a means of precisely identifying transmission bottlenecks in the U.S. interstate pipeline network, ranking the effects of supply disruptions on individual nodes, and planning geographically advantageous locations for natural gas storage. These findings can offer policy makers, planners, and network managers further insights into emergency planning as well as possible design improvements. - Highlights: ► This paper evaluates distribution robustness by defining a notion of network entropy. ► The disruption impacts on individual nodes are evaluated by a failure spreading model. ► The robustness of the U.S. natural gas distribution network is studied. ► Results reveal pipeline bottlenecks, the node ranking, and potential storage locations. ► Possible strategies for mitigating the impacts of supply disruptions are discussed.
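    The macroscopic measure can be sketched as a Shannon entropy over normalized flow shares. This is a generic reading of the "network entropy" idea; the paper's exact definition may differ:

```python
import math

def network_entropy(flows):
    """Shannon entropy of the normalized flow distribution across the
    network's nodes or links.  Higher entropy means more evenly spread
    flows, read here as a proxy for distribution robustness: no single
    node carries a dominant share whose loss would unbalance supply
    and demand elsewhere."""
    total = sum(flows)
    shares = [f / total for f in flows if f > 0]
    return -sum(p * math.log(p) for p in shares)
```

    A perfectly uniform flow distribution over n nodes attains the maximum entropy log(n); a distribution dominated by one bottleneck node scores lower.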

  13. Robustness evaluation of cutting tool maintenance planning for soft ground tunneling projects

    Directory of Open Access Journals (Sweden)

    Alena Conrads

    2018-03-01

    Full Text Available Tunnel boring machines require extensive maintenance and inspection effort to ensure high availability. The cutting tools of the cutting wheel must be changed in a timely manner upon reaching a critical condition. While one possible maintenance strategy is to change tools only when it is absolutely necessary, tools can also be changed preventively to avoid further damage. Such different maintenance strategies influence the maintenance duration and the overall project performance. However, determining the downtime related to a particular maintenance strategy is still a challenging task. This paper presents an analysis of the robustness of a maintenance strategy in achieving the planned project performance, considering uncertainties in the wear behavior of the cutting tools. A simulation-based analysis is presented, implementing an empirical wear prediction model. Different maintenance planning strategies are compared by performing a parameter variation study including Monte-Carlo simulations. The maintenance costs are calculated and evaluated with respect to their robustness. Finally, an improved and robust maintenance strategy is determined. Keywords: Mechanized tunneling, Maintenance, Wear of cutting tools, Process simulation, Robustness, Uncertainty modeling
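    The Monte-Carlo comparison of preventive versus run-to-failure tool changes can be sketched as follows. All numbers (wear rates, cost ratios, drive length) are hypothetical stand-ins for the paper's empirical wear model:

```python
import math
import random
import statistics

FAILURE_LIMIT = 1.0  # normalized wear at which a tool fails in service

def simulate_costs(change_at, n_runs=2000, drive_length=100.0, seed=7):
    """Toy sketch of a parameter-variation study: each run draws a
    random wear rate, tools are changed whenever cumulative wear
    reaches `change_at`, and letting tools run to the failure limit
    incurs an extra secondary-damage cost per change.  Returns the
    mean cost and its spread across wear scenarios (low spread =
    robust strategy)."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_runs):
        wear_rate = rng.uniform(0.008, 0.012)   # wear per metre, hypothetical
        total_wear = drive_length * wear_rate
        n_changes = math.ceil(total_wear / change_at)
        per_change = 3.0 if change_at >= FAILURE_LIMIT else 1.0
        costs.append(n_changes * per_change)
    return statistics.mean(costs), statistics.pstdev(costs)
```

    Comparing, e.g., `simulate_costs(0.7)` (preventive) against `simulate_costs(1.0)` (run to failure) exposes the tradeoff the abstract describes: the preventive strategy costs less on average here and its cost varies far less across wear scenarios.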

  14. Re-evaluating fault zone evolution, geometry, and slip rate along the restraining bend of the southern San Andreas Fault Zone

    Science.gov (United States)

    Blisniuk, K.; Fosdick, J. C.; Balco, G.; Stone, J. O.

    2017-12-01

    This study presents new multi-proxy data to provide an alternative interpretation of the late-to-mid Quaternary evolution, geometry, and slip rate of the southern San Andreas fault zone, comprising the Garnet Hill, Banning, and Mission Creek fault strands, along its restraining bend near the San Bernardino Mountains and San Gorgonio Pass. Present geologic and geomorphic studies in the region indicate that as the Mission Creek and Banning faults diverge from one another in the southern Indio Hills, the Banning Fault Strand accommodates the majority of lateral displacement across the San Andreas Fault Zone. In this currently favored kinematic model of the southern San Andreas Fault Zone, slip along the Mission Creek Fault Strand decreases significantly northwestward toward the San Gorgonio Pass. Along this restraining bend, the Mission Creek Fault Strand is considered to have been inactive since the late-to-mid Quaternary (~500-150 kya) due to the transfer of plate boundary strain westward to the Banning and Garnet Hill Fault Strands and the San Jacinto Fault Zone, and northeastward to the Eastern California Shear Zone. Here, we present a revised geomorphic interpretation of fault displacement, initial 36Cl/10Be burial ages, sediment provenance data, and detrital geochronology from modern catchments and displaced Quaternary deposits that improve across-fault correlations. We hypothesize that continuous large-scale translation along this structure has occurred throughout its history into the present. Accordingly, the Mission Creek Fault Strand is active and likely a primary plate boundary fault at this latitude.

  15. Evaluation of Structural Robustness against Column Loss: Methodology and Application to RC Frame Buildings.

    Science.gov (United States)

    Bao, Yihai; Main, Joseph A; Noh, Sam-Young

    2017-08-01

    A computational methodology is presented for evaluating structural robustness against column loss. The methodology is illustrated through application to reinforced concrete (RC) frame buildings, using a reduced-order modeling approach for three-dimensional RC framing systems that includes the floor slabs. Comparisons with high-fidelity finite-element model results are presented to verify the approach. Pushdown analyses of prototype buildings under column loss scenarios are performed using the reduced-order modeling approach, and an energy-based procedure is employed to account for the dynamic effects associated with sudden column loss. Results obtained using the energy-based approach are found to be in good agreement with results from direct dynamic analysis of sudden column loss. A metric for structural robustness is proposed, calculated by normalizing the ultimate capacities of the structural system under sudden column loss by the applicable service-level gravity loading and by evaluating the minimum value of this normalized ultimate capacity over all column removal scenarios. The procedure is applied to two prototype 10-story RC buildings, one employing intermediate moment frames (IMFs) and the other employing special moment frames (SMFs). The SMF building, with its more stringent seismic design and detailing, is found to have greater robustness.
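    The robustness metric and the energy-based treatment of sudden column loss described above can be written compactly. The function names and the trapezoidal work integration are our own; the energy balance (dynamic load at a displacement equals the static work done divided by that displacement) is the standard reading of such procedures rather than the paper's exact formulation:

```python
def robustness_index(ultimate_capacities, service_load):
    """Robustness metric from the abstract: normalize each scenario's
    ultimate capacity under sudden column loss by the service-level
    gravity load, then take the minimum over all column-removal
    scenarios.  A value above 1 means every scenario survives the
    applicable service loading."""
    return min(c / service_load for c in ultimate_capacities)

def energy_based_dynamic_curve(disp, force):
    """Energy-based estimate of the sudden-column-loss (dynamic)
    capacity curve from a static pushdown curve: the dynamic load at
    displacement disp[i] is the work absorbed up to disp[i] divided
    by disp[i] (trapezoidal integration of the static curve)."""
    dyn = []
    work = 0.0
    for i in range(1, len(disp)):
        work += 0.5 * (force[i] + force[i - 1]) * (disp[i] - disp[i - 1])
        dyn.append(work / disp[i])
    return dyn
```

    For a linear static response the dynamic curve is half the static one, which is the familiar factor-of-two amplification for sudden load application.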

  16. Design of robust reliable control for T-S fuzzy Markovian jumping delayed neutral type neural networks with probabilistic actuator faults and leakage delays: An event-triggered communication scheme.

    Science.gov (United States)

    Syed Ali, M; Vadivel, R; Saravanakumar, R

    2018-06-01

    This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms under an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables satisfying certain probabilistic failures of every actuator; a new type of distribution-based event-triggered fault model is proposed, which utilizes the effect of transmission delay. Second, a Takagi-Sugeno (T-S) fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump model framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed in this paper, which is the main purpose of our study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one of the examples is supported by a real-life benchmark application. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Robustness-based evaluation of hydropower infrastructure design under climate change

    Directory of Open Access Journals (Sweden)

    Mehmet Ümit Taner

    2017-01-01

    Full Text Available The conventional tools of decision-making in water resources infrastructure planning have been developed for problems with well-characterized uncertainties and are ill-suited for problems involving climate nonstationarity. In the past 20 years, a predict-then-act-based approach to the incorporation of climate nonstationarity has been widely adopted in which the outputs of bias-corrected climate model projections are used to evaluate planning options. However, the ambiguous nature of results has often proved unsatisfying to decision makers. This paper presents the use of a bottom-up, decision scaling framework for the evaluation of water resources infrastructure design alternatives regarding their robustness to climate change and expected value of performance. The analysis begins with an assessment of the vulnerability of the alternative designs under a wide domain of systematically-generated plausible future climates and utilizes downscaled climate projections ex post to inform likelihoods within a risk-based evaluation. The outcomes under different project designs are compared by way of a set of decision criteria, including the performance under the most likely future, expected value of performance across all evaluated futures and robustness. The method is demonstrated for the design of a hydropower system in sub-Saharan Africa and is compared to the results that would be found using a GCM-based, scenario-led analysis. The results indicate that recommendations from the decision scaling analysis can be substantially different from the scenario-led approach, alleviate common shortcomings related to the use of climate projections in water resources planning, and produce recommendations that are more robust to future climate uncertainty.
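    The two evaluation criteria the abstract combines, robustness across a domain of plausible futures and a likelihood-weighted expected value, can be sketched as follows. The "fraction of futures meeting a threshold" reading of robustness is one common domain criterion; the paper's exact criterion may differ:

```python
def robustness(performances, threshold):
    """Fraction of systematically generated climate futures in which
    the design meets the performance threshold (a 'domain criterion'
    reading of robustness)."""
    return sum(1 for p in performances if p >= threshold) / len(performances)

def expected_performance(performances, weights):
    """Likelihood-weighted expected performance, with weights informed
    ex post by downscaled climate projections."""
    total = sum(weights)
    return sum(p * w for p, w in zip(performances, weights)) / total
```

    Competing infrastructure designs are then compared on both numbers: a design may win on expected value under the most likely futures yet lose on robustness across the full climate domain.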

  18. Reliability Evaluation of Service-Oriented Architecture Systems Considering Fault-Tolerance Designs

    Directory of Open Access Journals (Sweden)

    Kuan-Li Peng

    2014-01-01

    strategies. Sensitivity analysis of SOA at both coarse and fine grain levels is also studied, which can be used to efficiently identify the critical parts within the system. Two SOA system scenarios based on real industrial practices are studied. Experimental results show that the proposed SOA model can be used to accurately depict the behavior of SOA systems. Additionally, a sensitivity analysis that quantifies the effects of system structure as well as fault tolerance on the overall reliability is also studied. On the whole, the proposed reliability modeling and analysis framework may help the SOA system service provider to evaluate the overall system reliability effectively and also make smarter improvement plans by focusing resources on enhancing reliability-sensitive parts within the system.

  19. Direct evaluation of fault trees using object-oriented programming techniques

    Science.gov (United States)

    Patterson-Hine, F. A.; Koen, B. V.

    1989-01-01

    Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
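    The bottom-up part of the algorithm maps naturally onto objects, as a minimal sketch shows. Class names are our own; this handles only the independence case, while the top-down recursive conditioning the abstract uses for repeated events is not reproduced here:

```python
class Event:
    """Basic event with a fixed failure probability."""
    def __init__(self, p):
        self.p = p

    def prob(self):
        return self.p

class Gate:
    """AND/OR gate evaluated bottom-up by asking each child for its
    probability.  Valid only when no basic event appears more than
    once below the gate (independence assumption); trees with
    repeated events need the recursive conditioning described in
    the abstract."""
    def __init__(self, kind, children):
        self.kind = kind          # "AND" or "OR"
        self.children = children

    def prob(self):
        ps = [c.prob() for c in self.children]
        out = 1.0
        if self.kind == "AND":
            for p in ps:
                out *= p
            return out
        for p in ps:              # OR: complement of no child occurring
            out *= (1.0 - p)
        return 1.0 - out
```

    Memoizing `prob()` on subtrees would give the dynamic modularization the abstract mentions: an unchanged subtree's intermediate result is reused when another part of the tree is modified.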

  20. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability
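    The graph-theoretic notions the abstract names, reachability on a directed graph that may contain cycles, and minimal cut sets, can be sketched with standard algorithms. This is an illustrative brute-force sketch, not the codes the summary refers to:

```python
from itertools import combinations

def reachable(adj, src):
    """All nodes on some path from src (depth-first search with a
    visited set, so cycles are handled, unlike a tree traversal)."""
    seen, stack = {src}, [src]
    while stack:
        n = stack.pop()
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def minimal_cut_sets(adj, src, dst, candidates):
    """Brute-force minimal cut sets: smallest sets of candidate nodes
    whose removal breaks every src->dst path.  Exponential in the
    number of candidates, so feasible only for small graphs."""
    def cuts(removed):
        sub = {n: [m for m in adj.get(n, ()) if m not in removed]
               for n in adj if n not in removed}
        return dst not in reachable(sub, src)
    found = []
    for k in range(1, len(candidates) + 1):
        for comb in combinations(candidates, k):
            s = set(comb)
            if any(f <= s for f in found):
                continue          # contains a smaller cut set: not minimal
            if cuts(s):
                found.append(s)
    return found
```

    On a system graph, `candidates` would be the component nodes, and each returned set is a minimal set of component failures that fails the system.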

  1. Plotting and analysis of fault trees in safety evaluation of nuclear power plants

    International Nuclear Information System (INIS)

    Wild, A.

    1979-12-01

    Fault tree analysis is a useful tool in determining the safety and reliability of nuclear power plants. The main strength of the fault tree method, its ability to detect cross-links between systems, can be used only if fault trees are constructed for complete nuclear generating stations. Such trees are large and have to be handled by computers. A system is described for handling fault trees using small computers such as the HP-1000 with disc drive, graphics terminal and x-y plotter

  2. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, which is an essential part of safety design and operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulated quantitative data into qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design as well as to support recovery and shutdown operation and disaster management.

  3. Evaluation of Earthquake-Induced Effects on Neighbouring Faults and Volcanoes: Application to the 2016 Pedernales Earthquake

    Science.gov (United States)

    Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.

    2017-12-01

    It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, the transfer of these results to the authorities managing the post-disaster situation is not straightforward and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and to create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. Then we use this model to evaluate the moment deficit on the subduction interface and the changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussions with local authorities.

  4. Evaluation of permeability of Nojima fault by hydrophone VSP; Hydrophone VSP ni yoru Nojima danso no tosuisei hyoka

    Energy Technology Data Exchange (ETDEWEB)

    Kiguchi, T; Ito, H; Kuwahara, Y; Miyazaki, T [Geological Survey of Japan, Tsukuba (Japan)

    1996-05-01

    Multi-offset hydrophone VSP experiments were carried out using a 750 m deep borehole, which penetrates the Nojima fault, as the receiver hole, to detect water-permeable cracks and evaluate their characteristics. The rock around the borehole is granodiorite, and fault clay is found at depths from 623 to 624 m. Four dynamite shot tunnels around the borehole served as seismic sources. The VSP results show that tube waves are generated at 22 depths, including the depth at which the fault clay is found. However, these waves are generated at only 6 depths in an approximately 150 m long fracture zone, suggesting that the cracks in the zone are not necessarily permeable. It is also found that the crack angle determined by the analysis of tube waves almost coincides with that of the fault clay determined from the core, BHTV, and FMI data, and that permeability is of the order of 100 md at the depth of the fault clay or shallower. 3 refs., 2 figs., 2 tabs.

  5. Systematic evaluation of fault trees using real-time model checker UPPAAL

    International Nuclear Information System (INIS)

    Cha, Sungdeok; Son, Hanseong; Yoo, Junbeom; Jee, Eunkyung; Seong, Poong Hyun

    2003-01-01

    Fault tree analysis, the most widely used safety analysis technique in industry, is often applied manually. Although techniques such as cutset analysis or probabilistic analysis can be applied to the fault tree to derive further insights, they are inadequate for locating flaws when failure modes in fault tree nodes are incorrectly identified or when causal relationships among failure modes are inaccurately specified. In this paper, we demonstrate that model checking is a powerful technique that can formally validate the accuracy of fault trees. We used the real-time model checker UPPAAL because the system we used as the case study, the nuclear power emergency shutdown software named Wolsong SDS2, has real-time requirements. By translating functional requirements written in SCR-style tabular notation into timed automata, two types of properties were verified: (1) whether the failure mode described in a fault tree node is consistent with the system's behavioral model; and (2) whether or not a fault tree node has been accurately decomposed. A group of domain engineers with detailed technical knowledge of Wolsong SDS2 and safety analysis techniques developed the fault tree used in the case study. Nevertheless, the model checking technique detected subtle ambiguities present in the fault tree.

  6. Evaluation of fault coverage for digitalized system in nuclear power plants using VHDL

    International Nuclear Information System (INIS)

    Kim, Suk Joon; Lee, Jun Suk; Seong, Poong Hyun

    2003-01-01

    The fault coverage of digital systems is found to be one of the most important factors in the safety analysis of nuclear power plants. Several axiomatic models for the estimation of fault coverage of digital systems have been proposed, but to apply those axiomatic models to real digital systems, the parameters that the axiomatic models require should be approximated using analytic methods, empirical methods, or expert opinions. In this paper, we apply the fault injection method to a VHDL simulation model of a real digital system, which provides the protection function in nuclear power plants, to approximate the fault detection coverage of the digital system. As a result, the fault detection coverage of the digital system could be obtained.
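    The coverage estimate from such a fault-injection campaign is simply the detected fraction of injected faults. The confidence interval below is a standard normal-approximation addition of ours, not something the abstract describes:

```python
import math

def fault_coverage(detected, injected, z=1.96):
    """Point estimate of fault-detection coverage from a fault-injection
    campaign, plus a normal-approximation confidence interval (z=1.96
    for ~95%).  Clamped to [0, 1]."""
    c = detected / injected
    half = z * math.sqrt(c * (1.0 - c) / injected)
    return c, max(0.0, c - half), min(1.0, c + half)
```

    The interval width shrinks with the square root of the number of injections, which is what makes large automated campaigns against a VHDL model attractive compared with expert-opinion parameter estimates.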

  7. A Framework For Evaluating Comprehensive Fault Resilience Mechanisms In Numerical Programs

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Peng, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-09

    As HPC systems approach Exascale, their circuit features will shrink while their overall size grows, all at a fixed power limit. These trends imply that soft faults in electronic circuits will become an increasingly significant problem for applications that run on these systems, causing them to occasionally crash or, worse, silently return incorrect results. This is motivating extensive work on application resilience to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and resilience techniques. Effective use of such techniques requires a detailed understanding of (1) which vulnerable parts of the application are most worth protecting, and (2) the performance and resilience impact of fault resilience mechanisms on the application. This paper presents FaultTelescope, a tool that combines these two and generates actionable insights by presenting in an intuitive way application vulnerabilities and the impact of fault resilience mechanisms on applications.

  8. FTREX Testing Report (Fault Tree Reliability Evaluation eXpert) Version 1.5

    International Nuclear Information System (INIS)

    Jung, Woo Sik

    2009-07-01

    In order to verify FTREX functions and to confirm the correctness of FTREX 1.5, various tests were performed: (1) fault trees with negates; (2) fault trees with house events; (3) fault trees with multiple tops; (4) fault trees with logical loops; and (5) fault trees with initiators, house events, negates, logical loops, and flag events. By using the automated cutset-propagation test, the FTREX 1.5 functions are verified. FTREX version 1.3 and later versions have the capability to perform a bottom-up cutset-propagation test in order to check cutset status. FTREX 1.5 always generates the proper minimal cut sets. All the output cutsets of the tested problems are MCSs (Minimal Cut Sets) and contain no non-minimal cutsets and no improper cutsets. The improper cutsets are those that have no effect on the top event, have multiple initiators, or have disjoint events A * -A.
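    Two of the checks described, cutset minimality and bottom-up cutset propagation, can be sketched directly. This is an illustrative re-implementation, not FTREX code; the tree is represented as nested `("AND"/"OR", child, ...)` tuples with event names as leaves:

```python
def is_minimal(cutsets):
    """True iff no cut set strictly contains another, i.e. the list
    holds only minimal cut sets."""
    sets = [frozenset(c) for c in cutsets]
    return not any(a < b for a in sets for b in sets)

def top_fails(tree, failed):
    """Bottom-up cutset-propagation test: evaluate the fault tree with
    exactly the cut set's events failed.  A proper cut set must drive
    the top event to True."""
    if isinstance(tree, str):          # leaf: basic event name
        return tree in failed
    gate, *children = tree
    results = (top_fails(child, failed) for child in children)
    return all(results) if gate == "AND" else any(results)
```

    Propagating each reported cut set through the tree, as above, is how one confirms that every output cutset actually fails the top event.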

  9. Probabilistic evaluation of near-field ground motions due to buried-rupture earthquakes caused by undefined faults

    International Nuclear Information System (INIS)

    Shohei Motohashi; Katsumi Ebisawa; Masaharu Sakagmi; Kazuo Dan; Yasuhiro Ohtsuka; Takao Kagawa

    2005-01-01

    The Nuclear Safety Commission of Japan has been reviewing the current Guideline for Earthquake Resistant Design of Nuclear Power Plants since July 2001. According to recent earthquake research, one of the main issues in the review is the design earthquake motion due to close-by earthquakes caused by undefined faults. This paper proposes a probabilistic method for covering variations of earthquake magnitude and location of undefined faults by a strong motion simulation technique based on fault models for scenario earthquakes, and describes probabilistic response spectra due to close-by scenario earthquakes caused by undefined faults. Horizontal uniform hazard spectra evaluated by a hybrid technique are compared with those evaluated by an empirical approach. The response spectra with a damping factor of 5% at 0.02 s simulated by the hybrid technique are about 160, 340, 570, and 800 cm/s/s for annual exceedance probabilities of 10^-3, 10^-4, 10^-5, and 10^-6, respectively, which are in good agreement with the response spectra evaluated by the empirical approach. It is also recognized that the response spectrum proposed by Kato et al. (2004) as the upper level of the strong motion records of buried-rupture earthquakes corresponded to the uniform hazard spectra between 10^-5 and 10^-4 in the period range shorter than 0.4 s. (authors)
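    Reading a response level off such a hazard curve at an arbitrary exceedance probability is typically done by log-log interpolation between the computed points. The interpolation scheme is our assumption, illustrated with the abstract's reported values:

```python
import math

def level_at_probability(hazard, p):
    """Log-log interpolation on a hazard curve given as (annual
    exceedance probability, response level) pairs sorted by
    decreasing probability."""
    for (p1, a1), (p2, a2) in zip(hazard, hazard[1:]):
        if p2 <= p <= p1:
            t = (math.log(p) - math.log(p1)) / (math.log(p2) - math.log(p1))
            return math.exp(math.log(a1) + t * (math.log(a2) - math.log(a1)))
    raise ValueError("probability outside curve range")

# Hazard curve from the abstract: 5%-damped response at 0.02 s, cm/s/s.
HAZARD = [(1e-3, 160.0), (1e-4, 340.0), (1e-5, 570.0), (1e-6, 800.0)]
```

    Hazard curves are close to straight lines in log-log space, which is why this interpolation is the conventional choice between computed probability levels.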

  10. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust affects groundwater flow and has attracted the research effort of both structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations, and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered, and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  11. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defective detector pixels, and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio, and a streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For quality checks, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made against the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction, such as massive undersampling of the number of projections. Errors in projection matrix parameters of up to 1° projection angle deviation are still within the tolerance level. Single defective pixels exhibit ring artifacts for each method; however, using defect pixel compensation allows up to 40% defective pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low-photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose imaging, especially for daily patient localization in radiation therapy, is possible without changing the current hardware of the imaging system. (paper)
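    The basic error measures named above have standard definitions, sketched here on flattened voxel lists. The exact generalizations the paper applies for nonlinear reconstruction (and its streak indicator) are not reproduced:

```python
import math
from statistics import mean, pstdev

def mse(ref, img):
    """Mean squared error of an image against a reference (flat voxel lists)."""
    return sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)

def snr_db(ref, img):
    """Signal-to-noise ratio in dB, treating the reference as signal
    and (ref - img) as noise."""
    signal_power = sum(a * a for a in ref) / len(ref)
    return 10.0 * math.log10(signal_power / mse(ref, img))

def cnr(roi, background):
    """Contrast-to-noise ratio between a region of interest and a
    background region: contrast normalized by background noise."""
    return abs(mean(roi) - mean(background)) / pstdev(background)
```

    These are the linear-signal-theory quantities that the paper then generalizes so they remain meaningful for a nonlinear iterative reconstruction.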

  12. H infinity Integrated Fault Estimation and Fault Tolerant Control of Discrete-time Piecewise Linear Systems

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Bak, Thomas

    2012-01-01

    In this paper we consider the problem of fault estimation and accommodation for discrete time piecewise linear systems. A robust fault estimator is designed to estimate the fault such that the estimation error converges to zero and H∞ performance of the fault estimation is minimized. Then, the es...

  13. A new robustness analysis for climate policy evaluations: A CGE application for the EU 2020 targets

    International Nuclear Information System (INIS)

    Hermeling, Claudia; Löschel, Andreas; Mennel, Tim

    2013-01-01

    This paper introduces a new method for stochastic sensitivity analysis of computable general equilibrium (CGE) models based on Gauss quadrature and applies it to check the robustness of a large-scale climate policy evaluation. The revised Gauss-quadrature approach to sensitivity analysis reduces computation considerably vis-à-vis the commonly applied Monte-Carlo methods; this makes stochastic sensitivity analysis feasible even for large-scale models and multi-dimensional parameter changes. In the application, an impact assessment of EU 2020 climate policy, we focus on sectoral elasticities that are part of the basic parameters of the model and have recently been determined by econometric estimation, along with their standard errors. The impact assessment is based on the large-scale CGE model PACE. We show the applicability of the Gauss-quadrature approach and confirm the robustness of the impact assessment with the PACE model. The variance of the central model outcomes is smaller than their mean by four to eight orders of magnitude, depending on the aggregation level (i.e. aggregate variables such as GDP show a smaller variance than sectoral output). - Highlights: ► New, simplified method for stochastic sensitivity analysis for CGE analysis. ► Gauss quadrature with orthogonal polynomials. ► Application to climate policy—the case of the EU 2020 targets
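
    The computational idea behind the Gauss-quadrature approach, evaluating the model at a few deterministically chosen parameter nodes instead of thousands of Monte-Carlo draws, can be sketched for a single Gaussian parameter. The model function below is an illustrative stand-in, not the PACE model:

```python
import numpy as np

def gauss_quad_moments(f, mu, sigma, order=5):
    """Mean and variance of f(X) for X ~ N(mu, sigma^2), via Gauss-Hermite
    quadrature: only 'order' model evaluations instead of a Monte-Carlo sample."""
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    vals = f(mu + np.sqrt(2.0) * sigma * nodes)        # change of variables
    mean = np.sum(weights * vals) / np.sqrt(np.pi)
    second = np.sum(weights * vals ** 2) / np.sqrt(np.pi)
    return float(mean), float(second - mean ** 2)

# Toy "model output" as a nonlinear function of one estimated elasticity
mean, var = gauss_quad_moments(lambda e: 100.0 / (1.0 + e), mu=2.0, sigma=0.2)
```

With only five model runs this recovers the output mean and a variance far smaller than the mean, the same qualitative pattern the paper reports.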

  14. Working Conditions-Aware Fault Injection Technique

    OpenAIRE

    Alouani , Ihsen; Niar , Smail; Jemai , Mohamed; Kuradi , Fadi; Abid , Mohamed

    2012-01-01

    International audience; With new integration rates, circuit sensitivity to environmental and working conditions has increased dramatically. Designing reliable, energy-efficient and error-resilient architectures has therefore become one of the major problems to address. Besides, evaluating the robustness and effectiveness of the proposed architectures is also an urgent need. In this paper, we present an extension of the SimpleScalar simulation tool with the ability to inject faults in a given...

  15. Evaluation of the robustness of estimating five components from a skin spectral image

    Science.gov (United States)

    Akaho, Rina; Hirose, Misa; Tsumura, Norimichi

    2018-04-01

    We evaluated the robustness of a method used to estimate five components (i.e., melanin, oxy-hemoglobin, deoxy-hemoglobin, shading, and surface reflectance) from the spectral reflectance of skin at five wavelengths against noise and a change in epidermis thickness. We also estimated the five components from recorded images of age spots and circles under the eyes using the method. We found that noise in the image must be no more than 0.1% to accurately estimate the five components and that the thickness of the epidermis affects the estimation. We acquired the distribution of major causes of age spots and circles under the eyes by applying the method to recorded spectral images.

  16. A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)

    Science.gov (United States)

    Li, Minghui; Hayward, Gordon

    2017-02-01

    The matched filter has been demonstrated to be a powerful yet efficient technique for enhancing defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse grain materials, provided that the filter is properly designed and optimized. In the literature, in order to accurately approximate the defect echoes, the design utilized the real excitation signals, which made it time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using simulated excitation signals; the control parameters are chosen and optimized based on the real scenario of the array transducer, the transmitter-receiver system response, and the test sample. As a result, the filter response is optimized for the material characteristics. Experiments on industrial samples are conducted and the results confirm the great benefits of the method.
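
    The idea of building the matched filter from a simulated excitation signal rather than from measured echoes can be illustrated with a plain cross-correlation sketch. The toneburst parameters, noise model and amplitudes below are assumptions for illustration, not the authors' settings:

```python
import numpy as np

def matched_filter(trace, template):
    """Correlate an A-scan with a unit-norm excitation template."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    return np.correlate(trace, t, mode="same")

fs = 100e6                                   # assumed sampling rate, Hz
t_ax = np.arange(40) / fs
template = np.sin(2 * np.pi * 5e6 * t_ax) * np.hanning(40)  # simulated toneburst

rng = np.random.default_rng(1)
trace = 0.5 * rng.standard_normal(400)       # grain-like background noise
trace[200:240] += 2.0 * template             # defect echo buried in the noise
out = matched_filter(trace, template)
peak = int(np.argmax(np.abs(out)))           # index of the enhanced echo
```

The correlation output peaks at the centre of the hidden echo, which is the SNR gain the matched filter provides.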

  17. Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-01-01

    Full Text Available By simplifying the tolerance problem and treating faulty voltages on different test points as independent variables, the integer-coded table technique simplifies the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption leads to an overly conservative result. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated using the ambiguity sets and the faulty voltage distribution determined by component tolerance. Second, the selected optimal test point is used to expand the current graph node by using the dependency relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; therefore, it is a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
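
    The entropy criterion for ranking candidate test points can be sketched directly: a test point partitions the remaining fault candidates into ambiguity sets, and the partition with the highest entropy separates the faults most evenly. The fault labels and sets below are hypothetical:

```python
import math

def partition_entropy(groups):
    """Entropy of the ambiguity-set partition induced by a test point;
    a higher value means the test point discriminates faults better."""
    total = sum(len(g) for g in groups)
    return -sum((len(g) / total) * math.log2(len(g) / total)
                for g in groups if g)

# Hypothetical ambiguity sets over faults f1..f6 for two candidate test points
tp_a = [{"f1", "f2", "f3"}, {"f4", "f5", "f6"}]       # 3/3 split: 1.0 bit
tp_b = [{"f1"}, {"f2"}, {"f3", "f4", "f5", "f6"}]     # 1/1/4 split: ~1.25 bits
best = max([tp_a, tp_b], key=partition_entropy)       # greedy pick: tp_b
```

A greedy search repeats this selection on each ambiguity set until all faults are isolated, which is the graph-expansion step the abstract describes.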

  18. Evaluation of Robustness to Setup and Range Uncertainties for Head and Neck Patients Treated With Pencil Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Malyapa, Robert [Centre for Proton Radiotherapy, PSI (Switzerland); Lowe, Matthew [Manchester Academic Health Science Centre, Faculty of Medical and Human Sciences, University of Manchester (United Kingdom); Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester (United Kingdom); Bolsi, Alessandra; Lomax, Antony J. [Centre for Proton Radiotherapy, PSI (Switzerland); Weber, Damien C. [University of Zürich, Zürich (Switzerland); Albertini, Francesca, E-mail: francesca.albertini@psi.ch [Centre for Proton Radiotherapy, PSI (Switzerland)

    2016-05-01

    Purpose: To evaluate the robustness of head and neck plans for treatment with intensity modulated proton therapy to range and setup errors, and to establish robustness parameters for the planning of future head and neck treatments. Methods and Materials: Ten patients previously treated were evaluated in terms of robustness to range and setup errors. Error bar dose distributions were generated for each plan, from which several metrics were extracted and used to define a robustness database of acceptable parameters over all analyzed plans. The patients were treated in sequentially delivered series, and plans were evaluated for both the first series and for the combined error over the whole treatment. To demonstrate the application of such a database in the head and neck, for 1 patient, an alternative treatment plan was generated using a simultaneous integrated boost (SIB) approach and plans of differing numbers of fields. Results: The robustness database for the treatment of head and neck patients is presented. In an example case, comparison of single and multiple field plans against the database show clear improvements in robustness by using multiple fields. A comparison of sequentially delivered series and an SIB approach for this patient show both to be of comparable robustness, although the SIB approach shows a slightly greater sensitivity to uncertainties. Conclusions: A robustness database was created for the treatment of head and neck patients with intensity modulated proton therapy based on previous clinical experience. This will allow the identification of future plans that may benefit from alternative planning approaches to improve robustness.

  19. Evaluation of Robustness to Setup and Range Uncertainties for Head and Neck Patients Treated With Pencil Beam Scanning Proton Therapy

    International Nuclear Information System (INIS)

    Malyapa, Robert; Lowe, Matthew; Bolsi, Alessandra; Lomax, Antony J.; Weber, Damien C.; Albertini, Francesca

    2016-01-01

    Purpose: To evaluate the robustness of head and neck plans for treatment with intensity modulated proton therapy to range and setup errors, and to establish robustness parameters for the planning of future head and neck treatments. Methods and Materials: Ten patients previously treated were evaluated in terms of robustness to range and setup errors. Error bar dose distributions were generated for each plan, from which several metrics were extracted and used to define a robustness database of acceptable parameters over all analyzed plans. The patients were treated in sequentially delivered series, and plans were evaluated for both the first series and for the combined error over the whole treatment. To demonstrate the application of such a database in the head and neck, for 1 patient, an alternative treatment plan was generated using a simultaneous integrated boost (SIB) approach and plans of differing numbers of fields. Results: The robustness database for the treatment of head and neck patients is presented. In an example case, comparison of single and multiple field plans against the database show clear improvements in robustness by using multiple fields. A comparison of sequentially delivered series and an SIB approach for this patient show both to be of comparable robustness, although the SIB approach shows a slightly greater sensitivity to uncertainties. Conclusions: A robustness database was created for the treatment of head and neck patients with intensity modulated proton therapy based on previous clinical experience. This will allow the identification of future plans that may benefit from alternative planning approaches to improve robustness.

  20. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

    Science.gov (United States)

    Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

    2011-04-01

    This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database into 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.
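
    The 3-subset split inside an outer 10-fold cross-validation can be sketched as follows; the fold construction is generic, and the network training itself is elided:

```python
import random

def k_fold_indices(n, k=10, seed=42):
    """Split n sample indices into k mutually exclusive folds (repeatable)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(186, k=10)       # the 186 mapped landslides
for held_out in range(10):
    test_idx = set(folds[held_out])
    train_val_idx = [i for f, fold in enumerate(folds) if f != held_out
                     for i in fold]
    # split train_val_idx further into a training set (weight updates) and a
    # validation set (early stopping), then evaluate the network on test_idx
```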

  1. A critical evaluation of worst case optimization methods for robust intensity-modulated proton therapy planning

    International Nuclear Information System (INIS)

    Fredriksson, Albin; Bokrantz, Rasmus

    2014-01-01

    Purpose: To critically evaluate and compare three worst case optimization methods that have previously been employed to generate intensity-modulated proton therapy treatment plans that are robust against systematic errors. The goal of the evaluation is to identify circumstances when the methods behave differently and to describe the mechanism behind the differences when they occur. Methods: The worst case methods optimize plans to perform as well as possible under the worst case scenario that can physically occur (composite worst case), the combination of the worst case scenarios for each objective constituent considered independently (objectivewise worst case), and the combination of the worst case scenarios for each voxel considered independently (voxelwise worst case). These three methods were assessed with respect to treatment planning for prostate under systematic setup uncertainty. An equivalence with probabilistic optimization was used to identify the scenarios that determine the outcome of the optimization. Results: If the conflict between target coverage and normal tissue sparing is small and no dose-volume histogram (DVH) constraints are present, then all three methods yield robust plans. Otherwise, they all have their shortcomings: Composite worst case led to unnecessarily low plan quality in boundary scenarios that were less difficult than the worst case ones. Objectivewise worst case generally led to nonrobust plans. Voxelwise worst case led to overly conservative plans with respect to DVH constraints, which resulted in excessive dose to normal tissue, and less sharp dose fall-off than the other two methods. Conclusions: The three worst case methods have clearly different behaviors. These behaviors can be understood from which scenarios are active in the optimization. No particular method is superior to the others under all circumstances: composite worst case is suitable if the conflicts are not very severe or there are DVH constraints whereas
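
    The composite worst case is a minimax over physically realizable scenarios. A one-dimensional toy sketch, where the objective function and scenario set are illustrative stand-ins rather than a treatment-planning model:

```python
import numpy as np

# Choose a scalar "plan" x minimising the worst objective value over a
# discrete set of systematic setup-error scenarios s (composite worst case).
scenarios = [-0.3, 0.0, 0.3]                 # assumed systematic shifts

def objective(x, s):
    """Target-coverage penalty, displaced by the scenario error."""
    return (x - 1.0 - s) ** 2

xs = np.linspace(0.0, 2.0, 401)
worst = np.max([[objective(x, s) for s in scenarios] for x in xs], axis=1)
x_composite = xs[int(np.argmin(worst))]      # hedges against both extreme shifts
```

Here the minimax solution sits at x = 1.0, midway between the two extreme scenarios; the objectivewise and voxelwise variants differ in how the inner maximum is decomposed before minimising.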

  2. Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.

    Science.gov (United States)

    Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C

    2015-06-01

    An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are seeking more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, here ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers in developing more refined and adequate control activities against FMD outbreaks and in finding optimum strategies for rapid control. Further application using the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.

  3. Evaluation of robustness in the validation of total organic carbon (TOC) methodology

    International Nuclear Information System (INIS)

    Benedetti, Stella; Monteiro, Elisiane G.; Almeida, Erika V.; Oliveira, Ideli M.; Cerqueira Filho, Ademar C.; Mengatti, Jair; Fukumori, Neuza T.O.; Matsuda, Margareth M.N.

    2009-01-01

    Water is used in many steps of production and quality control, as raw material for reagent preparation or dilution of solutions, and for cleaning apparatus and room areas in the pharmaceutical industry, including radiopharmaceutical plants. Regulatory requirements establish specifications of purified water for different purposes. The quality of water is essential to guarantee the safe utilization of radiopharmaceuticals. A variety of methods and systems can be used to produce purified water and water for injection, and all of them must fulfill the requirements for their specific use, which includes TOC (total organic carbon) analysis, an indirect measurement of the organic molecules present in water. The principle of the TOC method is the oxidation of organic molecules to carbon dioxide, related to the carbon concentration. The aim of this study was to evaluate the robustness parameters of the TOC method for water used in the production and quality control procedures of the Radiopharmacy Directory (DIRF), according to Resolution 899 from ANVISA (National Sanitary Agency). Purified water was obtained from a Milli-RX45 system. TOC standard solutions in the range of 100-1000 ppb were prepared with anhydrous potassium hydrogen phthalate, transferred to vials and sequentially analyzed by a catalytic photo-oxidation reaction with a Shimadzu TOC-VWP analyzer (Shimadzu Corporation, Japan). The evaluated parameters were: oxidizing volume from 0.5 to 2.5 mL, acidifying volume from 1 to 5%, and integration time for TC (total carbon) and IC (inorganic carbon) curves from 2 to 10 minutes. (author)

  4. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks.

    Science.gov (United States)

    Li, Yunji; Wu, QingE; Peng, Li

    2018-01-23

    In this paper, a synthesized design of a fault-detection filter and a fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of randomly occurring deception attacks. To obtain a fault-detection residual that is sensitive only to faults while robust to disturbances, a coordinate transformation approach is exploited. This approach transforms the considered system into two subsystems, with the unknown disturbances removed from one of the subsystems. The gain of the fault-detection filter is derived by minimizing an upper bound of the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as to guarantee the fault estimator performance. Furthermore, the corresponding event-triggered sensor data transmission scheme is also presented to improve the working life of the wireless sensor node, as measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, a remote estimator and a wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault detection is guaranteed when the event condition is triggered.
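
    The two transmission-side ingredients, an event trigger that sends a measurement only when it deviates enough from the last transmitted value, and a Bernoulli-distributed deception attack on what is sent, can be sketched as follows. The threshold, attack probability and attack bias are illustrative assumptions:

```python
import random

def event_triggered_stream(measurements, delta, attack_prob, seed=0):
    """Transmit a measurement only when it deviates from the last transmitted
    value by more than delta; each transmission may be corrupted by a randomly
    occurring (Bernoulli) deception attack injecting an assumed bias."""
    rng = random.Random(seed)
    sent, last = [], None
    for y in measurements:
        if last is None or abs(y - last) > delta:
            attacked = rng.random() < attack_prob    # Bernoulli attack indicator
            last = y
            sent.append(y + (0.5 if attacked else 0.0))
    return sent

ys = [0.0, 0.01, 0.3, 0.31, 0.8, 0.82, 1.5]
print(len(event_triggered_stream(ys, delta=0.2, attack_prob=0.1)))  # 4 of 7 sent
```

The filter and estimator then have to reconstruct faults from this sparse, possibly corrupted stream, which is what the covariance-minimising gain design addresses.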

  5. Effect Evaluation of Fault Resistance on the Operating Behavior of a Distance Relay

    Directory of Open Access Journals (Sweden)

    K. H. Le

    2018-06-01

    Full Text Available This paper presents an application of a distance protection relay with a quadrilateral characteristic for the protection of the 110 kV Duy Xuyen - Thang Binh transmission line in Vietnam, using measured data from one line terminal. We propose a Matlab/Simulink model for this relay that combines a fault detection and classification block, an apparent impedance calculation block for all fault types, and a trip logic block implementing three-zone protection coordination. The proposed relay model is then tested using various fault scenarios on the transmission line. It is important to assess what happened, the actual conditions, the causes of mal-operation, and so on. The detailed explanations and results indicate that the proposed model helps users perform tests that correctly simulate real-world conditions, properly interpret test results, and troubleshoot distance function problems when results are not as expected.
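
    The core of the impedance-calculation and trip-logic blocks can be sketched with phasors: the relay measures Z = V/I and trips when Z falls inside the characteristic. The phasor values and reach settings below are hypothetical, and the characteristic is deliberately simplified (real quadrilateral elements also use tilted reactance and directional lines):

```python
import cmath

def apparent_impedance(v_phasor, i_phasor):
    """Apparent impedance seen by the relay from one-terminal phasors."""
    return v_phasor / i_phasor

def in_quadrilateral_zone(z, r_reach, x_reach):
    """Simplified quadrilateral characteristic: trip if the measured R and X
    both lie within the zone reaches."""
    return 0.0 <= z.real <= r_reach and 0.0 <= z.imag <= x_reach

# Hypothetical secondary-side phasors during a fault; fault resistance adds
# a resistive component (larger R) to the measured impedance
v = cmath.rect(40.0, cmath.pi / 3)    # 40 V at 60 deg
i = cmath.rect(5.0, cmath.pi / 12)    # 5 A at 15 deg
z = apparent_impedance(v, i)          # 8 ohm at 45 deg: R = X = 5.66 ohm
print(in_quadrilateral_zone(z, r_reach=10.0, x_reach=8.0))  # True
```

The quadrilateral shape exists precisely because fault resistance pushes the measured point to the right along the R axis, which a plain mho circle would miss.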

  6. Reliability database development for use with an object-oriented fault tree evaluation program

    Science.gov (United States)

    Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann

    1989-01-01

    A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed, or are under development, to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.

  7. Fault Management Technologies - Metrics Evaluation and V&V, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Functional robustness, resulting from superior engineering design, along with appropriate and timely mitigating actions, is a key enabler for satisfying complex...

  8. Fault Management Technologies - Metrics Evaluation and V&V, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Functional robustness, resulting from superior engineering design, along with appropriate and timely mitigating actions, is a key enabler for satisfying complex...

  9. Some fundamental aspects of fault-tree and digraph-matrix relationships for a systems-interaction evaluation procedure

    International Nuclear Information System (INIS)

    Alesso, H.P.

    1982-01-01

    Recent events, such as Three Mile Island-2, Browns Ferry-3, and Crystal River-3, have demonstrated that complex accidents can occur as a result of dependent (common-cause/mode) failures. These events are now being called Systems Interactions. A procedure for the identification and evaluation of Systems Interactions is being developed by the NRC. Several national laboratories and utilities have contributed preliminary procedures. As a result, there are several important views of the Systems Interaction problem. This report reviews some fundamental mathematical background of both fault-oriented and success-oriented risk analyses in order to bring out the advantages and disadvantages of each. In addition, it outlines several fault-oriented/dependency analysis approaches and several success-oriented/digraph-matrix approaches. The objective is to obtain a broad perspective of present options for solving the Systems Interaction problem

  10. Introduction to fault tree analysis

    International Nuclear Information System (INIS)

    Barlow, R.E.; Lambert, H.E.

    1975-01-01

    An elementary, engineering-oriented introduction to fault tree analysis is presented. The basic concepts, techniques and applications of fault tree analysis (FTA) are described. The two major steps of FTA are identified as (1) the construction of the fault tree and (2) its evaluation. The evaluation of the fault tree can be qualitative or quantitative depending upon the scope, extensiveness and use of the analysis. The advantages, limitations and usefulness of FTA are discussed
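
    The quantitative evaluation step reduces, in the simplest case of independent basic events, to propagating probabilities through AND/OR gates. A minimal sketch with an illustrative top event (the event names and probabilities are made up):

```python
def gate_or(probs):
    """OR gate: the output event occurs if any input event occurs
    (independent basic events assumed)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def gate_and(probs):
    """AND gate: the output event occurs only if all input events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Top event: cooling fails if power fails OR both redundant pumps fail
p_power = 1e-3
p_pump = 1e-2
p_top = gate_or([p_power, gate_and([p_pump, p_pump])])
print(p_top)  # ~1.1e-3: dominated by the single-point power failure
```

The qualitative step, finding minimal cut sets, identifies that {power failure} is a single-point cut set here, which the probability calculation then confirms is the dominant contributor.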

  11. Fractal dimension analysis for robust ultrasonic non-destructive evaluation (NDE) of coarse grained materials

    Science.gov (United States)

    Li, Minghui; Hayward, Gordon

    2018-04-01

    Over recent decades, there has been a growing demand for reliable and robust non-destructive evaluation (NDE) of structures and components made from coarse grained materials such as alloys, stainless steels, carbon-reinforced composites and concrete. However, when these are inspected using ultrasound, the flaw echoes are usually contaminated by high-level, time-invariant, and correlated grain noise originating from the microstructure and grain boundaries, leading to a very low signal-to-noise ratio (SNR), with the flaw information obscured or completely hidden by the grain noise. In this paper, fractal dimension analysis of the A-scan echoes is investigated as a measure of the complexity of the time series, to distinguish echoes originating from real defects from grain noise; the normalized fractal dimension coefficients are then applied to the amplitudes as weighting factors to enhance the SNR and defect detection. Experiments on industrial samples of mild steel and stainless steel are conducted and the results confirm the great benefits of the method.
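
    The intuition is that noise-like signal segments have a fractal dimension near 2 while coherent echoes are closer to 1, so the dimension can separate the two. A sketch using the Higuchi estimator, one common fractal-dimension estimator for time series (the paper does not specify which estimator it uses):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal: slope of log curve length
    against log(1/k) over coarse-graining scales k."""
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)             # subsampled curve
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # curve-length normalisation
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    slope = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return float(slope[0])

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(1000)))              # noise-like: ~2
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1000))))  # smooth echo: ~1
```

Normalising such coefficients and multiplying them into the A-scan amplitudes suppresses the grain-noise regions relative to the defect echoes.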

  12. Biological dosimetry intercomparison exercise: an evaluation of Triage and routine mode results by robust methods

    International Nuclear Information System (INIS)

    Di Giorgio, M.; Vallerga, M.B.; Radl, A.; Taja, M.R.; Barquinero, J.F.; Seoane, A.; De Luca, J.; Guerrero Carvajal, Y.C.; Stuck Oliveira, M.S.; Valdivia, P.; García Lima, O.; Lamadrid, A.; González Mesa, J.; Romero Aguilera, I.; Mandina Cardoso, T.; Arceo Maldonado, C.; Espinoza, M.E.; Martínez López, W.; Lloyd, D.C.; Méndez Acuña, L.; Di Tomaso, M.V.; Roy, L.; Lindholm, C.; Romm, H.; Güçlü, I.

    2011-01-01

    Well-defined protocols and quality management standards are indispensable for biological dosimetry laboratories. Participation in periodic proficiency testing by interlaboratory comparisons is also required. This harmonization is essential if a cooperative network is used to respond to a mass casualty event. Here we present an international intercomparison based on dicentric chromosome analysis for dose assessment, performed in the framework of the IAEA Regional Latin American RLA/9/054 Project. The exercise involved 14 laboratories, 8 from Latin America and 6 from Europe. The performance of each laboratory and the reproducibility of the exercise were evaluated using robust methods described in ISO standards. The study was based on the analysis of slides from samples irradiated with 0.75 Gy (DI) and 2.5 Gy (DII). Laboratories were required to score the frequency of dicentrics and convert it to estimated doses, using their own dose-effect curves, after the analysis of 50 or 100 cells (triage mode) and after conventional scoring of 500 cells or 100 dicentrics. In the conventional scoring, at both doses, all reported frequencies were considered satisfactory, and two reported doses were considered questionable. The analysis of the data dispersion among the dicentric frequencies and among doses indicated a better reproducibility for estimated doses (15.6% for DI and 8.8% for DII) than for frequencies (24.4% for DI and 11.4% for DII), expressed by the coefficient of variation. In the two triage modes, although robust analysis classified some reported frequencies or doses as unsatisfactory or questionable, all estimated doses were in agreement with the accepted error of ±0.5 Gy. However, at the DI dose and for 50 scored cells, 5 of the 14 reported confidence intervals included zero dose and could be interpreted as false negatives. This improved with 100 cells, where only one confidence interval included zero dose. At the DII dose, all estimations fell within
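
    The robust dispersion measure used in such intercomparisons resists outlying laboratories by replacing the mean and standard deviation with robust estimators. A minimal sketch with hypothetical reported doses; ISO-style robust analysis typically uses Algorithm A of ISO 13528, for which the median/MAD pair below is a simpler stand-in with the same intent:

```python
import statistics

def robust_cv(values):
    """Coefficient of variation (%) from robust estimators: median for
    location, scaled MAD for spread (1.4826 rescales the MAD so it matches
    the standard deviation for normally distributed data)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return 100.0 * 1.4826 * mad / med

# Hypothetical doses (Gy) reported by laboratories for the 0.75 Gy sample
doses = [0.70, 0.74, 0.75, 0.78, 0.80, 0.72, 0.77, 0.83, 0.69, 0.76]
print(round(robust_cv(doses), 1))   # a few percent, as in the exercise
```

A single grossly wrong laboratory would barely move this statistic, which is why robust methods are preferred for scoring proficiency exercises.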

  13. Optimal design of modular cogeneration plants for hospital facilities and robustness evaluation of the results

    International Nuclear Information System (INIS)

    Gimelli, A.; Muccillo, M.; Sannino, R.

    2017-01-01

    Highlights: • A specific methodology has been set up based on a genetic optimization algorithm. • Results highlight a trade-off between total primary energy savings (TPES) and simple payback (SPB). • Optimized plant configurations show TPES exceeding 18% and an SPB of approximately three years. • The study aims to identify the most stable plant solutions through robust design optimization. • The research shows how a deterministic definition of the decision variables could lead to an overestimation of the results. - Abstract: The widespread adoption of combined heat and power generation is widely recognized as a strategic goal to achieve significant primary energy savings and lower carbon dioxide emissions. In this context, the purpose of this research is to evaluate the potential of cogeneration based on reciprocating gas engines for some Italian hospital buildings. Comparative analyses have been conducted based on the load profiles of two specific hospital facilities and through the study of the cogeneration system-user interaction. To this end, a specific methodology has been set up by coupling a specifically developed calculation algorithm to a genetic optimization algorithm, and a multi-objective approach has been adopted. The results from the optimization problem highlight a clear trade-off between total primary energy savings (TPES) and the simple payback period (SPB). Optimized plant configurations and management strategies show TPES exceeding 18% for the reference hospital facilities and multi-gas-engine solutions, along with a minimum SPB of approximately three years, thereby justifying the European regulation promoting cogeneration. However, designing a CHP plant for a specific energy, legislative or market scenario does not guarantee good performance when these scenarios change. For this reason, the proposed methodology has been enhanced in order to focus on some innovative aspects. In particular, this study proposes an uncommon and effective approach
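
    The multi-objective outcome of such an optimisation is a Pareto front over the TPES/SPB trade-off. A sketch of the non-dominated filter with hypothetical plant configurations (all names and values below are illustrative, not results from the paper):

```python
def pareto_front(configs):
    """Non-dominated CHP configurations: maximise TPES (%), minimise SPB
    (years). A config is dominated if another is at least as good on both
    objectives and strictly better on one."""
    front = []
    for c in configs:
        dominated = any(
            o["tpes"] >= c["tpes"] and o["spb"] <= c["spb"]
            and (o["tpes"] > c["tpes"] or o["spb"] < c["spb"])
            for o in configs
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical candidate plant configurations
configs = [
    {"name": "1 engine",  "tpes": 12.0, "spb": 2.8},
    {"name": "2 engines", "tpes": 18.5, "spb": 3.1},
    {"name": "3 engines", "tpes": 19.0, "spb": 4.5},
    {"name": "oversized", "tpes": 17.0, "spb": 5.0},  # dominated by "2 engines"
]
print([c["name"] for c in pareto_front(configs)])
```

The robust-design step then asks which point on this front stays near-optimal when the energy prices or load profiles behind TPES and SPB are perturbed.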

  14. Fault diagnosis and fault-tolerant control and guidance for aerospace vehicles from theory to application

    CERN Document Server

    Zolghadri, Ali; Cieslak, Jerome; Efimov, Denis; Goupil, Philippe

    2014-01-01

    Fault Diagnosis and Fault-Tolerant Control and Guidance for Aerospace demonstrates the attractive potential of recent developments in control for resolving such issues as improved flight performance, self-protection and extended life of structures. Importantly, the text deals with a number of practically significant considerations: tuning, complexity of design, real-time capability, evaluation of worst-case performance, robustness in harsh environments, and extensibility when development or adaptation is required. Coverage of such issues helps to draw the advanced concepts arising from academic research back towards the technological concerns of industry. Initial coverage of basic definitions and ideas and a literature review gives way to a treatment of important electrical flight control system failures: the oscillatory failure case, runaway, and jamming. Advanced fault detection and diagnosis for linear and nonlinear systems are described. Lastly, recovery strategies appropriate to remaining actuator/sensor/c...

  15. Evaluation of the contribution of license renewal of nuclear power plants to fault reduction in the U.S

    International Nuclear Information System (INIS)

    Chiba, Goro

    2008-01-01

    Although nuclear power plants in the U.S. were originally permitted to operate for 40 years, the operating periods of many plants have been extended by license renewal for another 20 years. On the other hand, plant life management of nuclear power plants in Japan is carried out assuming long-term operation: the licensee submits aging technology assessment reports before the plant has been operating commercially for 30 years, and then every ten years thereafter, and receives an evaluation by the authorities. In this paper, trend analysis using the INSS database on faults at nuclear power plants overseas, the state of implementation of relevant aging management programs, and the effects of license renewal on preservation activities are examined. It is shown that the aging management programs identified many of the cases involving fatigue, FAC, and closed-cycle cooling systems, and that these have been addressed. Analysis of the number of faults for each unit shows that the number of aging-related faults tends to decrease after application for license renewal. Therefore, the U.S. license renewal system is considered to be effective for plant life management, and hence plant life management in Japan, which is substantially equivalent to the U.S. system, is valid. (author)

  16. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis...... problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation (FE)) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems. Copyright...
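The fault estimation idea behind this record can be illustrated with a textbook observer-based residual (a generic sketch, not the paper's standard-problem synthesis); the matrices below are invented for illustration:

```python
import numpy as np

# Generic observer-based residual for fault detection.
# Plant: x+ = A x + B u + F f,  y = C x, with an additive fault f.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[1.0], [0.0]])
L_gain = np.array([[0.5], [0.2]])  # chosen so A - L C is stable

x = np.zeros((2, 1))   # true plant state
xh = np.zeros((2, 1))  # observer estimate
residuals = []
for k in range(40):
    u = np.array([[1.0]])
    f = np.array([[1.0]]) if k >= 20 else np.array([[0.0]])  # fault at k = 20
    y = C @ x
    r = (y - C @ xh).item()  # residual: zero until the fault acts
    residuals.append(r)
    x = A @ x + B @ u + F @ f
    xh = A @ xh + B @ u + L_gain @ (y - C @ xh)

print(max(abs(r) for r in residuals[:20]))  # ≈ 0 before the fault
print(max(abs(r) for r in residuals[20:]))  # clearly nonzero after
```

The standard-problem approach in the record goes further, choosing the filter gain by optimization so the residual stays small under model uncertainty while remaining sensitive to faults.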

  17. Faults Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  18. Algorithms and programs for evaluating fault trees with multi-state components

    International Nuclear Information System (INIS)

    Wickenhaeuser, A.

    1989-07-01

    Parts 1 and 2 of the report contain a summary overview of methods and algorithms for the solution of fault tree analysis problems. The following points are treated in detail: treatment of fault tree components with more than two states; acceleration of the solution algorithms; decomposition and modularization of extensive systems; calculation of the structural function and the exact occurrence probability; and treatment of statistical dependencies. A flexible tool to be employed in solving these problems is the method of forming Boolean variables with restrictions. In this way, components with more than two states can be treated, the possibilities of forming modules expanded, and statistical dependencies treated. Part 3 contains descriptions of the MUSTAFA, MUSTAMO, PASPI, and SIMUST computer programs based on these methods. (orig./HP) [de
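The multi-state idea can be sketched minimally: a hypothetical three-state valve is encoded as mutually exclusive states (the "Boolean variables with restrictions" of the report), and the exact top-event probability is obtained by enumerating state combinations. This is an illustration only, not the MUSTAFA/MUSTAMO algorithms, and the component data are invented:

```python
from itertools import product

# Hypothetical 3-state valve plus a 2-state pump: multi-state components
# modeled as mutually exclusive Boolean indicators whose probabilities
# sum to 1 within each component.
valve_probs = {"ok": 0.98, "stuck_open": 0.015, "stuck_closed": 0.005}
pump_probs = {"ok": 0.99, "failed": 0.01}

def top_event(valve, pump):
    # Illustrative structure function: the system fails if the pump
    # fails or the valve is stuck closed.
    return pump == "failed" or valve == "stuck_closed"

# Exact occurrence probability by enumerating all state combinations.
p_top = sum(
    valve_probs[v] * pump_probs[p]
    for v, p in product(valve_probs, pump_probs)
    if top_event(v, p)
)
print(round(p_top, 6))  # → 0.01495
```

Real fault tree codes avoid full enumeration via modularization and cut-set methods, which is precisely the acceleration problem the report addresses.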

  19. Near-trench slip potential of megaquakes evaluated from fault properties and conditions

    Science.gov (United States)

    Hirono, Tetsuro; Tsuda, Kenichi; Tanikawa, Wataru; Ampuero, Jean-Paul; Shibazaki, Bunichiro; Kinoshita, Masataka; Mori, James J.

    2016-01-01

    Near-trench slip during large megathrust earthquakes (megaquakes) is an important factor in the generation of destructive tsunamis. We propose a new approach to assessing the near-trench slip potential quantitatively by integrating laboratory-derived properties of fault materials and simulations of fault weakening and rupture propagation. Although the permeability of the sandy Nankai Trough materials is higher than that of the clayey materials from the Japan Trench, dynamic weakening by thermally pressurized fluid is greater at the Nankai Trough owing to higher friction; however, initially overpressured fluid at the Nankai Trough restrains the fault weakening. Dynamic rupture simulations reproduced the large slip near the trench observed in the 2011 Tohoku-oki earthquake and predicted the possibility of a large slip of over 30 m for the impending megaquake at the Nankai Trough. Our integrative approach is applicable globally to subduction zones as a novel tool for the prediction of extreme tsunami-producing near-trench slip. PMID:27321861

  20. Reliability Evaluation Methodologies of Fault Tolerant Techniques of Digital I and C Systems in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Seong, Poong Hyun; Lee, Seung Jun

    2011-01-01

    Since the reactor protection system was replaced from analog to digital, the digital reactor protection system has four redundant channels, and each channel has several modules. Various fault-tolerant techniques are necessary to improve the availability and reliability of the digital plant protection system (DPPS), which uses complex components. Several studies have investigated the effects of fault-tolerant techniques; however, these effects have not yet been properly considered in most fault tree models. The various fault-tolerant techniques used in digital systems in NPPs should be reflected in fault tree analysis to obtain lower system unavailability and a more reliable PSA. When fault-tolerant techniques are modeled in a fault tree, the modules detected by each fault-tolerant technique, the fault coverage, the detection period, and the fault recovery should be considered. Further work will concentrate on various aspects of fault tree modeling: we will identify other important factors and develop a theory for constructing the fault tree model

  1. Design and evaluation of a robust dynamic neurocontroller for a multivariable aircraft control problem

    Science.gov (United States)

    Troudet, T.; Garg, S.; Merrill, W.

    1992-01-01

    The design of a dynamic neurocontroller with good robustness properties is presented for a multivariable aircraft control problem. The internal dynamics of the neurocontroller are synthesized by a state estimator feedback loop. The neurocontrol is generated by a multilayer feedforward neural network which is trained through backpropagation to minimize an objective function that is a weighted sum of tracking errors, and control input commands and rates. The neurocontroller exhibits good robustness through stability margins in phase and vehicle output gains. By maintaining performance and stability in the presence of sensor failures in the error loops, the structure of the neurocontroller is also consistent with the classical approach of flight control design.

  2. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
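The double-ended timing principle described above can be sketched as follows. This is the generic two-ended fault-location calculation under assumed ideal clock synchronization and a known surge velocity, not the patented device's algorithm:

```python
# Hedged sketch: locating a fault on a transmission line of length L
# from synchronized surge arrival times at the master and remote units,
# assuming a known surge propagation velocity v.
def fault_distance_from_master(L_m, v_mps, t_master_s, t_remote_s):
    # Arrival times are t0 + d/v at the master and t0 + (L - d)/v at
    # the remote unit, so d = (L + v * (t_master - t_remote)) / 2.
    return (L_m + v_mps * (t_master_s - t_remote_s)) / 2.0

# Example: 100 km line, surge speed ~2.9e8 m/s, fault 30 km from the master.
L, v, d = 100e3, 2.9e8, 30e3
t_master, t_remote = d / v, (L - d) / v  # fault instant taken as t0 = 0
print(fault_distance_from_master(L, v, t_master, t_remote))  # ≈ 30000 m
```

In practice the accuracy hinges on the noise filtering and clock synchronization emphasized in the abstract, since a 1 µs timing error maps to roughly 150 m of location error at these velocities.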

  3. User's manual of a computer code for seismic hazard evaluation for assessing the threat to a facility by fault model. SHEAT-FM

    International Nuclear Information System (INIS)

    Sugino, Hideharu; Onizawa, Kunio; Suzuki, Masahide

    2005-09-01

    To establish a reliability evaluation method for aged structural components, we developed a probabilistic seismic hazard evaluation code SHEAT-FM (Seismic Hazard Evaluation for Assessing the Threat to a facility site - Fault Model) using a seismic motion prediction method based on a fault model. In order to improve the seismic hazard evaluation, this code takes the latest knowledge in the field of earthquake engineering into account. For example, the code incorporates the group delay time of observed records and an update process model of active faults. This report describes the user's guide of SHEAT-FM, including the outline of the seismic hazard evaluation, specification of input data, a sample problem for a model site, system information, and the execution method. (author)

  4. Real-Time Risk and Fault Management in the Mission Evaluation Room for the International Space Station

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.; Novack, S.D.

    2003-05-30

    Effective anomaly resolution in the Mission Evaluation Room (MER) of the International Space Station (ISS) requires consideration of risk in the process of identifying faults and developing corrective actions. Risk models such as fault trees from the ISS Probabilistic Risk Assessment (PRA) can be used to support anomaly resolution, but the functionality required goes significantly beyond what the PRA could provide. Methods and tools are needed that can systematically guide the identification of root causes for on-orbit anomalies, and to develop effective corrective actions that address the event and its consequences without undue risk to the crew or the mission. In addition, an overall information management framework is needed so that risk can be systematically incorporated in the process, and effectively communicated across all the disciplines and levels of management within the space station program. The commercial nuclear power industry developed such a decision making framework, known as the critical safety function approach, to guide emergency response following the accident at Three Mile Island in 1979. This report identifies new methods, tools, and decision processes that can be used to enhance anomaly resolution in the ISS Mission Evaluation Room. Current anomaly resolution processes were reviewed to identify requirements for effective real-time risk and fault management. Experience gained in other domains, especially the commercial nuclear power industry, was reviewed to identify applicable methods and tools. Recommendations were developed for next-generation tools to support MER anomaly resolution, and a plan for implementing the recommendations was formulated. The foundation of the proposed tool set will be a ''Mission Success Framework'' designed to integrate and guide the anomaly resolution process, and to facilitate consistent communication across disciplines while focusing on the overriding importance of mission success.

  5. Real-Time Risk and Fault Management in the Mission Evaluation Room of the International Space Station

    Energy Technology Data Exchange (ETDEWEB)

    William R. Nelson; Steven D. Novack

    2003-05-01

    Effective anomaly resolution in the Mission Evaluation Room (MER) of the International Space Station (ISS) requires consideration of risk in the process of identifying faults and developing corrective actions. Risk models such as fault trees from the ISS Probabilistic Risk Assessment (PRA) can be used to support anomaly resolution, but the functionality required goes significantly beyond what the PRA could provide. Methods and tools are needed that can systematically guide the identification of root causes for on-orbit anomalies, and to develop effective corrective actions that address the event and its consequences without undue risk to the crew or the mission. In addition, an overall information management framework is needed so that risk can be systematically incorporated in the process, and effectively communicated across all the disciplines and levels of management within the space station program. The commercial nuclear power industry developed such a decision making framework, known as the critical safety function approach, to guide emergency response following the accident at Three Mile Island in 1979. This report identifies new methods, tools, and decision processes that can be used to enhance anomaly resolution in the ISS Mission Evaluation Room. Current anomaly resolution processes were reviewed to identify requirements for effective real-time risk and fault management. Experience gained in other domains, especially the commercial nuclear power industry, was reviewed to identify applicable methods and tools. Recommendations were developed for next-generation tools to support MER anomaly resolution, and a plan for implementing the recommendations was formulated. The foundation of the proposed toolset will be a "Mission Success Framework" designed to integrate and guide the anomaly resolution process, and to facilitate consistent communication across disciplines while focusing on the overriding importance of mission success.

  6. Pulse-Like Rupture Induced by Three-Dimensional Fault Zone Flower Structures

    KAUST Repository

    Pelties, Christian; Huang, Yihe; Ampuero, Jean-Paul

    2014-01-01

    interface. This effect is robust against a wide range of fault zone widths, absence of frictional healing, variation of initial stress conditions, attenuation, and off-fault plasticity. These numerical studies covered two-dimensional problems with fault

  7. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    International Nuclear Information System (INIS)

    Cumbest, R.J.

    2000-01-01

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association, it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential of the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of ''not capable'' reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coastal Fault Province and the related applicability of the ''not capable'' conclusion

  8. Evaluation of hypotheses for right-lateral displacement of Neogene strata along the San Andreas Fault between Parkfield and Maricopa, California

    Science.gov (United States)

    Stanley, Richard G.; Barron, John A.; Powell, Charles L.

    2017-12-22

    the San Andreas Fault, that have moved relatively northwestward by 254 ± 5 km of right-lateral displacement along the fault. Our new diatom ages suggest that Santa Margarita deposition and fault displacement began about 10–8 Ma and imply long-term average slip rates along the San Andreas Fault of about 25–32 millimeters per year (mm/yr), about the same as published estimates of Quaternary average slip rates based on geologic and geodetic studies.

  9. Fault detection using transmission tomography - Evaluation on the Experimental Platform of Tournemire

    International Nuclear Information System (INIS)

    Vi-Nhu-Ba, Elise

    2014-01-01

    Deep argillaceous formations have physical properties suited to radioactive waste disposal, but their permeability can be modified by the presence of fractured zones; detection of these faulted zones is thus of primary importance. Several experiments have been conducted by IRSN at the Experimental Platform of Tournemire, where faults with small vertical offsets in the deep argillaceous formation have been identified from the underground installations. Previous studies have shown the difficulty of detecting this fractured zone from surface acquisitions using reflection or refraction seismics, and also with electrical methods. We here propose a new seismic transmission acquisition geometry in which seismic sources are deployed at the surface and receivers are installed in the underground installations. To process these data, a new tomography algorithm has been developed in order to control the inversion parameters and also to introduce a priori information. Several synthetic tests have been conducted to reliably analyze the results in terms of resolution and relevance of the final image. A discontinuity of the seismic velocities in the limestones and argillites of the Tournemire Platform is evidenced for the first time by applying the algorithm to the recently acquired data. This low-velocity anomaly is located just above the fracture zone visible from the underground installations, and its location is also consistent with observations from the surface. (author)

  10. Mel Frequency Cepstral Coefficients: An Evaluation of Robustness of MP3 Encoded Music

    DEFF Research Database (Denmark)

    Sigurdsson, Sigurdur; Petersen, Kaare Brandt; Lehn-Schiøler, Tue

    2006-01-01

    the influence of MP3 coding for the Mel frequency cepstral coefficients (MFCCs). The main result is that the widely used subset of the MFCCs is robust at bit rates equal to or higher than 128 kbit/s, for the implementations we have investigated. However, for lower bit rates, e.g., 64 kbit/s, the implementation...... of the Mel filter bank becomes an issue....
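As background to the implementation differences the record highlights, here is a minimal sketch of the mel scale and filter-bank center frequencies underlying MFCCs, using the commonly assumed HTK-style mel formula (actual MFCC implementations differ in exactly such details, which is the paper's point):

```python
import numpy as np

# HTK-style mel scale (one of several conventions in use).
def hz_to_mel(f_hz):
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Center frequencies of a 20-band mel filter bank spanning 0–8 kHz:
# band edges are spaced uniformly in mel, then mapped back to Hz.
mels = np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 20 + 2)
centers_hz = mel_to_hz(mels[1:-1])
print(centers_hz.round(1)[:3])  # lowest three center frequencies, in Hz
```

The MFCCs themselves are then the low-order DCT coefficients of the log filter-bank energies; choices like the mel formula, band count, and edge frequencies vary across implementations.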

  11. Interim reliability-evaluation program: analysis of the Browns Ferry, Unit 1, nuclear plant. Appendix B - system descriptions and fault trees

    International Nuclear Information System (INIS)

    Mays, S.E.; Poloski, J.P.; Sullivan, W.H.; Trainer, J.E.; Bertucio, R.C.; Leahy, T.J.

    1982-07-01

    This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses

  12. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Science.gov (United States)

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  13. Relationship of the 2004 Mid-Niigata prefecture earthquake with geological structure. Evaluation of earthquake source fault in active folding zone

    International Nuclear Information System (INIS)

    Aoyagi, Yasuhira; Abe, Shintaro

    2007-01-01

    We compile the important points for evaluating earthquake source faults in active folding zones through a temporary aftershock observation of the 2004 Mid-Niigata Prefecture earthquake. The aftershock distribution shows a spindle shape, wide in the middle and narrow at both ends, trending NNE-SSW. The extent of the seismic activity corresponds well to the distribution of fold axes in this area, whose middle part is an anticlinorium (several anticlines) and whose ends are single anticlines. In the middle part, the west-dipping aftershock plane including the mainshock (M6.8) is located under the Higashiyama anticline. Another west-dipping aftershock plane including the largest aftershock (M6.5) is located under the Tamugiyama and Komatsugura anticlines, and the east margin of the aftershock distribution corresponds well with the Suwa-toge flexure. Therefore, the present fold structure should have been formed by accumulated movement on the same faults. In other words, it is important to refer to the distribution pattern of fold axes, especially flexures, when evaluating earthquake source faults. In addition, we performed FEM analyses to investigate the relation of the fold structure to the thickness of the sedimentary layer and the dip angle of the fault. Reverse fault movement forms an asymmetric fold above the fault, with the steeper slope formed just above the upper end of the fault. As the sedimentary layer became thicker, the anticline axis moved toward the hanging wall side of the fold structure. As the dip angle became smaller, the wavelength of the fold became longer and the fold structure grew highly asymmetric. Thus the shape of the fold structure is useful as an index to estimate the blind thrust below it. (author)

  14. Evaluating robustness of a diesel-degrading bacterial consortium isolated from contaminated soil

    DEFF Research Database (Denmark)

    Sydow, Mateusz; Owsianiak, Mikolaj; Szczepaniak, Zuzanna

    2016-01-01

    It is not known whether diesel-degrading bacterial communities are structurally and functionally robust when exposed to different hydrocarbon types. Here, we exposed a diesel-degrading consortium to model alkanes, cycloalkanes, or aromatic hydrocarbons as carbon sources to study its...... structural resistance. The structural resistance was low, with changes in relative abundances of up to four orders of magnitude, depending on hydrocarbon type and bacterial taxon. This low resistance is explained by the presence of hydrocarbon-degrading specialists in the consortium and differences in growth...... kinetics on individual hydrocarbons. However, despite this low resistance, structural and functional resilience were high, as verified by re-exposing the hydrocarbon-perturbed consortium to diesel fuel. The high resilience is either due to the short exposure time, insufficient for permanent changes...

  15. Development of a geo-information system for the evaluation of active faults

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Sang Gi; Lee, G. B.; Kim, H. J. [Paichai Univ., Taejon (Korea, Republic of)] (and others)

    2002-03-15

    This project aims to assist the participants of the active fault project by computerizing the field and laboratory data of the study area. The geo-information system therefore not only supports the participants while they are organizing and analyzing their data but also gathers detailed information in digital form. A geological database can be established by organizing the digital information gathered from the participants, and the database can easily be shared among specialists. For this purpose, a field system for use by the project participants was developed during the first project year. The field system contains not only software but also the available topographic and geological maps of the study area. The system is coded in Visual Basic and utilizes the MapObjects component of ESRI and the TrueDBGrid OCX. Major functions of the system include tools for vector- and raster-format topographic maps, database design and application, geological symbol plotting, and database search for the plotted geological symbols.

  16. Analytical evaluation of local fault in sodium cooled small fast reactor (4S). Preliminary evaluation of partial blockage in coolant channel

    International Nuclear Information System (INIS)

    Nishimura, Satoshi; Ueda, Nobuyuki

    2007-01-01

    Local faults are fuel failures that result from a heat removal imbalance within a single subassembly, especially in FBRs. Although the occurrence frequency of local faults is quite low, the licensing body required local fault evaluations in previous FBR plants to confirm the potential for the occurrence of severe fuel subassembly failure and its propagation. The 4S (Super-Safe, Small and Simple) is a conceptual design of a sodium-cooled fast reactor, which aims at application as a dispersed energy source with a long core lifetime. It has a dense arrangement of fuel pins to achieve that long lifetime. Therefore, from the viewpoint of thermal hydraulics, the 4S reactor is considered to have more potential for coolant boiling and fuel pin failure caused by the formation of a local blockage than conventional FBRs. The objective of the present study is to evaluate the effect of local blockage on the coolant flow pattern and temperature rise in the 4S-type fuel subassembly under normal operation conditions. A series of three-dimensional thermal-hydraulic analyses in a single subassembly with local blockage was conducted with the commercial CFD code 'PHOENICS'. Analytical results show that the peak coolant temperature behind the blockage rises with increasing blockage area; however, coolant boiling does not occur under the present analytical conditions. On the other hand, it is found that liquid phase formation caused by eutectic reactions between the metallic fuel and the cladding will occur under the local blockage condition. However, the penetration rate of the liquid phase at the fuel-cladding interface is quite low. Therefore, it is expected that rapid fuel pin failure and its propagation to surrounding pins due to liquid phase formation will not occur. (author)

  17. Laboratory scale micro-seismic monitoring of rock faulting and injection-induced fault reactivation

    Science.gov (United States)

    Sarout, J.; Dautriat, J.; Esteban, L.; Lumley, D. E.; King, A.

    2017-12-01

    The South West Hub CCS project in Western Australia aims to evaluate the feasibility and impact of geosequestration of CO2 in the Lesueur sandstone formation. Part of this evaluation focuses on the feasibility and design of a robust passive seismic monitoring array. Micro-seismicity monitoring can be used to image the injected CO2 plume, or any geomechanical fracture/fault activity, and thus serve as an early warning system by measuring low-level (unfelt) seismicity that may precede potentially larger (felt) earthquakes. This paper describes laboratory deformation experiments replicating typical field scenarios of fluid injection in faulted reservoirs. Two pairs of cylindrical core specimens were recovered from the Harvey-1 well at depths of 1924 m and 2508 m. In each specimen a fault is first generated at the in situ stress, pore pressure and temperature by increasing the vertical stress beyond the peak in a triaxial stress vessel at CSIRO's Geomechanics & Geophysics Lab. The faulted specimen is then stabilized by decreasing the vertical stress. The freshly formed fault is subsequently reactivated by brine injection and increase of the pore pressure until slip occurs again. This second slip event is then controlled in displacement and allowed to develop for a few millimeters. The micro-seismic (MS) response of the rock during the initial fracturing and subsequent reactivation is monitored using an array of 16 ultrasonic sensors attached to the specimen's surface. The recorded MS events are relocated in space and time, and correlate well with the 3D X-ray CT images of the specimen obtained post-mortem. The time evolution of the structural changes induced within the triaxial stress vessel is therefore reliably inferred. The recorded MS activity shows that, as expected, the increase of the vertical stress beyond the peak led to an inclined shear fault. The injection of fluid and the resulting increase in pore pressure led first to a reactivation of the pre

  18. Knowledge-driven board-level functional fault diagnosis

    CERN Document Server

    Ye, Fangming; Chakrabarty, Krishnendu; Gu, Xinli

    2017-01-01

    This book provides a comprehensive set of characterization, prediction, optimization, evaluation, and evolution techniques for a diagnosis system for fault isolation in large electronic systems. Readers with a background in electronics design or system engineering can use this book as a reference to derive insightful knowledge from data analysis and use this knowledge as guidance for designing reasoning-based diagnosis systems. Moreover, readers with a background in statistics or data analytics can use this book as a practical case study for adapting data mining and machine learning techniques to electronic system design and diagnosis. This book identifies the key challenges in reasoning-based, board-level diagnosis system design and presents the solutions and corresponding results that have emerged from leading-edge research in this domain. It covers topics ranging from highly accurate fault isolation, adaptive fault isolation, diagnosis-system robustness assessment, to system performance analysis and evalua...

  19. Evaluating the Possibility of a joint San Andreas-Imperial Fault Rupture in the Salton Trough Region

    Science.gov (United States)

    Kyriakopoulos, C.; Oglesby, D. D.; Meltzner, A. J.; Rockwell, T. K.

    2016-12-01

    A geodynamic investigation of possible earthquakes in a given region requires both field data and numerical simulations. In particular, the investigation of past earthquakes is also a fundamental part of understanding the earthquake potential of the Salton Trough region. Geological records from paleoseismic trenches inform us of past ruptures (length, magnitude, timing), while dynamic rupture models allow us to evaluate numerically the mechanics of such earthquakes. The two most recent events (Mw 6.4 1940 and Mw 6.9 1979) on the Imperial fault (IF) both ruptured up to the northern end of the mapped fault, giving the impression that rupture doesn't propagate further north. This result is supported by small displacements, 20 cm, measured at the Dogwood site near the end of the mapped rupture in each event. However, 3D paleoseismic data from the same site corresponding to the most recent pre-1940 event (1710 CE) and 5th (1635 CE) and 6th events back revealed up to 1.5 m of slip in those events. Since we expect the surface displacement to decrease toward the termination of a rupture, we postulate that in these earlier cases the rupture propagated further north than in 1940 or 1979. Furthermore, paleoseismic data from the Coachella site (Philibosian et al., 2011) on the San Andreas fault (SAF) indicates slip events ca. 1710 CE and 1588-1662 CE. In other words, the timing of two large paleoseismic displacements on the IF cannot be distinguished from the timing of the two most recent events on the southern SAF, leaving a question: is it possible to have through-going rupture in the Salton Trough? We investigate this question through 3D dynamic finite element rupture modeling. In our work, we considered two scenarios: rupture initiated on the IF propagating northward, and rupture initiated on the SAF propagating southward. Initial results show that, in the first case, rupture propagates north of the mapped northern terminus of the IF only under certain pre

  20. Thermal waters along the Konocti Bay fault zone, Lake County, California: a re-evaluation

    Science.gov (United States)

    Thompson, J.M.; Mariner, R.H.; White, L.D.; Presser, T.S.; Evans, William C.

    1992-01-01

The Konocti Bay fault zone (KBFZ), initially regarded by some as a promising target for liquid-dominated geothermal systems, has been a disappointment. At least five exploratory wells were drilled in the vicinity of the KBFZ, but none were successful. Although the Na-K-Ca and Na-Li geothermometers indicate that the thermal waters discharging in the vicinity of Howard and Seigler Springs may have equilibrated at temperatures greater than 200 °C, the spring temperatures and fluid discharges are low. Most thermal waters along the KBFZ contain >100 mg/l Mg. High concentrations of dissolved magnesium are usually indicative of relatively cool hydrothermal systems. Dissolution of serpentine at shallow depths may contribute dissolved silica and magnesium to rising thermal waters. Most thermal waters are saturated with respect to amorphous silica at the measured spring temperature. Silica geothermometers and mixing models are useless because the dissolved silica concentration is not controlled by the solubility of either quartz or chalcedony. Cation geothermometry indicates the possibility of a high-temperature fluid (>200 °C) only in the vicinity of Howard and Seigler Springs. However, even if the fluid temperature is as high as that indicated by the geothermometers, the permeability may be low. Deuterium and oxygen-18 values of the thermal waters indicate that they recharged locally and became enriched in oxygen-18 by exchange with rock. The diluting meteoric water and the thermal water appear to have the same deuterium value. Lack of tritium in the diluted spring waters suggests that the diluting water is old.

  1. Application-Driven Reliability Measures and Evaluation Tool for Fault-Tolerant Real-Time Systems

    National Research Council Canada - National Science Library

    Krishna, C

    2001-01-01

... The measure combines graph-theoretic concepts in evaluating the underlying reliability of the network and other means to evaluate the ability of the network to support interprocessor traffic...

  2. On evaluating the robustness of spatial-proximity-based regionalization methods

    Science.gov (United States)

    Lebecherel, Laure; Andréassian, Vazken; Perrin, Charles

    2016-08-01

In the absence of streamflow data to calibrate a hydrological model, its parameters are to be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfer hydrological information (typically calibrated parameter sets) from neighboring gauged stations to the target ungauged station. The efficiency of any spatial-proximity-based regionalization method will depend on the density of the available streamgauging network, and the purpose of this note is to discuss how to assess the robustness of the regionalization method (i.e., its resilience to an increasingly sparse hydrometric network). We compare two options: (i) the random hydrometrical reduction (HRand) method, which consists in sub-sampling the existing gauging network around the target ungauged station, and (ii) the hydrometrical desert method (HDes), which consists in ignoring the closest gauged stations. Our tests suggest that the HDes method should be preferred, because it provides a more realistic view of regionalization performance.
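As a minimal sketch of the two network-degradation schemes compared above (the tuple-based station representation, Euclidean distances, and function names are illustrative assumptions, not the authors' implementation):

```python
import math
import random

def nearest_donors(target, gauged, k):
    """Spatial proximity: transfer parameters from the k closest gauged stations."""
    return sorted(gauged, key=lambda s: math.dist(target, s))[:k]

def hdes_donors(target, gauged, k, n_ignore):
    """Hydrometrical desert (HDes): ignore the n_ignore closest stations,
    then take the next k closest ones."""
    ranked = sorted(gauged, key=lambda s: math.dist(target, s))
    return ranked[n_ignore:n_ignore + k]

def hrand_donors(target, gauged, k, n_keep, rng):
    """Random hydrometrical reduction (HRand): sub-sample the gauging
    network, then take the k closest stations within the subset."""
    subset = rng.sample(gauged, n_keep)
    return nearest_donors(target, subset, k)

# toy network: stations at x = 1..10 km along a line, target at the origin
gauged = [(float(i), 0.0) for i in range(1, 11)]
target = (0.0, 0.0)
print(nearest_donors(target, gauged, 2))   # the 2 nearest stations
print(hdes_donors(target, gauged, 2, 3))   # same, after skipping the 3 nearest
```

Repeating either selection with growing `n_ignore` (HDes) or shrinking `n_keep` (HRand) and re-evaluating the regionalized model is what makes the robustness comparison possible.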

  3. Fault Detection and Isolation using Eigenstructure Assignment

    DEFF Research Database (Denmark)

    Jørgensen, R. B.; Patton, R.; Chen, J.

    1994-01-01

The purpose of this article is to investigate the robustness to model uncertainties of observer-based fault detection and isolation. The approach is designed with a straightforward dynamic for the observer.

  4. The constant failure rate model for fault tree evaluation as a tool for unit protection reliability assessment

    International Nuclear Information System (INIS)

    Vichev, S.; Bogdanov, D.

    2000-01-01

    The purpose of this paper is to introduce the fault tree analysis method as a tool for unit protection reliability estimation. The constant failure rate model applies for making reliability assessment, and especially availability assessment. For that purpose an example for unit primary equipment structure and fault tree example for simplified unit protection system is presented (author)
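The constant failure rate model and the gate logic it feeds can be sketched as follows; the tree layout, failure rates, and mission time are illustrative assumptions, not values from the paper:

```python
import math

def unavailability(lam, t):
    """Constant failure rate model: P(component failed by time t) = 1 - e^(-lambda*t)."""
    return 1.0 - math.exp(-lam * t)

def or_gate(*probs):
    """Top event occurs if ANY input fails (independent basic events)."""
    surviving = 1.0
    for p in probs:
        surviving *= (1.0 - p)
    return 1.0 - surviving

def and_gate(*probs):
    """Top event occurs only if ALL inputs fail (independent basic events)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# hypothetical unit-protection tree: protection is lost if the relay fails,
# or if both redundant current transformers fail
t = 8760.0                          # one year of operation, in hours
relay = unavailability(1e-6, t)     # assumed failure rates, in 1/h
ct_a = unavailability(5e-6, t)
ct_b = unavailability(5e-6, t)
top = or_gate(relay, and_gate(ct_a, ct_b))
print(round(top, 6))
```

With these assumed rates the top-event probability comes out near 1% per year, dominated by the single (non-redundant) relay branch.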

  5. Confronting Oahu's Water Woes: Identifying Scenarios for a Robust Evaluation of Policy Alternatives

    Science.gov (United States)

    van Rees, C. B.; Garcia, M. E.; Alarcon, T.; Sixt, G.

    2013-12-01

    three primary drivers of sustainability of the water supply: demand, recharge, and sea level rise. We then determined the secondary drivers shaping the primary drivers and separated them into two groups: policy-relevant drivers and external drivers. We developed a simple water balance model to calculate maximum sustainable yield based on soil properties, land cover, daily precipitation and temperature. To identify critical scenarios, the model was run over the full forecasted ranges of external drivers, such as temperature, precipitation, sea level, and population. Only the status quo of the policy drivers such as land use, water use per capita, and habitat protection has been modeled to date. However, our next steps include working with stakeholders to elicit policy strategies such as conservation regulations or zoning plans, and testing the robustness of proposed strategies with the model developed.

  6. A fuzzy-based reliability approach to evaluate basic events of fault tree analysis for nuclear power plant probabilistic safety assessment

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry

    2014-01-01

Highlights: • We propose a fuzzy-based reliability approach to evaluate basic event reliabilities. • It implements the concepts of failure possibilities and fuzzy sets. • Experts evaluate basic event failure possibilities using qualitative words. • Triangular fuzzy numbers mathematically represent qualitative failure possibilities. • It is a very good alternative to the conventional reliability approach. - Abstract: Fault tree analysis has been widely utilized as a tool for nuclear power plant probabilistic safety assessment. This analysis can be completed only if all basic events of the system fault tree have their quantitative failure rates or failure probabilities. However, it is difficult to obtain those failure data due to insufficient data, changing environments or new components. This study proposes a fuzzy-based reliability approach to evaluate basic events of system fault trees whose precise probability distributions of lifetime to failure are not available. It applies the concept of failure possibilities to qualitatively evaluate basic events and the concept of fuzzy sets to quantitatively represent the corresponding failure possibilities. To demonstrate the feasibility and the effectiveness of the proposed approach, the actual basic event failure probabilities collected from the operational experiences of the Davis–Besse design of the Babcock and Wilcox reactor protection system fault tree are used to benchmark the failure probabilities generated by the proposed approach. The results confirm that the proposed fuzzy-based reliability approach is a suitable alternative to the conventional probabilistic reliability approach when basic events do not have the corresponding quantitative historical failure data for determining their reliability characteristics. Hence, it overcomes the limitation of the conventional fault tree analysis for nuclear power plant probabilistic safety assessment.
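The failure-possibility idea can be sketched with triangular fuzzy numbers (TFNs); the five-grade scale, the averaging aggregation, and the centroid defuzzification below are illustrative assumptions, not the paper's calibrated membership functions:

```python
# Illustrative TFN scale mapping qualitative words to (a, m, b) triangles
# on [0, 1]; the exact calibration is an assumption, not the paper's.
SCALE = {
    "very low":  (0.0, 0.1, 0.2),
    "low":       (0.1, 0.25, 0.4),
    "medium":    (0.3, 0.5, 0.7),
    "high":      (0.6, 0.75, 0.9),
    "very high": (0.8, 0.9, 1.0),
}

def aggregate(tfns):
    """Average expert opinions component-wise (a common aggregation rule)."""
    n = len(tfns)
    return tuple(sum(t[i] for t in tfns) / n for i in range(3))

def centroid(tfn):
    """Defuzzify a triangular fuzzy number to a crisp failure possibility."""
    a, m, b = tfn
    return (a + m + b) / 3.0

# three experts rate the same basic event with qualitative words
opinions = [SCALE["low"], SCALE["medium"], SCALE["low"]]
score = centroid(aggregate(opinions))
```

A further step, not shown here, would convert the crisp possibility score into a failure probability usable in the fault tree quantification.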

  7. Advanced cloud fault tolerance system

    Science.gov (United States)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  8. Fast Adaptive Least Trimmed Squares for Robust Evaluation of Quality of Experience

    Science.gov (United States)

    2014-07-01

A major challenge of crowdsourcing QoE (Quality of Experience) evaluation is that not every Internet user is trustworthy: due to the lack of supervision when subjects perform experiments in crowdsourcing, some raters may submit careless or even malicious scores. The regularization paths of the LASSO problem can provide an ordering of samples by their tendency to be outliers, an approach inspired by Huber's celebrated work on robust statistics.

  9. Evaluation of common mode failure of safety functions for limiting fault events

    International Nuclear Information System (INIS)

    Rezendes, J.P.; Hyde, A.W.

    2004-01-01

The draft U.S. Nuclear Regulatory Commission (NRC) policy on digital protection system software requires all Advanced Light Water Reactors (ALWRs) to be evaluated assuming a hypothetical common mode failure (CMF) which incapacitates the normal automatic initiation of safety functions. The System 80 + ALWR has been evaluated for such hypothetical conditions. The results show that the diverse automatic and manual protective systems in System 80 + provide ample safety performance margins relative to core coolability, offsite radiological releases, Reactor Coolant System (RCS) pressurization, and containment integrity. This deterministic evaluation served to quantify the significant inherent safety margins in the System 80 + Standard Plant design even in the event of this extremely low probability scenario of a common mode failure. (author)

  10. Distributed bearing fault diagnosis based on vibration analysis

    Science.gov (United States)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
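Envelope analysis of this kind is commonly implemented via Hilbert-transform demodulation; the sketch below uses a synthetic amplitude-modulated signal (the 3 kHz resonance and 100 Hz fault frequency are stand-ins, not data from the paper):

```python
import numpy as np
from scipy.signal import hilbert

fs = 12000                        # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
fault_f = 100.0                   # assumed bearing defect frequency, Hz
# vibration model: 3 kHz structural resonance amplitude-modulated
# by the defect frequency
x = (1 + 0.8 * np.cos(2 * np.pi * fault_f * t)) * np.sin(2 * np.pi * 3000 * t)

envelope = np.abs(hilbert(x))     # analytic-signal magnitude = envelope
env = envelope - envelope.mean()  # drop the DC component
spec = np.abs(np.fft.rfft(env)) / len(env)
freqs = np.fft.rfftfreq(len(env), 1 / fs)
peak = freqs[np.argmax(spec)]     # strongest envelope-spectrum line
```

For a real bearing, `peak` would be compared against the characteristic defect frequencies (BPFO, BPFI, etc.) computed from the bearing geometry; distributed faults show broader, more complex envelope spectra than the sharp lines of localized ones.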

  11. Fault detection in finite frequency domain for Takagi-Sugeno fuzzy systems with sensor faults.

    Science.gov (United States)

    Li, Xiao-Jian; Yang, Guang-Hong

    2014-08-01

    This paper is concerned with the fault detection (FD) problem in finite frequency domain for continuous-time Takagi-Sugeno fuzzy systems with sensor faults. Some finite-frequency performance indices are initially introduced to measure the fault/reference input sensitivity and disturbance robustness. Based on these performance indices, an effective FD scheme is then presented such that the generated residual is designed to be sensitive to both fault and reference input for faulty cases, while robust against the reference input for fault-free case. As the additional reference input sensitivity for faulty cases is considered, it is shown that the proposed method improves the existing FD techniques and achieves a better FD performance. The theory is supported by simulation results related to the detection of sensor faults in a tunnel-diode circuit.

  12. Fault detection for discrete-time switched systems with sensor stuck faults and servo inputs.

    Science.gov (United States)

    Zhong, Guang-Xin; Yang, Guang-Hong

    2015-09-01

    This paper addresses the fault detection problem of switched systems with servo inputs and sensor stuck faults. The attention is focused on designing a switching law and its associated fault detection filters (FDFs). The proposed switching law uses only the current states of FDFs, which guarantees the residuals are sensitive to the servo inputs with known frequency ranges in faulty cases and robust against them in fault-free case. Thus, the arbitrarily small sensor stuck faults, including outage faults can be detected in finite-frequency domain. The levels of sensitivity and robustness are measured in terms of the finite-frequency H- index and l2-gain. Finally, the switching law and FDFs are obtained by the solution of a convex optimization problem. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Evaluate the application of modal test and analysis processes to structural fault detection in MSFC-STS project elements

    Science.gov (United States)

    Springer, William T.

    1988-01-01

The Space Transportation System (STS) is a very complex and expensive flight system which is intended to carry payloads into low Earth orbit and return. A catastrophic failure of the STS (such as experienced in the 51-L incident) results in the loss of both human life and very expensive hardware. One impact of this incident was to reaffirm the need to do everything possible to ensure that the integrity and reliability of the STS are sufficient to produce a safe flight. One means of achieving this goal is to expand the number of inspection technologies available for use on the STS. The purpose was to begin evaluating the use of modal test and analysis for assessing the structural integrity of STS components for which Marshall Space Flight Center (MSFC) has responsibility. This entailed reviewing the available literature and defining a low-level experimental program which could be performed by MSFC and would help establish the feasibility of using this technology for structural fault detection.

  14. Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.

    Science.gov (United States)

    Li, Ben; Stenstrom, M K

    2014-01-01

    One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced a similar solution to the previously developed Method G and Enquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
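The class of solvers discussed can be illustrated with a first-order Godunov update for a batch settling column; the Vesilind flux parameters, the grid, and the sampled interface flux are simplifying assumptions (this is a generic Godunov sketch, not the YRD scheme itself):

```python
import numpy as np

v0, k = 10.0, 0.5                 # assumed Vesilind parameters (m/h, m3/kg)

def flux(C):
    """Vesilind settling flux f(C) = v0 * C * exp(-k*C)."""
    return v0 * C * np.exp(-k * C)

def godunov_flux(cl, cr):
    """Godunov numerical flux for a scalar conservation law,
    approximated by sampling the flux on the interval [cl, cr]."""
    cs = np.linspace(min(cl, cr), max(cl, cr), 50)
    return flux(cs).min() if cl <= cr else flux(cs).max()

# batch settling column, closed top and bottom (zero boundary flux)
n, dz, dt = 50, 0.1, 0.001        # cells, cell height (m), time step (h)
C = np.full(n, 3.0)               # uniform initial concentration, kg/m3
for _ in range(200):
    F = np.zeros(n + 1)           # interface fluxes; F[0] = F[n] = 0 (walls)
    for i in range(1, n):
        F[i] = godunov_flux(C[i - 1], C[i])
    C = C - dt / dz * (F[1:] - F[:-1])

mass = C.sum() * dz               # total solids mass per unit area
```

Because the boundary fluxes are zero, the update conserves mass to rounding error and keeps concentrations non-negative under the CFL restriction, two of the basic requirements for a reliable secondary settling tank solver.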

  15. A method for evaluation of proton plan robustness towards inter-fractional motion applied to pelvic lymph node irradiation

    DEFF Research Database (Denmark)

    Andersen, Andreas G; Casares-Magaz, Oscar; Muren, Ludvig P

    2015-01-01

BACKGROUND: The benefit of proton therapy may be jeopardized by dose deterioration caused by water equivalent path length (WEPL) variations. In this study we introduced a method to evaluate robustness of proton therapy with respect to inter-fractional motion and applied it to irradiation of the pelvic lymph nodes (LNs) from different beam angles. Patient- versus population-specific patterns in dose deterioration were explored. MATERIAL AND METHODS: Patient data sets consisting of a planning computed tomography (pCT) as well as multiple repeat CT (rCT) scans for three patients were used... Patient-specific differences in deterioration patterns were found for the investigated patients, with beam angles delivering less dose to rectum, bladder and overall normal tissue identified around 40° and around 150°-160° for the left LNs, and corresponding angles for the right LNs. These angles were also associated with low values of WEPL...

  16. A numerical study on the feasibility evaluation of a hybrid type superconducting fault current limiter applying thyristors

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Seok Ho; Lee, Woo Seung; Lee, Ji Ho; Hwang, Young Jin; Ko, Tae Kuk [Yonsei University, Seoul (Korea, Republic of)

    2013-12-15

The smart fault current controller (SFCC) proposed in our previous work consists of a power converter, a high temperature superconducting (HTS) DC reactor, thyristors, and a control unit [1]. SFCC can limit and control the current by adjusting the firing angles of the thyristors when a fault occurs. SFCC has a complex structure because the HTS DC reactor generates losses under AC. To use the DC reactor under AC, a rectifier consisting of four thyristors is needed, which increases the internal resistance of SFCC. For this reason, the authors propose a hybrid type superconducting fault current limiter (SFCL). The hybrid type SFCL proposed in this paper consists of a non-inductive superconducting coil and two thyristors. To verify the feasibility of the proposed hybrid type SFCL, simulations of the interaction of the superconducting coil and the thyristors are conducted when fault current flows in the superconducting coil. The authors expect that the hybrid type SFCL can control the magnitude of the fault current by adjusting the firing angles of the thyristors after the superconducting coil limits the fault current at its first peak.

  17. A numerical study on the feasibility evaluation of a hybrid type superconducting fault current limiter applying thyristors

    International Nuclear Information System (INIS)

    Nam, Seok Ho; Lee, Woo Seung; Lee, Ji Ho; Hwang, Young Jin; Ko, Tae Kuk

    2013-01-01

The smart fault current controller (SFCC) proposed in our previous work consists of a power converter, a high temperature superconducting (HTS) DC reactor, thyristors, and a control unit [1]. SFCC can limit and control the current by adjusting the firing angles of the thyristors when a fault occurs. SFCC has a complex structure because the HTS DC reactor generates losses under AC. To use the DC reactor under AC, a rectifier consisting of four thyristors is needed, which increases the internal resistance of SFCC. For this reason, the authors propose a hybrid type superconducting fault current limiter (SFCL). The hybrid type SFCL proposed in this paper consists of a non-inductive superconducting coil and two thyristors. To verify the feasibility of the proposed hybrid type SFCL, simulations of the interaction of the superconducting coil and the thyristors are conducted when fault current flows in the superconducting coil. The authors expect that the hybrid type SFCL can control the magnitude of the fault current by adjusting the firing angles of the thyristors after the superconducting coil limits the fault current at its first peak.

  18. An empirical evaluation of classification algorithms for fault prediction in open source projects

    Directory of Open Access Journals (Sweden)

    Arvinder Kaur

    2018-01-01

Creating high-quality software has become difficult as the size and complexity of developed software have grown. Predicting software quality in early phases helps to reduce testing resources. Various statistical and machine learning techniques are used for predicting software quality. In this paper, six machine learning models have been used for software quality prediction on five open source software projects. A variety of metrics have been evaluated for the software, including C & K, Henderson & Sellers, and McCabe. Results show that Random Forest and Bagging produce good results, while Naïve Bayes is least preferable for prediction.
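A comparison of this kind can be sketched with scikit-learn on synthetic data; the generated features merely stand in for the C & K, Henderson & Sellers, and McCabe metrics, since the study's five projects are not bundled with any library:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# synthetic stand-in for module-level metric data: 10 "metrics" per
# module, roughly 30% fault-prone modules
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)

models = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
}
# 5-fold cross-validated accuracy for each classifier
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
```

On real fault data, accuracy alone is a poor yardstick for the typically imbalanced classes; area under the ROC curve or F-measure (via the `scoring` argument of `cross_val_score`) would be closer to what such studies report.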

  19. Card sorting to evaluate the robustness of the information architecture of a protocol website.

    Science.gov (United States)

    Wentzel, J; Müller, F; Beerlage-de Jong, N; van Gemert-Pijnen, J

    2016-02-01

    A website on Methicillin-Resistant Staphylococcus Aureus, MRSA-net, was developed for Health Care Workers (HCWs) and the general public, in German and in Dutch. The website's content was based on existing protocols and its structure was based on a card sort study. A Human Centered Design approach was applied to ensure a match between user and technology. In the current study we assess whether the website's structure still matches user needs, again via a card sort study. An open card sort study was conducted. Randomly drawn samples of 100 on-site search queries as they were entered on the MRSA-net website (during one year of use) were used as card input. In individual sessions, the cards were sorted by each participant (18 German and 10 Dutch HCWs, and 10 German and 10 Dutch members of the general public) into piles that were meaningful to them. Each participant provided a label for every pile of cards they created. Cluster analysis was performed on the resulting sorts, creating an overview of clusters of items placed together in one pile most frequently. In addition, pile labels were qualitatively analyzed to identify the participants' mental models. Cluster analysis confirmed existing categories and revealed new themes emerging from the search query samples, such as financial issues and consequences for the patient. Even though MRSA-net addresses these topics, they are not prominently covered in the menu structure. The label analysis shows that 7 of a total of 44 MRSA-net categories were not reproduced by the participants. Additional themes such as information on other pathogens and categories such as legal issues emerged. This study shows that the card sort performed to create MRSA-net resulted in overall long-lasting structure and categories. New categories were identified, indicating that additional information needs emerged. Therefore, evaluating website structure should be a recurrent activity. Card sorting with ecological data as input for the cards is

  20. The hydraulic structure of the Gole Larghe Fault Zone (Italian Southern Alps) through the seismic cycle

    Science.gov (United States)

    Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.

    2017-12-01

The 600 m-thick, strike-slip Gole Larghe Fault Zone (GLFZ) experienced several hundred seismic slip events at c. 8 km depth, well documented by numerous pseudotachylytes, and was then exhumed and is now exposed in beautiful and very continuous outcrops. The fault zone was also characterized by hydrous fluid flow during the seismic cycle, demonstrated by alteration halos and precipitation of hydrothermal minerals in veins and cataclasites. We have characterized the GLFZ with > 2 km of scanlines and semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed us to obtain 3D Discrete Fracture Network (DFN) models, based on robust probability density functions for the parameters of fault and fracture sets, and to simulate the fault zone's hydraulic properties. In addition, the correlation between evidence of fluid flow and the fault/fracture network parameters has been studied with a geostatistical approach, allowing us to generate more realistic time-varying permeability models of the fault zone. Based on this dataset, we have developed a FEM hydraulic model of the GLFZ for a period of some tens of years, covering one seismic event and a postseismic period. The highest permeability is attained in the syn- to early post-seismic period, when fractures are (re)opened by off-fault deformation; permeability then decreases in the postseismic period due to fracture sealing. The flow model yields a flow pattern consistent with the observed alteration/mineralization pattern and a marked channelling of fluid flow in the inner part of the fault zone, due to permeability anisotropy related to the spatial arrangement of the different fracture sets. Among the possible seismological applications of our study, we will discuss the possibility of evaluating the coseismic fracture intensity due to off-fault damage, and the heterogeneity and evolution of mechanical parameters due to fluid-rock interaction.

  1. The IronChip evaluation package: a package of perl modules for robust analysis of custom microarrays

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2010-03-01

Background: Gene expression studies greatly contribute to our understanding of complex relationships in gene regulatory networks. However, the complexity of array design, production and manipulations are limiting factors affecting data quality. The use of customized DNA microarrays improves overall data quality in many situations, but only if analysis tools are available for these specifically designed microarrays. Results: The IronChip Evaluation Package (ICEP) is a collection of Perl utilities and an easy-to-use data evaluation pipeline for the analysis of microarray data, with a focus on data quality of custom-designed microarrays. The package has been developed for the statistical and bioinformatical analysis of the custom cDNA microarray IronChip but can be easily adapted for other cDNA or oligonucleotide-based microarray platforms. ICEP uses decision-tree-based algorithms to assign quality flags and performs robust analysis based on chip design properties regarding multiple repetitions, ratio cut-off, background and negative controls. Conclusions: ICEP is a stand-alone Windows application to obtain optimal data quality from custom-designed microarrays and is freely available (see the "Additional Files" section and at: http://www.alice-dsl.net/evgeniy.vainshtein/ICEP/)

  2. SDEM modelling of fault-propagation folding

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Poulsen, Jane Bang

    2009-01-01

Understanding the dynamics and kinematics of fault-propagation folding is important for evaluating the associated hydrocarbon play, for accomplishing reliable section balancing (structural reconstruction), and for assessing seismic hazards. Accordingly, the deformation style of fault-propagation... and variations in Mohr-Coulomb parameters including internal friction. Using SDEM modelling, we have mapped the propagation of the tip-line of the fault, as well as the evolution of the fold geometry across sedimentary layers of contrasting rheological parameters, as a function of the increased offset... a precise indication of when faults develop and hence also the sequential evolution of secondary faults. Here we focus on the generation of a fault-propagated fold with a reverse sense of motion at the master fault, varying only the dip of the master fault and the mechanical behaviour of the deformed...

  3. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

..."knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political...

  4. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung; Lee, Jong Min [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Kim, Kil Joong [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Department of Radiation Applied Life Science, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Institute of Radiation Medicine, Seoul National University Medical Research Center, and Clinical Research Institute, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 110-744 (Korea, Republic of); Kim, Tae Ki [Medical Information Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of)

    2013-10-15

Purpose: To modify the previously proposed preprocessing technique improving compressibility of computed tomography (CT) images so as to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed aiming only at chest CT images, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In randomly selected 368 CT examinations (352 787 images), each image was preprocessed by using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covered the body region or not. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compressions. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.

  5. Design and Evaluation of a Protection Relay for a Wind Generator Based on the Positive- and Negative-Sequence Fault Components

    DEFF Research Database (Denmark)

    Zheng, T. Y.; Cha, Seung-Tae; Crossley, P. A.

    2013-01-01

To avoid undesirable disconnection of healthy wind generators (WGs) or a wind power plant, a WG protection relay should discriminate among faults, so that it can operate instantaneously for WG, connected feeder or connection bus faults, it can operate after a delay for inter-tie or grid faults......, and it can avoid operating for parallel WG or adjacent feeder faults. A WG protection relay based on the positive- and negative-sequence fault components is proposed in the paper. At stage 1, the proposed relay uses the magnitude of the positive-sequence component in the fault current to distinguish faults...... at a parallel WG connected to the same feeder or at an adjacent feeder, from other faults at a connected feeder, an inter-tie, or a grid. At stage 2, the fault type is first determined using the relationships between the positive- and negative-sequence fault components. Then, the relay differentiates between...

  6. Critique of the use of ICRP-29's 'Robustness Index' in evaluating uncertainties associated with radiological assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, F O; Schwarz, G; Killough, G G [Oak Ridge National Lab., TN (USA)

    1980-08-01

Concern is expressed regarding the use of the robustness index, as proposed in ICRP 29, to characterise the uncertainties associated with a model's predictions. Results of a Monte Carlo simulation employing a model of the grass-cow-milk-infant pathway for ¹³¹I are used to elucidate the authors' criticisms. It is recommended that the robustness index should be carefully examined to appraise its possible usefulness and potential dangers. Alternative methods for the analysis of uncertainty are proposed.

  7. SU-E-T-266: Proton PBS Plan Design and Robustness Evaluation for Head and Neck Cancers

    International Nuclear Information System (INIS)

    Liang, X; Tang, S; Zhai, H; Kirk, M; Kalbasi, A; Lin, A; Ahn, P; Tochner, Z; McDonough, J; Both, S

    2014-01-01

Purpose: To describe a newly designed proton pencil beam scanning (PBS) planning technique for radiotherapy of patients with bilateral oropharyngeal cancer, and to assess plan robustness. Methods: We treated 10 patients with proton PBS plans using 2 posterior oblique fields (2F PBS) comprised of 80% single-field uniform dose (SFUD) and 20% intensity-modulated proton therapy (IMPT). All patients underwent weekly CT scans for verification. Using dosimetric indicators for both targets and organs at risk (OARs), we quantitatively compared initial plans and verification plans using student t-tests. We created a second proton PBS plan for each patient using 2 posterior oblique plus 1 anterior field comprised of 100% SFUD (3F PBS). We assessed plan robustness for both proton plan groups, as well as a photon volumetric modulated arc therapy (VMAT) plan group, by comparing initial and verification plans. Results: The 2F PBS plans were not robust in target coverage. D98% for the clinical target volume (CTV) degraded from 100% to 96% on average, with a maximum change ΔD98% of −24%. Two patients were moved to photon VMAT treatment due to insufficient CTV coverage on verification plans. Plan robustness was especially weak in the low-anterior neck. The 3F PBS plans, however, demonstrated robust target coverage, comparable to the VMAT photon plan group. Doses to the oral cavity were lower in the proton PBS plans than in the photon VMAT plans because the proton beams deliver no exit dose to the oral cavity. Conclusion: Proton PBS plans using 2 posterior oblique fields were not robust for CTV coverage, due to variable positioning of redundant soft tissue in the posterior neck. We designed 3-field proton PBS plans using an anterior field to avoid long heterogeneous paths in the low neck. These 3-field proton PBS plans had significantly improved plan robustness, comparable to that of the VMAT photon plans.

  8. Application of a New Robust ECG T-Wave Delineation Algorithm for the Evaluation of the Autonomic Innervation of the Myocardium

    DEFF Research Database (Denmark)

    Cesari, Matteo; Mehlsen, Jesper; Mehlsen, Anne-Birgitte

    2016-01-01

T-wave amplitude (TWA) is a well-known index of the autonomic innervation of the myocardium. However, until now it has been evaluated only manually or with simple and inefficient algorithms. In this paper, we developed a new robust single-lead electrocardiogram (ECG) T-wave delineation algorithm...

  9. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  10. A Novel Data Hierarchical Fusion Method for Gas Turbine Engine Performance Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Feng Lu

    2016-10-01

Full Text Available Gas path fault diagnosis involves the effective utilization of condition-based sensor signals along the engine gas path to accurately identify engine performance failures. The rapid development of information processing technology has led to the use of multiple-source information fusion for fault diagnostics. Numerous efforts have been made to develop data-based fusion methods, such as neural network fusion, while little research has focused on fusion architecture or on fusing different kinds of methods. In this paper, a data hierarchical fusion using improved weighted Dempster–Shafer evidence theory (WDS) is proposed, and the integration of data-based and model-based methods is presented for engine gas-path fault diagnosis. To simplify the learning machine topology, a recursive reduced kernel based extreme learning machine (RR-KELM) is developed to produce the fault probability, which is considered as the data-based evidence. Meanwhile, the model-based evidence is achieved using a particle filter-fuzzy logic algorithm (PF-FL) by engine health estimation and component fault location at the feature level. The outputs of the two evidences are integrated using WDS evidence theory at the decision level to reach a final recognition decision of the gas-path fault pattern. The characteristics and advantages of the two evidences are analyzed and used as guidelines for the data hierarchical fusion framework. Our goal is that the proposed methodology provides much better performance of gas-path fault diagnosis compared to relying solely on a data-based or model-based method. The hierarchical fusion framework is evaluated in terms of fault diagnosis accuracy and robustness through a case study involving a fault-mode dataset of a turbofan engine generated by a general gas turbine simulation. These applications confirm the effectiveness and usefulness of the proposed approach.
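The decision-level fusion step described in this abstract can be illustrated with the classical (unweighted) Dempster's rule of combination; the weighting scheme of the paper's WDS variant is not reproduced here, and the two mass functions below are made-up stand-ins for the data-based and model-based evidences.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    # Normalize by 1 - K, where K is the total conflict mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two evidences over hypothetical fault modes {fan, compressor}:
F, C = frozenset({"fan"}), frozenset({"compressor"})
theta = F | C  # frame of discernment (full ignorance)
m_data  = {F: 0.6, C: 0.3, theta: 0.1}   # e.g. data-based (RR-KELM-like) output
m_model = {F: 0.7, C: 0.2, theta: 0.1}   # e.g. model-based (PF-FL-like) output
fused = dempster_combine(m_data, m_model)
print(fused[F] > fused[C])  # fused belief favors the fan fault
```

Note how agreement between the two sources sharpens the fused belief in the fan fault beyond either source alone, which is exactly the motivation for decision-level fusion.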

  11. Model-based fault diagnosis approach on external short circuit of lithium-ion battery used in electric vehicles

    International Nuclear Information System (INIS)

    Chen, Zeyu; Xiong, Rui; Tian, Jinpeng; Shang, Xiong; Lu, Jiahuan

    2016-01-01

Highlights: • The characteristics of the ESC fault of lithium-ion batteries are investigated experimentally. • The proposed method to simulate the electrical behavior of the ESC fault is viable. • Ten parameters in the presented fault model were optimized using a DPSO algorithm. • A two-layer model-based fault diagnosis approach for battery ESC is proposed. • The effectiveness and robustness of the proposed algorithm have been evaluated. - Abstract: This study investigates the external short circuit (ESC) fault characteristics of lithium-ion batteries experimentally. An experimental platform is established and ESC tests are implemented on ten 18650-type lithium cells considering different state-of-charges (SOCs). Based on the experimental results, several efforts have been made. (1) The ESC process can be divided into two periods, and the electrical and thermal behaviors within these two periods are analyzed. (2) A modified first-order RC model is employed to simulate the electrical behavior of the lithium cell in the ESC fault process. The model parameters are re-identified by a dynamic-neighborhood particle swarm optimization algorithm. (3) A two-layer model-based ESC fault diagnosis algorithm is proposed. The first layer conducts preliminary fault detection and the second layer gives a precise model-based diagnosis. Four new cells are short-circuited to evaluate the proposed algorithm. It shows that the ESC fault can be diagnosed within 5 s, and the error between the model and the measured data is less than 0.36 V. The effectiveness of the fault diagnosis algorithm is not sensitive to the precision of the battery SOC. The proposed algorithm can still make the correct diagnosis even if there is a 10% error in the SOC estimation.
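The first-order RC equivalent-circuit model used in step (2) can be sketched as a discrete-time simulation. All parameter values (OCV, R0, R1, C1, current) below are illustrative assumptions, not the identified values from the paper.

```python
import math

def simulate_rc_cell(currents, dt=0.1, ocv=3.7, r0=0.05, r1=0.02, c1=2000.0):
    """Terminal voltage of a first-order RC equivalent circuit:
    v = OCV - i*R0 - v1, where v1 is the RC polarization voltage.
    currents: discharge current samples in A (positive = discharge)."""
    tau = r1 * c1                 # polarization time constant [s]
    alpha = math.exp(-dt / tau)   # exact discretization of the RC branch
    v1 = 0.0                      # polarization voltage state
    out = []
    for i in currents:
        v1 = alpha * v1 + r1 * (1.0 - alpha) * i
        out.append(ocv - r0 * i - v1)
    return out

# A large ESC-like current step drags the terminal voltage down sharply,
# then further as the polarization voltage builds up:
v = simulate_rc_cell([50.0] * 100)
print(v[0], v[-1])
```

A model-based diagnosis layer of the kind described would compare such a simulated terminal voltage against the measurement and flag the fault when the two agree under ESC-level currents.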

  12. Model based Fault Detection and Isolation for Driving Motors of a Ground Vehicle

    Directory of Open Access Journals (Sweden)

    Young-Joon Kim

    2016-04-01

Full Text Available This paper proposes a model-based current-sensor and position-sensor fault detection and isolation algorithm for the driving motors of an in-wheel independent-drive electric vehicle. From a low-level perspective, fault diagnosis is conducted and analyzed to enhance robustness and stability. Composing the state equation of the interior permanent magnet synchronous motor (IPMSM), current sensor and position sensor faults are diagnosed with parity equations. The validity and usefulness of the algorithm are confirmed with IPMSM fault simulation data.
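A parity-equation residual check of the kind described can be sketched with a generic discrete-time linear model standing in for the linearised IPMSM equations; the matrices, sensor-bias magnitude, and threshold below are all hypothetical.

```python
import numpy as np

# Discrete-time linear model x[k+1] = A x[k] + B u[k], y[k] = C x[k]
# (illustrative stand-in for linearised IPMSM current dynamics).
A = np.array([[0.9, 0.05], [-0.05, 0.9]])
B = np.eye(2) * 0.1
C = np.eye(2)

def residuals(u_seq, y_seq, x0, threshold=0.2):
    """Parity-style check: flag samples where the measurement deviates
    from the model prediction by more than the threshold."""
    x = x0
    flags = []
    for u, y in zip(u_seq, y_seq):
        r = y - C @ x          # residual between sensor reading and prediction
        flags.append(bool(np.linalg.norm(r) > threshold))
        x = A @ x + B @ u      # propagate the model
    return flags

# Healthy measurements track the model; inject a current-sensor bias at k >= 10.
x = np.zeros(2)
u_seq = [np.array([1.0, 0.0])] * 20
y_seq = []
for k, u in enumerate(u_seq):
    y = C @ x + (np.array([0.5, 0.0]) if k >= 10 else 0.0)
    y_seq.append(y)
    x = A @ x + B @ u

flags = residuals(u_seq, y_seq, np.zeros(2))
print(flags[:10], flags[10:])  # fault flagged only after the bias appears
```

Isolation then follows from which residual component (current channel vs. position-derived channel) exceeds its threshold.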

  13. Rolling element bearing fault diagnosis based on Over-Complete rational dilation wavelet transform and auto-correlation of analytic energy operator

    Science.gov (United States)

    Singh, Jaskaran; Darpe, A. K.; Singh, S. P.

    2018-02-01

Local damage in rolling element bearings usually generates periodic impulses in vibration signals. The severity, repetition frequency, and the resonance zone excited by these impulses are the key indicators for diagnosing bearing faults. In this paper, a methodology based on the overcomplete rational dilation wavelet transform (ORDWT) is proposed, as it enjoys good shift invariance. ORDWT offers flexibility in partitioning the frequency spectrum to generate a number of subbands (filters) with diverse bandwidths. The selection of the optimal filter that perfectly overlaps with the resonance zone excited by the bearing fault is based on the maximization of a proposed impulse detection measure, "temporal energy operated auto-correlated kurtosis". The proposed indicator is robust and consistent in evaluating the impulsiveness of fault signals in the presence of interfering vibration such as heavy background noise or sporadic shocks unrelated to the fault or normal operation. The structure of the proposed indicator enables it to be sensitive to fault severity. For enhanced fault classification, an autocorrelation of the energy time series of the signal filtered through the optimal subband is proposed. The application of the proposed methodology is validated on simulated and experimental data. The study shows that the performance of the proposed technique is more robust and consistent in comparison to the original fast kurtogram and wavelet kurtogram.
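The final step — autocorrelating an energy time series to expose the fault repetition period — can be sketched using the discrete Teager-Kaiser energy operator as a simple stand-in for the paper's analytic energy operator; the signal, noise level, and impulse period are synthetic.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def autocorr(e):
    """Autocorrelation of the energy series, normalized to r[0] = 1."""
    e = e - e.mean()
    r = np.correlate(e, e, mode="full")[len(e) - 1:]
    return r / r[0]

# A repetitive impulse train (period 100 samples) buried in noise:
rng = np.random.default_rng(1)
sig = rng.normal(0, 0.2, 1000)
sig[::100] += 5.0
ac = autocorr(teager_energy(sig))
peak = int(np.argmax(ac[50:150])) + 50
print(peak)  # peak near lag 100 reveals the fault repetition period
```

The energy operator suppresses the broadband noise floor relative to the impulses, so the autocorrelation of the energy series shows the repetition period much more clearly than the raw signal's autocorrelation would.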

  14. Efficiency Evaluation of Five-Phase Outer-Rotor Fault-Tolerant BLDC Drives under Healthy and Open-Circuit Faulty Conditions

    Directory of Open Access Journals (Sweden)

    ARASHLOO, R. S.

    2014-05-01

Full Text Available Fault-tolerant motor drives are an interesting subject for many applications such as the automotive industry and wind power generation. Among the different configurations of these systems, five-phase BLDC drives are gaining importance because of their compactness and high efficiency. Since the field windings are replaced by permanent magnets in the rotor structure, the main sources of power loss in these drives are iron (core) losses, copper (winding) losses, and inverter (semiconductor) losses. Although the low amplitude of power losses in five-phase BLDC drives is important for many applications, their efficiency under faulty conditions has not been considered in previous studies. In this paper, the efficiency of an outer-rotor five-phase BLDC drive is evaluated under normal and different faulty conditions. An open-circuit fault is considered for one, two adjacent, and two non-adjacent faulty phases. Iron core losses are calculated via FEM simulations in the Flux-Cedrat software, while inverter losses and winding copper losses are simulated in the MATLAB environment. Experimental evaluations are conducted to evaluate the efficiency of the entire BLDC drive, which verifies the theoretical developments.

  15. Fault size classification of rotating machinery using support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y. S.; Lee, D. H.; Park, S. K. [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2012-03-15

Studies on fault diagnosis of rotating machinery have been carried out to obtain the machinery condition in two ways. The first is a classical approach based on signal processing and analysis using vibration and acoustic signals. The second is to use artificial intelligence techniques to classify machinery conditions into normal or one of the pre-determined fault conditions. The Support Vector Machine (SVM) is well known as an intelligent classifier with robust generalization ability. In this study, a two-step approach is proposed to predict fault types and fault sizes of rotating machinery in nuclear power plants using a multi-class SVM technique. The model first classifies normal and 12 fault types and then identifies their sizes in case any fault is predicted. The time and frequency domain features are extracted from the measured vibration signals and used as input to the SVM. A test rig is used to simulate normal and the well-known 12 artificial fault conditions, with three to six fault sizes, of rotating machinery. The application results to the test data show that the present method can estimate fault types as well as fault sizes with high accuracy for bearing and shaft-related faults and misalignment. Further research, however, is required to identify fault size in the case of unbalance, rubbing, looseness, and coupling-related faults.
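The two-step classify-type-then-size approach described in this abstract can be sketched with scikit-learn's multi-class SVM on synthetic features; the feature values, class centers, fault-type names, and the use of a single size classifier are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 2-D vibration features standing in for real time/frequency features.
def make_class(center, n=40):
    return rng.normal(center, 0.3, size=(n, 2))

centers = {"unbalance": (0, 0), "misalignment": (3, 0), "rub": (0, 3)}
X_type, y_type = [], []
for label, c in centers.items():
    X_type.append(make_class(c))
    y_type += [label] * 40
X_type = np.vstack(X_type)

# Step 1: a multi-class SVM (one-vs-one internally) classifies the fault type.
type_clf = SVC(kernel="rbf", gamma="scale").fit(X_type, y_type)

# Step 2: per fault type, a second SVM classifies the fault size
# (only the 'unbalance' size classifier is built here for brevity).
X_size = np.vstack([make_class((0, 0)), make_class((1, 1))])
y_size = ["small"] * 40 + ["large"] * 40
size_clf = {"unbalance": SVC(kernel="rbf", gamma="scale").fit(X_size, y_size)}

sample = np.array([[0.1, -0.2]])
ftype = type_clf.predict(sample)[0]
fsize = size_clf[ftype].predict(sample)[0]
print(ftype, fsize)
```

Splitting the problem this way keeps each classifier small: the type classifier never needs to distinguish sizes, and each size classifier only sees data from its own fault type.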

  16. Fault size classification of rotating machinery using support vector machine

    International Nuclear Information System (INIS)

    Kim, Y. S.; Lee, D. H.; Park, S. K.

    2012-01-01

Studies on fault diagnosis of rotating machinery have been carried out to obtain the machinery condition in two ways. The first is a classical approach based on signal processing and analysis using vibration and acoustic signals. The second is to use artificial intelligence techniques to classify machinery conditions into normal or one of the pre-determined fault conditions. The Support Vector Machine (SVM) is well known as an intelligent classifier with robust generalization ability. In this study, a two-step approach is proposed to predict fault types and fault sizes of rotating machinery in nuclear power plants using a multi-class SVM technique. The model first classifies normal and 12 fault types and then identifies their sizes in case any fault is predicted. The time and frequency domain features are extracted from the measured vibration signals and used as input to the SVM. A test rig is used to simulate normal and the well-known 12 artificial fault conditions, with three to six fault sizes, of rotating machinery. The application results to the test data show that the present method can estimate fault types as well as fault sizes with high accuracy for bearing and shaft-related faults and misalignment. Further research, however, is required to identify fault size in the case of unbalance, rubbing, looseness, and coupling-related faults.

  17. An adaptive deep convolutional neural network for rolling bearing fault diagnosis

    International Nuclear Information System (INIS)

    Fuan, Wang; Hongkai, Jiang; Haidong, Shao; Wenjing, Duan; Shuaipeng, Wu

    2017-01-01

The working conditions of rolling bearings are usually very complex, which makes it difficult to diagnose rolling bearing faults. In this paper, a novel method called the adaptive deep convolutional neural network (CNN) is proposed for rolling bearing fault diagnosis. Firstly, to avoid manual feature extraction, the deep CNN model is initialized for automatic feature learning. Secondly, to adapt to different signal characteristics, the main parameters of the deep CNN model are determined with a particle swarm optimization method. Thirdly, to evaluate the feature learning ability of the proposed method, t-distributed stochastic neighbor embedding (t-SNE) is further adopted to visualize the hierarchical feature learning process. The proposed method is applied to diagnose rolling bearing faults, and the results confirm that the proposed method is more effective and robust than other intelligent methods. (paper)
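The second step — choosing the deep CNN's main parameters with particle swarm optimization — can be illustrated in miniature. Below is a hedged sketch of plain PSO minimizing a stand-in validation-loss function; in the real method the objective would train and evaluate the CNN at each candidate parameter set, and the hyperparameter names and optimum are invented for illustration.

```python
import random

random.seed(42)

def validation_loss(params):
    """Stand-in for 'train the CNN with these hyperparameters, return val loss'.
    Minimum placed at kernel_size = 5, n_filters = 32 (illustrative only)."""
    k, f = params
    return (k - 5.0) ** 2 + ((f - 32.0) / 8.0) ** 2

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, loss = pso(validation_loss, bounds=[(1, 15), (8, 128)])
print(best)  # converges near kernel_size ≈ 5, n_filters ≈ 32
```

PSO is attractive here precisely because the CNN-training objective is non-differentiable in its hyperparameters: the swarm needs only function evaluations, not gradients.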

  18. Late Quaternary offset of alluvial fan surfaces along the Central Sierra Madre Fault, southern California

    Science.gov (United States)

    Burgette, Reed J.; Hanson, Austin; Scharer, Katherine M.; Midttun, Nikolas

    2016-01-01

    The Sierra Madre Fault is a reverse fault system along the southern flank of the San Gabriel Mountains near Los Angeles, California. This study focuses on the Central Sierra Madre Fault (CSMF) in an effort to provide numeric dating on surfaces with ages previously estimated from soil development alone. We have refined previous geomorphic mapping conducted in the western portion of the CSMF near Pasadena, CA, with the aid of new lidar data. This progress report focuses on our geochronology strategy employed in collecting samples and interpreting data to determine a robust suite of terrace surface ages. Sample sites for terrestrial cosmogenic nuclide and luminescence dating techniques were selected to be redundant and to be validated through relative geomorphic relationships between inset terrace levels. Additional sample sites were selected to evaluate the post-abandonment histories of terrace surfaces. We will combine lidar-derived displacement data with surface ages to estimate slip rates for the CSMF.

  19. Determination of radon concentration in drinking water resources of villages nearby Lalehzar fault and evaluation the annual effective dose

    International Nuclear Information System (INIS)

    Mohammad Malakootian; Zahra Darabi Fard; Mojtaba Rahimi

    2015-01-01

The radon concentration has been measured in 44 drinking water resources in villages near the Lalehzar fault in winter 2014. Some samples showed radon concentrations exceeding the limit set by the EPA. Further, a sample was taken from the water distribution networks fed by these water sources. The dissolved radon concentration was measured with a RAD7 device. The maximum and minimum radon concentrations were 26.88 and 0.74 Bq L⁻¹, respectively. The maximum and minimum annual effective doses for adults were estimated at 52.7 and 2.29 µSv y⁻¹, respectively. Reducing the radon content of water before use is recommended to improve public health. (author)
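The annual effective dose from ingested radon is typically computed as E = C × annual water intake × dose conversion factor. A sketch with UNSCEAR-style illustrative coefficients follows — the 730 L/y intake and the 10⁻⁸ Sv/Bq factor are assumptions, and the paper evidently uses its own coefficients, so the numbers below do not reproduce its 52.7 and 2.29 µSv/y results.

```python
def annual_effective_dose_uSv(radon_bq_per_l,
                              intake_l_per_y=730.0,   # ~2 L/day, assumed
                              dcf_sv_per_bq=1e-8):    # ingestion coefficient, assumed
    """Ingestion dose E = C * annual intake * dose conversion factor,
    converted from Sv to µSv."""
    return radon_bq_per_l * intake_l_per_y * dcf_sv_per_bq * 1e6

# The abstract's measured extremes, run through the assumed coefficients:
for c in (0.74, 26.88):
    print(f"{c:6.2f} Bq/L -> {annual_effective_dose_uSv(c):7.2f} µSv/y")
```

Because the dose is linear in concentration, ranking water sources by measured Bq/L ranks them by ingestion dose regardless of which coefficients are adopted.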

  20. Robustness of IPTV business models

    NARCIS (Netherlands)

    Bouwman, H.; Zhengjia, M.; Duin, P. van der; Limonard, S.

    2008-01-01

    The final stage in the STOF method is an evaluation of the robustness of the design, for which the method provides some guidelines. For many innovative services, the future holds numerous uncertainties, which makes evaluating the robustness of a business model a difficult task. In this chapter, we

  1. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation, (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems. These ...

  2. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  3. Preliminary Studies Concerning the Evaluation of Road Network Robustness for Iaşi National Roads Department

    OpenAIRE

    Cozar, Alexandru; Horobeţ, Iulian

    2011-01-01

Structural robustness in the road field is a new concept, little addressed in the specialized literature both in Romania and abroad. Natural and weather phenomena occur more frequently and are more destructive than ever; they endanger normal activities and can wreck road networks, causing significant damage (Grecu, 2005). These phenomena can be diverse: earthquakes, volcanic eruptions, tsunamis, landslides, storms, floods, droughts, fires or avalanches. Our country is also affected by natural dis...

  4. Fault Diagnosis in Deaerator Using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    S Srinivasan

    2007-01-01

Full Text Available In this paper a fuzzy logic based fault diagnosis system for a deaerator in a power plant unit is presented. The system parameters are obtained using the linearised state-space deaerator model. The fuzzy inference system is created, and a rule base is evaluated relating the parameters to the type and severity of the faults. These rules are fired for specific changes in system parameters, and the faults are diagnosed.
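The rule-evaluation idea — relating a parameter deviation to fault severity through fuzzy memberships — can be sketched with triangular membership functions and a weighted-average (Sugeno-style) defuzzification. All membership parameters, the input variable, and the severity levels are invented for illustration; they are not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def diagnose(pressure_dev):
    """Map a deaerator pressure deviation (%) to a fault-severity score in [0, 1].
    Membership parameters are illustrative, not taken from the paper."""
    # Fuzzify the input into three linguistic terms.
    small  = tri(pressure_dev, -1.0, 0.0, 4.0)
    medium = tri(pressure_dev,  2.0, 5.0, 8.0)
    large  = tri(pressure_dev,  6.0, 10.0, 14.0)
    # Rule firing strengths weight crisp severity levels (0 = healthy, 1 = severe);
    # the weighted average is a simple Sugeno-style defuzzification.
    num = small * 0.1 + medium * 0.5 + large * 0.9
    den = small + medium + large
    return num / den if den else 0.0

print(diagnose(0.5))  # mild deviation  -> low severity
print(diagnose(9.0))  # large deviation -> high severity
```

Each fuzzified input term corresponds to one "rule firing"; adding more inputs (level, temperature) simply extends the rule base with more antecedents in the same pattern.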

  5. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    Science.gov (United States)

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN; such an approach has not been used in fault analysis algorithms in the last few decades. The proposed scheme uses only single-end measurements of the MOV energy signals in all 3 phases over one cycle from the occurrence of a fault. Thereafter, these MOV energy signals are fed as input to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten types of fault in the test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India, are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
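The core regression step — mapping one cycle of per-phase MOV energy to a fault distance — can be sketched with a tiny one-hidden-layer network trained by plain gradient descent, a stand-in for the Levenberg-Marquardt training named in the abstract. The synthetic energy-to-distance relationship and all network sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: per-phase MOV energy over one post-fault cycle
# (3 features) vs. fault distance in km; the relationship is made up.
X = rng.uniform(0.0, 1.0, size=(200, 3))
d = 400.0 * X.mean(axis=1, keepdims=True)   # pretend distance grows with mean energy

# One-hidden-layer MLP trained by full-batch gradient descent on
# normalized distance targets (d / 400).
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    err = (h @ W2 + b2) - d / 400.0          # prediction error
    gh = (err @ W2.T) * (1 - h ** 2)         # backprop through tanh
    W2 -= lr * (h.T @ err) / len(X); b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ gh) / len(X);  b1 -= lr * gh.mean(axis=0)

test = np.array([[0.5, 0.5, 0.5]])
est_km = ((np.tanh(test @ W1 + b1) @ W2 + b2) * 400.0).item()
print(round(est_km))  # near 200 km for mid-range energies
```

In the actual scheme the training pairs would come from simulated faults at known locations, and Levenberg-Marquardt would replace the plain gradient steps for much faster convergence.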

  6. Active Fault-Tolerant Control for Wind Turbine with Simultaneous Actuator and Sensor Faults

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

Full Text Available The purpose of this paper is to present a novel fault-tolerant tracking control (FTC) strategy with robust fault estimation and compensation for simultaneous actuator and sensor faults. Within the fault-tolerant control framework, it is a challenge to develop an FTC design method for wind turbines that can tolerate simultaneous pitch actuator and pitch sensor faults having bounded first time derivatives. The paper's key contribution is a descriptor sliding mode method: an auxiliary descriptor state vector composed of the system state vector, the actuator fault vector, and the sensor fault vector is introduced to establish a novel augmented descriptor system, with which the system state can be estimated and the faults reconstructed by designing a descriptor sliding mode observer. Stability conditions on the estimation error dynamics are formulated as linear matrix inequalities (LMIs) to determine the designed parameters. With this estimate, a fault-tolerant controller is designed so that the system's stability can be maintained. The effectiveness of the design strategy is verified by implementing the controller on the National Renewable Energy Laboratory's 5-MW nonlinear, high-fidelity wind turbine model (FAST) and simulating it in MATLAB/Simulink.

  7. Dependability validation by means of fault injection: method, implementation, application

    International Nuclear Information System (INIS)

    Arlat, Jean

    1990-01-01

This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process with the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected in a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE to testing two fault-tolerant systems with very dissimilar features, and the utilization of the experimental results obtained - both as design feedback and for the evaluation of dependability measures - are used to illustrate the relevance of the method. (author) [fr]

  8. Fault attacks, injection techniques and tools for simulation

    NARCIS (Netherlands)

    Piscitelli, R.; Bhasin, S.; Regazzoni, F.

    2015-01-01

    Faults attacks are a serious threat to secure devices, because they are powerful and they can be performed with extremely cheap equipment. Resistance against fault attacks is often evaluated directly on the manufactured devices, as commercial tools supporting fault evaluation do not usually provide

  9. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    Science.gov (United States)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  10. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan

    International Nuclear Information System (INIS)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C.

    2004-01-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis

  11. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.
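
    The screening idea above, counting how seismicity concentrates near mapped fault traces, reduces to a point-to-segment distance computation over the hypocenter catalog. A minimal sketch in Python, assuming fault traces are given as planar projected segments in kilometres; the function and data names are illustrative, not from the study:

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Shortest distance from point (px, py) to segment (ax, ay)-(bx, by).

    All coordinates are assumed to be in a planar projection (km).
    """
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    denom = abx * abx + aby * aby
    # Parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def fraction_near_fault(hypocenters, fault_segments, zone_width_km):
    """Fraction of hypocenters within zone_width_km of any fault segment."""
    near = 0
    for (px, py) in hypocenters:
        d = min(point_segment_distance(px, py, *seg) for seg in fault_segments)
        if d <= zone_width_km:
            near += 1
    return near / len(hypocenters)
```

    Sweeping `zone_width_km` and plotting the near-fault fraction against the rate expected by chance is one simple way to locate the distance at which the elevated earthquake frequency decays to background, i.e. the process zone width.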

  12. Real-time fault-tolerant moving horizon air data estimation for the RECONFIGURE benchmark

    NARCIS (Netherlands)

    Wan, Y.; Keviczky, T.

    2018-01-01

    This paper proposes a real-time fault-tolerant estimation approach for combined sensor fault diagnosis and air data reconstruction. Due to simultaneous influence of winds and latent faults on monitored sensors, it is challenging to address the tradeoff between robustness to wind disturbances and

  13. Assessment on the influence of resistive superconducting fault current limiter in VSC-HVDC system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong-Geon; Khan, Umer Amir; Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Park, Byung-Bae; Lee, Bang-Wook, E-mail: bangwook@hanyang.ac.kr

    2014-09-15

    Highlights: • The role of SFCLs in VSC-HVDC systems was evaluated. • A simulation model based on the Korea Jeju-Haenam HVDC power system was designed. • The effect and feasible locations of resistive SFCLs were evaluated. • DC line-to-line, DC line-to-ground and 3 phase AC faults were imposed and analyzed. - Abstract: Due to lower risk of commutation failures, harmonic occurrences and reactive power consumption, the Voltage Source Converter (VSC) based HVDC system is regarded as the optimum HVDC solution for the future power grid. However, the absence of suitable fault protection devices for HVDC systems hinders efficient VSC-HVDC power grid design. In order to enhance the reliability of the VSC-HVDC power grid against fault current problems, the application of resistive Superconducting Fault Current Limiters (SFCLs) could be considered. SFCLs could also be applied to VSC-HVDC systems with integrated AC power systems in order to enhance the transient response and the robustness of the system. In this paper, in order to evaluate the role of SFCLs in VSC-HVDC systems and to determine their suitable position in VSC-HVDC power systems integrated with an AC power system, a simulation model based on the Korea Jeju-Haenam HVDC power system was designed in Matlab Simulink/SimPowerSystems. This model was composed of a VSC-HVDC system connected with an AC microgrid. Using the designed VSC-HVDC system, the feasible locations of resistive SFCLs were evaluated when DC line-to-line, DC line-to-ground and three phase AC faults occurred. The simulation model proved effective for evaluating the positive effects of resistive SFCLs in suppressing fault currents in VSC-HVDC systems as well as in the integrated AC systems. Finally, the optimum locations of SFCLs in VSC-HVDC transmission systems were suggested based on the simulation results.

  14. Assessment on the influence of resistive superconducting fault current limiter in VSC-HVDC system

    International Nuclear Information System (INIS)

    Lee, Jong-Geon; Khan, Umer Amir; Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Park, Byung-Bae; Lee, Bang-Wook

    2014-01-01

    Highlights: • The role of SFCLs in VSC-HVDC systems was evaluated. • A simulation model based on the Korea Jeju-Haenam HVDC power system was designed. • The effect and feasible locations of resistive SFCLs were evaluated. • DC line-to-line, DC line-to-ground and 3 phase AC faults were imposed and analyzed. - Abstract: Due to lower risk of commutation failures, harmonic occurrences and reactive power consumption, the Voltage Source Converter (VSC) based HVDC system is regarded as the optimum HVDC solution for the future power grid. However, the absence of suitable fault protection devices for HVDC systems hinders efficient VSC-HVDC power grid design. In order to enhance the reliability of the VSC-HVDC power grid against fault current problems, the application of resistive Superconducting Fault Current Limiters (SFCLs) could be considered. SFCLs could also be applied to VSC-HVDC systems with integrated AC power systems in order to enhance the transient response and the robustness of the system. In this paper, in order to evaluate the role of SFCLs in VSC-HVDC systems and to determine their suitable position in VSC-HVDC power systems integrated with an AC power system, a simulation model based on the Korea Jeju-Haenam HVDC power system was designed in Matlab Simulink/SimPowerSystems. This model was composed of a VSC-HVDC system connected with an AC microgrid. Using the designed VSC-HVDC system, the feasible locations of resistive SFCLs were evaluated when DC line-to-line, DC line-to-ground and three phase AC faults occurred. The simulation model proved effective for evaluating the positive effects of resistive SFCLs in suppressing fault currents in VSC-HVDC systems as well as in the integrated AC systems. Finally, the optimum locations of SFCLs in VSC-HVDC transmission systems were suggested based on the simulation results

  15. Fault diagnosis methods for district heating substations

    Energy Technology Data Exchange (ETDEWEB)

    Pakanen, J.; Hyvaerinen, J.; Kuismin, J.; Ahonen, M. [VTT Building Technology, Espoo (Finland). Building Physics, Building Services and Fire Technology

    1996-12-31

    A district heating substation is a demanding process for fault diagnosis. The process is nonlinear, load conditions of the district heating network change unpredictably, and standard instrumentation is designed only for control and local monitoring purposes, not for automated diagnosis. Extra instrumentation means additional cost, which is usually not acceptable to consumers. That is why not all conventional methods are applicable in this environment. The paper presents five different approaches to fault diagnosis. While developing the methods, various pragmatic aspects and robustness had to be considered in order to achieve practical solutions. The presented methods are: classification of faults using performance indexing, static and physical modelling of process equipment, energy balance of the process, interactive fault tree reasoning, and statistical tests. The methods are applied to a control valve, a heat exchanger, a mud separating device and the whole process. The developed methods are verified in practice using simulation or field tests. (orig.) (25 refs.)
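
    One of the listed approaches, the energy balance of the process, can be illustrated with a small sketch: the heat released by the district-heating (primary) side of the exchanger should match the heat absorbed by the secondary side, so a persistent imbalance is a fault symptom. This is a hypothetical simplification (constant specific heat, steady state, invented flow and temperature values), not the instrumentation or thresholds used in the study:

```python
CP_WATER = 4.19  # kJ/(kg*K), approximate specific heat of water

def heat_flow_kw(m_dot_kg_s, t_in_c, t_out_c):
    """Heat released/absorbed by a water stream: Q = m_dot * cp * |dT| (kW)."""
    return m_dot_kg_s * CP_WATER * abs(t_in_c - t_out_c)

def balance_residual(primary, secondary):
    """Relative imbalance between primary- and secondary-side heat flows.

    Each side is (mass flow kg/s, inlet temp C, outlet temp C).
    A well-behaved substation should give a residual near zero; a
    persistent large residual suggests a sensor or heat-exchanger fault.
    """
    q1 = heat_flow_kw(*primary)
    q2 = heat_flow_kw(*secondary)
    return abs(q1 - q2) / max(q1, q2)

# Healthy case: 110->60 C at 0.5 kg/s on the primary side balances
# 40->65 C at 1.0 kg/s on the secondary side (both ~105 kW).
r = balance_residual((0.5, 110.0, 60.0), (1.0, 40.0, 65.0))
```

    In practice the residual would be filtered over time before raising an alarm, since single-sample imbalances are dominated by sensor noise and transients.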

  16. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design of the nominal feedback controller.

  17. Fault Diagnosis of Demountable Disk-Drum Aero-Engine Rotor Using Customized Multiwavelet Method

    Directory of Open Access Journals (Sweden)

    Jinglong Chen

    2015-10-01

    The demountable disk-drum aero-engine rotor is an important piece of equipment that greatly impacts the safe operation of aircraft. However, assembly looseness or crack faults have led to several unscheduled breakdowns and serious accidents. Thus, condition monitoring and fault diagnosis techniques are required for identifying abnormal conditions. A customized ensemble multiwavelet method for aero-engine rotor condition identification, using measured vibration data, is developed in this paper. First, a customized multiwavelet basis function with strong adaptivity is constructed via a symmetric multiwavelet lifting scheme. Then the vibration signal is processed by the customized ensemble multiwavelet transform. Next, the normalized information entropy of the multiwavelet decomposition coefficients is computed to directly reflect and evaluate the condition. The proposed approach is first applied to fault detection of an experimental aero-engine rotor. Finally, the proposed approach is used in an engineering application, where it successfully identified the crack fault of a demountable disk-drum aero-engine rotor. The results show that the proposed method possesses excellent performance in fault detection of aero-engine rotors. Moreover, the robustness of the multiwavelet method against noise is also tested and verified by simulation and field experiments.
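
    The normalized information entropy used above as a condition indicator can be sketched for a generic coefficient sequence. This is an assumed formulation (Shannon entropy of the coefficient energy distribution, normalized by log n so the result lies in [0, 1]); the paper's multiwavelet decomposition itself is not reproduced here:

```python
import math

def normalized_entropy(coeffs):
    """Normalized Shannon entropy of a coefficient sequence.

    Coefficient energies |c|^2 are converted to a probability
    distribution; the entropy is divided by log(n) so the result lies
    in [0, 1]. A flat, noise-like distribution gives ~1, while an
    impulsive (fault-concentrated) one gives a value near 0.
    """
    energies = [c * c for c in coeffs]
    total = sum(energies)
    if total == 0 or len(coeffs) < 2:
        return 0.0
    probs = [e / total for e in energies if e > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(coeffs))
```

    A drop in this indicator over time would suggest that signal energy is concentrating into a few coefficients, the kind of change the paper associates with looseness or crack faults.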

  18. Active fault tolerant control of piecewise affine systems with reference tracking and input constraints

    DEFF Research Database (Denmark)

    Gholami, M.; Cocquempot, V.; Schiøler, H.

    2014-01-01

    An active fault tolerant control (AFTC) method is proposed for discrete-time piecewise affine (PWA) systems. Only actuator faults are considered. The AFTC framework contains a supervisory scheme, which selects a suitable controller from a set of controllers such that the stability and an acceptable performance of the faulty system are maintained. The design of the supervisory scheme is not considered here. The set of controllers is composed of a normal controller for the fault-free case, an active fault detection and isolation controller for isolation and identification of the faults, and a set of passive fault tolerant controller (PFTC) modules designed to be robust against a set of actuator faults. In this research, the piecewise nonlinear model is approximated by a PWA system. The PFTCs are state feedback laws. Each one is robust against a fixed set of actuator faults and is able to track...

  19. Kinematics of the quaternary fault zones in the Kyeongju area of the southeastern Korean Peninsula

    Energy Technology Data Exchange (ETDEWEB)

    Kim, In Seob; Lee, Byeong Hyui; Kwon, Hyeok Sang [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)] (and others)

    1998-09-15

    The purposes of this study are to interpret the kinematics of the Quaternary fault zones in the Kyeongju area, to determine deformation mechanisms during faulting by analyzing microstructures of fault rocks from the fault zones, and to carry out a tectonic evaluation of the regional fault structures in the Kyeongju-Wolsung area. The scope of this study consists of: collection and interpretation of structural elements through a detailed geologic investigation of the Quaternary faults in the Kyeongju-Wolsung area; interpretation of fault-rock microstructures from the fault zones using oriented samples of fault rocks; determination of deformation processes and mechanisms of the fault rocks; and interpretation of faulting kinematics and evaluation of the fault zones.

  20. Kinematics of the quaternary fault zones in the Kyeongju area of the southeastern Korean Peninsula

    International Nuclear Information System (INIS)

    Kim, In Seob; Lee, Byeong Hyui; Kwon, Hyeok Sang

    1998-09-01

    The purposes of this study are to interpret the kinematics of the Quaternary fault zones in the Kyeongju area, to determine deformation mechanisms during faulting by analyzing microstructures of fault rocks from the fault zones, and to carry out a tectonic evaluation of the regional fault structures in the Kyeongju-Wolsung area. The scope of this study consists of: collection and interpretation of structural elements through a detailed geologic investigation of the Quaternary faults in the Kyeongju-Wolsung area; interpretation of fault-rock microstructures from the fault zones using oriented samples of fault rocks; determination of deformation processes and mechanisms of the fault rocks; and interpretation of faulting kinematics and evaluation of the fault zones

  1. Automated Search-Based Robustness Testing for Autonomous Vehicle Software

    Directory of Open Access Journals (Sweden)

    Kevin M. Betts

    2016-01-01

    Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
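
    The genetic-algorithm search over test cases can be illustrated with a toy sketch. Here the "degree of challenge" is a stand-in analytic function rather than a closed-loop UAV simulation, and all names, parameters, and the optimum location are invented for illustration:

```python
import random

def challenge(tc):
    """Stand-in 'degree of challenge' metric for a test case (a tuple of
    initial-condition parameters); in practice this value would come
    from running the control software in closed-loop simulation."""
    x, y = tc
    return -(x - 0.7) ** 2 - (y - 0.2) ** 2  # hardest case at (0.7, 0.2)

def genetic_search(pop_size=30, generations=40, seed=1):
    """Evolve a population of test cases toward maximum challenge."""
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=challenge, reverse=True)
        parents = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # crossover by averaging, plus small Gaussian mutation
            child = tuple(
                min(1.0, max(0.0, (pa + pb) / 2 + rng.gauss(0, 0.05)))
                for pa, pb in zip(a, b)
            )
            children.append(child)
        pop = parents + children
    return max(pop, key=challenge)

best = genetic_search()
```

    The comparison in the paper amounts to checking whether such a search finds higher-challenge cases than drawing the same number of test cases uniformly at random (Monte Carlo).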

  2. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    precut models with isotropic models to evaluate the trends of variability. Our results indicate that the discontinuities are reactivated especially when the tip of the newly-formed fault is either below or connected to them. During the stage of maximum activity along the precut, the faults slow down or even stop their propagation. The fault propagation systematically resumes when the angle between the fault and the precut is about 90° (critical angle); only during this stage the fault crosses the precut. The reactivation of the discontinuities induces an increase of the apical angle of the fault-related fold and produces wider limbs compared to the isotropic reference experiments.

  3. Fault Management Architectures and the Challenges of Providing Software Assurance

    Science.gov (United States)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    The satellite system's Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed these issues in terms of V&V for a representative set of architectures. NASA's IV&V is funded by NASA's Software Assurance Research Program (SARP) in partnership with NASA's Jet Propulsion Laboratory (JPL) to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook, providing dissemination across NASA, other agencies and the satellite community.
This paper discusses the approach taken to perform the evaluations and preliminary findings from the

  4. Iowa Bedrock Faults

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This fault coverage locates and identifies all currently known/interpreted fault zones in Iowa, that demonstrate offset of geologic units in exposure or subsurface...

  5. Layered Fault Management Architecture

    National Research Council Canada - National Science Library

    Sztipanovits, Janos

    2004-01-01

    ... UAVs or Organic Air Vehicles. The approach of this effort was to analyze fault management requirements of formation flight for fleets of UAVs, and develop a layered fault management architecture which demonstrates significant...

  6. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

    The problem of fault detection and isolation of parametric faults is considered in this paper. Parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  7. A methodological combined framework for roadmapping biosensor research: a fault tree analysis approach within a strategic technology evaluation frame.

    Science.gov (United States)

    Siontorou, Christina G; Batzias, Fragiskos A

    2014-03-01

    Biosensor technology began in the 1960s to revolutionize instrumentation and measurement. Despite the market success of the glucose sensor, which revolutionized medical diagnostics, and the promise of the artificial pancreas, currently at the approval stage, the industry is reluctant to capitalize on other relevant university-produced knowledge and innovation. On the other hand, the scientific literature is extensive and persistent, while the number of university-hosted biosensor groups is growing. Considering the limited marketability of biosensors compared to the available research output, the biosensor field has been used by the present authors as a suitable paradigm for developing a combined methodological framework for "roadmapping" university research output in this discipline. This framework adopts the basic principles of the Analytic Hierarchy Process (AHP), replacing the lower level of technology alternatives with internal barriers (drawbacks, limitations, disadvantages) modeled through fault tree analysis (FTA), relying on fuzzy reasoning to account for uncertainty. The proposed methodology is validated retrospectively using ion-selective field effect transistor (ISFET)-based biosensors as a case example, and then implemented prospectively for membrane biosensors, with an emphasis on manufacturability issues. The analysis traced the trajectory of membrane platforms differently than the available market roadmaps, which, considering the vast industrial experience in tailoring and handling crystalline forms, suggest the technology path of biomimetic and synthetic materials. The results presented herein indicate that future trajectories lie with nanotechnology, especially nanofabrication and nano-bioinformatics, and focus more on the science path, that is, on controlling the natural process of self-assembly and the thermodynamics of bioelement-lipid interaction. This retains the nature-derived sensitivity of the biosensor platform, pointing out the differences
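
    The AHP layer of such a framework derives criterion weights from pairwise comparison judgments. A minimal sketch using the common row geometric-mean approximation of the AHP priority vector (the three criteria and the judgment values are hypothetical; the paper's fuzzy FTA extension for the barrier level is not shown):

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric-mean method.

    `pairwise[i][j]` holds the judged importance of criterion i over
    criterion j on the usual 1-9 scale (reciprocal matrix, diagonal 1).
    Returns normalized weights that sum to 1.
    """
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical 3-criterion comparison, e.g. sensitivity vs cost vs manufacturability:
# sensitivity is moderately more important than cost, strongly more than manufacturability.
w = ahp_weights([
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 3.0],
    [1.0 / 5.0, 1.0 / 3.0, 1.0],
])
```

    The geometric-mean method is a standard closed-form stand-in for the principal-eigenvector computation of full AHP and agrees with it exactly for consistent matrices.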

  8. RESULTS, RESPONSIBILITY, FAULT AND CONTROL

    Directory of Open Access Journals (Sweden)

    Evgeniy Stoyanov

    2016-09-01

    The paper focuses on the responsibility arising from registered financial results. The analysis of this responsibility presupposes its evaluation and a determination of the role of fault in the formation of negative results. The search for efficiency in this whole process is justified by an understanding of the mechanisms that regulate the behavior of economic actors.

  9. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)
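
    The strategy sequence surveyed here, error detection followed by fault treatment and error recovery, is classically combined in the recovery-block scheme. A toy sketch, with hypothetical variant routines and acceptance test (not drawn from the survey itself):

```python
def recovery_block(variants, acceptance_test, *args):
    """Recovery-block fault tolerance: run each variant in turn and
    accept the first result that passes the acceptance test (error
    detection); falling through to the next variant is the error
    recovery step, and discarding a failed variant is fault treatment.
    """
    for variant in variants:
        try:
            result = variant(*args)
        except Exception:
            continue  # variant crashed: treat the fault by moving on
        if acceptance_test(result):
            return result
    raise RuntimeError("all variants failed the acceptance test")

# Hypothetical example: a primary routine with a planted bug for large
# inputs, and a slower but correct backup routine.
primary = lambda x: x * x if x < 10 else -1       # faulty for x >= 10
backup = lambda x: sum(x for _ in range(x))       # correct but O(n)
square = recovery_block([primary, backup], lambda r: r >= 0, 12)
```

    The scheme's strength is that the acceptance test need only detect errors, not diagnose them, which keeps the detection logic simple relative to N-version voting.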

  10. Performance based fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2002-01-01

    Different aspects of fault detection and fault isolation in closed-loop systems are considered. It is shown that using the standard setup known from feedback control, it is possible to formulate fault diagnosis problems based on a performance index in this general standard setup. It is also shown...

  11. Effects of Fault Displacement on Emplacement Drifts

    International Nuclear Information System (INIS)

    Duan, F.

    2000-01-01

    The purpose of this analysis is to evaluate potential effects of fault displacement on emplacement drifts, including drip shields and waste packages emplaced in emplacement drifts. The output from this analysis not only provides data for the evaluation of long-term drift stability but also supports the Engineered Barrier System (EBS) process model report (PMR) and Disruptive Events Report currently under development. The primary scope of this analysis includes (1) examining fault displacement effects in terms of induced stresses and displacements in the rock mass surrounding an emplacement drift and (2) predicting fault displacement effects on the drip shield and waste package. The magnitude of the fault displacement analyzed in this analysis bounds the mean fault displacement corresponding to an annual frequency of exceedance of 10^-5 adopted for the preclosure period of the repository and also supports the postclosure performance assessment. This analysis is performed following the development plan prepared for analyzing effects of fault displacement on emplacement drifts (CRWMS M&O 2000). The analysis will begin with the identification and preparation of requirements, criteria, and inputs. A literature survey on accommodating fault displacements encountered in underground structures such as buried oil and gas pipelines will be conducted. For a given fault displacement, the least favorable scenario in terms of the spatial relation of a fault to an emplacement drift is chosen, and the analysis is then performed analytically. Based on the analysis results, conclusions are made regarding the effects and consequences of fault displacement on emplacement drifts. Specifically, the analysis will discuss loads which can be induced by fault displacement on emplacement drifts, drip shields and/or waste packages during the time period of postclosure

  12. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) Version 5.0. Fault tree, event tree, and piping & instrumentation diagram (FEP) editors reference manual: Volume 7

    International Nuclear Information System (INIS)

    McKay, M.K.; Skinner, N.L.; Wood, S.T.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Fault Tree, Event Tree, and Piping and Instrumentation Diagram (FEP) editors allow the user to graphically build and edit fault trees, event trees, and piping and instrumentation diagrams (P&IDs). The software is designed to enable the independent use of the graphical-based editors found in the Integrated Reliability and Risk Assessment System (IRRAS). FEP is comprised of three separate editors (Fault Tree, Event Tree, and Piping and Instrumentation Diagram) and a utility module. This reference manual provides a screen-by-screen guide of the entire FEP system.
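
    A fault tree of the kind these editors build is ultimately quantified from gate logic over basic-event probabilities. A minimal sketch assuming independent basic events (the example tree and its probabilities are invented for illustration, not taken from SAPHIRE):

```python
def and_gate(probs):
    """Top probability of an AND gate: all independent inputs must fail."""
    q = 1.0
    for p in probs:
        q *= p
    return q

def or_gate(probs):
    """Top probability of an OR gate: at least one independent input fails,
    computed as 1 minus the probability that none fail."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

# Invented tree: "cooling lost" = (pump A fails AND pump B fails) OR power lost
p_top = or_gate([and_gate([0.01, 0.01]), 0.001])
```

    Real PRA tools additionally generate minimal cut sets and handle common-cause dependence; the independence assumption here is what makes the closed-form gate arithmetic valid.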

  13. Fuzzy delay model based fault simulator for crosstalk delay fault test ...

    Indian Academy of Sciences (India)

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design .... To find the quality of non-robust tests, a fuzzy delay ..... Dubois D and Prade H 1989 Processing Fuzzy temporal knowledge. IEEE Transactions ...

  14. Analytical Evaluation of Preliminary Drop Tests Performed to Develop a Robust Design for the Standardized DOE Spent Nuclear Fuel Canister

    International Nuclear Information System (INIS)

    Ware, A.G.; Morton, D.K.; Smith, N.L.; Snow, S.D.; Rahl, T.E.

    1999-01-01

    The Department of Energy (DOE) has developed a design concept for a set of standard canisters for the handling, interim storage, transportation, and disposal of DOE spent nuclear fuel (SNF) in the national repository. The standardized DOE SNF canister has to be capable of handling virtually all of the DOE SNF in a variety of potential storage and transportation systems. It must also be acceptable to the repository, based on current and anticipated future requirements. This expected usage mandates a robust design. The canister design has four unique geometries, with lengths of approximately 10 feet or 15 feet, and an outside nominal diameter of 18 inches or 24 inches. The canister has been developed to withstand a drop from 30 feet onto a rigid (flat) surface, sustaining only minor damage - but no rupture - to the pressure (containment) boundary. The majority of the end drop-induced damage is confined to the skirt and lifting/stiffening ring components, which can be removed if desired after an accidental drop. A canister, with its skirt and stiffening ring removed after an accidental drop, can continue to be used in service with appropriate operational steps being taken. Features of the design concept have been proven through drop testing and finite element analyses of smaller test specimens. Finite element analyses also validated the canister design for drops onto a rigid (flat) surface for a variety of canister orientations at impact, from vertical to 45 degrees off vertical. Actual 30-foot drop testing has also been performed to verify the final design, though limited to just two full-scale test canister drops. In each case, the analytical models accurately predicted the canister response

  15. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    Science.gov (United States)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

    The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. 
Contemporary time-invariant hazard, however, may differ from the long term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal
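
    The MIDAS idea referenced above, a velocity estimate insensitive to seasonality, outliers, and steps, rests on taking the median of slopes formed from sample pairs separated by about one year. A simplified sketch under that assumption (the published MIDAS estimator adds further robustness refinements not shown here, and the sample data are synthetic):

```python
import math
import statistics

def midas_like_rate(times, positions, pair_dt=1.0, tol=0.1):
    """Median-of-pairs velocity estimate in the spirit of MIDAS.

    Slopes are formed only from sample pairs separated by roughly
    `pair_dt` (e.g. one year) so that annual cycles cancel, then the
    median slope is taken, which suppresses outliers and steps.
    Times in years, positions in mm; returns mm/yr.
    """
    slopes = []
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            dt = times[j] - times[i]
            if abs(dt - pair_dt) <= tol:
                slopes.append((positions[j] - positions[i]) / dt)
    return statistics.median(slopes)

# Synthetic series: steady 2 mm/yr motion plus an annual cycle, sampled quarterly.
times = [0.25 * k for k in range(9)]
positions = [2.0 * t + 0.3 * math.sin(2 * math.pi * t) for t in times]
rate = midas_like_rate(times, positions)
```

    Because every slope spans exactly one seasonal period, the annual term drops out of each pair and the median recovers the secular rate even though no seasonal model was fit.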

  16. Three dimensional investigation of oceanic active faults. A demonstration survey

    Energy Technology Data Exchange (ETDEWEB)

    Nakao, Seizo; Kishimoto, Kiyoyuki; Kuramoto, Shinichi; Sato, Mikio [Geological Survey of Japan, Tsukuba, Ibaraki (Japan)

    1998-02-01

    In order to improve the evaluation of the activity probability and action potential of oceanic active faults, which can have important effects on nuclear facilities, a trench-type oceanic active fault was investigated three-dimensionally. The investigation comprised a high-precision sea-bottom topographic survey and observation of sea-bottom backscattered-wave image data using a sea-bottom topography acoustic imager. In addition, a high-precision survey of active faults beneath the sea bottom was conducted by high-resolution seismic survey to detect oceanic active faults three-dimensionally. Furthermore, the generally available data were summarized to promote construction of a database for evaluating the active faults. (G.K.)

  17. Three dimensional investigation of oceanic active faults. A demonstration survey

    International Nuclear Information System (INIS)

    Nakao, Seizo; Kishimoto, Kiyoyuki; Kuramoto, Shinichi; Sato, Mikio

    1998-01-01

    In order to improve the evaluation of the activity probability and activity potential of oceanic active faults, which can have important effects on nuclear facilities, a trench-type oceanic active fault was investigated three-dimensionally. The investigation comprised a high-precision sea-bottom topographic survey and observation of sea-bottom backscattering image data using an acoustic sea-bottom imaging system. In addition, a high-resolution seismic survey was conducted to map active faults beneath the sea bottom with high precision and to detect oceanic active faults three-dimensionally. Furthermore, the publicly issued data were compiled to support construction of a database for evaluating the active faults. (G.K.)

  18. FUZZY FAULT DETECTION FOR PERMANENT MAGNET SYNCHRONOUS GENERATOR

    Directory of Open Access Journals (Sweden)

    N. Selvaganesan

    2011-07-01

    Full Text Available Faults in engineering systems are difficult to avoid and may result in serious consequences. Effective fault detection and diagnosis can improve system reliability and avoid expensive maintenance. In this paper, a fuzzy-system-based fault detection scheme for a permanent magnet synchronous generator is proposed. The positive and negative sequence current components are used as fault indicators and given as inputs to the fuzzy fault detector. The fuzzy inference system is created and its rule base is evaluated, relating the sequence current components to the type of fault. These rules fire for specific changes in the sequence current components, and the faults are detected. The feasibility of the proposed scheme for a permanent magnet synchronous generator is demonstrated for different fault types under various operating conditions using MATLAB/Simulink.
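
The core of the scheme, sequence-current extraction feeding a fuzzy rule, can be sketched as follows. The membership thresholds (0.05 and 0.2 on the negative-to-positive sequence ratio) are illustrative assumptions, not values from the paper, and the single rule stands in for the paper's full rule base.

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator

def sequence_components(ia, ib, ic):
    """Positive- and negative-sequence magnitudes of a three-phase set."""
    i_pos = (ia + A * ib + A ** 2 * ic) / 3
    i_neg = (ia + A ** 2 * ib + A * ic) / 3
    return abs(i_pos), abs(i_neg)

def mu_high(x, lo=0.05, hi=0.2):
    """Trapezoidal membership of 'unbalance is HIGH' (thresholds assumed)."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def fuzzy_fault_degree(ia, ib, ic):
    """Rule: IF negative/positive sequence ratio is HIGH THEN fault."""
    i1, i2 = sequence_components(ia, ib, ic)
    return mu_high(i2 / i1) if i1 > 0 else 1.0

# Balanced (healthy) phasors vs. one collapsed phase (faulty).
healthy = (1 + 0j, A ** 2, A)     # ia, ib, ic, 120 degrees apart
faulty = (0.2 + 0j, A ** 2, A)    # phase-a magnitude drop
print(fuzzy_fault_degree(*healthy))  # 0.0
print(fuzzy_fault_degree(*faulty))   # 1.0
```

For the balanced set the negative-sequence magnitude is numerically zero, so the rule does not fire; the unbalanced set drives the ratio past the upper threshold.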

  19. Optimization and validation of an existing, surgical and robust dry eye rat model for the evaluation of therapeutic compounds.

    Science.gov (United States)

    Joossen, Cedric; Lanckacker, Ellen; Zakaria, Nadia; Koppen, Carina; Joossens, Jurgen; Cools, Nathalie; De Meester, Ingrid; Lambeir, Anne-Marie; Delputte, Peter; Maes, Louis; Cos, Paul

    2016-05-01

    The aim of this research was to optimize and validate an animal model for dry eye, adopting clinically relevant evaluation parameters. Dry eye was induced in female Wistar rats by surgical removal of the exorbital lacrimal gland. The clinical manifestations of dry eye were evaluated by tear volume measurements, corneal fluorescein staining, cytokine measurements in tear fluid, MMP-9 mRNA expression and CD3(+) cell infiltration in the conjunctiva. The animal model was validated by treatment with Restasis(®) (4 weeks) and commercial dexamethasone eye drops (2 weeks). Removal of the exorbital lacrimal gland resulted in 50% decrease in tear volume and a gradual increase in corneal fluorescein staining. Elevated levels of TNF-α and IL-1α have been registered in tear fluid together with an increase in CD3(+) cells in the palpebral conjunctiva when compared to control animals. Additionally, an increase in MMP-9 mRNA expression was recorded in conjunctival tissue. Reference treatment with Restasis(®) and dexamethasone eye drops had a positive effect on all evaluation parameters, except on tear volume. This rat dry eye model was validated extensively and judged appropriate for the evaluation of novel compounds and therapeutic preparations for dry eye disease. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    Science.gov (United States)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. 
Using 3D seismic reflection data and new

  1. Distribution and nature of fault architecture in a layered sandstone and shale sequence: An example from the Moab fault, Utah

    Science.gov (United States)

    Davatzes, N.C.; Aydin, A.

    2005-01-01

    We examined the distribution of fault rock and damage zone structures in sandstone and shale along the Moab fault, a basin-scale normal fault with nearly 1 km (0.62 mi) of throw, in southeast Utah. We find that fault rock and damage zone structures vary along strike and dip. Variations are related to changes in fault geometry, fault slip, lithology, and the mechanism of faulting. In sandstone, we differentiated two structural assemblages: (1) deformation bands, zones of deformation bands, and polished slip surfaces and (2) joints, sheared joints, and breccia. These structural assemblages result from the deformation band-based mechanism and the joint-based mechanism, respectively. Along the Moab fault, where both types of structures are present, joint-based deformation is always younger. Where shale is juxtaposed against the fault, a third faulting mechanism, smearing of shale by ductile deformation and associated shale fault rocks, occurs. Based on the knowledge of these three mechanisms, we projected the distribution of their structural products in three dimensions along idealized fault surfaces and evaluated the potential effect on fluid and hydrocarbon flow. We contend that these mechanisms could be used to facilitate predictions of fault and damage zone structures and their permeability from limited data sets. Copyright © 2005 by The American Association of Petroleum Geologists.

  2. Modular representation and analysis of fault trees

    Energy Technology Data Exchange (ETDEWEB)

    Olmos, J; Wolf, L [Massachusetts Inst. of Tech., Cambridge (USA). Dept. of Nuclear Engineering

    1978-08-01

    An analytical method to describe fault tree diagrams in terms of their modular compositions is developed. Fault tree structures are characterized by recursively relating the top tree event to all its basic component inputs through a set of equations defining each of the modules of the fault tree. It is shown that such a modular description is an extremely valuable tool for making a quantitative analysis of fault trees. The modularization methodology has been implemented in the PL-MOD computer code, written in the PL/1 language, which is capable of modularizing fault trees containing replicated components and replicated modular gates. PL-MOD can in addition handle mutually exclusive inputs and explicit higher-order symmetric (k-out-of-n) gates. The step-by-step modularization of fault trees performed by PL-MOD is demonstrated, and it is shown how this procedure is only made possible through an extensive use of the list processing tools available in PL/1. A number of nuclear reactor safety system fault trees were analyzed. PL-MOD performed the modularization and the evaluation of the modular occurrence probabilities and Vesely-Fussell importance measures for these systems very efficiently. In particular, its execution time for the modularization of a PWR High Pressure Injection System reduced fault tree was 25 times faster than that necessary to generate its equivalent minimal cut-set description using MOCUS, a code considered fast by present standards.
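
The bottom-up modular evaluation that PL-MOD performs can be illustrated with a toy fault tree: once a subtree is an independent module, it collapses to a single occurrence probability, and no minimal cut sets are enumerated. This is a minimal sketch assuming statistically independent basic events; PL-MOD's handling of replicated events and k-out-of-n gates is not shown.

```python
# Each gate is ("AND"|"OR", [children]); a child is either another
# gate (a module) or a basic-event probability (float). Evaluation
# is a single bottom-up pass: every module reduces to one number.

def evaluate(gate):
    kind, children = gate[0], gate[1]
    probs = [c if isinstance(c, float) else evaluate(c) for c in children]
    if kind == "AND":
        p = 1.0
        for q in probs:
            p *= q
        return p
    if kind == "OR":  # 1 minus the product of complements
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p
    raise ValueError(kind)

# TOP = OR(module1, module2); each module is itself an AND gate.
tree = ("OR", [("AND", [0.1, 0.2]), ("AND", [0.05, 0.5])])
print(evaluate(tree))  # ≈ 0.0445
```

The two AND modules collapse to 0.02 and 0.025, and the top OR combines them without ever listing cut sets, which is what makes the modular approach fast on large trees.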

  3. Dynamic modeling of gearbox faults: A review

    Science.gov (United States)

    Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng

    2018-01-01

    Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the health of the gearbox will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, and consequently results in safer operation and greater cost savings. Recently, many studies have been conducted to develop gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and challenges are reviewed and discussed. This detailed literature review limits its scope to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.
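
For a flavor of the gear mesh stiffness evaluation the review discusses, a common first-order model treats stiffness as a rectangular wave alternating between single- and double-tooth-pair contact, with a local stiffness reduction standing in for a cracked tooth. All numerical values below (stiffnesses, contact ratio, reduction factor) are assumed for illustration only.

```python
# Rectangular-wave time-varying mesh stiffness for a healthy gear
# pair: for a contact ratio between 1 and 2, a fraction
# (contact_ratio - 1) of each mesh cycle has two tooth pairs in
# contact (high stiffness), the rest has one (low stiffness).

def mesh_stiffness(t, mesh_period=1.0, contact_ratio=1.6,
                   k_single=1.0e8, k_double=2.0e8):
    """Mesh stiffness (N/m) at time t for a healthy gear pair."""
    phase = (t / mesh_period) % 1.0
    double_frac = contact_ratio - 1.0
    return k_double if phase < double_frac else k_single

# A tooth crack is often modeled as a local stiffness reduction
# whenever the cracked tooth is in mesh (mesh_period = 1.0 assumed).
def cracked_stiffness(t, reduction=0.3, crack_tooth=0, n_teeth=20, **kw):
    tooth = int(t / 1.0) % n_teeth
    k = mesh_stiffness(t, **kw)
    return k * (1.0 - reduction) if tooth == crack_tooth else k

print(mesh_stiffness(0.2))   # double-pair contact (higher value)
print(mesh_stiffness(0.9))   # single-pair contact (lower value)
```

Feeding such a stiffness signal into a lumped-parameter gear dynamic model produces the periodic impulses in vibration that fault diagnosis methods look for.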

  4. Korea-Japan Joint Research on Development of Seismic Capacity Evaluation and Enhancement Technology Considering Near-Fault Effect

    Energy Technology Data Exchange (ETDEWEB)

    Choun, Young Sun; Choi, In Kil; Kim, Min Kyu [KAERI, Daejon (Korea, Republic of); Ohtori, Yasuki; Shiba, Yoshiaki; Nakajima, Masato [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    2005-12-15

    Several recent improved methods for the EGFM are introduced in order to avoid artificial holes seen in the synthetic acceleration spectrum. Furthermore, evaluation of input ground motions at the Wolsung NPP is performed by varying the source parameters that may control high-frequency wave radiation, and the deviation of the synthetic motions is revealed. PSHA case studies for four NPP sites (Wolsung, Kori, Uljin, Younggwang) are performed. In the analysis, site-specific attenuation equations developed for Korean NPP sites are employed, and the seismic hazards for the target sites are evaluated for the case where four kinds of seismic source models are considered. Moreover, the PSHA for Wolsung and Younggwang is conducted using the site-specific attenuation equation with the index of response spectra, and the uniform hazard spectra are evaluated for the two sites. The supporting tool for seismic response analysis and the evaluation tool for annual probability of failure were integrated in the frame of the seismic risk assessment system. The tools were then applied to the seismic risk assessment of the conventional EDG and the isolated EDG. General information such as earthquake parameters and the regional distribution of seismic intensity is summarized for the 2005 West Off Fukuoka earthquake. The strong motion records observed at Japanese and Korean sites are then compiled, and the regional distribution of peak accelerations is presented. Moreover, the peak accelerations of the records are compared with the values estimated from existing attenuation equations.

  5. Korea-Japan Joint Research on Development of Seismic Capacity Evaluation and Enhancement Technology Considering Near-Fault Effect (Final Report)

    Energy Technology Data Exchange (ETDEWEB)

    Choun, Young Sun; Choi, In Kil; Kim, Min Kyu [KAERI, Daejeon (Korea, Republic of); Ohtori, Yasuki; Shiba, Yoshiaki; Nakajima, Masato [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    2006-12-15

    We compiled the results of the source analysis obtained under the collaboration research. The recent construction scheme for source modeling adopted in Japan is described, and strong-motion prediction is performed assuming scenario earthquakes occurring in the Ulsan fault system, Korea. Finally, Qs values beneath the Korean inland crust are estimated using strong-motion records observed in Korea from the 2005 Off West Fukuoka earthquake (M7.0). Probabilistic seismic hazard for four NPP sites in Korea is evaluated, in which the site-specific attenuation equations with Index SA developed for NPP sites are adopted. Furthermore, the uniform hazard spectra for the four NPP sites in Korea are obtained by conducting the PSHA using the attenuation equations with the index of response spectra and the seismic source model cases with maximum weights. The supporting tools for seismic response analysis, the evaluation tool for annual probability of failure, and the system analysis program were developed for the collaboration. The tools were verified against theoretical results, the results given in the reference document of EQESRA, and so forth. The system analysis program was applied to investigate the effect of improving the seismic capacity of equipment. We evaluated the annual probability of failure of isolated and non-isolated EDGs at the Younggwang NPP site as a result of the collaboration. The input ground motion for generating the seismic fragility curve was determined based on the seismic hazard analysis. It was found that the annual probability of failure of the isolated EDG is lower than that of the non-isolated EDG.

  6. Fault tolerant control for uncertain systems with parametric faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2006-01-01

    A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation.

  7. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    Science.gov (United States)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces the residue noise in the signal reconstruction. Both CEEMD and EEMD need a large enough ensemble number to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at a lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.
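
The complementary part of CEEMD can be isolated in a few lines: each noise realization is added with both signs (+n and -n) before decomposition, so the injected noise cancels exactly when the paired results are averaged. The sketch below shows the cancellation on the raw signals; in CEEMD the same pairing is applied across the EMD decompositions of the whole ensemble.

```python
import random

# Complementary noise pairing: decompose (signal + n) and
# (signal - n), then average. The added noise cancels identically,
# which is why CEEMD leaves less residue noise than EEMD at the
# same ensemble size. Shown here on raw signals for clarity.

random.seed(0)
signal = [float(i % 7) for i in range(100)]      # arbitrary test signal
noise = [random.gauss(0.0, 0.5) for _ in range(100)]

plus = [s + n for s, n in zip(signal, noise)]    # signal + noise
minus = [s - n for s, n in zip(signal, noise)]   # signal - noise
recovered = [0.5 * (p + m) for p, m in zip(plus, minus)]

residue = max(abs(r - s) for r, s in zip(recovered, signal))
print(residue)  # zero up to floating-point rounding
```

In EEMD the noise only averages out statistically (residue shrinks like 1/sqrt(ensemble size)), whereas the complementary pairing removes it exactly, motivating the smaller ensembles the paper targets.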

  8. Robustness Analysis of Timber Truss Structure

    DEFF Research Database (Denmark)

    Rajčić, Vlatka; Čizmar, Dean; Kirkegaard, Poul Henning

    2010-01-01

    The present paper discusses robustness of structures in general and the robustness requirements given in the codes. Robustness of timber structures is also an issue, as this is closely related to Working Group 3 (Robustness of systems) of the COST E55 project. Finally, an example of a robustness evaluation of a wide-span timber truss structure is presented. This structure was built a few years ago near Zagreb and has a span of 45 m. Reliability analysis of the main members and the system is conducted, and based on this a robustness analysis is performed.

  9. Evaluation of a finite-element reciprocity method for epileptic EEG source localization: Accuracy, computational complexity and noise robustness

    DEFF Research Database (Denmark)

    Shirvany, Yazdan; Rubæk, Tonny; Edelvik, Fredrik

    2013-01-01

    The aim of this paper is to evaluate the performance of an EEG source localization method that combines a finite element method (FEM) and the reciprocity theorem. The reciprocity method is applied to solve the forward problem in a four-layer spherical head model for a large number of test dipoles ... noise and electrode misplacement. The results show approximately 3% relative error between the potentials calculated numerically by the reciprocity theorem and the analytical solutions. When adding EEG noise with SNR between 5 and 10, the mean localization error is approximately 4.3 mm. For the case with 10 mm electrode misplacement, the localization error is 4.8 mm. The reciprocity EEG source localization speeds up the solution of the inverse problem by more than three orders of magnitude compared to state-of-the-art methods. The reciprocity method has high accuracy for modeling the dipole...

  10. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.
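
The closing point about the effective stress principle can be made concrete with the Coulomb criterion: slip initiates where shear stress exceeds cohesion plus friction times effective normal stress (normal stress minus pore pressure), so of two sides under the same load, the higher-pressure side fails first. The stress values and friction coefficient below are illustrative, not taken from the paper.

```python
# Effective-stress Coulomb check: slip when the shear stress exceeds
# cohesion + mu * (normal stress - pore pressure). A pressure
# discontinuity across the gouge layer then picks out which side
# reaches failure first. Values (MPa) and mu are assumed.

def coulomb_slips(shear, normal, pore_pressure, cohesion=0.0, mu=0.6):
    """True if the Coulomb criterion predicts slip on this side."""
    return shear > cohesion + mu * (normal - pore_pressure)

shear, normal = 5.0, 10.0        # same loading on both sides of the gouge
p_low, p_high = 1.0, 3.0         # pressure discontinuity across the layer

print(coulomb_slips(shear, normal, p_low))   # False: low-pressure side holds
print(coulomb_slips(shear, normal, p_high))  # True: high-pressure side slips
```

This is the single-point version of the paper's grain-scale result: the onset of slip is controlled by the maximum pore pressure across the fault.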

  11. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip.

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.

  12. Robust and Fault Tolerant Control of CD-players

    DEFF Research Database (Denmark)

    Vidal, Enrique Sanchez

    Several new standards have emerged recently in the area of portable optical data storage media and more are on their way. In addition to the well-known Compact Disc (CD), portable optical media now also feature media for video storage (DVDs) and general data storage media for computer purposes (CD...

  13. "HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    Science.gov (United States)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) to the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with the largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy, whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and associated seismic response patterns within the HOT concept, and outline fundamental differences between this novel interpretation and more orthodox viewpoints like the criticality concept. 
The discussion is flanked by numerical simulations of a

  14. Evaluation of ground physical properties around the Nojima earthquake fault by geophysical exploration; Butsuri tansa ni yoru `Nojima jishin danso` shuhen no jiban bussei ni kansuru ichihyoka

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, K; Tsuji, T [Newjec Inc., Osaka (Japan); Tsuji, M [OYO Corp., Tokyo (Japan)

    1996-05-01

    Various surveys were conducted in the area around the Nojima fault, including ground surface and two-dimensional electrical surveys, borings, and elastic wave tomography, in order to characterize the ground properties around the `Nojima earthquake fault.` The resistivity imaging method, one of the two-dimensional electrical methods, was used to determine fault shapes along the 1.6 km long section extending between Esaki and Hirabayashi. The traverse lines were set almost perpendicular to the fault. Boreholes were excavated, and elastic wave tomography was conducted between the boreholes on the 9th and 17th traverse lines to confirm ground conditions and to compare the results with observed elastic wave velocities. Very low resistivities are observed in places where granite is distributed, suggesting that the fault-induced changes are not limited to the immediate vicinity of the fault. The zone of reduced elastic wave velocity is narrow, at most 10 m wide, with velocities of 2.4 km/s or lower, which is low for wave propagation in granite. 5 refs., 4 figs.

  15. Fault location in underground cables using ANFIS nets and discrete wavelet transform

    Directory of Open Access Journals (Sweden)

    Shimaa Barakat

    2014-12-01

    Full Text Available This paper presents an accurate algorithm for locating faults in a medium voltage underground power cable using a combination of an Adaptive Network-Based Fuzzy Inference System (ANFIS) and the discrete wavelet transform (DWT). The proposed method uses five ANFIS networks and consists of two stages: fault type classification and exact fault location. In the first stage, an ANFIS is used to determine the fault type, applying four inputs, i.e., the maximum detail energies of the three phase and zero-sequence currents. The other four ANFIS networks are used to pinpoint the faults (one for each fault type). The same four inputs, i.e., the maximum detail energies of the three phase and zero-sequence currents, are used to train the neuro-fuzzy inference systems in order to accurately locate the faults on the cable. The proposed method is evaluated under different fault conditions such as different fault locations, different fault inception angles and different fault resistances.
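
The input feature, the maximum detail energy from a DWT of the currents, can be sketched with a one-level Haar transform repeated over a few decomposition levels. The Haar wavelet and three levels are simplifying assumptions made here; the paper's wavelet choice and decomposition depth are not reproduced.

```python
# One-level Haar DWT plus a "maximum detail energy" feature, the
# kind of scalar fed to the ANFIS networks. Haar is chosen only
# because it fits in two lines; any orthogonal wavelet works.

def haar_dwt(x):
    """One-level Haar DWT: (approximation, detail) coefficients."""
    s = 0.5 ** 0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def max_detail_energy(x, levels=3):
    """Largest per-level detail energy over `levels` decompositions."""
    energies = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        energies.append(sum(c * c for c in d))
    return max(energies)

smooth = [1.0] * 64          # steady current: no detail energy
spiky = [1.0] * 64
spiky[10] = 25.0             # transient, as at fault inception
print(max_detail_energy(smooth))      # 0.0
print(max_detail_energy(spiky) > 0)   # True
```

A fault transient concentrates energy in the detail bands, so this single number already separates the two cases; the ANFIS stage then maps such features to fault type and location.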

  16. Applying Parametric Fault Detection to a Mechanical System

    DEFF Research Database (Denmark)

    Felício, P.; Stoustrup, Jakob; Niemann, H.

    2002-01-01

    A way of doing parametric fault detection is described. It is based on the representation of parameter changes as linear fractional transformations (LFTs). We describe a model with parametric uncertainty. Then a stabilizing controller is chosen and its robustness properties are studied via mu analysis. ... The parameter changes (faults) are estimated based on estimates of the fictitious signals that enter the delta block in the LFT. These signal estimators are designed by H-infinity techniques. The chosen example is an inverted pendulum.

  17. Quantitative Diagnosis of Rotor Vibration Fault Using Process Power Spectrum Entropy and Support Vector Machine Method

    Directory of Open Access Journals (Sweden)

    Cheng-Wei Fei

    2014-01-01

    Full Text Available To improve the capacity for diagnosing rotor vibration faults in stochastic processes, an effective fault diagnosis method, named the Process Power Spectrum Entropy and Support Vector Machine (PPSE-SVM) method, was proposed. The fault diagnosis model of PPSE-SVM was established by fusing the PPSE method and SVM theory. Based on simulated rotor vibration fault experiments, process data for four typical vibration faults (rotor imbalance, shaft misalignment, rotor-stator rubbing, and pedestal looseness) were collected at multiple points (multiple channels) and multiple speeds. Using the PPSE method, the PPSE values of these data were extracted as fault feature vectors to establish the SVM model of rotor vibration fault diagnosis. The results demonstrate that the proposed method possesses high precision, good learning ability, good generalization ability, and strong fault tolerance (robustness) in four respects: distinguishing fault types, fault severity, fault location, and noise immunity for rotor stochastic vibration. This paper presents a novel method (PPSE-SVM) for rotor vibration fault diagnosis and real-time vibration monitoring. The presented effort is promising for improving the fault diagnosis precision of rotating machinery such as gas turbines.
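
The PPSE feature reduces, in spirit, to the Shannon entropy of the normalized power spectrum: a near-periodic (healthy-rotor-like) signal concentrates power in a few bins and gives low entropy, while a broadband (fault-perturbed) signal spreads power out and gives high entropy. The sketch below is the generic spectral-entropy computation, not the paper's exact recipe.

```python
import numpy as np

# Power spectrum entropy: normalize the power spectrum to a
# probability distribution over frequency bins, then take its
# Shannon entropy. Low entropy = power concentrated in few bins.

def power_spectrum_entropy(x):
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd / psd.sum()
    psd = psd[psd > 0]                   # avoid log(0)
    return float(-(psd * np.log(psd)).sum())

rng = np.random.default_rng(42)
t = np.arange(512) / 512.0
tone = np.sin(2 * np.pi * 50 * t)        # periodic, rotor-like signal
noise = rng.standard_normal(512)         # broadband signal

print(power_spectrum_entropy(tone) < power_spectrum_entropy(noise))  # True
```

Vectors of such entropies, computed per channel and per speed, play the role of the fault feature vectors that the SVM stage then classifies.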

  18. Fault Tolerant Control of Wind Turbines

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Kinnaert, Michel

    2013-01-01

    This paper presents a test benchmark model for the evaluation of fault detection and accommodation schemes. This benchmark model deals with the wind turbine on a system level, and it includes sensor, actuator, and system faults, namely faults in the pitch system, the drive train, the generator......, and the converter system. Since it is a system-level model, converter and pitch system models are simplified because these are controlled by internal controllers working at higher frequencies than the system model. The model represents a three-bladed pitch-controlled variable-speed wind turbine with a nominal power...

  19. A study on quantification of unavailability of DPPS with fault tolerant techniques considering fault tolerant techniques' characteristics

    International Nuclear Information System (INIS)

    Kim, B. G.; Kang, H. G.; Kim, H. E.; Seung, P. H.; Kang, H. G.; Lee, S. J.

    2012-01-01

    With the improvement of digital technologies, digital I and C systems have come to include more varied fault tolerant techniques than conventional analog I and C systems, in order to increase fault detection and to help the system safely perform the required functions in spite of the presence of faults. So, in the reliability evaluation of digital systems, the fault tolerant techniques (FTTs) and their fault coverage must be considered. To consider the effects of FTTs in a digital system, there have been several studies on digital system reliability models. Therefore, this research, based on a literature survey, attempts to develop a model to evaluate the plant reliability of the digital plant protection system (DPPS) with fault tolerant techniques, considering detection and process characteristics and human errors. Sensitivity analysis is performed to identify important variables affecting fault management coverage and unavailability based on the proposed model.

  20. Robust FDI for a Class of Nonlinear Networked Systems with ROQs

    Directory of Open Access Journals (Sweden)

    An-quan Sun

    2014-01-01

    Full Text Available This paper considers the robust fault detection and isolation (FDI) problem for a class of nonlinear networked systems (NSs) with randomly occurring quantisations (ROQs). After vector augmentation, a Lyapunov function is introduced to ensure the asymptotic mean-square stability of the fault detection system. By transforming the quantisation effects into sector-bounded parameter uncertainties, sufficient conditions ensuring the existence of the fault detection filter are proposed, which make the difference between output residuals and fault signals as small as possible under an H∞ framework. Finally, an example linearized from a vehicle system is introduced to show the effectiveness of the proposed fault detection filter.
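
The sector-bound transformation of the quantisation effects can be demonstrated in isolation with a logarithmic quantizer: rounding to the nearest level of a geometric grid {u0·ρ^i} keeps the relative error within δ = (1−ρ)/(1+ρ), so q(v) can be rewritten as (1+Δ(v))·v with |Δ(v)| ≤ δ and absorbed as a norm-bounded parameter uncertainty. The grid parameters ρ and u0 below are assumed for illustration.

```python
import math
import random

# Logarithmic quantizer over the grid {±u0 * rho**i}: nearest-level
# rounding satisfies |q(v) - v| <= delta * |v| with
# delta = (1 - rho) / (1 + rho), which is exactly the sector bound
# the filter-existence conditions rely on.

def log_quantize(v, rho=0.5, u0=1.0):
    """Quantize v to the nearest level of the geometric grid."""
    if v == 0.0:
        return 0.0
    u = abs(v)
    i = round(math.log(u / u0) / math.log(rho))
    best = min((u0 * rho ** j for j in (i - 1, i, i + 1)),
               key=lambda level: abs(level - u))
    return math.copysign(best, v)

rho = 0.5
delta = (1.0 - rho) / (1.0 + rho)        # sector bound, here 1/3
random.seed(1)
samples = [random.uniform(-10.0, 10.0) for _ in range(1000)]
worst = max(abs(log_quantize(v, rho) - v) / abs(v) for v in samples if v)
print(worst <= delta + 1e-12)  # True: the sector bound holds
```

Because the bound holds pointwise, a randomly occurring quantisation can be replaced in the analysis by a random sector-bounded uncertainty, which is what makes the H∞ filter synthesis tractable.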

  1. Fault diagnosis in spur gears based on genetic algorithm and random forest

    Science.gov (United States)

    Cerrada, Mariela; Zurita, Grover; Cabrera, Diego; Sánchez, René-Vinicio; Artés, Mariano; Li, Chuan

    2016-03-01

    There are growing demands for condition-based monitoring of gearboxes, and therefore new methods to improve the reliability, effectiveness and accuracy of gear fault detection ought to be evaluated. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance of the diagnostic models. On the other hand, random forest classifiers are suitable models in industrial environments where large data samples are not usually available for training such diagnostic models. The main aim of this research is to build a robust system for multi-class fault diagnosis in spur gears, by selecting the best set of condition parameters in the time, frequency and time-frequency domains, extracted from vibration signals. The diagnostic system is built using genetic algorithms and a random forest based classifier, in a supervised environment. The original set of condition parameters is reduced by around 66% of its initial size using genetic algorithms, while still achieving an acceptable classification precision of over 97%. The approach is tested on real vibration signals considering several fault classes, one of them an incipient fault, under different running conditions of load and velocity.
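As a rough illustration of the abstract's approach (not the authors' code), the sketch below evolves a population of boolean feature masks with a genetic algorithm. A simple nearest-centroid accuracy stands in for the random forest fitness so the example stays dependency-light; the synthetic data, GA parameters and fitness choice are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for condition parameters: 3 fault classes, 30 features,
# only the first 10 features actually carry class information.
X = rng.normal(size=(300, 30))
y = np.repeat(np.arange(3), 100)
X[:, :10] += y[:, None] * 1.5

def fitness(mask):
    """Nearest-centroid classification accuracy using only selected features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    centroids = np.stack([Xs[y == c].mean(axis=0) for c in range(3)])
    d = ((Xs[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return float((d.argmin(axis=1) == y).mean())

def select_features(pop_size=40, n_gen=30, p_mut=0.05):
    pop = rng.random((pop_size, X.shape[1])) < 0.5   # random initial masks
    for _ in range(n_gen):
        scores = np.array([fitness(m) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]        # truncation selection
        kids = parents.copy()
        cuts = rng.integers(1, X.shape[1], size=len(kids))
        for i, c in enumerate(cuts):                 # one-point crossover
            kids[i, c:] = parents[(i + 1) % len(parents), c:]
        kids ^= rng.random(kids.shape) < p_mut       # bit-flip mutation
        pop = np.vstack([parents, kids])
    scores = np.array([fitness(m) for m in pop])
    return pop[scores.argmax()], float(scores.max())

mask, acc = select_features()
print(mask.sum(), round(acc, 3))
```

The same loop applies unchanged if the fitness function is replaced by a cross-validated random forest score, as in the paper.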

  2. Methodology for selection of attributes and operating conditions for SVM-Based fault locator's

    Directory of Open Access Journals (Sweden)

    Debbie Johan Arredondo Arteaga

    2017-01-01

    Full Text Available Context: Energy distribution companies must employ strategies to deliver timely and high-quality service, and fault-locating techniques represent an agile alternative for restoring electric service in power distribution, given the generally large size of distribution systems and the usual interruptions in service. However, these techniques are not robust enough and present limitations in both computational cost and the mathematical description of the models they use. Method: This paper performs an analysis based on a Support Vector Machine for the evaluation of the proper conditions to adjust and validate a fault locator for distribution systems, so that it is possible to determine the minimum number of operating conditions that allow a good performance to be achieved with a low computational effort. Results: We tested the proposed methodology on a prototypical distribution circuit located in a rural area of Colombia. This circuit has a voltage of 34.5 kV and is subdivided into 20 zones. Additionally, the characteristics of the circuit allowed us to obtain a database of 630,000 records of single-phase faults under different operating conditions. As a result, we could determine that the locator showed a performance above 98% with 200 suitably selected operating conditions. Conclusions: It is possible to improve the performance of fault locators based on Support Vector Machines. Specifically, these improvements are achieved by properly selecting optimal operating conditions and attributes, since they directly affect the performance in terms of efficiency and computational cost.
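To make the SVM-based locator concrete, here is a minimal sketch (my illustration, not the paper's locator) of a linear SVM trained by sub-gradient descent on the hinge loss, separating two hypothetical fault-record classes. The attributes, class separation and hyper-parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for fault records: two attributes per record
# (e.g. fault-current magnitude and a voltage-sag index), two zones.
X = rng.normal(size=(200, 2))
X[100:, 0] += 3.0                      # zone 2 shifted along attribute 0
y = np.repeat([-1.0, 1.0], 100)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Primal linear SVM via full-batch sub-gradient descent on hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1           # records violating the margin
        if active.any():
            w -= lr * (lam * w - (y[active, None] * X[active]).mean(axis=0))
            b -= lr * (-float(y[active].mean()))
        else:
            w -= lr * lam * w          # only the regulariser remains
    return w, b

w, b = train_linear_svm(X, y)
acc = float((np.sign(X @ w + b) == y).mean())
print(round(acc, 2))
```

A multi-zone locator would train one such classifier per zone (one-vs-rest); the paper's point is that the number of operating conditions used for training can be pruned aggressively before this step.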

  3. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that the object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product is finding the best composition

  4. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions, and the possibility of increasing robustness through animal breeding without loss of production. At the start of the project, a robust

  5. Assessment on the influence of resistive superconducting fault current limiter in VSC-HVDC system

    Science.gov (United States)

    Lee, Jong-Geon; Khan, Umer Amir; Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Park, Byung-Bae; Lee, Bang-Wook

    2014-09-01

    Due to a lower risk of commutation failures, harmonics and reactive power consumption, the Voltage Source Converter (VSC) based HVDC system is regarded as the optimum HVDC solution for the future power grid. However, the absence of suitable fault protection devices for HVDC systems hinders efficient VSC-HVDC power grid design. In order to enhance the reliability of the VSC-HVDC power grid against fault currents, the application of resistive Superconducting Fault Current Limiters (SFCLs) can be considered. SFCLs can also be applied to VSC-HVDC systems with integrated AC power systems in order to enhance the transient response and robustness of the system. In this paper, in order to evaluate the role of SFCLs in VSC-HVDC systems and to determine their suitable positions in VSC-HVDC power systems integrated with an AC power system, a simulation model based on the Korea Jeju-Haenam HVDC power system was designed in Matlab Simulink/SimPowerSystems. The designed model comprises a VSC-HVDC system connected to an AC microgrid. Using this model, the feasible locations of resistive SFCLs were evaluated for DC line-to-line, DC line-to-ground and three-phase AC faults. It was found that the simulation model was effective for evaluating the positive effects of resistive SFCLs on the suppression of fault currents in VSC-HVDC systems as well as in the integrated AC systems. Finally, the optimum locations of SFCLs in VSC-HVDC transmission systems were suggested based on the simulation results.

  6. Fault diagnosis for agitator driving system in a high temperature reduction reactor

    Energy Technology Data Exchange (ETDEWEB)

    Park, Gee Young; Hong, Dong Hee; Jung, Jae Hoo; Kim, Young Hwan; Jin, Jae Hyun; Yoon, Ji Sup [KAERI, Taejon (Korea, Republic of)

    2003-10-01

    In this paper, a preliminary study for the development of a fault diagnosis system is presented for monitoring and diagnosing faults in the agitator driving system of a high temperature reduction reactor. In order to identify a fault occurrence and classify the fault cause, vibration signals measured by accelerometers on the outer shroud of the agitator driving system are first decomposed by the Wavelet Transform (WT), and the features corresponding to each fault type are extracted. For the diagnosis, the fuzzy ARTMAP is employed, and thereby, based on the features extracted from the WT, a robust fault classifier can be implemented with a very short training time: a single training epoch and a single learning iteration are sufficient for training the fault classifier. The test results demonstrate satisfactory classification for the faults pre-categorized from considerations of possible occurrence during experiments on a small-scale reduction reactor.

  7. A New Method of Improving Transformer Restricted Earth Fault Protection

    Directory of Open Access Journals (Sweden)

    KRSTIVOJEVIC, J. P.

    2014-08-01

    Full Text Available A new method of avoiding malfunction of transformer restricted earth fault (REF) protection is presented. Application of the proposed method would eliminate unnecessary operation of REF protection in the case of faults outside the protected zone of a transformer, or of a magnetizing inrush accompanied by current transformer (CT) saturation. On the basis of laboratory measurements and simulations, the paper presents a detailed performance assessment of the proposed method, which is based on a digital phase comparator. The obtained results show that the new method is stable and precise for all tested faults, and that its application allows a clear and precise distinction to be made between an internal fault and (i) an external fault or (ii) a magnetizing inrush. The proposed method would improve the performance of REF protection and reduce the probability of maloperation due to CT saturation. The new method is robust and characterized by a high speed of operation, high reliability and security.

  8. Fault-tolerant Control of a Cyber-physical System

    Science.gov (United States)

    Roxana, Rusu-Both; Eva-Henrietta, Dulf

    2017-10-01

    Cyber-physical systems represent a new emerging field in automatic control. Fault management is a key component, because modern, large scale processes must meet high standards of performance, reliability and safety. Fault propagation in large scale chemical processes can lead to loss of production, energy and raw materials, and even to environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and to prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.

  9. Posbist fault tree analysis of coherent systems

    International Nuclear Information System (INIS)

    Huang, H.-Z.; Tong Xin; Zuo, Ming J.

    2004-01-01

    When the failure probability of a system is extremely small or necessary statistical data from the system is scarce, it is very difficult or impossible to evaluate its reliability and safety with conventional fault tree analysis (FTA) techniques. New techniques are needed to predict and diagnose such a system's failures and evaluate its reliability and safety. In this paper, we first provide a concise overview of FTA. Then, based on the posbist reliability theory, event failure behavior is characterized in the context of possibility measures and the structure function of the posbist fault tree of a coherent system is defined. In addition, we define the AND operator and the OR operator based on the minimal cut of a posbist fault tree. Finally, a model of posbist fault tree analysis (posbist FTA) of coherent systems is presented. The use of the model for quantitative analysis is demonstrated with a real-life safety system
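Under the posbist assumptions, the AND and OR operators over non-interactive event failure possibilities reduce to min and max respectively. A minimal sketch (my illustration; the tree and numbers are hypothetical, not from the paper):

```python
# Posbist fault tree evaluation: event uncertainty is a possibility measure,
# so an OR gate takes the max of its inputs and an AND gate takes the min.

def or_gate(*poss):
    return max(poss)

def and_gate(*poss):
    return min(poss)

# Hypothetical coherent tree: TOP = (A AND B) OR C,
# with basic-event failure possibilities:
poss = {"A": 0.7, "B": 0.4, "C": 0.2}

top = or_gate(and_gate(poss["A"], poss["B"]), poss["C"])
print(top)  # → 0.4
```

Gates compose recursively, so any coherent tree built from these two operators can be evaluated bottom-up in the same way, mirroring the minimal-cut formulation in the paper.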

  10. Bearing Fault Classification Based on Conditional Random Field

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2013-01-01

    Full Text Available Condition monitoring of rolling element bearings is paramount for predicting the lifetime and performing effective maintenance of mechanical equipment. To overcome the drawbacks of the hidden Markov model (HMM) and improve diagnosis accuracy, a conditional random field (CRF) model based classifier is proposed. In this model, the feature vector sequences and the fault categories are linked by an undirected graphical model in which their relationship is represented by a global conditional probability distribution. In comparison with the HMM, the main advantage of the CRF model is that it can depict the temporal dynamic information between the observation sequences and state sequences without assuming the independence of the input feature vectors. Therefore, the interrelationship between adjacent observation vectors can also be depicted and integrated into the model, which makes the classifier more robust and accurate than the HMM. To evaluate the effectiveness of the proposed method, four kinds of bearing vibration signals, corresponding to normal, inner race pit, outer race pit and roller pit conditions respectively, are collected from a test rig. CRF and HMM models are then built to perform fault classification, taking the sub-band energy features of wavelet packet decomposition (WPD) as the observation sequences. Moreover, K-fold cross validation is adopted to improve the evaluation accuracy of the classifier. The analysis and comparison under different fold counts show that the classification accuracy of the CRF model is higher than that of the HMM. This method sheds new light on the accurate classification of bearing faults.
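The sub-band energy features of a wavelet packet decomposition can be sketched with a Haar filter pair (my illustration, not the paper's feature extractor; the test signal and decomposition depth are invented):

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step: (approximation, detail), orthonormal scaling."""
    x = x[: len(x) // 2 * 2].reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def wpd_energies(x, depth=3):
    """Relative energy of each sub-band after a full wavelet packet split."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        bands = [half for b in bands for half in haar_split(b)]
    e = np.array([np.sum(b ** 2) for b in bands])
    return e / e.sum()            # normalised: the feature vector sums to 1

# Hypothetical vibration-like signal: a low-frequency tone plus noise.
t = np.arange(1024)
sig = np.sin(2 * np.pi * t / 64) + 0.1 * np.random.default_rng(2).normal(size=1024)
feat = wpd_energies(sig, depth=3)
print(len(feat), feat.argmax())   # 8 sub-bands; the lowest band dominates
```

One such feature vector per signal frame forms the observation sequence fed to the CRF or HMM classifier; a practical implementation would use a longer wavelet filter (e.g. Daubechies) via a library such as PyWavelets.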

  11. Tolerance Towards Sensor Faults: An Application to a Flexible Arm Manipulator

    Directory of Open Access Journals (Sweden)

    Chee Pin Tan

    2006-12-01

    Full Text Available As more engineering operations become automatic, the need for robustness towards faults increases. Hence, a fault tolerant control (FTC) scheme is a valuable asset. This paper presents a robust sensor fault FTC scheme implemented on a flexible arm manipulator, which has many applications in automation. Sensor faults affect the system's performance in the closed loop when the faulty sensor readings are used to generate the control input. In this paper, the non-faulty sensors are used to reconstruct the faults on the potentially faulty sensors. The reconstruction is subtracted from the faulty sensors to form a compensated ‘virtual sensor’, and this signal (instead of the normally used faulty sensor output) is then used to generate the control input. A design method is also presented in which the FTC scheme is made insensitive to any system uncertainties. Two fault conditions are tested: total failure and incipient faults. The scheme's robustness is then tested by implementing the flexible joint's FTC scheme on a flexible link, which has different parameters. Excellent results have been obtained for both cases (joint and link): with the FTC scheme, the system performance is almost identical to the fault-free scenario, whilst providing an indication that a fault is present, even for simultaneous faults.

  12. Fault tree graphics

    International Nuclear Information System (INIS)

    Bass, L.; Wynholds, H.W.; Porterfield, W.R.

    1975-01-01

    Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling

  13. A novel approach for fault detection and classification of the thermocouple sensor in Nuclear Power Plant using Singular Value Decomposition and Symbolic Dynamic Filter

    International Nuclear Information System (INIS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-01-01

    Highlights: • A novel approach to classify fault patterns using data-driven methods. • Application of a robust reconstruction method (SVD) to identify the faulty sensor. • Analysis of fault patterns for many sensors using SDF with low time complexity. • An efficient data-driven model is designed to reduce false and missed alarms. - Abstract: A mathematical model with two layers is developed using data-driven methods for thermocouple sensor fault detection and classification in Nuclear Power Plants (NPP). In the first layer, a Singular Value Decomposition (SVD) based method is applied to detect the faulty sensor from a data set of all sensors. In the second layer, the Symbolic Dynamic Filter (SDF) is employed to classify the fault pattern. If the SVD detects any false fault, it is re-evaluated by the SDF; i.e., the model has two layers of checking to balance the false alarms. The proposed fault detection and classification method is compared with Principal Component Analysis. Two case studies are taken from the Fast Breeder Test Reactor (FBTR) to prove the efficiency of the proposed method.
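The first-layer SVD idea can be sketched as follows (my illustration, not the paper's model): learn the normal-operation subspace from fault-free sensor data, then flag the sensor whose out-of-subspace residual is largest. The sensor count, mixing structure and fault size are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for redundant thermocouple channels: 8 sensors driven
# by 2 latent process variables (say, two coolant loops), 500 normal samples.
mixing = np.array([[1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]])
latent = rng.normal(size=(500, 2))
normal = latent @ mixing + 0.05 * rng.normal(size=(500, 8))

# Model the normal-operation subspace with the leading right-singular vectors.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
P = Vt[:2].T @ Vt[:2]                 # projector onto the normal subspace

def faulty_sensor(sample):
    """Index of the sensor with the largest out-of-subspace residual."""
    residual = (sample - mean) @ (np.eye(8) - P)
    return int(np.abs(residual).argmax())

# Inject a drift fault of +3 units on sensor 5 of an otherwise normal sample.
sample = rng.normal(size=2) @ mixing + 0.05 * rng.normal(size=8)
sample[5] += 3.0
print(faulty_sensor(sample))  # → 5
```

In the paper's two-layer scheme, the flagged channel would then be passed to the SDF stage to classify the fault pattern and screen out false alarms.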

  14. How do normal faults grow?

    OpenAIRE

    Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette

    2018-01-01

    Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...

  15. Three dimensional investigation of oceanic active faults. A demonstration survey

    Energy Technology Data Exchange (ETDEWEB)

    Nakao, Seizo; Kishimoto, Kiyoyuki; Ikehara, Ken; Kuramoto, Shinichi; Sato, Mikio [Geological Survey of Japan, Kawasaki, Kanagawa (Japan)

    1999-02-01

    Oceanic active faults were classified into trench and in-land types, and a bottom survey was conducted with the aim of estimating the activity of trench-type oceanic active faults. For both sides of an oceanic active fault found by high-precision sonic investigations in fiscal year 1996, the record preserved in the sediments was used to examine how the fault was changed by fault motion and over what period it was active. In addition, the construction of a database for the evaluation of active faults was advanced by compiling the published literature. As a result, it was found that a method of estimating fault activity using turbidites, successful in shallow seas, could not easily be applied in the deep sea, and that, because the mode of sedimentation in the deep sea varies greatly with topography, turbidites do not always serve as key layers. (G.K.)

  16. Characterization of leaky faults

    International Nuclear Information System (INIS)

    Shan, Chao.

    1990-05-01

    Leaky faults provide a flow path for fluids to move underground. It is very important to characterize such faults in various engineering projects. The purpose of this work is to develop mathematical solutions for this characterization. The flow of water in an aquifer system and the flow of air in the unsaturated fault-rock system were studied. If the leaky fault cuts through two aquifers, characterization of the fault can be achieved by pumping water from one of the aquifers, which are assumed to be horizontal and of uniform thickness. Analytical solutions have been developed for two cases of either a negligibly small or a significantly large drawdown in the unpumped aquifer. Some practical methods for using these solutions are presented. 45 refs., 72 figs., 11 tabs

  17. Solar system fault detection

    Science.gov (United States)

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
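The patent's pre-established combination of sensor conditions can be sketched as simple predicate logic (my illustration; the sensor names, thresholds and fault kinds are invented, not those of the patent):

```python
# Each predetermined fault is a fixed boolean combination of sensor conditions;
# the returned list plays the role of the apparatus's fault indicators.

def detect_faults(collector_out_temp, collector_in_temp, flow_rate, pump_on):
    faults = []
    if pump_on and flow_rate < 0.1:
        faults.append("pump running but no flow")     # blockage / pump failure
    if pump_on and collector_out_temp < collector_in_temp:
        faults.append("collector losing heat")        # e.g. night circulation
    if not pump_on and collector_out_temp > 95.0:
        faults.append("stagnation overtemperature")
    return faults

print(detect_faults(collector_out_temp=98.0, collector_in_temp=40.0,
                    flow_rate=0.0, pump_on=False))
# → ['stagnation overtemperature']
```

Each indicator maps to a corrective action the operator can take, which is the mechanism by which the apparatus improves overall system reliability and efficiency.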

  18. Passion, Robustness and Perseverance

    DEFF Research Database (Denmark)

    Lim, Miguel Antonio; Lund, Rebecca

    2016-01-01

    Evaluation and merit in the measured university are increasingly based on taken-for-granted assumptions about the “ideal academic”. We suggest that the scholar now needs to show that she is passionate about her work and that she gains pleasure from pursuing her craft. We suggest that passion… and pleasure achieve an exalted status as something compulsory. The scholar ought to feel passionate about her work and signal that she takes pleasure also in the difficult moments. Passion has become a signal of robustness and perseverance in a job market characterised by funding shortages, increased pressure… way to demonstrate their potential and, crucially, their passion for their work. Drawing on the literature on technologies of governance, we reflect on what is captured and what is left out by these two evaluation instruments. We suggest that bibliometric analysis at the individual level is deeply…

  19. Adaptive Observer-Based Fault-Tolerant Control Design for Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Huaming Qian

    2015-01-01

    Full Text Available This study focuses on the design of a robust fault-tolerant control (FTC) system based on an adaptive observer for uncertain linear time invariant (LTI) systems. In order to improve the robustness, rapidity and accuracy of traditional fault estimation algorithms, an adaptive fault estimation algorithm (AFEA) using an augmented observer is presented. By utilizing a new fault estimator model, an improved AFEA based on the linear matrix inequality (LMI) technique is proposed to increase the performance. Furthermore, an observer-based state feedback fault-tolerant control strategy is designed, which guarantees the stability and performance of the faulty system. Moreover, the adaptive observer and the fault-tolerant controller are designed separately, so that their performance can be considered individually. Finally, simulation results of an aircraft application are presented to illustrate the effectiveness of the proposed design methods.
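A scalar toy version of adaptive fault estimation (my illustration, not the paper's LMI-based AFEA; plant, gains and fault are all invented) shows the core idea: the output residual simultaneously corrects the state estimate and drives an adaptive update of the fault estimate.

```python
# Scalar plant x+ = A*x + u + f with constant additive fault f; the augmented
# observer estimates the state and the fault from the output residual.

A, L, gamma = 0.9, 0.5, 0.3      # plant pole, observer gain, adaptation gain
f_true = 0.5                     # constant additive fault, appears at k = 50

x = x_hat = f_hat = 0.0
for k in range(300):
    u = 0.1                      # arbitrary known input
    f = f_true if k >= 50 else 0.0
    y = x                        # full-state measurement for simplicity
    r = y - x_hat                # output residual drives both updates
    x, x_hat = A * x + u + f, A * x_hat + u + f_hat + L * r
    f_hat += gamma * r           # adaptive fault-estimation law

print(round(f_hat, 3))  # → 0.5
```

With these gains the joint error dynamics are stable (spectral radius ≈ 0.84), so the fault estimate converges to the true value; the paper's LMI machinery chooses such gains systematically for uncertain multivariable systems.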

  20. Influence of mineralogy and microstructures on strain localization and fault zone architecture of the Alpine Fault, New Zealand

    Science.gov (United States)

    Ichiba, T.; Kaneki, S.; Hirono, T.; Oohashi, K.; Schuck, B.; Janssen, C.; Schleicher, A.; Toy, V.; Dresen, G.

    2017-12-01

    The Alpine Fault on New Zealand's South Island is an oblique, dextral strike-slip fault that accommodated the majority of displacement between the Pacific and Australian Plates and presents the biggest seismic hazard in the region. Along its central segment, the hanging wall comprises greenschist and amphibolite facies Alpine Schists. Exhumation from 35 km depth, along an SE-dipping detachment, led to mylonitization, which was subsequently overprinted by brittle deformation and finally resulted in the fault's 1 km wide damage zone. The geomechanical behavior of a fault is affected by the internal structure of its fault zone. Consequently, studying the processes controlling fault zone architecture allows the seismic hazard of a fault to be assessed. Here we present the results of a combined microstructural (SEM and TEM), mineralogical (XRD) and geochemical (XRF) investigation of outcrop samples originating from several locations along the Alpine Fault, the aim of which is to evaluate the influence of mineralogical composition, alteration and pre-existing fabric on strain localization, and to identify the controls on the fault zone architecture, particularly the locus of brittle deformation in P, T and t space. Field observations reveal that the fault's principal slip zone (PSZ) is either a thin (< 1 cm to < 7 cm) layered structure or a relatively thick (tens of cm) package lacking a detectable macroscopic fabric. Lithological and related rheological contrasts are widely assumed to govern strain localization. However, our preliminary results suggest that the qualitative mineralogical composition has only a minor impact on fault zone architecture. Quantities of individual mineral phases differ markedly between fault damage zone and fault core at specific sites, but the quantitative composition of identical structural units, such as the fault core, is similar in all samples. This indicates that the degree of strain localization at the Alpine Fault might be controlled by small initial

  1. What does fault tolerant Deep Learning need from MPI?

    Energy Technology Data Exchange (ETDEWEB)

    Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.; Daily, Jeffrey A.

    2017-09-25

    Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithms for large scale data analysis. DL algorithms are computationally expensive; even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults, requiring the development of a fault tolerant system infrastructure in addition to fault tolerant DL algorithms. This raises an important question: what is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); the need (or lack thereof) for check-pointing of any critical data structures; and most importantly, consideration of several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe with a ULFM-based implementation. Our evaluation using the ImageNet dataset and the AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI-based ULFM.

  2. Geomorphological and geological property of short active fault in fore-arc region of Japan

    International Nuclear Information System (INIS)

    Sasaki, Toshinori; Inoue, Daiei; Ueta, Keiichi; Miyakoshi, Katsuyoshi

    2009-01-01

    An important issue in earthquake magnitude evaluation is the classification of short active faults or lineaments, since it is necessary to determine which types of active fault should be included in the evaluation. A particular group is the surface earthquake faults that are presumed to be branched faults of large interplate earthquakes in subduction zones. We have classified short lineaments in two fore-arc regions of Japan through geological and geomorphological methods based on field surveys and aerial photograph interpretation. The first survey was conducted at the Enmeiji Fault in the Boso Peninsula. The fault is known to have been displaced by the 1923 Taisho Kanto earthquake. The altitude distributions of marine terrace surfaces differ on the two sides of the fault; in other words, this fault has been displaced repeatedly by large interplate earthquakes in the past. However, the recurrence interval of this fault, calculated from the slip rate and the displacement per event, is far longer than that of the large interplate earthquakes. The second survey was conducted on the western side of the Muroto Peninsula, where several short lineaments are distributed. We found several fault outcrops along a few particular lineaments. The faults in this region have properties similar to those of the Enmeiji Fault, whereas the other short lineaments were found to be structural landforms. The comparison of the two groups enables us to classify short lineaments based on their geomorphological properties and geological causes: the displacement per event is far larger than the displacement deduced from the length of the active fault; the recurrence interval of the short active fault is far longer than that of large interplate earthquakes; and the displacement of the short active fault is cumulative. The earthquake magnitude of faults with these characteristics needs to be evaluated as that of the plate boundary fault or of the long branched seismogenic fault. (author)

  3. Reset Tree-Based Optical Fault Detection

    Directory of Open Access Journals (Sweden)

    Howon Kim

    2013-05-01

    Full Text Available In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit’s reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool.

  4. Testing of high-impedance fault relays

    Energy Technology Data Exchange (ETDEWEB)

    Nagpal, M. [Powertech Labs., Inc., Surrey, BC (Canada)

    1995-11-01

    A test system and protocol were developed for the testing of high-impedance fault (HIF) detection devices. A technique was established for point-by-point addition of fault and load currents, and the resultant was used for testing the performance of the devices in detecting HIFs in the presence of load current. The system used digitized data from recorded faults and normal currents to generate analog test signals for high-impedance fault detection relays. A test apparatus was built with a 10 kHz bandwidth and a playback duration of 30 minutes on 6 output channels. Three devices which have recently become available were tested, and their performance was evaluated based on their respective test results.

  5. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
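The probabilistic summing described above can be sketched as follows (my illustration, not the paper's tool; the stage decomposition follows the abstract but all numbers are invented): each control loop's end-to-end effectiveness is the product of its state-estimation and state-control stage metrics, and the residual risk sums over the failure modes the loops protect against.

```python
# One entry per fault-management control loop / failure mode it protects.
loops = [
    {"p_fail": 0.010, "detect": 0.99, "isolate": 0.95,
     "resp_determine": 0.98, "resp_effect": 0.97},
    {"p_fail": 0.005, "detect": 0.90, "isolate": 0.80,
     "resp_determine": 0.95, "resp_effect": 0.90},
]

def effectiveness(loop):
    """P(failure is mitigated | it occurs): every stage must succeed."""
    return (loop["detect"] * loop["isolate"]
            * loop["resp_determine"] * loop["resp_effect"])

# Probability that some protected failure occurs AND goes unmitigated.
unmitigated = sum(l["p_fail"] * (1 - effectiveness(l)) for l in loops)
print(round(unmitigated, 5))  # → 0.00298
```

The product form makes the weakest stage visible at a glance (here, isolation in the second loop), which is the practical point of decomposing fault-management effectiveness into per-stage metrics.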

  6. Fault isolability conditions for linear systems with additive faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

  7. Transposing an active fault database into a seismic hazard fault model for nuclear facilities. Pt. 1. Building a database of potentially active faults (BDFA) for metropolitan France

    Energy Technology Data Exchange (ETDEWEB)

    Jomard, Herve; Cushing, Edward Marc; Baize, Stephane; Chartier, Thomas [IRSN - Institute of Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); Palumbo, Luigi; David, Claire [Neodyme, Joue les Tours (France)

    2017-07-01

    The French Institute for Radiation Protection and Nuclear Safety (IRSN), with the support of the Ministry of Environment, compiled a database (BDFA) to define and characterize known potentially active faults of metropolitan France. The general structure of BDFA is presented in this paper. BDFA reports to date 136 faults and represents a first step toward the implementation of seismic source models that would be used for both deterministic and probabilistic seismic hazard calculations. A robustness index was introduced, highlighting that less than 15% of the database is controlled by reasonably complete data sets. An example of transposing BDFA into a fault source model for PSHA (probabilistic seismic hazard analysis) calculation is presented for the Upper Rhine Graben (eastern France) and exploited in the companion paper (Chartier et al., 2017, hereafter Part 2) in order to illustrate ongoing challenges for probabilistic fault-based seismic hazard calculations.

  8. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge are a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current-practice review was performed. From the review, two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness, and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  9. Fault latency in the memory - An experimental study on VAX 11/780

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1986-01-01

    Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. The measure of this time is difficult to obtain because the time of occurrence of a fault and the exact moment of generation of an error are not known. This paper describes an experiment to accurately study the fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for s-a-0 and s-a-1 permanent fault models. Results show that the mean fault latency of a s-a-0 fault is nearly 5 times that of the s-a-1 fault. Large variations in fault latency are found for different regions in memory. An analysis of a variance model to quantify the relative influence of various workload measures on the evaluated latency is also given.
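A hypothetical sketch of how fault latency can be scored from a memory access trace; the trace format, times, and stuck-at model below are assumptions for illustration, not the paper's VAX data:

```python
# Toy fault-latency scoring for a single memory bit: for a stuck-at fault
# injected at time t_fault, the error appears at the first read after the
# fault whose true stored value differs from the stuck value.

def fault_latency(trace, t_fault, stuck_value):
    """trace: list of (time, op, value) with op in {'read', 'write'}.
    Returns time from fault occurrence to first corrupted read, or None."""
    bit = None
    for time, op, value in trace:
        if op == "write":
            bit = value  # true contents the program intends to store
        elif op == "read" and time >= t_fault and bit is not None and bit != stuck_value:
            return time - t_fault  # this read returns stuck_value, not bit
    return None  # fault never manifested as an error (latent forever)

trace = [(0, "write", 1), (5, "read", None), (12, "read", None)]
print(fault_latency(trace, t_fault=3, stuck_value=0))  # 2: first bad read at t=5
```

The same trace under an s-a-1 model yields no error at all, which mirrors the paper's observation that latency depends strongly on the fault polarity and the data resident in memory.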

  10. Fault Analysis in Cryptography

    CERN Document Server

    Joye, Marc

    2012-01-01

    In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without

  11. Robustizing Circuit Optimization using Huber Functions

    DEFF Research Database (Denmark)

    Bandler, John W.; Biernacki, Radek M.; Chen, Steve H.

    1993-01-01

    The authors introduce a novel approach to 'robustizing' microwave circuit optimization using Huber functions, both two-sided and one-sided. They compare Huber optimization with l/sub 1/, l/sub 2/, and minimax methods in the presence of faults, large and small measurement errors, bad starting points, and statistical uncertainties. They demonstrate FET statistical modeling, multiplexer optimization, analog fault location, and data fitting. They extend the Huber concept by introducing a 'one-sided' Huber function for large-scale optimization. For large-scale problems, the designer often attempts, by intuition, a preliminary optimization by selecting a small number of dominant variables. It is demonstrated, through multiplexer optimization, that the one-sided Huber function can be more effective and efficient than minimax in overcoming a bad starting point.

  12. Fault tolerant control based on active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2005-01-01

    An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...

  13. Parametric fault estimation based on H∞ optimization in a satellite launch vehicle

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Izadi-Zamanabadi, Roozbeh; Stoustrup, Jakob

    2008-01-01

    Correct diagnosis under harsh environmental conditions is crucial for space vehicles' health management systems to avoid possible hazardous situations. Consequently, the diagnosis methods are required to be robust toward these conditions. Design of a parametric fault detector, where the fault...... for the satellite launch vehicle and the results are discussed....

  14. High Order Sliding Mode Control of Doubly-fed Induction Generator under Unbalanced Grid Faults

    DEFF Research Database (Denmark)

    Zhu, Rongwu; Chen, Zhe; Wu, Xiaojie

    2013-01-01

    This paper deals with a doubly-fed induction generator-based (DFIG) wind turbine system under grid fault conditions such as: unbalanced grid voltage, three-phase grid fault, using a high order sliding mode control (SMC). A second order sliding mode controller, which is robust with respect...

  15. Breast MRI at 7 Tesla with a bilateral coil and T1-weighted acquisition with robust fat suppression: image evaluation and comparison with 3 Tesla

    International Nuclear Information System (INIS)

    Brown, Ryan; Storey, Pippa; McGorty, KellyAnne; Klautau Leite, Ana Paula; Babb, James; Sodickson, Daniel K.; Wiggins, Graham C.; Moy, Linda; Geppert, Christian

    2013-01-01

    To evaluate the image quality of T1-weighted fat-suppressed breast MRI at 7 T and to compare 7-T and 3-T images. Seventeen subjects were imaged using a 7-T bilateral transmit-receive coil and 3D gradient echo sequence with adiabatic inversion-based fat suppression (FS). Images were graded on a five-point scale and quantitatively assessed through signal-to-noise ratio (SNR), fibroglandular/fat contrast and signal uniformity measurements. Image scores at 7 and 3 T were similar on standard-resolution images (1.1 x 1.1 x 1.1-1.6 mm³), indicating that high-quality breast imaging with clinical parameters can be performed at 7 T. The 7-T SNR advantage was underscored on 0.6-mm isotropic images, where image quality was significantly greater than at 3 T (4.2 versus 3.1, P ≤ 0.0001). Fibroglandular/fat contrast was more than two times higher at 7 T than at 3 T, owing to effective adiabatic inversion-based FS and the inherent 7-T signal advantage. Signal uniformity was comparable at 7 and 3 T (P < 0.05). Similar 7-T image quality was observed in all subjects, indicating robustness against anatomical variation. The 7-T bilateral transmit-receive coil and adiabatic inversion-based FS technique produce image quality that is as good as or better than at 3 T. (orig.)

  16. Breast MRI at 7 Tesla with a bilateral coil and T1-weighted acquisition with robust fat suppression: image evaluation and comparison with 3 Tesla

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Ryan; Storey, Pippa; McGorty, KellyAnne; Klautau Leite, Ana Paula; Babb, James; Sodickson, Daniel K.; Wiggins, Graham C.; Moy, Linda [New York University Langone Medical Center, Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York, NY (United States); Geppert, Christian [Siemens Medical Solutions USA Inc., New York, NY (United States)

    2013-11-15

    To evaluate the image quality of T1-weighted fat-suppressed breast MRI at 7 T and to compare 7-T and 3-T images. Seventeen subjects were imaged using a 7-T bilateral transmit-receive coil and 3D gradient echo sequence with adiabatic inversion-based fat suppression (FS). Images were graded on a five-point scale and quantitatively assessed through signal-to-noise ratio (SNR), fibroglandular/fat contrast and signal uniformity measurements. Image scores at 7 and 3 T were similar on standard-resolution images (1.1 x 1.1 x 1.1-1.6 mm{sup 3}), indicating that high-quality breast imaging with clinical parameters can be performed at 7 T. The 7-T SNR advantage was underscored on 0.6-mm isotropic images, where image quality was significantly greater than at 3 T (4.2 versus 3.1, P {<=} 0.0001). Fibroglandular/fat contrast was more than two times higher at 7 T than at 3 T, owing to effective adiabatic inversion-based FS and the inherent 7-T signal advantage. Signal uniformity was comparable at 7 and 3 T (P < 0.05). Similar 7-T image quality was observed in all subjects, indicating robustness against anatomical variation. The 7-T bilateral transmit-receive coil and adiabatic inversion-based FS technique produce image quality that is as good as or better than at 3 T. (orig.)

  17. Normalized STEAM-based diffusion tensor imaging provides a robust assessment of muscle tears in football players: preliminary results of a new approach to evaluate muscle injuries.

    Science.gov (United States)

    Giraudo, Chiara; Motyka, Stanislav; Weber, Michael; Karner, Manuela; Resinger, Christoph; Feiweier, Thorsten; Trattnig, Siegfried; Bogner, Wolfgang

    2018-02-08

    To assess acute muscle tears in professional football players by diffusion tensor imaging (DTI) and evaluate the impact of normalization of the data. Eight football players with acute lower-limb muscle tears were examined. DTI metrics of the injured muscle and the corresponding healthy contralateral muscle, and of ROIs drawn in muscle tears (ROI_tear), in the corresponding healthy contralateral muscle (ROI_hc_t), in a healthy area ipsilateral to the injury (ROI_hi), and in a corresponding contralateral area (ROI_hc_i), were compared. The same comparison was performed for ratios of the injured (ROI_tear/ROI_hi) and contralateral sides (ROI_hc_t/ROI_hc_i). ANOVA, Bonferroni-corrected post-hoc and Student's t-tests were used. Analyses of the entire muscle did not show any differences (p>0.05 each) except for axial diffusivity (AD; p=0.048). ROI_tear showed significantly higher mean diffusivity (MD) and AD than ROI_hc_t, higher values in ROI_tear than in ROI_hi and ROI_hc_t, and higher values in ROI_tear than in any other ROI, demonstrating that DTI provides a robust assessment of muscle tears in athletes, especially after normalization to healthy muscle tissue. • STEAM-based DTI allows the investigation of muscle tears affecting professional football players. • Fractional anisotropy and mean diffusivity differ between injured and healthy muscle areas. • Only normalized data show differences of fibre-tracking metrics in muscle tears. • The normalization of DTI metrics enables a more robust characterization of muscle tears.

  18. Real-Time Continuous Response Spectra Exceedance Calculation Displayed in a Web-Browser Enables Rapid and Robust Damage Evaluation by First Responders

    Science.gov (United States)

    Franke, M.; Skolnik, D. A.; Harvey, D.; Lindquist, K.

    2014-12-01

    A novel and robust approach is presented that provides near real-time earthquake alarms for critical structures at distributed locations and large facilities using real-time estimation of response spectra obtained from near free-field motions. Influential studies dating back to the 1980s identified spectral response acceleration as a key ground motion characteristic that correlates well with observed damage in structures. Thus, monitoring and reporting on exceedance of spectra-based thresholds are useful tools for assessing the potential for damage to facilities or multi-structure campuses based on input ground motions only. With as little as one strong-motion station per site, this scalable approach can provide rapid alarms on the damage status of remote towns, critical infrastructure (e.g., hospitals, schools) and points of interests (e.g., bridges) for a very large number of locations enabling better rapid decision making during critical and difficult immediate post-earthquake response actions. Details on the novel approach are presented along with an example implementation for a large energy company. Real-time calculation of PSA exceedance and alarm dissemination are enabled with Bighorn, an extension module based on the Antelope software package that combines real-time spectral monitoring and alarm capabilities with a robust built-in web display server. Antelope is an environmental data collection software package from Boulder Real Time Technologies (BRTT) typically used for very large seismic networks and real-time seismic data analyses. The primary processing engine produces continuous time-dependent response spectra for incoming acceleration streams. It utilizes expanded floating-point data representations within object ring-buffer packets and waveform files in a relational database. This leads to a very fast method for computing response spectra for a large number of channels. 
A Python script evaluates these response spectra for exceedance of one or more
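The per-channel spectral processing can be sketched in Python (the record itself mentions a Python evaluation script); the implementation below is a minimal assumed version, not BRTT's Bighorn code, with illustrative damping, periods, and thresholds:

```python
import math

# Sketch of response-spectra exceedance: integrate a damped single-degree-of-
# freedom oscillator over the incoming acceleration stream for each period of
# interest, convert peak relative displacement to pseudo-spectral acceleration
# (PSA), and flag any period whose PSA crosses its alarm threshold.

def psa(accel, dt, period, damping=0.05):
    """Peak pseudo-spectral acceleration via central-difference integration."""
    w = 2.0 * math.pi / period
    u_prev, u = 0.0, 0.0
    peak = 0.0
    for ag in accel:
        # u'' + 2*zeta*w*u' + w^2*u = -ag, with u' ~ (u - u_prev)/dt
        u_next = (2 * u - u_prev
                  + dt * dt * (-ag - 2 * damping * w * (u - u_prev) / dt - w * w * u))
        peak = max(peak, abs(u_next))
        u_prev, u = u, u_next
    return w * w * peak  # PSA = omega^2 * peak relative displacement

def exceedances(accel, dt, thresholds):
    """thresholds: {period: alarm PSA}. Returns periods whose PSA exceeds it."""
    return [T for T, limit in thresholds.items() if psa(accel, dt, T) > limit]

quiet = [0.0] * 200
print(exceedances(quiet, dt=0.01, thresholds={0.3: 0.1, 1.0: 0.1}))  # []
```

In a continuous system this loop would run on streaming packets per channel, with alarms dissemination triggered by the first non-empty exceedance list.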

  19. Quaternary Fault Lines

    Data.gov (United States)

    Department of Homeland Security — This data set contains locations and information on faults and associated folds in the United States that are believed to be sources of M>6 earthquakes during the...

  20. Finite Time Fault Tolerant Control for Robot Manipulators Using Time Delay Estimation and Continuous Nonsingular Fast Terminal Sliding Mode Control.

    Science.gov (United States)

    Van, Mien; Ge, Shuzhi Sam; Ren, Hongliang

    2016-04-28

    In this paper, a novel finite-time fault tolerant control (FTC) scheme is proposed for uncertain robot manipulators with actuator faults. First, a finite-time passive FTC (PFTC) based on a robust nonsingular fast terminal sliding mode control (NFTSMC) is investigated. To address the disadvantages of the PFTC, an active FTC (AFTC) is then investigated by combining NFTSMC with a simple fault diagnosis scheme. In this scheme, an online fault estimation algorithm based on time delay estimation (TDE) is proposed to approximate actuator faults. The estimated fault information is used to detect, isolate, and accommodate the effect of the faults in the system. Then, a robust AFTC law is established by combining the obtained fault information and a robust NFTSMC. Finally, a high-order sliding mode (HOSM) control based on the super-twisting algorithm is employed to eliminate chattering. In comparison to the PFTC and other state-of-the-art approaches, the proposed AFTC scheme possesses several advantages such as high precision, strong robustness, no singularity, less chattering, and fast finite-time convergence due to the combined NFTSMC and HOSM control, and it requires no prior knowledge of the fault thanks to TDE-based fault estimation. Finally, simulation results are obtained to verify the effectiveness of the proposed strategy.
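The TDE idea can be sketched for a 1-DOF plant m·a = u + f; the plant, gains, signal values, and one-step delay below are illustrative assumptions, not the paper's manipulator model:

```python
# Minimal time-delay-estimation (TDE) sketch: assuming the lumped actuator
# fault f varies slowly, its value one sample ago approximates the current
# one, so for a 1-DOF plant m*a = u + f:
#     f_hat(t) ~= m * a(t - L) - u(t - L)

def tde_fault_estimate(mass, accel_hist, input_hist):
    """Estimate fault from the most recent delayed sample (L = one step)."""
    return mass * accel_hist[-1] - input_hist[-1]

# Synthetic check: with a constant additive fault f = -0.4 on m = 2.0,
# measured acceleration is a = (u + f) / m, and TDE recovers f.
m, f_true = 2.0, -0.4
u_hist = [1.0, 0.8, 1.2]
a_hist = [(u + f_true) / m for u in u_hist]
print(tde_fault_estimate(m, a_hist, u_hist))  # ~ -0.4
```

In an AFTC loop the estimate would feed both the detection threshold and the compensating term of the control law.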

  1. Detection and Identification of Loss of Efficiency Faults of Flight Actuators

    Directory of Open Access Journals (Sweden)

    Ossmann Daniel

    2015-03-01

    Full Text Available We propose linear parameter-varying (LPV) model-based approaches to the synthesis of robust fault detection and diagnosis (FDD) systems for loss of efficiency (LOE) faults of flight actuators. The proposed methods are applicable to several types of parametric (or multiplicative) LOE faults such as actuator disconnection, surface damage, actuator power loss or stall loads. For the detection of these parametric faults, advanced LPV-model detection techniques are proposed, which implicitly provide fault identification information. Fast detection of intermittent stall loads (seen as nuisances, rather than faults) is important in enhancing the performance of various fault detection schemes dealing with large input signals. For this case, a dedicated fast identification algorithm is devised. The developed FDD systems are tested on a nonlinear actuator model which is implemented in a full nonlinear aircraft simulation model. This enables the validation of the FDD system's detection and identification characteristics under realistic conditions.
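A multiplicative LOE fault can be illustrated with a toy least-squares gain estimate (a stand-in for, not a reproduction of, the paper's LPV synthesis); the signals and the healthy-gain threshold are assumptions:

```python
# Toy loss-of-efficiency (LOE) identification: the achieved actuator effect
# is k * u_cmd with k = 1 when healthy. Given commanded inputs and measured
# effects, a least-squares fit of k both detects the fault (k well below 1)
# and identifies its severity.

def estimate_efficiency(u_cmd, u_meas):
    """Least-squares gain k minimizing sum of (u_meas - k*u_cmd)^2."""
    num = sum(c * m for c, m in zip(u_cmd, u_meas))
    den = sum(c * c for c in u_cmd)
    return num / den

cmd = [1.0, -2.0, 0.5, 1.5]
meas = [0.6 * c for c in cmd]          # simulated 40% efficiency loss
k_hat = estimate_efficiency(cmd, meas)
print(round(k_hat, 3), k_hat < 0.9)    # 0.6 True
```

Here the estimated gain directly carries the identification information that the abstract says the LPV detectors provide implicitly.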

  2. A comparative study of sensor fault diagnosis methods based on observer for ECAS system

    Science.gov (United States)

    Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli

    2017-03-01

    The performance and practicality of an electronically controlled air suspension (ECAS) system are highly dependent on the state information supplied by various sensors, but sensor faults occur frequently. Based on a non-linearized 3-DOF 1/4 vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The considered approaches include an extended Kalman filter (EKF) with a concise algorithm, a strong tracking filter (STF) with robust tracking ability, and a cubature Kalman filter (CKF) with high numerical precision. We use these three filters to design state observers for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite environmental noise; the FDI time delay and fault sensitivity differ among the algorithms, and, compared with the EKF and STF, the CKF performs best for FDI of sensor faults in the ECAS system.
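Observer-based sensor FDI of this kind can be illustrated with a scalar Kalman filter whose innovation (measurement residual) is thresholded; the model, gains, fault size, and threshold below are assumptions for illustration, not the paper's 3-DOF vehicle model:

```python
# Scalar Kalman filter tracking a constant state from a sensor; the
# innovation is compared with a threshold to flag a sudden sensor bias.

def kalman_residuals(measurements, q=1e-4, r=0.01):
    """Scalar KF for x_k = x_{k-1} + w, z_k = x_k + v. Returns innovations."""
    x, p = measurements[0], 1.0
    residuals = []
    for z in measurements[1:]:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        residuals.append(z - x)     # innovation, before the update
        x += k * (z - x)            # update state estimate
        p *= (1 - k)                # update covariance
    return residuals

clean = [1.0] * 30
faulty = clean[:15] + [1.5] * 15    # 0.5 bias fault injected at sample 15
alarms = [i for i, res in enumerate(kalman_residuals(faulty)) if abs(res) > 0.2]
print(alarms[0])  # 14: first innovation computed from the faulty samples
```

With multiple sensors, a bank of such observers (one per sensor left out) would provide the isolation step as well as detection.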

  3. Methods for Fault Diagnosability Analysis of a Class of Affine Nonlinear Systems

    Directory of Open Access Journals (Sweden)

    Xiafu Peng

    2015-01-01

    Full Text Available The fault diagnosability analysis for a given model, before developing a diagnosis algorithm, can be used to answer questions like “can the fault fi be detected by observed states?” and “can it separate fault fi from fault fj by observed states?” If not, we should redesign the sensor placement. This paper deals with the problem of the evaluation of detectability and separability for the diagnosability analysis of affine nonlinear system. First, we used differential geometry theory to analyze the nonlinear system and proposed new detectability criterion and separability criterion. Second, the related matrix between the faults and outputs of the system and the fault separable matrix are designed for quantitative fault diagnosability calculation and fault separability calculation, respectively. Finally, we illustrate our approach to exemplify how to analyze diagnosability by a certain nonlinear system example, and the experiment results indicate the effectiveness of the fault evaluation methods.
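The related-matrix idea can be illustrated with a small boolean fault/output signature table; the faults and signatures below are invented for illustration, not derived from the paper's differential-geometric criteria:

```python
# Toy "related matrix" between faults and outputs: rows are faults, columns
# are measured outputs, and an entry is True when the fault can influence
# that output. A fault is detectable if its row is nonzero; two faults are
# separable if their rows (signatures) differ.

RELATED = {
    "f1": (True,  False, True),
    "f2": (True,  False, True),   # same signature as f1 -> not separable
    "f3": (False, True,  False),
    "f4": (False, False, False),  # influences no output -> undetectable
}

def detectable(fault):
    return any(RELATED[fault])

def separable(fa, fb):
    return RELATED[fa] != RELATED[fb]

print(detectable("f4"), separable("f1", "f2"), separable("f1", "f3"))
# False False True
```

An undetectable fault or an inseparable pair is exactly the signal that, as the abstract puts it, "we should redesign the sensor placement."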

  4. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...

  5. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  6. Fault lubrication during earthquakes.

    Science.gov (United States)

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s⁻¹) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s⁻¹. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

  7. Vipava fault (Slovenia

    Directory of Open Access Journals (Sweden)

    Ladislav Placer

    2008-06-01

    Full Text Available During mapping of the already accomplished Razdrto – Senožeče section of motorway and geologic surveying of construction operations on the trunk road between Razdrto and Vipava in the northwestern part of the External Dinarides, on the southwestern slope of Mt. Nanos, called Rebrnice, a steep NW-SE striking fault was recognized, situated between the Predjama and the Raša faults. The fault was named the Vipava fault after the town of Vipava. An analysis of subrecent gravitational slips at Rebrnice indicates that they were probably associated with the activity of this fault. Unpublished results of a repeated levelling line along the regional road passing across the Vipava fault zone suggest its possible present activity. It would be meaningful to verify this by appropriate geodetic measurements, and to study the actual gravitational slips at Rebrnice. The association between tectonics and gravitational slips in this and in similar extreme cases in the areas of the Alps and Dinarides points to the need for a comprehensive study of geologic processes.

  8. Software fault tolerance in computer operating systems

    Science.gov (United States)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  9. Fault zone structure and kinematics from lidar, radar, and imagery: revealing new details along the creeping San Andreas Fault

    Science.gov (United States)

    DeLong, S.; Donnellan, A.; Pickering, A.

    2017-12-01

    Aseismic fault creep, coseismic fault displacement, distributed deformation, and the relative contribution of each have important bearing on infrastructure resilience, risk reduction, and the study of earthquake physics. Furthermore, the impact of interseismic fault creep in rupture propagation scenarios, and its impact and consequently on fault segmentation and maximum earthquake magnitudes, is poorly resolved in current rupture forecast models. The creeping section of the San Andreas Fault (SAF) in Central California is an outstanding area for establishing methodology for future scientific response to damaging earthquakes and for characterizing the fine details of crustal deformation. Here, we describe how data from airborne and terrestrial laser scanning, airborne interferometric radar (UAVSAR), and optical data from satellites and UAVs can be used to characterize rates and map patterns of deformation within fault zones of varying complexity and geomorphic expression. We are evaluating laser point cloud processing, photogrammetric structure from motion, radar interferometry, sub-pixel correlation, and other techniques to characterize the relative ability of each to measure crustal deformation in two and three dimensions through time. We are collecting new and synthesizing existing data from the zone of highest interseismic creep rates along the SAF where a transition from a single main fault trace to a 1-km wide extensional stepover occurs. In the stepover region, creep measurements from alignment arrays 100 meters long across the main fault trace reveal lower rates than those in adjacent, geomorphically simpler parts of the fault. This indicates that deformation is distributed across the en echelon subsidiary faults, by creep and/or stick-slip behavior. 
Our objectives are to better understand how deformation is partitioned across a fault damage zone, how it is accommodated in the shallow subsurface, and to better characterize the relative amounts of fault creep

  10. Fault morphology of the Iyo Fault, the Median Tectonic Line Active Fault System

    OpenAIRE

    後藤, 秀昭

    1996-01-01

    In this paper, we investigated the various fault features of the Iyo fault and depicted fault lines on a detailed topographic map. The results of this paper are summarized as follows: 1) Distinct evidence of right-lateral movement is continuously discernible along the Iyo fault. 2) Active fault traces are remarkably linear, suggesting that the angle of the fault plane is high. 3) The Iyo fault can be divided into four segments by jogs between left-stepping traces. 4) The mean slip rate is 1.3 ~ ...

  11. Effect Analysis of Faults in Digital I and C Systems of Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Jun; Jung, Won Dea [KAERI, Dajeon (Korea, Republic of); Kim, Man Cheol [Chung-Ang University, Seoul (Korea, Republic of)

    2014-08-15

    A reliability analysis of digital instrumentation and control (I and C) systems in nuclear power plants has been introduced as one of the important elements of a probabilistic safety assessment because of the unique characteristics of digital I and C systems. Digital I and C systems have various features distinguishable from those of analog I and C systems such as software and fault-tolerant techniques. In this work, the faults in a digital I and C system were analyzed and a model for representing the effects of the faults was developed. First, the effects of the faults in a system were analyzed using fault injection experiments. A software-implemented fault injection technique in which faults can be injected into the memory was used based on the assumption that all faults in a system are reflected in the faults in the memory. In the experiments, the effect of a fault on the system output was observed. In addition, the success or failure in detecting the fault by fault-tolerant functions included in the system was identified. Second, a fault tree model for representing that a fault is propagated to the system output was developed. With the model, it can be identified how a fault is propagated to the output or why a fault is not detected by fault-tolerant techniques. Based on the analysis results of the proposed method, it is possible to not only evaluate the system reliability but also identify weak points of fault-tolerant techniques by identifying undetected faults. The results can be reflected in the designs to improve the capability of fault-tolerant techniques.
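The software-implemented fault injection idea can be sketched with a toy parity-protected memory; all names, the "system" under test, and the fault-tolerance mechanism below are illustrative assumptions, not the study's I and C platform:

```python
# Toy software-implemented fault injection: flip one memory bit, run the
# "system" (a parity-protected buffer read), and record whether the fault
# reached the output and whether the built-in fault-tolerance caught it.

def parity(data):
    return sum(bin(b).count("1") for b in data) % 2

def run_system(memory, stored_parity):
    """Returns (output, detected): output is the buffer sum; detected flags
    a parity mismatch caught by the fault-tolerance mechanism."""
    return sum(memory), parity(memory) != stored_parity

def inject(memory, byte_index, bit):
    faulty = bytearray(memory)
    faulty[byte_index] ^= 1 << bit     # flip a single bit in "memory"
    return faulty

mem = bytearray([3, 7, 1])
p0 = parity(mem)
good_out, _ = run_system(mem, p0)
bad_out, detected = run_system(inject(mem, 1, 3), p0)
print(good_out, bad_out, detected)  # 11 19 True
```

Repeating the injection over many addresses and bits, and tabulating (output changed?, detected?) per run, yields exactly the kind of propagation/detection statistics the abstract describes feeding into the fault tree model.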

  12. Effect analysis of faults in digital I and C systems of nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Jun

    2014-01-01

    A reliability analysis of digital instrumentation and control (I and C) systems in nuclear power plants has been introduced as one of the important elements of a probabilistic safety assessment because of the unique characteristics of digital I and C systems. Digital I and C systems have various features distinguishable from those of analog I and C systems, such as software and fault-tolerant techniques. In this work, the faults in a digital I and C system were analyzed and a model for representing the effects of the faults was developed. First, the effects of the faults in a system were analyzed using fault injection experiments. A software-implemented fault injection technique, in which faults can be injected into the memory, was used based on the assumption that all faults in a system are reflected in faults in the memory. In the experiments, the effect of a fault on the system output was observed. In addition, the success or failure in detecting the fault by the fault-tolerant functions included in the system was identified. Second, a fault tree model representing how a fault propagates to the system output was developed. The model makes it possible to identify how a fault propagates to the output and why a fault escapes detection by the fault-tolerant techniques. Based on the analysis results of the proposed method, it is possible not only to evaluate the system reliability but also to identify weak points of the fault-tolerant techniques by identifying undetected faults. The results can be reflected in designs to improve the capability of fault-tolerant techniques. (author)

  13. Reverse engineering of inductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Pina, J M; Neves, M Ventim; Rodrigues, A L [Centre of Technology and Systems Faculdade de Ciencias e Tecnologia, Nova University of Lisbon Monte de Caparica, 2829-516 Caparica (Portugal); Suarez, P; Alvarez, A, E-mail: jmmp@fct.unl.p [' Benito Mahedero' Group of Electrical Applications of Superconductors Escuela de IngenierIas Industrials, University of Extremadura Avenida de Elvas s/n, 06006 Badajoz (Spain)

    2010-06-01

    The inductive fault current limiter is less compact and harder to scale to high voltage networks than the resistive one. Nevertheless, its simple construction and mechanical robustness make it attractive in low voltage grids. Thus, it might be an enabling technology for the advent of microgrids, low voltage networks with dispersed generation, controllable loads and energy storage. A new methodology for reverse engineering of inductive fault current limiters based on the independent analysis of iron cores and HTS cylinders is presented in this paper. Their electromagnetic characteristics are used to predict the devices' hysteresis loops and consequently their dynamic behavior. Previous models based on the separate analysis of the limiters' components were already derived, e.g. in transformer-like equivalent models. Nevertheless, the assumptions usually made may limit these models' application, as shown in the paper. The proposed methodology obviates these limitations. Results are validated through simulations.

  14. Reverse engineering of inductive fault current limiters

    International Nuclear Information System (INIS)

    Pina, J M; Neves, M Ventim; Rodrigues, A L; Suarez, P; Alvarez, A

    2010-01-01

    The inductive fault current limiter is less compact and harder to scale to high voltage networks than the resistive one. Nevertheless, its simple construction and mechanical robustness make it attractive in low voltage grids. Thus, it might be an enabling technology for the advent of microgrids, low voltage networks with dispersed generation, controllable loads and energy storage. A new methodology for reverse engineering of inductive fault current limiters based on the independent analysis of iron cores and HTS cylinders is presented in this paper. Their electromagnetic characteristics are used to predict the devices' hysteresis loops and consequently their dynamic behavior. Previous models based on the separate analysis of the limiters' components were already derived, e.g. in transformer-like equivalent models. Nevertheless, the assumptions usually made may limit these models' application, as shown in the paper. The proposed methodology obviates these limitations. Results are validated through simulations.

  15. A Framework and Classification for Fault Detection Approaches in Wireless Sensor Networks with an Energy Efficiency Perspective

    DEFF Research Database (Denmark)

    Zhang, Yue; Dragoni, Nicola; Wang, Jiangtao

    2015-01-01

    efficiency to facilitate the design of fault detection methods and the evaluation of their energy efficiency. Following the same design principle of the fault detection framework, the paper proposes a classification for fault detection approaches. The classification is applied to a number of fault detection...

  16. Process plant alarm diagnosis using synthesised fault tree knowledge

    International Nuclear Information System (INIS)

    Trenchard, A.J.

    1990-01-01

    The development of computer-based tools to assist process plant operators in their task of fault/alarm diagnosis has received much attention over the last twenty-five years. More recently, with the emergence of Artificial Intelligence (AI) technology, research activity in this subject area has heightened. As a result, there is a great variety of fault diagnosis methodologies, using many different approaches to represent the fault propagation behaviour of process plant. These range in complexity from steady state quantitative models to more abstract definitions of the relationships between process alarms. Unfortunately, very few of the techniques have been tried and tested on process plant, and even fewer have been judged to be commercial successes. One outstanding problem remains the time and effort required to understand and model the fault propagation behaviour of each considered process. This thesis describes the development of an experimental knowledge based system (KBS) to diagnose process plant faults, as indicated by process variable alarms. In an attempt to minimise the modelling effort, the KBS has been designed to infer diagnoses using a fault tree representation of the process behaviour, generated using an existing fault tree synthesis package (FAULTFINDER). The process is described to FAULTFINDER as a configuration of unit models, derived from a standard model library or by tailoring existing models. The resultant alarm diagnosis methodology appears to work well for hard (non-rectifying) faults, but is likely to be less robust when attempting to diagnose intermittent faults and transient behaviour. The synthesised fault trees were found to contain the bulk of the information required for the diagnostic task; however, this needed to be augmented with extra information in certain circumstances. (author)

  17. Risk-based fault detection using Self-Organizing Map

    International Nuclear Information System (INIS)

    Yu, Hongyang; Khan, Faisal; Garaniya, Vikram

    2015-01-01

    The complexity of modern systems is increasing rapidly and the dominating relationships among system variables have become highly non-linear. This results in difficulty in identifying a system's operating states. In turn, this difficulty affects the sensitivity of fault detection and imposes a challenge on ensuring the safety of operation. In recent years, the Self-Organizing Map has gained popularity in system monitoring as a robust non-linear dimensionality reduction tool. The Self-Organizing Map is able to capture non-linear variations of the system. Therefore, it is sensitive to changes in a system's states, leading to early detection of faults. In this paper, a new approach based on the Self-Organizing Map is proposed to detect and assess the risk of faults. In addition, probabilistic analysis is applied to characterize the risk of fault into different levels according to the hazard potential, to enable refined monitoring of the system. The proposed approach is applied to two experimental systems. The results from both systems have shown high sensitivity of the proposed approach in detecting and identifying the root cause of faults. The refined monitoring facilitates the determination of the risk of fault and early deployment of remedial actions and safety measures to minimize the potential impact of faults. - Highlights: • A new approach based on Self-Organizing Map is proposed to detect faults. • Integration of fault detection with risk assessment methodology. • Fault risk characterization into different levels to enable focused system monitoring
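As a rough illustration of the idea, the sketch below trains a minimal Self-Organizing Map on normal operating data and flags a sample whose quantization error (distance to its best matching unit) exceeds an empirical threshold. The grid size, training schedule, 99th-percentile limit, and the data itself are all assumptions, and the paper's probabilistic risk characterization step is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0):
    """Minimal Self-Organizing Map trained on normal operating data."""
    n_units = grid[0] * grid[1]
    w = rng.normal(size=(n_units, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))     # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # grid distance to BMU
        lr = lr0 * (1 - t / iters)                      # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 0.1          # shrinking neighborhood
        h = np.exp(-d2 / (2 * sigma ** 2))              # neighborhood function
        w += lr * h[:, None] * (x - w)
    return w

def quantization_error(w, x):
    """Distance from a sample to its BMU; large values flag a fault."""
    return np.sqrt(((w - x) ** 2).sum(axis=1).min())

normal = rng.normal(0.0, 1.0, size=(500, 3))            # normal operating data
w = train_som(normal)
threshold = np.quantile([quantization_error(w, x) for x in normal], 0.99)

faulty_sample = np.array([6.0, -6.0, 6.0])              # far outside normal region
print(quantization_error(w, faulty_sample) > threshold)  # True: fault detected
```

In the paper's spirit, the size of the excess over the threshold could then be mapped to risk levels; here only the detection step is shown.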

  18. Development of direct dating methods of fault gouges: Deep drilling into Nojima Fault, Japan

    Science.gov (United States)

    Miyawaki, M.; Uchida, J. I.; Satsukawa, T.

    2017-12-01

    It is crucial to develop a direct dating method of fault gouges for the assessment of recent fault activity in terms of site evaluation for nuclear power plants. This method would be useful in regions without Late Pleistocene overlying sediments. In order to estimate the age of the latest fault slip event, it is necessary to use fault gouges which have experienced frictional heating sufficient for age resetting. Frictional heating is expected to be greater at depth, because the heat generated by fault movement depends on the shear stress. Therefore, we should determine the reliable depth of age resetting, as it is likely that fault gouges from the ground surface have been dated as older than the actual age of the latest fault movement due to incomplete resetting. In this project, we target the Nojima fault, which triggered the 1995 Kobe earthquake in Japan. Samples are collected from various depths (300-1,500 m) by trenching and drilling to investigate age resetting conditions and depth using several methods including electron spin resonance (ESR) and optically stimulated luminescence (OSL), which are applicable to ages later than the Late Pleistocene. The preliminary results by the ESR method show approx. 1.1 Ma1) at the ground surface and 0.15-0.28 Ma2) at 388 m depth, respectively. These results indicate that samples from deeper depths preserve a younger age. In contrast, the OSL method dated approx. 2,200 yr1) at the ground surface. Although further consideration is still needed as there is a large margin of error, this result indicates that the age resetting depth of OSL is relatively shallow due to the high thermosensitivity of OSL compared to ESR. In the future, we plan to carry out further investigation, dating fault gouges from various depths up to approx. 1,500 m, to verify the use of these direct dating methods. 1) Kyoto University, 2017. FY27 Commissioned for the disaster presentation on nuclear facilities (Drilling

  19. A weighted dissimilarity index to isolate faults during alarm floods

    CERN Document Server

    Charbonnier, S; Gayet, P

    2015-01-01

    A fault-isolation method based on pattern matching using the alarm lists raised by the SCADA system during an alarm flood is proposed. A training set composed of faults is used to create fault templates. Alarm vectors generated by unknown faults are classified by comparing them with the fault templates using an original weighted dissimilarity index that increases the influence of the few alarms relevant to diagnosing the fault. Different decision strategies are proposed to support the operator in decision making. The performance is evaluated on two data sets: an artificial set and a set obtained from a highly realistic simulator of the CERN Large Hadron Collider process connected to the real CERN SCADA system.
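A minimal sketch of the pattern-matching idea: a weighted dissimilarity between binary alarm vectors in which rarer, more diagnostic alarms carry more weight. The alarm data, the inverse-frequency weighting, and the normalization below are hypothetical, not the paper's exact index:

```python
import numpy as np

# Training alarm vectors for one fault class (rows = flood episodes,
# columns = alarms). Alarm 4 fires rarely overall and is thus diagnostic.
training = np.array([
    [1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
])
template = training.mean(axis=0)   # fault template for this class

# Weight each alarm by its rarity across all recorded floods so the few
# discriminating alarms dominate the index (hypothetical weighting).
global_freq = np.array([0.9, 0.6, 0.5, 0.2, 0.5])
weights = 1.0 / (global_freq + 1e-6)

def weighted_dissimilarity(alarm_vec, template, weights):
    """Weighted mismatch between an observed alarm vector and a template."""
    return np.sum(weights * np.abs(alarm_vec - template)) / np.sum(weights)

observed = np.array([1, 1, 0, 1, 0])   # matches the fault class
other    = np.array([0, 0, 1, 0, 1])   # unrelated alarm pattern
print(weighted_dissimilarity(observed, template, weights) <
      weighted_dissimilarity(other, template, weights))   # True
```

A decision strategy would then rank all fault templates by this index and present the closest candidates to the operator.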

  20. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled ‘COST TU0601: Robustness of Structures’ was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...

  1. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  2. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  3. Active Fault Isolation in MIMO Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2014-01-01

    Active fault isolation of parametric faults in closed-loop MIMO systems is considered in this paper. The fault isolation consists of two steps. The first step is group-wise fault isolation. Here, a group of faults is isolated from other possible faults in the system. The group-wise fault isolation is based directly on the input/output signals applied for the fault detection. It is guaranteed that the fault group includes the fault that had occurred in the system. The second step is individual fault isolation in the fault group. Both types of isolation are obtained by applying dedicated...

  4. Robust control design verification using the modular modeling system

    International Nuclear Information System (INIS)

    Edwards, R.M.; Ben-Abdennour, A.; Lee, K.Y.

    1991-01-01

    The Modular Modeling System (B&W MMS) is being used as a design tool to verify robust controller designs for improving power plant performance while also providing fault-accommodating capabilities. These controllers are designed based on optimal control theory and are thus model-based controllers targeted for implementation in a computer-based digital control environment. The MMS is being successfully used to verify that the controllers are tolerant of uncertainties between the plant model employed in the controller and the actual plant, i.e., that they are robust. The two areas in which the MMS is being used for this purpose are (1) the design of a reactor power controller with improved reactor temperature response, and (2) the design of a multiple input multiple output (MIMO) robust fault-accommodating controller for a deaerator level and pressure control problem

  5. Reconfigurable fault tolerant avionics system

    Science.gov (United States)

    Ibrahim, M. M.; Asami, K.; Cho, Mengu

    This paper presents the design of a reconfigurable avionics system based on modern Static Random Access Memory (SRAM)-based Field Programmable Gate Array (FPGA) to be used in future generations of nano satellites. A major concern in satellite systems, and especially nano satellites, is to build robust systems with low-power consumption profiles. The system is designed to be flexible by providing the capability of reconfiguring itself based on its orbital position. As Single Event Upsets (SEU) do not have the same severity and intensity in all orbital locations, having their maximum at the South Atlantic Anomaly (SAA) and the polar cusps, the system does not have to be fully protected all the time in its orbit. An acceptable level of protection against high-energy cosmic rays and charged particles roaming in space is provided within the majority of the orbit through software fault tolerance. Checkpointing and rollback, together with control-flow assertions, are used for that level of protection. In the minority of the orbit, where severe SEUs are expected, the system FPGA is reconfigured: the processor systems are triplicated and protection through Triple Modular Redundancy (TMR) with feedback is provided. This technique of reconfiguring the system as per the level of the threat expected from SEU-induced faults helps in reducing the average dynamic power consumption of the system to one-third of its maximum. This technique can be viewed as smart protection through system reconfiguration. The system is built on the commercial version of the (XC5VLX50) Xilinx Virtex5 FPGA on bulk silicon with 324 IO. Simulations of orbit SEU rates were carried out using the SPENVIS web-based software package.
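The region-dependent protection policy and the TMR voting it switches to can be caricatured in a few lines. The two-mode policy and the function names are simplifications of the scheme described above, not the flight design:

```python
def protection_mode(in_saa_or_polar_cusp: bool) -> str:
    """Pick the protection level from the orbital position (simplified)."""
    return "TMR" if in_saa_or_polar_cusp else "software_fault_tolerance"

def tmr_vote(a, b, c):
    """Majority vote over three replicated processor outputs."""
    return a if (a == b or a == c) else b

# Outside the SAA/cusps, lightweight software techniques suffice.
print(protection_mode(False))       # software_fault_tolerance
# Inside the SAA, the FPGA is reconfigured to TMR; one SEU-corrupted
# replica is masked by the majority vote.
print(protection_mode(True))        # TMR
print(tmr_vote(42, 42, 7))          # 42
```

Running TMR only where the SEU threat is highest is what yields the reported reduction in average dynamic power.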

  6. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
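A small Monte Carlo sketch of the setting the article analyzes: basic event probabilities drawn from lognormal distributions (parameterized by a median and an error factor, a common PSA convention) propagated through an illustrative two-gate fault tree. The tree structure, medians, and error factors below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def sample_lognormal(median, error_factor, n):
    """Lognormal samples; EF is the ratio of the 95th percentile to the median."""
    sigma = np.log(error_factor) / 1.645
    return median * np.exp(sigma * rng.standard_normal(n))

# Basic event probabilities (illustrative medians and error factors).
pA = sample_lognormal(1e-3, 3.0, N)
pB = sample_lognormal(2e-3, 3.0, N)
pC = sample_lognormal(5e-4, 3.0, N)

# Top event: A AND (B OR C), with the rare-event approximation for the OR gate.
top = pA * (pB + pC)

print(f"median top-event probability: {np.median(top):.2e}")
print(f"95th percentile:              {np.percentile(top, 95):.2e}")
```

Sampling like this yields the full uncertainty distribution but, as the article notes, it is costly and does not by itself reveal the dominant contributors; the closed-form lognormal approximation targets exactly that gap.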

  7. Scan cell design for enhanced delay fault testability

    NARCIS (Netherlands)

    van Brakel, Gerrit; van Brakel, G.; Xing, Yizi; Xing, Y.; Kerkhoff, Hans G.

    1992-01-01

    Problems in testing scannable sequential circuits for delay faults are addressed. Modifications to improve circuit controllability and observability for the testing of delay faults are implemented efficiently in a scan cell design. A layout on a gate array is designed and evaluated for this scan

  8. Nature and continuity of the Sundance Fault, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Potter, Christopher J.; Dickerson, Robert P.; Day, Warren C.

    2000-01-01

    This report describes the detailed geologic mapping (1:2,400 scale) that was performed in the northern part of the potential nuclear waste repository area at Yucca Mountain, Nevada, to determine the nature and extent of the Sundance Fault zone and to evaluate structural relations between the Sundance and other faults

  9. Fault Detection for Industrial Processes

    Directory of Open Access Journals (Sweden)

    Yingwei Zhang

    2012-01-01

    A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is developed based on it. The proposed method further decomposes both the KPCA principal space and the residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. This method can find fault-relevant principal directions and principal components of the systematic subspace and residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and a penicillin fermentation process. The simulation results show the effectiveness of the proposed method.
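For intuition, the sketch below shows the standard PCA-based monitoring statistic this line of work builds on: a principal subspace learned from normal data and a squared prediction error (SPE) that flags samples breaking the learned correlations. This is a plain linear-PCA analogue on made-up data; the record's kernel (KPCA) machinery and its fault-relevant subspace decomposition are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal operating data: x2 tracks 2*x1; x3 is independent noise.
x1 = rng.normal(size=500)
X = np.column_stack([x1,
                     2 * x1 + 0.1 * rng.normal(size=500),
                     rng.normal(size=500)])
mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd

# Principal subspace of the normal data (1 retained component).
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:1].T

def spe(x):
    """Squared prediction error: residual energy outside the principal subspace."""
    z = (x - mu) / sd
    resid = z - P @ (P.T @ z)
    return float(resid @ resid)

limit = np.quantile([spe(x) for x in X], 0.99)   # empirical control limit
fault = np.array([3.0, -6.0, 0.0])               # violates x2 = 2*x1
print(spe(fault) > limit)                        # True: fault detected
```

The kernel version replaces the linear subspace with one in an implicit feature space, which lets the same residual idea catch faults in non-linear processes such as the two benchmarks above.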

  10. Fault tree analysis

    International Nuclear Information System (INIS)

    1981-09-01

    Suggestions are made concerning the method of fault tree analysis and the use of certain symbols in the examination of system failures. The purpose of the fault tree analysis is to find logical connections of component or subsystem failures leading to undesirable occurrences. The results of these examinations are part of the system assessment concerning operation and safety. The objectives of the analysis are: systematic identification of all possible failure combinations (causes) leading to a specific undesirable occurrence, and determination of reliability parameters such as the frequency of failure combinations, the frequency of the undesirable occurrence, or the non-availability of the system when required. The fault tree analysis provides a clear and reconstructable documentation of the examination. (orig./HP) [de
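The two objectives named above, enumerating the failure combinations that lead to the undesirable occurrence and computing its probability, can be shown on a toy tree. The gate structure TOP = (A AND B) OR C and the basic event probabilities are invented for illustration:

```python
from itertools import product

# Basic event failure probabilities (hypothetical).
p = {"A": 1e-2, "B": 2e-2, "C": 1e-4}

def top_event(a: bool, b: bool, c: bool) -> bool:
    """Illustrative fault tree logic: TOP = (A AND B) OR C."""
    return (a and b) or c

# Exhaustive enumeration yields every failure combination (cause) that
# triggers the top event, and sums their probabilities exactly.
prob = 0.0
causes = []
for a, b, c in product([False, True], repeat=3):
    if top_event(a, b, c):
        causes.append((a, b, c))
        term = 1.0
        for flag, name in zip((a, b, c), "ABC"):
            term *= p[name] if flag else (1 - p[name])
        prob += term

# Closed form for comparison: P = P(A)P(B) + P(C) - P(A)P(B)P(C)
exact = p["A"] * p["B"] + p["C"] - p["A"] * p["B"] * p["C"]
print(f"top event probability: {prob:.6e}")   # matches the closed form
```

Real tools avoid exhaustive enumeration by working with minimal cut sets ({A, B} and {C} here), but the quantities computed are the same.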

  11. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles

    Science.gov (United States)

    Jeon, Namju; Lee, Hyeongcheol

    2016-01-01

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431

  12. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles.

    Science.gov (United States)

    Jeon, Namju; Lee, Hyeongcheol

    2016-12-12

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.
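The parity-equation idea used for the low-level sensor diagnosis reduces, in caricature, to thresholding a residual between a measurement and a model prediction. The threshold value and function names below are hypothetical, and the paper's interior permanent magnet synchronous motor state equations are not reproduced:

```python
def residual(measured: float, predicted: float) -> float:
    """Parity-style residual: mismatch between sensor reading and model output."""
    return abs(measured - predicted)

THRESHOLD = 0.5   # hypothetical; in practice tuned to sensor noise

def fault_flag(measured: float, predicted: float) -> bool:
    """Raise a fault-detection flag when the residual exceeds the threshold."""
    return residual(measured, predicted) > THRESHOLD

# Healthy current sensor: measurement tracks the model prediction.
print(fault_flag(10.1, 10.0))   # False
# Biased or stuck sensor: large persistent residual.
print(fault_flag(13.0, 10.0))   # True
```

The integrated strategy then combines such low-level flags with the high-level vehicle dynamics residuals to decide which wheel's drive motor system is at fault.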

  13. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
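The rerouting idea can be sketched with two toy networks over the same four compute nodes: when a defective link is identified in the first network, traffic is routed around it through the second. The topologies and the BFS router are illustrative assumptions, not the patented mechanism:

```python
from collections import deque

# Two independent data communications networks over the same compute nodes.
net1 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
net2 = {0: [2], 1: [3], 2: [0, 3], 3: [1, 2]}

def route(net, src, dst, bad_links=frozenset()):
    """BFS route through one network, avoiding links marked defective."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:                      # reconstruct the path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in net[u]:
            if v not in prev and (u, v) not in bad_links and (v, u) not in bad_links:
                prev[v] = u
                q.append(v)
    return None                           # destination unreachable

bad = {(1, 2)}                            # defective link identified in net1
print(route(net1, 0, 3, bad))             # None: net1 alone cannot reach node 3
print(route(net2, 0, 3))                  # [0, 2, 3]: reroute via the second network
```

Keeping the two networks link-disjoint is what guarantees a single defective link never partitions the machine.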

  14. Fault Tolerant Computer Architecture

    CERN Document Server

    Sorin, Daniel

    2009-01-01

    For many years, most computer architects have pursued one primary goal: performance. Architects have translated the ever-increasing abundance of ever-faster transistors provided by Moore's law into remarkable increases in performance. Recently, however, the bounty provided by Moore's law has been accompanied by several challenges that have arisen as devices have become smaller, including a decrease in dependability due to physical faults. In this book, we focus on the dependability challenge and the fault tolerance solutions that architects are developing to overcome it. The two main purposes

  15. Fault tolerant linear actuator

    Science.gov (United States)

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  16. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-emission (EEM) data, was developed. The results show...

  17. Assessment of Process Robustness for Mass Customization

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev

    2013-01-01

    robustness and their capability to develop it. Through literature study and analysis of robust process design characteristics, a number of metrics are described which can be used for assessment. The metrics are evaluated and analyzed to be applied as KPIs to help MC companies prioritize efforts in business...

  18. Wind turbine fault detection and fault tolerant control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Johnson, Kathryn

    2013-01-01

    In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes...

  19. Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines

    Science.gov (United States)

    Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin

    2018-03-01

    In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances, on the basis of an adaptive diagnostic observer. Taking both sensor faults and actuator faults into account, a general model of aircraft engine control systems subject to uncertainties and disturbances is considered. Then, the corresponding augmented dynamic model is established in order to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. By creating an adaptive diagnostic observer and applying a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed, and the closed-loop system can be stabilized robustly. It is also proven that the adaptive diagnostic observer output errors and the fault estimates converge exponentially to a set, with a convergence rate greater than a value that can be adjusted by choosing design parameters properly. The simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.

  20. Fault management and systems knowledge

    Science.gov (United States)

    2016-12-01

    Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

  1. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2002-03-01

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene

  2. Fault diagnosis of induction motors

    CERN Document Server

    Faiz, Jawad; Joksimović, Gojko

    2017-01-01

    This book is a comprehensive, structural approach to fault diagnosis strategy. The different fault types, signal processing techniques, and loss characterisation are addressed in the book. This is essential reading for work with induction motors for transportation and energy.

  3. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2002-03-15

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene.

  4. Prestate of Stress and Fault Behavior During the 2016 Kumamoto Earthquake (M7.3)

    Science.gov (United States)

    Matsumoto, Satoshi; Yamashita, Yusuke; Nakamoto, Manami; Miyazaki, Masahiro; Sakai, Shinichi; Iio, Yoshihisa; Shimizu, Hiroshi; Goto, Kazuhiko; Okada, Tomomi; Ohzono, Mako; Terakawa, Toshiko; Kosuga, Masahiro; Yoshimi, Masayuki; Asano, Youichi

    2018-01-01

    Fault behavior during an earthquake is controlled by the state of stress on the fault. Complex coseismic fault slip on large earthquake faults has recently been observed by dense seismic networks, which complicates strong motion evaluations for potential faults. Here we show the three-dimensional prestress field related to the 2016 Kumamoto earthquake. The estimated stress field reveals a spatially variable state of stress that forced the fault to slip in a direction predicted by the "Wallace and Bott Hypothesis." The stress field also exposes the pre-condition of pore fluid pressure on the fault. Large coseismic slip occurred in the low-pressure part of the fault. However, areas with highly pressured fluid also showed large displacement, indicating that the seismic moment of the earthquake was magnified by fluid pressure. These prerupture data could contribute to improved seismic hazard evaluations.
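    The Wallace and Bott hypothesis invoked above can be stated compactly: slip on a fault is parallel to the resolved shear traction that the ambient stress tensor exerts on the fault plane. A minimal sketch, with an invented stress field and fault orientation (not values from the Kumamoto study):

```python
import numpy as np

def wallace_bott_slip_direction(sigma, normal):
    """Unit slip vector predicted on a plane with normal `normal`
    under the (symmetric, 3x3) stress tensor `sigma`."""
    n = np.array(normal, dtype=float)
    n /= np.linalg.norm(n)
    traction = sigma @ n                    # total traction acting on the plane
    shear = traction - (traction @ n) * n   # remove the normal component
    return shear / np.linalg.norm(shear)

# Illustrative principal stresses (MPa, compression negative) and a vertical
# fault plane striking 45 degrees to the principal axes.
sigma = np.diag([-10.0, -30.0, -20.0])
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
print(wallace_bott_slip_direction(sigma, n))  # horizontal slip vector: strike-slip sense
```

    A spatially variable prestress field, as resolved in the study, simply makes `sigma` (and hence the predicted slip direction) vary over the fault surface.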

  5. Fault tree analysis for urban flooding

    NARCIS (Netherlands)

    Ten Veldhuis, J.A.E.; Clemens, F.H.L.R.; Van Gelder, P.H.A.J.M.

    2008-01-01

    Traditional methods to evaluate flood risk mostly focus on storm events as the main cause of flooding. Fault tree analysis is a technique that is able to model all potential causes of flooding and to quantify both the overall probability of flooding and the contributions of all causes of flooding to

  6. Commercial application of fault tree analysis

    International Nuclear Information System (INIS)

    Crosetti, P.A.; Bruce, R.A.

    1970-01-01

    The potential for general application of Fault Tree Analysis to commercial products appears attractive, not only because of the successful extension from aerospace safety technology to nuclear reactor reliability and availability technology, but also because combinatorial hazards are common to commercial operations and therefore lend themselves readily to evaluation by Fault Tree Analysis. It appears reasonable to conclude that the technique has application within the commercial industrial community wherever the occurrence of a specified consequence or final event would be of sufficient concern to management to justify such a rigorous analysis as an aid to decision making. (U.S.)
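    For independent basic events, the combinatorial evaluation a fault tree performs reduces to simple AND/OR gate probability algebra. A minimal sketch with invented event probabilities:

```python
def p_and(*probs):
    """AND gate: the output event occurs only if all inputs occur."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(*probs):
    """OR gate: the output event occurs if any input occurs."""
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

# Top event: pump unavailable if power is lost OR both redundant motors fail.
p_top = p_or(0.001, p_and(0.01, 0.01))
print(p_top)
```

    Nesting the two gate functions mirrors the tree structure; real analyses add common-cause terms and importance measures on top of this algebra.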

  7. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.

  8. Fault Tolerant Wind Farm Control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2013-01-01

    In the recent years the wind turbine industry has focused on optimizing the cost of energy. One of the important factors in this is to increase reliability of the wind turbines. Advanced fault detection, isolation and accommodation are important tools in this process. Clearly most faults are deal...... scenarios. This benchmark model is used in an international competition dealing with Wind Farm fault detection and isolation and fault tolerant control....

  9. Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.

    Science.gov (United States)

    1995-06-01

    DESCRIPTION MIXED-MODE HIERARCHICAL FAULT DESCRIPTION FAULT SIMULATION TYPE OF FAULT TRANSIENT/STUCK-AT LOCATION/TIME AUTOMATIC FAULT INJECTION TRACE...4219-4224, December 1985. [15] J. Sosnowski, "Evaluation of transient hazards in microprocessor controllers," Digest, FTCS-16, The Sixteenth

  10. Row fault detection system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2008-10-14

    An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  11. Fault isolation techniques

    Science.gov (United States)

    Dumas, A.

    1981-01-01

    Three major areas that are considered in the development of an overall maintenance scheme of computer equipment are described. The areas of concern related to fault isolation techniques are: the programmer (or user), company and its policies, and the manufacturer of the equipment.

  12. Fault Tolerant Control Systems

    DEFF Research Database (Denmark)

    Bøgh, S. A.

    This thesis considered the development of fault tolerant control systems. The focus was on the category of automated processes that do not necessarily comprise a high number of identical sensors and actuators to maintain safe operation, but still have a potential for improving immunity to component...

  13. Alternative model of thrust-fault propagation

    Science.gov (United States)

    Eisenstadt, Gloria; de Paor, Declan G.

    1987-07-01

    A widely accepted explanation for the geometry of thrust faults is that initial failures occur on deeply buried planes of weak rock and that thrust faults propagate toward the surface along a staircase trajectory. We propose an alternative model that applies Gretener's beam-failure mechanism to a multilayered sequence. Invoking compatibility conditions, which demand that a thrust propagate both upsection and downsection, we suggest that ramps form first, at shallow levels, and are subsequently connected by flat faults. This hypothesis also explains the formation of many minor structures associated with thrusts, such as backthrusts, wedge structures, pop-ups, and duplexes, and provides a unified conceptual framework in which to evaluate field observations.

  14. Fault-Related Sanctuaries

    Science.gov (United States)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (like ground shaking and coseismic surface ruptures, gas and flame emissions, strong underground rumours). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and the content of the related myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies.
Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  15. Fault Locating, Prediction and Protection (FLPPS)

    Energy Technology Data Exchange (ETDEWEB)

    Yinger, Robert, J.; Venkata, S., S.; Centeno, Virgilio

    2010-09-30

    One of the main objectives of this DOE-sponsored project was to reduce customer outage time. Fault location, prediction, and protection are the most important aspects of fault management for the reduction of outage time. In the past most of the research and development on power system faults in these areas has focused on transmission systems, and it is not until recently with deregulation and competition that research on power system faults has begun to focus on the unique aspects of distribution systems. This project was planned with three Phases, approximately one year per phase. The first phase of the project involved an assessment of the state-of-the-art in fault location, prediction, and detection as well as the design, lab testing, and field installation of the advanced protection system on the SCE Circuit of the Future located north of San Bernardino, CA. The new feeder automation scheme, with vacuum fault interrupters, will limit the number of customers affected by the fault. Depending on the fault location, the substation breaker might not even trip. Through the use of fast communications (fiber) the fault locations can be determined and the proper fault interrupting switches opened automatically. With knowledge of circuit loadings at the time of the fault, ties to other circuits can be closed automatically to restore all customers except the faulted section. This new automation scheme limits outage time and increases reliability for customers. The second phase of the project involved the selection, modeling, testing and installation of a fault current limiter on the Circuit of the Future. While this project did not pay for the installation and testing of the fault current limiter, it did perform the evaluation of the fault current limiter and its impacts on the protection system of the Circuit of the Future. After investigation of several fault current limiters, the Zenergy superconducting, saturable core fault current limiter was selected for

  16. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach is presented that makes adaptive minimum variance distortionless response (MVDR) beamforming robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem with a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex, but in this paper it is converted into a convex optimization problem. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
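    For context, a plain MVDR beamformer computes w = R⁻¹a / (aᴴR⁻¹a). The sketch below adds simple diagonal loading as one common robustification; it is not the paper's convex reformulation, and all signal values are synthetic:

```python
import numpy as np

def robust_mvdr_weights(R, a, loading=0.1):
    """MVDR weights for covariance R and presumed steering vector a,
    with diagonal loading scaled to the average eigenvalue of R."""
    m = R.shape[0]
    R_loaded = R + loading * np.trace(R).real / m * np.eye(m)
    Rinv_a = np.linalg.solve(R_loaded, a)
    return Rinv_a / (a.conj() @ Rinv_a)

rng = np.random.default_rng(0)
m = 4
snapshots = rng.standard_normal((m, 200)) + 1j * rng.standard_normal((m, 200))
R = snapshots @ snapshots.conj().T / 200       # sample covariance matrix
a = np.ones(m, dtype=complex)                  # steering vector toward broadside
w = robust_mvdr_weights(R, a)
print(abs(w.conj() @ a))                       # distortionless constraint: unit gain toward a
```

    Loading bounds the condition number of the inverted matrix, which is what protects the weights against covariance estimation errors.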

  17. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    Science.gov (United States)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.

  18. Study on conditional probability of surface rupture: effect of fault dip and width of seismogenic layer

    Science.gov (United States)

    Inoue, N.

    2017-12-01

    The conditional probability of surface rupture is affected by various factors, such as shallow material properties, earthquake processes, ground motions and so on. Toda (2013) pointed out differences in the conditional probability of strike-slip and reverse faults by considering the fault dip and the width of the seismogenic layer. This study evaluated the conditional probability of surface rupture using the following procedure. Fault geometry was determined from a randomly generated magnitude based on the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was randomly assigned within the seismogenic layer. Logistic analysis was performed on two data sets: surface displacement calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from surface displacement indicated a higher probability for reverse faults than for strike-slip faults, a result that coincides with previous similar studies (e.g. Kagawa et al., 2004; Kataoka and Kusakabe, 2005). On the contrary, the probability estimated from the depth of the source fault indicated a higher probability for thrust faults than for strike-slip and reverse faults, a trend similar to the conditional probability from PFDHA results (Youngs et al., 2003; Moss and Ross, 2011). The combined simulated results for thrust and reverse faults also show low probability. The worldwide compiled reverse fault data include low-dip-angle earthquakes. On the other hand, for Japanese reverse faults, it is possible that the conditional probability of reverse faults, with fewer low-dip-angle earthquakes, is low and similar to that of strike-slip faults (e.g. Takao et al., 2013).
In the future, numerical simulation by considering failure condition of surface by the source
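    The logistic analysis used above maps a predictor, such as the depth of the fault top, to a rupture probability via P = 1/(1 + exp(-(b0 + b1·x))). A toy sketch with invented coefficients, not values fitted to the study's data:

```python
import math

def rupture_probability(depth_top_km, b0=1.0, b1=-1.5):
    """Logistic model: shallower fault tops give higher rupture probability.
    Coefficients b0, b1 are illustrative placeholders."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * depth_top_km)))

for depth in (0.0, 1.0, 3.0):   # km below the surface
    print(round(rupture_probability(depth), 3))
```

    A negative depth coefficient encodes the intuition in the abstract: ruptures buried deeper in the seismogenic layer are less likely to reach the surface.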

  19. LAMPF first-fault identifier for fast transient faults

    International Nuclear Information System (INIS)

    Swanson, A.R.; Hill, R.E.

    1979-01-01

    The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault or cause of beam shutdown while disregarding as many as 50 secondary fault indications that occur as a result of beam shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required

  20. Interactive animation of fault-tolerant parallel algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Apgar, S.W.

    1992-02-01

    Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). The Raft system allows the user to animate a number of parallel algorithms which achieve fault tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user-interface which allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary to control the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes.

  1. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  2. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More...... frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new structures essential....... According to Danish design rules robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review...

  3. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor have the engineering profession been able to agree on an interpretation of robustness which facilitates for its uantification. A European COST action TU 601 on ‘Robustness of structures' has started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  4. Early fault detection and diagnosis for nuclear power plants

    International Nuclear Information System (INIS)

    Berg, O.; Grini, R.; Masao Yokobayashi

    1988-01-01

    Fault detection based on a number of reference models is demonstrated. This approach is characterized by the possibility of detecting faults before a traditional alarm system is triggered, even in dynamic situations. Further, by a proper decomposition scheme and use of available process measurements, the problem area can be confined to the faulty process parts. A diagnosis system using knowledge engineering techniques is described. Typical faults are classified and described by rules involving alarm patterns and variations of important parameters. By structuring the fault hypotheses in a hierarchy the search space is limited, which is important for real time diagnosis. Introduction of certainty factors improves the flexibility and robustness of diagnosis by exploring parallel problems even when some data are missing. A new display proposal should facilitate the operator interface and the integration of fault detection and diagnosis tasks in disturbance handling. The techniques of early fault detection and diagnosis are presently being implemented and tested in the experimental control room of a full-scope PWR simulator in Halden
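    The reference-model idea can be sketched as a residual check: compare measurements against a model's predictions and flag the first sample whose residual exceeds a threshold tighter than a conventional alarm limit. All numbers below are illustrative:

```python
def detect_fault(measurements, model_predictions, threshold=0.5):
    """Return the index of the first sample whose residual (measurement
    minus reference-model prediction) exceeds the threshold, else None."""
    for i, (y, y_hat) in enumerate(zip(measurements, model_predictions)):
        if abs(y - y_hat) > threshold:
            return i
    return None

model = [1.0, 1.0, 1.0, 1.0, 1.0]   # reference-model prediction of a process variable
meas = [1.1, 0.9, 1.2, 1.8, 2.5]    # slow drift developing into a fault
print(detect_fault(meas, model))     # → 3
```

    Because the residual is taken against a model that tracks dynamic situations, the check can fire well before the raw signal crosses a fixed alarm limit.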

  5. Fault detection and reliability, knowledge based and other approaches

    International Nuclear Information System (INIS)

    Singh, M.G.; Hindi, K.S.; Tzafestas, S.G.

    1987-01-01

    These proceedings are split up into four major parts in order to reflect the most significant aspects of reliability and fault detection as viewed at present. The first part deals with knowledge-based systems and comprises eleven contributions from leading experts in the field. The emphasis here is primarily on the use of artificial intelligence, expert systems and other knowledge-based systems for fault detection and reliability. The second part is devoted to fault detection of technological systems and comprises thirteen contributions dealing with applications of fault detection techniques to various technological systems such as gas networks, electric power systems, nuclear reactors and assembly cells. The third part of the proceedings, which consists of seven contributions, treats robust, fault tolerant and intelligent controllers and covers methodological issues as well as several applications ranging from nuclear power plants to industrial robots to steel grinding. The fourth part treats fault tolerant digital techniques and comprises five contributions. Two papers, one on reactor noise analysis, the other on reactor control system design, are indexed separately. (author)

  6. Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform.

    Science.gov (United States)

    Pang, Bin; Tang, Guiji; Tian, Tian; Zhou, Chong

    2018-04-14

    When rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to inhibit noise and harmonic interference signals, while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time-time (IHTT) transform, by combining a Hilbert time-time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal and the contained fault characteristic information was identified through further analyses of amplitude and envelope spectrums. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures.
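    The PCA de-noising step can be illustrated generically: truncate the SVD of a matrix to its leading components, then read off the diagonal time series, as the method above does with the HTT matrix. The matrix below is synthetic, not a real HTT transform:

```python
import numpy as np

def pca_denoise(M, n_components=1):
    """Rank-truncated reconstruction of M via SVD (equivalent to PCA on
    centered data; centering is omitted here for simplicity)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[n_components:] = 0.0
    return U @ np.diag(s) @ Vt

t = np.linspace(0.0, 1.0, 64)
clean = np.outer(np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 5 * t))  # rank-1 "signal"
noisy = clean + 0.2 * np.random.default_rng(1).standard_normal(clean.shape)
denoised = pca_denoise(noisy, n_components=1)
signal = np.diag(denoised)   # diagonal time series, as extracted in the method above
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))  # noise reduced
```

    Keeping only the dominant components suppresses broadband noise while preserving the coherent (low-rank) structure that carries the impulsive fault signature.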

  7. Fault Diagnosis of Motor Bearing by Analyzing a Video Clip

    Directory of Open Access Journals (Sweden)

    Siliang Lu

    2016-01-01

    Full Text Available Conventional bearing fault diagnosis methods require specialized instruments to acquire signals that can reflect the health condition of the bearing. For instance, an accelerometer is used to acquire vibration signals, whereas an encoder is used to measure motor shaft speed. This study proposes a new method for simplifying the instruments for motor bearing fault diagnosis. Specifically, a video clip recording of a running bearing system is captured using a cellphone that is equipped with a camera and a microphone. The recorded video is subsequently analyzed to obtain the instantaneous frequency of rotation (IFR. The instantaneous fault characteristic frequency (IFCF of the defective bearing is obtained by analyzing the sound signal that is recorded by the microphone. The fault characteristic order is calculated by dividing IFCF by IFR to identify the fault type of the bearing. The effectiveness and robustness of the proposed method are verified by a series of experiments. This study provides a simple, flexible, and effective solution for motor bearing fault diagnosis. Given that the signals are gathered using an affordable and accessible cellphone, the proposed method is proven suitable for diagnosing the health conditions of bearing systems that are located in remote areas where specialized instruments are unavailable or limited.
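    The final step above divides the instantaneous fault characteristic frequency (IFCF) by the instantaneous frequency of rotation (IFR) and matches the resulting order against nominal bearing defect orders. The orders and frequencies below are invented for illustration:

```python
# Nominal defect orders (fault frequency / shaft frequency); values depend on
# bearing geometry and are placeholders here.
NOMINAL_ORDERS = {"outer race": 3.57, "inner race": 5.43, "ball": 2.32}

def classify_fault(ifcf_hz, ifr_hz, tolerance=0.05):
    """Return (fault type, order) for the closest nominal order within
    the fractional tolerance, or ("unknown", order) if none matches."""
    order = ifcf_hz / ifr_hz
    for fault, nominal in NOMINAL_ORDERS.items():
        if abs(order - nominal) / nominal <= tolerance:
            return fault, order
    return "unknown", order

fault, order = classify_fault(ifcf_hz=107.0, ifr_hz=30.0)
print(fault, round(order, 2))  # → outer race 3.57
```

    Working with orders rather than raw frequencies is what makes the method insensitive to shaft speed fluctuations during the recording.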

  8. Fault-tolerant computing systems

    International Nuclear Information System (INIS)

    Dal Cin, M.; Hohl, W.

    1991-01-01

    Tests, Diagnosis and Fault Treatment were chosen as the guiding themes of the conference. However, the scope of the conference also included reliability, availability, safety and security issues in software and hardware systems. The conference sessions, completed by an industrial presentation, were organized as follows: Keynote Address, Reconfiguration and Recovery, System Level Diagnosis, Voting and Agreement, Testing, Fault-Tolerant Circuits, Array Testing, Modelling, Applied Fault Tolerance, Fault-Tolerant Arrays and Systems, Interconnection Networks, Fault-Tolerant Software. One paper has been indexed separately in the database. (orig./HP)

  9. Fault rocks and uranium mineralization

    International Nuclear Information System (INIS)

    Tong Hangshou.

    1991-01-01

    The types of fault rocks, the microstructural characteristics of fault tectonites and their relationship with uranium mineralization in uranium-productive granite areas are discussed. According to a synthetic analysis of the nature of stress, the extent of cracking and the microstructural characteristics of fault rocks, they can be classified into five groups and sixteen subgroups. The author especially emphasizes the control exerted by the cataclasite group and the fault breccia group over uranium mineralization in uranium-productive granite areas. It is argued that the macrostructure and microstructure of fault rocks deserve more thorough study, which is of important practical significance in uranium exploration

  10. Network Fault Diagnosis Using DSM

    Institute of Scientific and Technical Information of China (English)

    Jiang Hao; Yan Pu-liu; Chen Xiao; Wu Jing

    2004-01-01

    The difference similitude matrix (DSM) is effective in reducing information systems, offering a higher reduction rate and higher validity. We use the DSM method to analyze the fault data of computer networks and obtain fault diagnosis rules. By discretizing the relative values of the fault data, we obtain the information system of the fault data. The DSM method reduces the information system and yields the diagnosis rules. Simulation with an actual scenario shows that fault diagnosis based on DSM can obtain few but effective rules.

  11. On robust forecasting of autoregressive time series under censoring

    OpenAIRE

    Kharin, Y.; Badziahin, I.

    2009-01-01

    Problems of robust statistical forecasting are considered for autoregressive time series observed under distortions generated by interval censoring. Three types of robust forecasting statistics are developed; the mean-square risk is evaluated for the developed forecasting statistics. Numerical results are given.

  12. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...

  13. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new struct...... of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines....

  14. Active Fault Tolerant Control for Ultrasonic Piezoelectric Motor

    Science.gov (United States)

    Boukhnifer, Moussa

    2012-07-01

    Ultrasonic piezoelectric motor technology is an important system component in integrated mechatronics devices working under extreme operating conditions. Due to these constraints, the robustness and performance of the control interfaces should be taken into account in the motor design. In this paper, we apply a new architecture for fault tolerant control using Youla parameterization to an ultrasonic piezoelectric motor. The distinguishing feature of the proposed controller architecture is that it shows structurally how the controller design for performance and robustness may be done separately, which has the potential to overcome the conflict between performance and robustness in the traditional feedback framework. The fault tolerant control architecture includes two parts: one part for performance and the other for robustness. The controller design works in such a way that the feedback control system is solely controlled by the proportional plus double-integral PI2 performance controller for a nominal model without disturbances, and the H∞ robustification controller is only activated in the presence of uncertainties or external disturbances. The simulation results demonstrate the effectiveness of the proposed fault tolerant control architecture.

  15. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents, along with its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite version and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  16. Incipient fault detection and identification in process systems using accelerating neural network learning

    International Nuclear Information System (INIS)

    Parlos, A.G.; Muthusami, J.; Atiya, A.F.

    1994-01-01

The objective of this paper is to present the development and numerical testing of a robust fault detection and identification (FDI) system using artificial neural networks (ANNs), for incipient (slowly developing) faults occurring in process systems. The challenge in using ANNs in FDI systems arises because of one's desire to detect faults of varying severity, faults from noisy sensors, and multiple simultaneous faults. To address these issues, it becomes essential to have a learning algorithm that ensures quick convergence to a high level of accuracy. A recently developed accelerated learning algorithm, namely a form of an adaptive back propagation (ABP) algorithm, is used for this purpose. The ABP algorithm is used for the development of an FDI system for a process composed of a direct current motor, a centrifugal pump, and the associated piping system. Simulation studies indicate that the FDI system has high sensitivity to incipient fault severity, while exhibiting insensitivity to sensor noise. For multiple simultaneous faults, the FDI system detects the fault with the predominant signature. The major limitation of the developed FDI system is encountered when it is subjected to simultaneous faults with similar signatures. During such faults, the inherent limitation of pattern-recognition-based FDI methods becomes apparent. Thus, alternate, more sophisticated FDI methods become necessary to address such problems. Even though the effectiveness of pattern-recognition-based FDI methods using ANNs has been demonstrated, further testing using real-world data is necessary.
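The adaptive back propagation idea — accelerating convergence by adjusting the learning rate as training proceeds — can be sketched on a toy two-class "fault signature" problem. The network, data, and adaptation rule below are illustrative assumptions, not the ABP algorithm of the paper:

```python
import math
import random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Toy "process signatures": two features per sample, label 1 = incipient fault.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.8, 0.9], 1), ([0.9, 0.7], 1)]

# A 2-2-1 network; each weight row carries [w1, w2, bias].
w_h = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-0.5, 0.5) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

lr = 0.5
mse_history = []
for epoch in range(300):
    mse = 0.0
    for x, y in data:
        h, o = forward(x)
        err = y - o
        mse += err * err
        # Standard backprop deltas for sigmoid units.
        d_o = err * o * (1.0 - o)
        d_h = [d_o * w_o[j] * h[j] * (1.0 - h[j]) for j in range(2)]
        for j in range(2):
            w_o[j] += lr * d_o * h[j]
        w_o[2] += lr * d_o
        for j in range(2):
            for i in range(2):
                w_h[j][i] += lr * d_h[j] * x[i]
            w_h[j][2] += lr * d_h[j]
    mse /= len(data)
    # Adaptive step size: grow while the error falls, shrink on a rise.
    if mse_history and mse > mse_history[-1]:
        lr *= 0.7
    else:
        lr = min(lr * 1.05, 2.0)
    mse_history.append(mse)
```

The adaptation keeps the step size as large as the error surface allows, which is the sense in which such schemes accelerate plain back propagation.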

  17. Fault-Tolerant NDE Data Reduction Framework, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A distributed fault tolerant nondestructive evaluation (NDE) data reduction framework is proposed in which large NDE datasets are mapped to thousands to millions of...

  18. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    Science.gov (United States)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

Study of the extraction of fault features and of diagnostic techniques for reciprocating compressors is one of the hot research topics in the field of reciprocating machinery fault diagnosis at present. A large number of feature extraction and classification methods have been widely applied in the related research, but practical fault alarming and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarm and automatic diagnosis in practical engineering is an urgent task. The typical mechanical faults of reciprocating compressors are presented in the paper, and data from an existing online monitoring system are used to extract 15 types of fault feature parameters in total; the sensitive connections between the faults and the feature parameters are clarified using the distance evaluation technique, and sensitive characteristic parameters for the different faults are obtained. On this basis, a method based on fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. A better ability for early fault warning has been demonstrated by experiment and by practical fault cases. Automatic classification by applying the SVM to fault alarm data has achieved better diagnostic accuracy.
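The distance evaluation technique mentioned above ranks features by how far apart the class means are relative to the within-class scatter. A minimal sketch with made-up data (three candidate features, two fault classes; only feature 0 is genuinely discriminative):

```python
# Two fault classes with three candidate features each (made-up data);
# feature 0 separates the classes, feature 2 is essentially noise.
class_a = [[1.0, 5.0, 3.10], [1.2, 5.2, 2.90], [0.9, 4.9, 3.00]]
class_b = [[3.0, 5.1, 3.00], [3.2, 4.8, 3.10], [2.9, 5.0, 2.95]]

def mean(values):
    return sum(values) / len(values)

def sensitivity(feature):
    a = [sample[feature] for sample in class_a]
    b = [sample[feature] for sample in class_b]
    between = abs(mean(a) - mean(b))                       # class separation
    within = (mean([abs(v - mean(a)) for v in a]) +
              mean([abs(v - mean(b)) for v in b])) / 2.0   # class scatter
    return between / (within + 1e-12)

scores = [sensitivity(f) for f in range(3)]
ranked = sorted(range(3), key=lambda f: -scores[f])   # most sensitive first
```

The highest-scoring features would then be fed to the SVM classifier; the particular ratio used here is an illustrative stand-in for the paper's distance evaluation criterion.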

  19. Subsurface structure of the Nojima fault from dipole shear velocity/anisotropy and borehole Stoneley wave

    Energy Technology Data Exchange (ETDEWEB)

Ito, H [Geological Survey of Japan, Tsukuba (Japan)]; Yamamoto, H; Brie, A

    1996-10-01

    Fracture and permeability in the fault zone of the active fault drilling at the Nojima fault were evaluated from acoustic waveforms. There were several permeable intervals in the fault zone. There was strong Stoneley wave attenuation, very large S-Se below the fault and in the interval above the fault. In the fault zone, there were also several short intervals where S-Se was very large; 667 m-674 m and 706 m-710 m. In these intervals, the Stoneley attenuation was large, but there was no Stoneley reflection from within the interval. Reflections were observed at the upper and lower boundaries, going away from the bed up above, and down below. In this well, the shear wave was very strongly attenuated at and below the fault zone. The fast shear azimuth changed at the fault. The slowness anisotropy was fairly strong above the fault from 602 m to 612 m, but smaller below the fault. The changes in fast shear azimuth were much more pronounced near the fault, which suggested a strong influence of the fault. 6 refs., 5 figs.

  20. Observations on Faults and Associated Permeability Structures in Hydrogeologic Units at the Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    Prothro, Lance B.; Drellack, Sigmund L.; Haugstad, Dawn N.; Huckins-Gang, Heather E.; Townsend, Margaret J.

    2009-03-30

Observational data on Nevada Test Site (NTS) faults were gathered from a variety of sources, including surface and tunnel exposures, core samples, geophysical logs, and down-hole cameras. These data show that NTS fault characteristics and fault zone permeability structures are similar to those of faults studied in other regions. Faults at the NTS form complex and heterogeneous fault zones with flow properties that vary in both space and time. Flow property variability within fault zones can be broken down into four major components that allow for the development of a simplified, first-approximation model of NTS fault zones. This conceptual model can be used as a general guide during development and evaluation of groundwater flow and contaminant transport models at the NTS.

  1. EKF-based fault detection for guided missiles flight control system

    Science.gov (United States)

    Feng, Gang; Yang, Zhiyong; Liu, Yongjin

    2017-03-01

The guided missiles flight control system is essential for guidance accuracy and kill probability. It is complicated and fragile. Since actuator faults and sensor faults can seriously affect the security and reliability of the system, fault detection for the missiles flight control system is of great significance. This paper deals with the problem of fault detection for the closed-loop nonlinear model of the guided missiles flight control system in the presence of disturbance. First, the fault model of the flight control system is set up; then residual generation is designed based on the extended Kalman filter (EKF) for the Eulerian-discrete fault model. After that, the chi-square test is selected for residual evaluation, and the fault detection task for the guided missiles closed-loop system is accomplished. Finally, simulation results are provided to illustrate the effectiveness of the proposed approach in the case of an elevator fault.
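The residual-plus-chi-square scheme can be sketched for a scalar linear surrogate model, where the EKF reduces to an ordinary Kalman filter. All numbers (model, noise variances, fault size, injection time) are illustrative assumptions, not the paper's missile model:

```python
import random

random.seed(1)

# Scalar surrogate for an actuator loop: x' = a x + b u + w, z = x + v.
a, b = 0.9, 1.0
q, r = 0.01, 0.04            # process / measurement noise variances
x_true, x_est, p = 0.0, 0.0, 1.0
u = 0.1                      # constant commanded input
chi2_95 = 3.84               # chi-square threshold, 1 dof, 95%
alarms = []

for k in range(100):
    bias = 0.8 if k >= 60 else 0.0           # actuator bias fault at k = 60
    x_true = a * x_true + b * (u + bias) + random.gauss(0.0, q ** 0.5)
    z = x_true + random.gauss(0.0, r ** 0.5)

    # Prediction uses the nominal, fault-free input.
    x_pred = a * x_est + b * u
    p_pred = a * p * a + q

    # Innovation, its covariance, and the chi-square test statistic.
    nu = z - x_pred
    s = p_pred + r
    alarms.append(nu * nu / s > chi2_95)

    # Measurement update.
    k_gain = p_pred / s
    x_est = x_pred + k_gain * nu
    p = (1.0 - k_gain) * p_pred
```

The normalized innovation squared is chi-square distributed with one degree of freedom under the no-fault hypothesis, so exceedances of the 95% quantile flag the injected bias almost immediately while false alarms stay near the 5% rate.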

  2. Energy Efficient Distributed Fault Identification Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meenakshi Panda

    2014-01-01

Full Text Available A distributed fault identification algorithm is proposed here to find both hard and soft faulty sensor nodes present in wireless sensor networks. The algorithm is distributed and self-detectable, and can detect the most common Byzantine faults such as stuck at zero, stuck at one, and random data. In the proposed approach, each sensor node gathers observed data from its neighbors and computes the mean to check whether a faulty sensor node is present. If a node finds that a faulty sensor node is present, it compares its observed data with the data of its neighbors and predicts a probable fault status. The final fault status is determined by diffusing the fault information from the neighbors. The accuracy and completeness of the algorithm are verified with the help of a statistical model of the sensor data. The performance is evaluated in terms of detection accuracy, false alarm rate, detection latency, and message complexity.
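The neighbor-comparison-plus-diffusion idea can be sketched for a ring of ten nodes with one stuck-at-zero and one stuck-at-high node. The topology, thresholds, and voting rule are illustrative simplifications of the algorithm described above:

```python
import random

random.seed(2)

# Ten nodes in a ring measuring the same physical quantity; node 3 is
# stuck at zero and node 7 is stuck high (illustrative fault models).
true_value = 25.0
readings = {}
for i in range(10):
    if i == 3:
        readings[i] = 0.0                      # stuck-at-zero fault
    elif i == 7:
        readings[i] = 55.0                     # stuck-at-high fault
    else:
        readings[i] = true_value + random.gauss(0.0, 0.5)

def neighbours(i):
    return [(i - 2) % 10, (i - 1) % 10, (i + 1) % 10, (i + 2) % 10]

threshold = 4.0

# Step 1: each node compares itself with the mean of its neighbours.
tentative = {}
for i in range(10):
    nbr_mean = sum(readings[j] for j in neighbours(i)) / 4.0
    tentative[i] = abs(readings[i] - nbr_mean) > threshold

# Step 2 (diffusion): a self-suspected node is confirmed faulty only if a
# majority of its neighbours individually disagree with it as well.
final = {}
for i in range(10):
    votes = sum(1 for j in neighbours(i)
                if abs(readings[i] - readings[j]) > threshold)
    final[i] = tentative[i] and votes >= 3
```

The diffusion round is what keeps healthy nodes adjacent to a faulty one from condemning themselves: their neighbour mean is skewed by the faulty reading, but a majority of pairwise comparisons still agrees with them.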

  3. Assessment of faulting and seismic hazards at Yucca Mountain

    International Nuclear Information System (INIS)

    King, J.L.; Frazier, G.A.; Grant, T.A.

    1989-01-01

    Yucca Mountain is being evaluated for the nation's first high-level nuclear-waste repository. Local faults appear to be capable of moderate earthquakes at recurrence intervals of tens of thousands of years. The major issues identified for the preclosure phase (<100 yrs) are the location and seismic design of surface facilities for handling incoming waste. It is planned to address surface fault rupture by locating facilities where no discernible recent (<100,000 yrs) faulting has occurred and to base the ground motion design on hypothetical earthquakes, postulated on nearby faults, that represent 10,000 yrs of average cumulative displacement. The major tectonic issues identified for the postclosure phase (10,000 yrs) are volcanism (not addressed here) and potential changes to the hydrologic system resulting from a local faulting event which could trigger potential thermal, mechanical, and chemical interactions with the ground water. Extensive studies are planned for resolving these issues. 33 refs., 3 figs

  4. Planetary Gearbox Fault Detection Using Vibration Separation Techniques

    Science.gov (United States)

    Lewicki, David G.; LaBerge, Kelsen E.; Ehinger, Ryan T.; Fetty, Jason

    2011-01-01

    Studies were performed to demonstrate the capability to detect planetary gear and bearing faults in helicopter main-rotor transmissions. The work supported the Operations Support and Sustainment (OSST) program with the U.S. Army Aviation Applied Technology Directorate (AATD) and Bell Helicopter Textron. Vibration data from the OH-58C planetary system were collected on a healthy transmission as well as with various seeded-fault components. Planetary fault detection algorithms were used with the collected data to evaluate fault detection effectiveness. Planet gear tooth cracks and spalls were detectable using the vibration separation techniques. Sun gear tooth cracks were not discernibly detectable from the vibration separation process. Sun gear tooth spall defects were detectable. Ring gear tooth cracks were only clearly detectable by accelerometers located near the crack location or directly across from the crack. Enveloping provided an effective method for planet bearing inner- and outer-race spalling fault detection.

  5. Faults in Linux

    DEFF Research Database (Denmark)

    Palix, Nicolas Jean-Michel; Thomas, Gaël; Saha, Suman

    2011-01-01

    In 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number...... of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still...... a major problem? To answer these questions, we have transported the experiments of Chou et al. to Linux versions 2.6.0 to 2.6.33, released between late 2003 and early 2010. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been...

  6. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    Science.gov (United States)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  7. Active fault and other geological studies for seismic assessment: present state and problems

    International Nuclear Information System (INIS)

    Kakimi, Toshihiro

    1997-01-01

    Evaluation system of earthquakes from an active fault is, in Japan, based on the characteristic earthquake model of a wide sense that postulates essentially the same (nearly the maximum) magnitude and recurrence interval during the recent geological times. Earthquake magnitude M is estimated by empirical relations among M, surface rupture length L, and surface fault displacement D per event of the earthquake faults on land in Japan. Recurrence interval R of faulting/earthquake is calculated from D and the long-term slip rate S of a fault as R=D/S. Grouping or segmentation of complicatedly distributed faults is an important, but difficult problem in order to distinguish a seismogenic fault unit corresponding to an individual characteristic earthquake. If the time t of the latest event is obtained, the 'cautiousness' of a fault can be judged from R-t or t/R. According to this idea, several faults whose t/R exceed 0.5 have been designated as the 'precaution faults' having higher probability of earthquake occurrence than the others. A part of above evaluation has been introduced at first into the seismic-safety examination system of NPPs in 1978. According to the progress of research on active faults, the weight of interest in respect to the seismic hazard assessment shifted gradually from the historic data to the fault data. Most of recent seismic hazard maps have been prepared in consideration with active faults on land in Japan. Since the occurrence of the 1995 Hyogoken-Nanbu earthquake, social attention has been concentrated upon the seismic hazard due to active faults, because this event was generated from a well-known active fault zone that had been warned as a 'precaution fault'. In this paper, a few recent topics on other geological and geotechnical researches aiming at improving the seismic safety of NPPs in Japan were also introduced. (J.P.N.)
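The recurrence arithmetic in the abstract (R = D/S, with t/R above 0.5 flagging a 'precaution fault') is simple enough to state directly; the displacement, slip rate, and elapsed time below are purely illustrative, not data for any actual fault:

```python
# Purely illustrative numbers: 2 m of slip per event on a fault with a
# long-term slip rate of 0.5 m/kyr, 3 kyr after the latest event.
d_per_event = 2.0      # characteristic displacement D per event, m
slip_rate = 0.5        # long-term slip rate S, m per kyr
t_since_last = 3.0     # time t since the latest event, kyr

recurrence = d_per_event / slip_rate   # R = D / S
ratio = t_since_last / recurrence      # t / R

# Faults whose t/R exceeds 0.5 are designated 'precaution faults'.
precaution = ratio > 0.5
```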

  8. Active fault and other geological studies for seismic assessment: present state and problems

    Energy Technology Data Exchange (ETDEWEB)

Kakimi, Toshihiro [Nuclear Power Engineering Corp., Tokyo (Japan)]

    1997-03-01

Evaluation system of earthquakes from an active fault is, in Japan, based on the characteristic earthquake model of a wide sense that postulates essentially the same (nearly the maximum) magnitude and recurrence interval during the recent geological times. Earthquake magnitude M is estimated by empirical relations among M, surface rupture length L, and surface fault displacement D per event of the earthquake faults on land in Japan. Recurrence interval R of faulting/earthquake is calculated from D and the long-term slip rate S of a fault as R=D/S. Grouping or segmentation of complicatedly distributed faults is an important, but difficult problem in order to distinguish a seismogenic fault unit corresponding to an individual characteristic earthquake. If the time t of the latest event is obtained, the 'cautiousness' of a fault can be judged from R-t or t/R. According to this idea, several faults whose t/R exceed 0.5 have been designated as the 'precaution faults' having higher probability of earthquake occurrence than the others. A part of above evaluation has been introduced at first into the seismic-safety examination system of NPPs in 1978. According to the progress of research on active faults, the weight of interest in respect to the seismic hazard assessment shifted gradually from the historic data to the fault data. Most of recent seismic hazard maps have been prepared in consideration with active faults on land in Japan. Since the occurrence of the 1995 Hyogoken-Nanbu earthquake, social attention has been concentrated upon the seismic hazard due to active faults, because this event was generated from a well-known active fault zone that had been warned as a 'precaution fault'. In this paper, a few recent topics on other geological and geotechnical researches aiming at improving the seismic safety of NPPs in Japan were also introduced. (J.P.N.)

  9. Architecture of buried reverse fault zone in the sedimentary basin: A case study from the Hong-Che Fault Zone of the Junggar Basin

    Science.gov (United States)

    Liu, Yin; Wu, Kongyou; Wang, Xi; Liu, Bo; Guo, Jianxun; Du, Yannan

    2017-12-01

    comprehensive method in identifying the architecture of buried faults in the sedimentary basin and would be helpful in evaluating the fault sealing behavior.

  10. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2003-02-01

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene
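The dating relation (age = equivalent dose / dose rate) and the grain-size plateau test can be sketched numerically; the doses below are illustrative, chosen only so that the fine fractions plateau near the ~370 ka figure quoted above:

```python
# Illustrative equivalent doses (Gy) by grain-size fraction (um), and an
# assumed dose rate from the surrounding rock.
equivalent_dose = {25: 1.11, 45: 1.13, 75: 1.12, 150: 1.55, 250: 2.10}
dose_rate = 3.0  # Gy per kyr

# Age (in ka) for each fraction: equivalent dose / dose rate.
ages = {g: de / dose_rate * 1000 for g, de in equivalent_dose.items()}

# Grains below the critical size were fully reset at the last fault
# movement, so their ages should form a plateau; coarser grains retain
# an inherited signal and date older.
fine = [ages[g] for g in (25, 45, 75)]
plateau_age = sum(fine) / len(fine)
spread = max(fine) - min(fine)
```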

  11. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  12. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)]

    2003-02-15

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene.

  13. The reflection of evolving bearing faults in the stator current's extended park vector approach for induction machines

    Science.gov (United States)

    Corne, Bram; Vervisch, Bram; Derammelaere, Stijn; Knockaert, Jos; Desmet, Jan

    2018-07-01

Stator current analysis has the potential of becoming the most cost-effective condition monitoring technology for electric rotating machinery. Since both electrical and mechanical faults are detected by inexpensive and robust current sensors, measuring current has advantages over other techniques such as vibration, acoustic, or temperature analysis. However, this technology is struggling to break into the condition monitoring market, as the electrical interpretation of mechanical machine problems is highly complicated. Recently, the authors built a test rig which facilitates the emulation of several representative mechanical faults on an 11 kW induction machine with high accuracy and reproducibility. Operating this test rig, the stator current of the induction machine under test can be analyzed while mechanical faults are emulated. Furthermore, while emulating, the fault severity can be manipulated adaptively under controllable environmental conditions. This creates the opportunity to examine the relation between the magnitude of the well-known current fault components and the corresponding fault severity. This paper presents the emulation of evolving bearing faults and their reflection in the Extended Park Vector Approach for the 11 kW induction machine under test. The results confirm the strong relation between the bearing faults and the stator current fault components in both identification and fault severity. In conclusion, stator current analysis increases in reliability as a complete, robust, on-line condition monitoring technology.
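The Extended Park Vector Approach monitors the modulus of the Park vector i_d + j·i_q built from the three phase currents: a healthy balanced machine gives a constant modulus, while a bearing fault that modulates the current amplitude puts ripple on it. A synthetic sketch (supply frequency, fault frequency, and modulation depth are illustrative assumptions, not measurements from the 11 kW machine):

```python
import math

f_s, f_c = 50.0, 7.8       # supply and bearing characteristic frequency, Hz
dt, n = 1e-4, 20000        # 2 s of synthetic three-phase current

def park_modulus(mod_depth):
    out = []
    for k in range(n):
        t = k * dt
        # Bearing fault modelled as amplitude modulation at f_c.
        amp = 10.0 * (1.0 + mod_depth * math.cos(2.0 * math.pi * f_c * t))
        ia = amp * math.cos(2.0 * math.pi * f_s * t)
        ib = amp * math.cos(2.0 * math.pi * f_s * t - 2.0 * math.pi / 3.0)
        ic = amp * math.cos(2.0 * math.pi * f_s * t + 2.0 * math.pi / 3.0)
        # Park's vector components from the three phase currents.
        i_d = math.sqrt(2.0 / 3.0) * ia - ib / math.sqrt(6.0) - ic / math.sqrt(6.0)
        i_q = (ib - ic) / math.sqrt(2.0)
        out.append(math.hypot(i_d, i_q))
    return out

healthy = park_modulus(0.00)   # balanced machine: constant modulus
faulty = park_modulus(0.05)    # evolving bearing fault: modulated modulus

def ripple(modulus):
    return max(modulus) - min(modulus)
```

In practice the modulus would be examined in the frequency domain, where the fault shows up as sidebands at the characteristic frequency; the peak-to-peak ripple used here is just the simplest severity indicator.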

  14. Bayesian fault detection and isolation using Field Kalman Filter

    Science.gov (United States)

    Baranowski, Jerzy; Bania, Piotr; Prasad, Indrajeet; Cong, Tian

    2017-12-01

Fault detection and isolation are crucial for the efficient operation and safety of any industrial process. A variety of methods from all areas of data analysis are employed to solve this kind of task, such as Bayesian reasoning and the Kalman filter. In this paper, the authors use a discrete Field Kalman Filter (FKF) to detect and recognize faulty conditions in a system. The proposed approach, devised for stochastic linear systems, allows for analysis of faults that can be expressed as both parameter and disturbance variations. The approach is formulated for situations where the fault catalog is known, resulting in an algorithm that allows estimation of probability values. Additionally, a variant of the algorithm with greater numerical robustness is presented, based on computation of logarithmic odds. The operation of the proposed algorithm is illustrated with numerical examples, and both its merits and limitations are critically discussed and compared with the traditional EKF.
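The logarithmic-odds variant can be sketched as a Bayesian update over a known fault catalogue, normalized with log-sum-exp so that very small likelihoods do not underflow. The catalogue, residual values, and noise level below are illustrative, not the paper's FKF formulation:

```python
import math

# Fault catalogue: each hypothesis predicts a residual bias (illustrative).
catalogue = {"no fault": 0.0, "sensor bias": 1.0, "actuator drift": -0.8}
sigma = 0.3                                    # residual noise std
log_post = {h: math.log(1.0 / len(catalogue)) for h in catalogue}

residuals = [0.9, 1.1, 0.95, 1.05]             # observed innovations

for r in residuals:
    for h, bias in catalogue.items():
        # Gaussian log-likelihood up to a common constant; working in
        # logs keeps tiny likelihoods from underflowing.
        log_post[h] += -0.5 * ((r - bias) / sigma) ** 2
    # Normalise with log-sum-exp for numerical robustness.
    m = max(log_post.values())
    z = m + math.log(sum(math.exp(v - m) for v in log_post.values()))
    for h in log_post:
        log_post[h] -= z

posterior = {h: math.exp(v) for h, v in log_post.items()}
best = max(posterior, key=posterior.get)
```

After a handful of residuals near 1.0, essentially all posterior mass sits on the "sensor bias" hypothesis; the same arithmetic done in raw probabilities would already be flirting with underflow for longer residual sequences.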

  15. Real-time fault diagnosis and fault-tolerant control

    OpenAIRE

    Gao, Zhiwei; Ding, Steven X.; Cecati, Carlo

    2015-01-01

    This "Special Section on Real-Time Fault Diagnosis and Fault-Tolerant Control" of the IEEE Transactions on Industrial Electronics is motivated to provide a forum for academic and industrial communities to report recent theoretic/application results in real-time monitoring, diagnosis, and fault-tolerant design, and exchange the ideas about the emerging research direction in this field. Twenty-three papers were eventually selected through a strict peer-reviewed procedure, which represent the mo...

  16. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect the long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  17. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  18. Towards Robust Predictive Fault–Tolerant Control for a Battery Assembly System

    Directory of Open Access Journals (Sweden)

    Seybold Lothar

    2015-12-01

Full Text Available The paper deals with the modeling and fault-tolerant control of a real battery assembly system which is under implementation at the RAFI GmbH company (one of the leading electronic manufacturing service providers in Germany). To model and control the battery assembly system, a unified max-plus algebra and model predictive control framework is introduced. Subsequently, the control strategy is enhanced with fault-tolerance features that increase the overall performance of the production system being considered. In particular, it enables tolerating (up to some degree) mobile robot, processing, and transportation faults. The paper also discusses robustness issues, which are inevitable in real production systems. As a result, a novel robust predictive fault-tolerant strategy is developed and applied to the battery assembly system. The last part of the paper shows illustrative examples, which clearly exhibit the performance of the proposed approach.

  19. Fault Diagnosis for Actuators in a Class of Nonlinear Systems Based on an Adaptive Fault Detection Observer

    Directory of Open Access Journals (Sweden)

    Runxia Guo

    2016-01-01

Full Text Available The problem of actuator fault diagnosis is pursued for a class of nonlinear control systems that are affected by bounded measurement noise and external disturbances. A novel fault diagnosis algorithm is proposed by combining the ideas of adaptive control theory with the approach of a fault detection observer. The asymptotic stability of the fault detection observer is guaranteed by setting the adaptive adjusting law of the unknown fault vector, and a theoretically rigorous proof of asymptotic stability is given. Under the condition that random measurement noise generated by the sensors of the control system and external disturbances exist simultaneously, the designed fault diagnosis algorithm is able to give specific estimated values of the state variables and the failures, rather than just a simple fault warning. Moreover, the proposed algorithm is simple and concise and easy to apply in practical engineering. Numerical experiments are carried out to evaluate the performance of the fault diagnosis algorithm. Experimental results show that the proposed diagnostic strategy has a satisfactory estimation effect.
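The core mechanism — a state observer whose fault estimate is driven by the output estimation error through an adaptive adjusting law — can be sketched for a noise-free scalar plant with a constant actuator fault. This is a simplification of the bounded-noise setting treated in the paper, and all gains are illustrative:

```python
# Scalar plant: x' = -x + u + f, measurement y = x; f is an unknown
# constant actuator fault that the observer estimates on line.
dt = 0.01
x, x_hat, f_hat = 0.0, 0.0, 0.0
l_gain, gamma = 5.0, 50.0     # observer gain and adaptation gain
u, f = 1.0, 0.5               # input and true (unknown) fault

for _ in range(5000):          # 50 s of simulated time, forward Euler
    e = x - x_hat                                       # output error
    x += dt * (-x + u + f)                              # true plant
    x_hat += dt * (-x_hat + u + f_hat + l_gain * e)     # fault detection observer
    f_hat += dt * gamma * e                             # adaptive adjusting law
```

The error dynamics here are a stable second-order system (e'' + 6e' + 50e = 0 for these gains), so both the state estimate and the fault estimate converge, giving a specific estimated fault value rather than a bare alarm.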

  20. Frequency of fault occurrence at shallow depths during Plio-Pleistocene and estimation of the incident of new faults

    International Nuclear Information System (INIS)

    Shiratsuchi, H.; Yoshida, S.

    2009-01-01

    It is required that buried high-level radioactive waste not be disrupted directly by faulting in the future. Although a disposal site will be selected in an area where no active faults are present, the possibility of new fault occurrence at the site has to be evaluated. The probability of new fault occurrence is estimated from the frequency of faults in Pliocene and Pleistocene strata beneath three large plains in Japan, for which a large number of seismic profiles and borehole data are available. The frequency of faults that occurred and/or reached shallow depths during Plio-Pleistocene time was estimated by counting the number of faults in the widely distributed Plio-Pleistocene strata. The Kanto, Nobi, and Osaka Plains were selected for this purpose because highly precise geological profiles, prepared from numerous geological drillings and geophysical investigations, are available for them. (authors)

  1. ASCS online fault detection and isolation based on an improved MPCA

    Science.gov (United States)

    Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan

    2014-09-01

    Multi-way principal component analysis (MPCA) has received considerable attention and is widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low subspace efficiency and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method, based on a kernel density estimation function, that effectively reduces the amount of subspace information to be stored. The MPCA model and the knowledge base are built on the new subspace. Fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling (T2) statistic are then realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation based on the T2 statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the isolation. To improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single and multiple subspaces and thereby increase the rate of correct fault isolation. Finally, fault detection and isolation based on the improved MPCA are used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method that reduces the required storage capacity and improves the robustness of the principal component model, and it establishes the relationship between the state variables and the fault detection indicators for fault isolation.
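    The SPE and Hotelling T2 statistics mentioned above can be sketched with an ordinary (non-multiway) PCA model on synthetic data; the paper's batch unfolding and kernel-density subspace construction are omitted here:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two latent factors drive four correlated variables, plus small noise.
    z = rng.normal(size=(500, 2))
    X = np.column_stack([z[:, 0], z[:, 1],
                         z[:, 0] + z[:, 1],
                         z[:, 0] - z[:, 1]]) + 0.05 * rng.normal(size=(500, 4))

    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd                       # standardized training data

    _, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    k = 2                                    # retained principal components
    P = Vt[:k].T                             # loading matrix
    lam = S[:k] ** 2 / (len(Xs) - 1)         # variances of the retained scores

    def t2_spe(x):
        """Hotelling's T^2 and squared prediction error (SPE) for one sample."""
        xs = (x - mu) / sd
        t = P.T @ xs                         # scores in the PC subspace
        resid = xs - P @ t                   # part unexplained by the PC model
        return float(np.sum(t ** 2 / lam)), float(resid @ resid)

    # Empirical 99th-percentile SPE control limit from the training data
    spe_train = np.sum((Xs - Xs @ P @ P.T) ** 2, axis=1)
    q99 = np.percentile(spe_train, 99)

    _, spe_n = t2_spe(X[0])                              # normal sample
    _, spe_f = t2_spe(X[0] + np.array([0, 0, 0, 5.0]))   # sensor fault on variable 4
    print(spe_f > q99)  # the fault breaks the correlation structure -> large SPE
    ```

    The design point is that a fault violating the learned correlation structure shows up in the residual subspace (SPE), even when each variable individually stays in a plausible range.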

  2. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan

    2017-05-31

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  3. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan; Hanafy, Sherif; Guo, Bowen; Kosmicki, Maximillian Sunflower

    2017-01-01

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  4. Robust plasmonic substrates

    DEFF Research Database (Denmark)

    Kostiučenko, Oksana; Fiutowski, Jacek; Tamulevicius, Tomas

    2014-01-01

    Robustness is a key issue for the applications of plasmonic substrates such as tip-enhanced Raman spectroscopy, surface-enhanced spectroscopies, enhanced optical biosensing, optical and optoelectronic plasmonic nanosensors and others. A novel approach for the fabrication of robust plasmonic...... substrates is presented, which relies on the coverage of gold nanostructures with diamond-like carbon (DLC) thin films of thicknesses 25, 55 and 105 nm. DLC thin films were grown by direct hydrocarbon ion beam deposition. In order to find the optimum balance between optical and mechanical properties...

  5. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator...... has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with changing time delay.

  6. Is lithostatic loading important for the slip behavior and evolution of normal faults in the Earth's crust?

    International Nuclear Information System (INIS)

    Kattenhorn, Simon A.; Pollard, David D.

    1999-01-01

    Normal faults growing in the Earth's crust are subject to the effects of an increasing frictional resistance to slip caused by the increasing lithostatic load with depth. We use three-dimensional (3-D) boundary element method numerical models to evaluate these effects on planar normal faults with variable elliptical tip line shapes in an elastic solid. As a result of increasing friction with depth, normal fault slip maxima for a single slip event are skewed away from the fault center toward the upper fault tip. There is a correspondingly greater propagation tendency at the upper tip. However, the tall faults that would result from such a propagation tendency are generally not observed in nature. We show how mechanical interaction between laterally stepping fault segments significantly competes with the lithostatic loading effect in the evolution of a normal fault system, promoting lateral propagation and possibly segment linkage. Resultant composite faults are wider than they are tall, resembling both 3-D seismic data interpretations and previously documented characteristics of normal fault systems. However, this effect may be greatly complemented by the influence of a heterogeneous stratigraphy, which can control fault nucleation depth and inhibit fault propagation across the mechanical layering. Our models demonstrate that although lithostatic loading may be an important control on fault evolution in relatively homogeneous rocks, the contribution of lithologic influences and mechanical interaction between closely spaced, laterally stepping faults may predominate in determining the slip behavior and propagation tendency of normal faults in the Earth's crust. (c) 1999 American Geophysical Union

  7. Wilshire fault: Earthquakes in Hollywood?

    Science.gov (United States)

    Hummon, Cheryl; Schneider, Craig L.; Yeats, Robert S.; Dolan, James F.; Sieh, Kerry E.; Huftile, Gary J.

    1994-04-01

    The Wilshire fault is a potentially seismogenic, blind thrust fault inferred to underlie and cause the Wilshire arch, a Quaternary fold in the Hollywood area, just west of downtown Los Angeles, California. Two inverse models, based on the Wilshire arch, allow us to estimate the location and slip rate of the Wilshire fault, which may be illuminated by a zone of microearthquakes. A fault-bend fold model indicates a reverse-slip rate of 1.5-1.9 mm/yr, whereas a three-dimensional elastic-dislocation model indicates a right-reverse slip rate of 2.6-3.2 mm/yr. The Wilshire fault is a previously unrecognized seismic hazard directly beneath Hollywood and Beverly Hills, distinct from the faults under the nearby Santa Monica Mountains.

  8. What is Fault Tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Frei, C. W.; Kraus, K.

    2000-01-01

    Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to the plant, to personnel or the environment. Fault-tolerant control is the synonym for a set of recent techniques that were developed to increase plant...... availability and reduce the risk of safety hazards. Its aim is to prevent simple faults from developing into serious failures. Fault-tolerant control merges several disciplines to achieve this goal, including on-line fault diagnosis, automatic condition assessment and calculation of remedial actions when a fault...... is detected. The envelope of the possible remedial actions is wide. This paper introduces tools to analyze and explore structure and other fundamental properties of an automated system such that any redundancy in the process can be fully utilized to enhance safety and availability....

  9. Preservation of amorphous ultrafine material: A proposed proxy for slip during recent earthquakes on active faults.

    Science.gov (United States)

    Hirono, Tetsuro; Asayama, Satoru; Kaneki, Shunya; Ito, Akihiro

    2016-11-09

    The criteria for designating an "Active Fault" are important not only for understanding regional tectonics but also for assessing the earthquake risk of faults near critical structures such as nuclear power plants. Here we propose a proxy, based on the preservation of amorphous ultrafine particles, to assess fault activity within the last millennium. X-ray diffraction data and electron microscope observations of samples from an active fault demonstrated the preservation of large amounts of amorphous ultrafine particles in two slip zones that last ruptured in 1596 and 1999, respectively. A chemical kinetic evaluation of the dissolution process indicated that such particles could survive for centuries, which is consistent with the observations. Thus, the preservation of amorphous ultrafine particles in a fault may be valuable for assessing the fault's latest activity, aiding efforts to evaluate faults that may damage critical facilities in tectonically active zones.

  10. A Systematic Methodology for Gearbox Health Assessment and Fault Classification

    Directory of Open Access Journals (Sweden)

    Jay Lee

    2011-01-01

    Full Text Available A systematic methodology for gearbox health assessment and fault classification is developed and evaluated on 560 data sets of gearbox vibration data provided by the Prognostics and Health Management Society for the 2009 data challenge competition. A comprehensive set of signal processing and feature extraction methods is used to extract over 200 features, including features from the raw time signal, the time-synchronous signal, the wavelet decomposition signal, the frequency-domain spectrum, and the envelope spectrum, among others. A regime segmentation approach using the tachometer signal, a spectrum similarity metric, and gear mesh frequency peak information segments the data by gear type, input shaft speed, and braking torque load. A health assessment method that finds the minimum feature vector sum in each regime is used to classify and find the 80 baseline healthy data sets. A fault diagnosis method, based on a distance calculation from normal along with specific features correlated to different fault signatures, is used to diagnose specific faults. The method is evaluated for the diagnosis of gear tooth breakage, input shaft imbalance, a bent shaft, a bearing inner-race defect, and a bad key, and it could be extended to other faults as long as a set of features can be correlated with a known fault signature. Future work will further refine the distance calculation algorithm for fault diagnosis and evaluate other signal processing methods, such as empirical mode decomposition, to see whether an improved feature set can raise the fault diagnosis accuracy.
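    The "distance calculation from normal" can be illustrated with a minimal sketch: healthy baseline records define a mean and spread for each feature, and a new record is scored by its normalized distance. The feature values below are synthetic stand-ins, not the challenge data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Healthy baseline: 80 records of 3 condition-indicator features
    # (synthetic stand-ins for e.g. RMS, kurtosis, gear-mesh band energy).
    baseline = rng.normal(loc=[1.0, 0.5, 2.0], scale=[0.1, 0.05, 0.2], size=(80, 3))
    mu, sd = baseline.mean(axis=0), baseline.std(axis=0)

    def health_distance(features):
        """Root-mean-square z-score distance from the healthy baseline."""
        z = (np.asarray(features) - mu) / sd
        return float(np.sqrt(np.mean(z ** 2)))

    healthy_record = [1.02, 0.49, 2.1]
    faulty_record = [1.6, 0.9, 3.5]    # all indicators elevated

    print(health_distance(healthy_record) < 3 < health_distance(faulty_record))
    ```

    A per-feature breakdown of the same z-scores is what lets the distance be matched against known fault signatures for isolation.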

  11. Design Robust Controller for Rotary Kiln

    Directory of Open Access Journals (Sweden)

    Omar D. Hernández-Arboleda

    2013-11-01

    Full Text Available This paper presents the design of a robust controller for a rotary kiln. The designed controller combines a fractional PID and a linear quadratic regulator (LQR), a combination not previously used to control kilns. In addition, robustness criteria (gain margin, phase margin, strength gain, high-frequency noise rejection, and sensitivity) are evaluated for the complete controller-plant model, obtaining good results over a frequency range of 0.020 to 90 rad/s, which contributes to the robustness of the system.

  12. Robustness-related issues in speaker recognition

    CERN Document Server

    Zheng, Thomas Fang

    2017-01-01

    This book presents an overview of speaker recognition technologies with an emphasis on dealing with robustness issues. First, the book gives an overview of speaker recognition: the basic system framework, categories under different criteria, performance evaluation, and the field's development history. Second, with regard to robustness, the book presents three categories of issues: environment-related, speaker-related, and application-oriented. For each category, the book describes the current hot topics, existing technologies, and potential research focuses for the future. The book is a useful reference and self-study guide for early-career researchers working in the field of robust speaker recognition.

  13. High level organizing principles for display of systems fault information for commercial flight crews

    Science.gov (United States)

    Rogers, William H.; Schutte, Paul C.

    1993-01-01

    Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.

  14. Robust surgery loading

    NARCIS (Netherlands)

    Hans, Elias W.; Wullink, Gerhard; van Houdenhoven, Mark; Kazemier, Geert

    2008-01-01

    We consider the robust surgery loading problem for a hospital’s operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus cancelled patients. This

  15. Robustness Envelopes of Networks

    NARCIS (Netherlands)

    Trajanovski, S.; Martín-Hernández, J.; Winterbach, W.; Van Mieghem, P.

    2013-01-01

    We study the robustness of networks under node removal, considering random node failure, as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case

  16. Fault Detection Coverage Quantification of Automatic Test Functions of Digital I and C System in NPPs

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Lee, Seung Jun; Hur, Seop; Lee, Young Jun; Jang, Seung Cheol

    2011-01-01

    Recently, analog instrumentation and control (I and C) systems in nuclear power plants (NPPs) have been replaced with digital systems for safer and more efficient operation. Digital I and C systems have adopted various fault-tolerant techniques that help the system correctly and safely perform its required functions despite the presence of faults. Each fault-tolerant technique has a different inspection period, from real-time monitoring to monthly testing, and the range covered by each technique also differs. A digital I and C system therefore adopts multiple barriers, consisting of various fault-tolerant techniques, to increase total fault detection coverage. Even though these techniques are adopted to ensure and improve the safety of a system, their effects have not yet been properly considered in most PSA models. It is therefore necessary to develop an evaluation method that can describe these features of a digital I and C system. Several issues must be considered when estimating the fault coverage of a digital I and C system, and two of them were handled in this work. The first is to quantify the fault coverage of each fault-tolerant technique implemented in the system. The second is to exclude the duplicated effect of fault-tolerant techniques implemented simultaneously at each level of the system's hierarchy, since a fault occurring in the system might be detected by more than one technique. For this work, a fault injection experiment was used to obtain the exact relations between faults and the multiple barriers of fault-tolerant techniques. The experiment was applied to a bistable processor (BP) of a reactor protection system.
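    The duplicated-effect issue can be made concrete with a toy calculation: simply summing per-technique coverages double-counts faults caught by more than one barrier. Under an independence assumption (which fault injection would have to confirm, not guarantee), the combined coverage follows from the probability that a fault escapes every barrier:

    ```python
    # Coverage of one barrier = probability it detects a random fault.
    # Under an independence assumption (an assumption, to be checked against
    # fault injection data), the combined coverage is one minus the
    # probability that a fault escapes every barrier in turn.
    def combined_coverage(coverages):
        miss = 1.0
        for c in coverages:
            miss *= 1.0 - c          # fault escapes this barrier too
        return 1.0 - miss

    # Three barriers, e.g. self-test, watchdog, periodic surveillance (illustrative)
    print(round(combined_coverage([0.9, 0.5, 0.3]), 3))  # -> 0.965, not 0.9 + 0.5 + 0.3
    ```

    Correlated (overlapping) detection makes the true combined coverage lower than this independence bound, which is why the paper measures the overlap experimentally.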

  17. Final Technical Report: PV Fault Detection Tool.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Christian Birk [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.

  18. A Fault Detection Filtering for Networked Control Systems Based on Balanced Reduced-Order

    Directory of Open Access Journals (Sweden)

    Da-Meng Dai

    2015-01-01

    Full Text Available Due to the possibility of packet dropout in networked control systems, a balanced reduced-order fault detection filter is proposed. In this paper, we first analyze the effects of packet dropout in networked control systems. Then, in order to obtain a fault detector that is robust to packet dropout, we use a balanced structure to construct a reduced-order model of the residual dynamics. Simulation results are provided to verify the proposed method.
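    The balanced-reduction idea behind such a filter can be sketched on a toy stable system: Hankel singular values, computed from the controllability and observability Gramians, rank the states by how strongly they couple input to output, and states with negligible values are truncated. The system below is illustrative, not the paper's networked residual dynamics:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Toy stable system with one fast, weakly coupled mode (all values assumed).
    A = np.diag([-1.0, -2.0, -50.0])
    B = np.array([[1.0], [1.0], [0.1]])
    C = np.array([[1.0, 1.0, 0.1]])

    # Gramians from the continuous Lyapunov equations
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # A Wc + Wc A^T = -B B^T
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # A^T Wo + Wo A = -C^T C

    # Hankel singular values rank states by joint controllability/observability.
    hsv = np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]
    print(hsv[-1] / hsv[0] < 1e-3)  # weakest state is negligible -> truncate it
    ```

    In a full balanced truncation one would also compute the balancing transformation and drop the weak states from (A, B, C); the Hankel values alone already show how much of the input-output behavior the reduced filter keeps.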

  19. Fault current limiter

    Science.gov (United States)

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high-permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high-permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in such a manner that the senses of the magnetic field produced by each AC coil in the corresponding high-permeability core are opposing. Insulation barriers between phases improve the dielectric withstand properties of the dielectric medium.

  20. Inferring Fault Frictional and Reservoir Hydraulic Properties From Injection-Induced Seismicity

    Science.gov (United States)

    Jagalur-Mohan, Jayanth; Jha, Birendra; Wang, Zheng; Juanes, Ruben; Marzouk, Youssef

    2018-02-01

    Characterizing the rheological properties of faults and the evolution of fault friction during seismic slip are fundamental problems in geology and seismology. Recent increases in the frequency of induced earthquakes have intensified the need for robust methods to estimate fault properties. Here we present a novel approach for estimation of aquifer and fault properties, which combines coupled multiphysics simulation of injection-induced seismicity with adaptive surrogate-based Bayesian inversion. In a synthetic 2-D model, we use aquifer pressure, ground displacements, and fault slip measurements during fluid injection to estimate the dynamic fault friction, the critical slip distance, and the aquifer permeability. Our forward model allows us to observe nonmonotonic evolutions of shear traction and slip on the fault resulting from the interplay of several physical mechanisms, including injection-induced aquifer expansion, stress transfer along the fault, and slip-induced stress relaxation. This interplay provides the basis for a successful joint inversion of induced seismicity, yielding well-informed Bayesian posterior distributions of dynamic friction and critical slip. We uncover an inverse relationship between dynamic friction and critical slip distance, which is in agreement with the small dynamic friction and large critical slip reported during seismicity on mature faults.

  1. Robustness Analysis of a Timber Structure with Ductile Behaviour in Compression

    DEFF Research Database (Denmark)

    Čizmar, Dean; Sørensen, John Dalsgaard; Kirkegaard, Poul Henning

    2011-01-01

    This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness assessment. The complex timber structure with a large number of failure modes...... material ductility of timber is taken into account. The robustness is expressed and evaluated by a robustness index....

  2. From coseismic offsets to fault-block mountains

    Science.gov (United States)

    Thompson, George A.; Parsons, Tom

    2017-09-01

    In the Basin and Range extensional province of the western United States, coseismic offsets, under the influence of gravity, display predominantly subsidence of the basin side (fault hanging wall), with comparatively little or no uplift of the mountainside (fault footwall). A few decades later, geodetic measurements [GPS and interferometric synthetic aperture radar (InSAR)] show broad (˜100 km) aseismic uplift symmetrically spanning the fault zone. Finally, after millions of years and hundreds of fault offsets, the mountain blocks display large uplift and tilting over a breadth of only about 10 km. These sparse but robust observations pose a problem in that the coseismic uplifts of the footwall are small and inadequate to raise the mountain blocks. To address this paradox we develop finite-element models subjected to extensional and gravitational forces to study time-varying deformation associated with normal faulting. Stretching the model under gravity demonstrates that asymmetric slip via collapse of the hanging wall is a natural consequence of coseismic deformation. Focused flow in the upper mantle imposed by deformation of the lower crust localizes uplift, which is predicted to take place within one to two decades after each large earthquake. Thus, the best-preserved topographic signature of earthquakes is expected to occur early in the postseismic period.

  3. Stabilization of Continuous-Time Random Switching Systems via a Fault-Tolerant Controller

    Directory of Open Access Journals (Sweden)

    Guoliang Wang

    2017-01-01

    Full Text Available This paper focuses on the stabilization problem of continuous-time random switching systems via a fault-tolerant controller, where the dwell time of each subsystem consists of a fixed part and a random part. It is known from traditional design methods that the computational complexity of the LMIs, which is related to the number of fault combinations, is very large, particularly when the system dimension or the number of subsystems is large. In order to reduce the number of fault combinations used, new sufficient LMI conditions for designing such a controller are established by a robust approach; these conditions are fault-free and can be solved directly. Moreover, fault-tolerant stabilization realized by a mode-independent controller is considered and applied to a practical case without mode information. Finally, a numerical example is used to demonstrate the effectiveness and superiority of the proposed methods.

  4. State-of-the-art assessment of testing and testability of custom LSI/VLSI circuits. Volume 8: Fault simulation

    Science.gov (United States)

    Breuer, M. A.; Carlan, A. J.

    1982-10-01

    Fault simulation is widely used by industry in such applications as scoring the fault coverage of test sequences and constructing fault dictionaries. For use in testing VLSI circuits, a simulator is evaluated by its accuracy, i.e., its modeling capability. To be accurate, a simulator must employ multi-valued logic (to represent unknown signal values, high impedance, signal transitions, etc.), model circuit delays such as transport, rise/fall, and inertial delays, and support an adequate set of fault modes. Of the three basic fault simulators now in use (parallel, deductive, and concurrent), concurrent fault simulation appears most promising.

  5. Fault Detection, Isolation, and Accommodation for LTI Systems Based on GIMC Structure

    Directory of Open Access Journals (Sweden)

    D. U. Campos-Delgado

    2008-01-01

    Full Text Available In this contribution, an active fault-tolerant scheme that achieves fault detection, isolation, and accommodation is developed for LTI systems. Faults and perturbations are considered as additive signals that modify the state or output equations. The accommodation scheme is based on the generalized internal model control (GIMC) architecture recently proposed for fault-tolerant control. In order to improve performance after a fault, the compensation proceeds in two steps according to a fault detection and isolation algorithm: once a fault scenario is detected, a general fault compensator is activated; once the fault is isolated, a specific compensator is introduced. In this setup, multiple faults can be treated simultaneously since their effects are additive. Design strategies for the nominal condition and under model uncertainty are presented in the paper. In addition, performance indices are introduced to evaluate the resulting fault-tolerant scheme for detection, isolation, and accommodation. Hard thresholds are suggested for detection and isolation purposes, while adaptive ones are considered under model uncertainty to reduce conservativeness. A complete simulation evaluation is carried out for a DC motor setup.
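    The two-step hard-threshold logic described above can be sketched as follows; the thresholds and residual values are illustrative placeholders, not the paper's design:

    ```python
    import numpy as np

    # Illustrative hard-threshold FDI step: a residual norm above the detection
    # threshold triggers the general compensator; the largest structured
    # residual beyond its isolation threshold names the fault and activates
    # the fault-specific compensator.  Thresholds here are assumed values.
    DETECT_THRESHOLD = 0.2
    ISOLATE_THRESHOLD = 0.5

    def fdi_step(residuals):
        r = np.asarray(residuals, dtype=float)
        if np.linalg.norm(r) <= DETECT_THRESHOLD:
            return "nominal", None
        worst = int(np.argmax(np.abs(r)))
        if abs(r[worst]) > ISOLATE_THRESHOLD:
            return "isolated", worst        # fault-specific compensator
        return "detected", None             # general compensator

    print(fdi_step([0.01, 0.02]))   # ('nominal', None)
    print(fdi_step([0.05, 0.9]))    # ('isolated', 1)
    ```

    Adaptive thresholds, as suggested in the paper for the uncertain case, would replace the two constants with values computed online from the estimated model uncertainty.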

  6. Fault Management Design Strategies

    Science.gov (United States)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems and to make significant gains in dependability (safety, reliability, and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM) and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for a framework to design and implement FM strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determining and implementing them. An illustrative example describes the application of the framework and the resulting benefits to system and FM design and dependability.

  7. SU-C-210-05: Evaluation of Robustness: Dosimetric Effects of Anatomical Changes During Fractionated Radiation Treatment of Pancreatic Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Horst, A van der; Houweling, A C; Bijveld, M M C; Visser, J; Bel, A [Academic Medical Center, Amsterdam, Noord-Holland (Netherlands)

    2015-06-15

    Purpose: Pancreatic tumors show large interfractional position variations. In addition, changes in gastrointestinal air volume and body contour take place during treatment. We aim to investigate the robustness of the clinical treatment plans by quantifying the dosimetric effects of these anatomical changes. Methods: Calculations were performed for (up to now) 3 pancreatic cancer patients who had intratumoral fiducials for daily CBCT-based positioning during their 3-week treatment. For each patient, deformable image registration of the planning CT was used to assign Hounsfield Units to each of the 13-15 CBCTs; air volumes and body contour were copied from CBCT. The clinical treatment plan was used (CTV-PTV margin = 10 mm; 36 Gy; 10 MV; 1-arc VMAT). Fraction dose distributions were calculated and accumulated. The V95% of the clinical target volume (CTV) and planning target volume (PTV) were analyzed, as well as the dose to stomach, duodenum, and liver. Dose accumulation was done for patient positioning based on the fiducials (as used clinically) as well as for positioning based on bony anatomy. Results: For all three patients, the V95% of the CTV remained 100%, for both fiducial- and bony anatomy-based positioning. For fiducial-based positioning, dose to duodenum and stomach showed no discernible differences from the planned dose. For bony anatomy-based positioning, the PTV V95% of the patient with the largest systematic difference in tumor position (patient 1) decreased to 85%; the liver Dmax increased from 33.5 Gy (planned) to 35.5 Gy. Conclusion: When using intratumoral fiducials, CTV dose coverage was only mildly affected by the daily anatomical changes. When using bony anatomy for patient positioning, we found a decline in PTV dose coverage due to the interfractional tumor position variations.
Photon irradiation treatment plans for pancreatic tumors are robust to variations in body contour and gastrointestinal gas, but the use of fiducial-based daily position verification

  8. Second-order sliding mode control for DFIG-based wind turbines fault ride-through capability enhancement.

    Science.gov (United States)

    Benbouzid, Mohamed; Beltran, Brice; Amirat, Yassine; Yao, Gang; Han, Jingang; Mangel, Hervé

    2014-05-01

    This paper deals with the fault ride-through capability assessment of a doubly fed induction generator-based wind turbine using high-order sliding mode control. Indeed, it has been recently suggested that sliding mode control is a solution of choice to the fault ride-through problem. In this context, this paper proposes a second-order sliding mode as an improved solution that handles the classical sliding mode chattering problem. The main and attractive features of high-order sliding modes are robustness against external disturbances, grid faults in particular, and chattering-free behavior (no extra mechanical stress on the wind turbine drive train). Simulations using the NREL FAST code on a 1.5-MW wind turbine are carried out to evaluate ride-through performance of the proposed high-order sliding mode control strategy in case of grid frequency variations and unbalanced voltage sags. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
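    The super-twisting algorithm is the best-known second-order sliding mode and illustrates the chattering-free behavior described above. A minimal numerical sketch follows; the first-order plant, gains, and disturbance are illustrative assumptions, not the paper's DFIG model:

    ```python
    import numpy as np

    # Super-twisting (second-order sliding mode) controller on a perturbed
    # sliding variable s' = u + d(t). Gains k1, k2 are illustrative; k2 must
    # exceed the bound on |d'(t)| for finite-time convergence.
    def super_twisting(s0=1.0, k1=1.5, k2=1.1, dt=1e-3, steps=20000):
        s, v = s0, 0.0
        for i in range(steps):
            d = 0.2 * np.sin(i * dt)                 # bounded disturbance
            u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
            v += -k2 * np.sign(s) * dt               # continuous integral term
            s += (u + d) * dt
        return s

    print(abs(super_twisting()))  # |s| is driven close to zero
    ```

    The discontinuity acts only on the derivative of the control (through the integral term), which is why the control signal itself stays continuous and mechanical stress on the drive train is reduced.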

  9. Robust System Identification and Control Design

    National Research Council Canada - National Science Library

    Zhou, Kemin

    2001-01-01

    ..., some advanced nonlinear control techniques including bifurcation stabilization and compressor stabilization techniques, model reduction techniques, fault detection and fault tolerant control methods...

  10. A robust classic.

    Science.gov (United States)

    Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus

    2011-01-01

    In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs that can be explained as a result of unbiased but noisy learning.

  11. Robust Airline Schedules

    OpenAIRE

    Eggenberg, Niklaus; Salani, Matteo; Bierlaire, Michel

    2010-01-01

    Due to economic pressure, industries tend, when planning, to focus on optimizing the expected profit or the yield. The consequence of highly optimized solutions is an increased sensitivity to uncertainty. This generates additional "operational" costs, incurred by possible modifications of the original plan to be performed when reality does not reflect what was expected in the planning phase. The modern research trend focuses on "robustness" of solutions instead of yield or profit. Although ro...

  12. Summary and conclusions of the faults-in-clay project

    International Nuclear Information System (INIS)

    Hallam, J.R.; Brightman, M.A.; Jackson, P.D.; Sen, M.A.

    1992-01-01

    This report summarises a research project carried out by the British Geological Survey, in cooperation with ISMES of Italy, into the geophysical detection of faults in clay formations and the determination of the hydrogeological effects of such faults on the groundwater flow regime. Following evaluation of potential research sites, an extensive programme of investigations was conducted at Down Ampney, Gloucester, where the Oxford Clay formation is underlain by the aquifers of the Great Oolite Limestone group. A previously unknown fault of 50 m throw was identified and delineated by electrical resistivity profiling; the subsequent development of a technique utilising measurements of total resistance improved the resolution of the fault 'location' to an accuracy of better than one metre. Marked anisotropy of the clay resistivities complicates conventional geophysical interpretation, but gives rise to a characteristic anomaly across the steeply inclined strata in the fault zone. After exploratory core drilling, an array of 13 boreholes was designed and completed for cross-hole seismic tomography and hydrogeological measurement and testing. The groundwater heads in the clays were found to be in disequilibrium with those in the aquifers, as a result of water supply abstraction. The indication is that the hydraulic conductivity of the fault zone is higher than that of the surrounding clay by between one and two orders of magnitude. Methodologies for the general investigation of faults in clay are discussed. (Author)

  13. Advanced features of the fault tree solver FTREX

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jae Joo

    2005-01-01

    This paper presents advanced features of a fault tree solver FTREX (Fault Tree Reliability Evaluation eXpert). Fault tree analysis is one of the most commonly used methods for the safety analysis of industrial systems especially for the probabilistic safety analysis (PSA) of nuclear power plants. Fault trees are solved by the classical Boolean algebra, conventional Binary Decision Diagram (BDD) algorithm, coherent BDD algorithm, and Bayesian networks. FTREX could optionally solve fault trees by the conventional BDD algorithm or the coherent BDD algorithm and could convert the fault trees into the form of the Bayesian networks. The algorithm based on the classical Boolean algebra solves a fault tree and generates MCSs. The conventional BDD algorithm generates a BDD structure of the top event and calculates the exact top event probability. The BDD structure is a factorized form of the prime implicants. The MCSs of the top event could be extracted by reducing the prime implicants in the BDD structure. The coherent BDD algorithm is developed to overcome the shortcomings of the conventional BDD algorithm such as the huge memory requirements and a long run time
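    For a small tree, the exact top-event probability (what a BDD computes without state enumeration) and the rare-event approximation from minimal cut sets can be compared directly. A sketch with assumed event probabilities, unrelated to FTREX itself:

    ```python
    from itertools import product

    # Illustrative fault tree: TOP = (A AND B) OR (A AND C).
    # Event probabilities are assumptions for the sketch.
    def top_event(a, b, c):
        return (a and b) or (a and c)

    p = {"A": 0.01, "B": 0.02, "C": 0.05}

    # Exact top-event probability by summing over all basic-event states
    # (a BDD computes the same quantity without enumerating; enumeration
    # is fine for a tree this small).
    exact = 0.0
    for a, b, c in product([0, 1], repeat=3):
        if top_event(a, b, c):
            w = (p["A"] if a else 1 - p["A"]) * \
                (p["B"] if b else 1 - p["B"]) * \
                (p["C"] if c else 1 - p["C"])
            exact += w

    # Rare-event approximation from the minimal cut sets {A,B} and {A,C}
    mcs_approx = p["A"] * p["B"] + p["A"] * p["C"]
    print(exact, mcs_approx)
    ```

    The gap between the two numbers is exactly the double-counted term P(A)P(B)P(C) that the rare-event approximation ignores; for low event probabilities it is negligible, which is why MCS-based quantification is standard in PSA.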

  14. Accelerometer having integral fault null

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-08-01

    An improved accelerometer is introduced. It comprises a transducer responsive to vibration in machinery, which produces an electrical signal related to the magnitude and frequency of the vibration; and a decoding circuit, responsive to the transducer signal, which produces a first fault signal and a second fault signal in which ground shift effects are nullified.

  15. The Crane Robust Control

    Directory of Open Access Journals (Sweden)

    Marek Hicar

    2004-01-01

    Full Text Available The article is about a control design for the complete structure of the crane: crab, bridge and crane uplift. The most important unknown parameters for the simulations are the burden weight and the length of the hanging rope. We use robust control for the crab and bridge to ensure adaptivity to burden weight and rope length. Robust control is designed for current control of the crab and bridge; it is necessary to know the range of the unknown parameters. The whole robust range is split into subintervals, and after correct identification of the unknown parameters the most suitable robust controllers are chosen. The most important condition for crab and bridge motion is avoiding burden swinging in the final position. The crab and bridge drive is designed with an asynchronous motor fed from a frequency converter. We use the crane uplift with a burden weight observer in combination with the uplift, crab and bridge drives, with cooperation of their parameters: burden weight, rope length, and crab and bridge position. Controllers are designed by the state control method. We preferably use a disturbance observer, which identifies the burden weight as a disturbance. The system works in both modes, at empty hook as well as at maximum load: burden uplifting and dropping down.

  16. Line-to-Line Fault Analysis and Location in a VSC-Based Low-Voltage DC Distribution Network

    Directory of Open Access Journals (Sweden)

    Shi-Min Xue

    2018-03-01

    Full Text Available A DC cable short-circuit fault is the most severe fault type that occurs in DC distribution networks, having a negative impact on transmission equipment and the stability of system operation. When a short-circuit fault occurs in a DC distribution network based on a voltage source converter (VSC, an in-depth analysis and characterization of the fault is of great significance to establish relay protection, devise fault current limiters and realize fault location. However, research on short-circuit faults in VSC-based low-voltage DC (LVDC systems, which are greatly different from high-voltage DC (HVDC systems, is currently stagnant. The existing research in this area is not conclusive, with further study required to explain findings in HVDC systems that do not fit with simulated results or lack thorough theoretical analyses. In this paper, faults are divided into transient- and steady-state faults, and detailed formulas are provided. A more thorough and practical theoretical analysis with fewer errors can be used to develop protection schemes and short-circuit fault locations based on transient- and steady-state analytic formulas. Compared to the classical methods, the fault analyses in this paper provide more accurate computed results of fault current. Thus, the fault location method can rapidly evaluate the distance between the fault and converter. The analyses of error increase and an improved handshaking method coordinating with the proposed location method are presented.

  17. A Framework for Diagnosis of Critical Faults in Unmanned Aerial Vehicles

    DEFF Research Database (Denmark)

    Hansen, Søren; Blanke, Mogens; Adrian, Jens

    2014-01-01

    Unmanned Aerial Vehicles (UAVs) need a large degree of tolerance towards faults. If not diagnosed and handled in time, many types of faults can have catastrophic consequences if they occur during flight. Prognosis of faults is also valuable, and so is the ability to distinguish the severity of the different faults in terms of both consequences and the frequency with which they appear. In this paper flight data from a fleet of UAVs is analysed with respect to certain faults and their frequency of appearance. Data is taken from a group of UAVs of the same type but with small differences in weight, and based on a large number of data logged during flights, diagnostic methods are employed to diagnose faults and the performance of these fault detectors is evaluated against flight data. The paper demonstrates a significant potential for reducing the risk of unplanned loss of remotely piloted vehicles.

  18. Fault isolatability conditions for linear systems

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Henrik

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...... the faults have occurred. The last step is a fault isolation (FI) of the faults occurring in a specific fault set, i.e. equivalent to the standard FI step. A simple example demonstrates how to turn the algebraic necessary and sufficient conditions into explicit algorithms for designing filter banks, which......

  19. ESR dating of the fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2005-01-01

    We carried out ESR dating of fault rocks collected near the nuclear reactor. The Upcheon fault zone is exposed close to the Ulzin nuclear reactor. The space-time pattern of fault activity on the Upcheon fault deduced from ESR dating of fault gouge can be summarised as follows: this fault zone was reactivated between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 2 Ma, 1.5 Ma and 1 Ma ago. After those movements, the Upcheon fault was reactivated between Cretaceous sandstone and the fault breccia zone about 800 ka ago. This fault zone was reactivated again between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 650 ka and after 125 ka ago. These data suggest that the long-term (200-500 k.y.) cyclic fault activity of the Upcheon fault zone continued into the Pleistocene. In the Ulzin area, ESR dates from the NW and EW trend faults range from 800 ka to 600 ka; NE and EW trend faults were reactivated between about 200 ka and 300 ka ago. On the other hand, ESR dates of the NS trend fault are about 400 ka and 50 ka. Results of this research suggest that fault activity near the Ulzin nuclear reactor continued into the Pleistocene. One ESR date near the Youngkwang nuclear reactor is 200 ka.

  20. Fault Current Characteristics of the DFIG under Asymmetrical Fault Conditions

    Directory of Open Access Journals (Sweden)

    Fan Xiao

    2015-09-01

    Full Text Available During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive and negative sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters have a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. Then, the dynamic performance of the stator flux linkage under asymmetrical fault conditions is also analyzed. Based on this, the fault characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful to meet the requirements of practical short-circuit calculation and the construction of relaying protection for power grids with penetration of DFIGs.
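    The role of the notch filter above is to strip the double-grid-frequency (2f) oscillation from a dq-frame signal so that only the DC (positive-sequence) component remains. A minimal sketch with a hand-rolled second-order IIR notch; the sampling rate, pole radius, and test signal are illustrative assumptions:

    ```python
    import numpy as np

    # Biquad notch: zeros on the unit circle at +/- f0, poles just inside
    # (radius r) to set the notch bandwidth.
    def notch(x, f0, fs, r=0.95):
        w0 = 2 * np.pi * f0 / fs
        b = [1.0, -2 * np.cos(w0), 1.0]
        a = [1.0, -2 * r * np.cos(w0), r * r]
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = (b[0] * x[n]
                    + (b[1] * x[n - 1] if n >= 1 else 0.0)
                    + (b[2] * x[n - 2] if n >= 2 else 0.0)
                    - (a[1] * y[n - 1] if n >= 1 else 0.0)
                    - (a[2] * y[n - 2] if n >= 2 else 0.0))
        return y

    fs = 1000.0
    t = np.arange(0, 1.0, 1 / fs)
    x = 1.0 + 0.5 * np.sin(2 * np.pi * 100.0 * t)   # DC term + 2f oscillation
    y = notch(x, 100.0, fs)

    # Single-bin amplitude estimate (1 Hz resolution, so bin index = f in Hz)
    amp = lambda s, f: 2 * abs(np.fft.rfft(s - s.mean())[int(f)]) / len(s)
    print(amp(x, 100), amp(y, 100))   # 2f component before vs. after
    ```

    In a real RSC control loop the notch sits inside the current controller, so its phase lag near the notch frequency feeds back into the achievable PI gains, which is the interaction the paper analyzes.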

  1. Arc fault detection system

    Science.gov (United States)

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard. 1 fig.

  2. Arc fault detection system

    Science.gov (United States)

    Jha, Kamal N.

    1999-01-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.

  3. Probabilistic assessment of faults

    International Nuclear Information System (INIS)

    Foden, R.W.

    1987-01-01

    Probabilistic safety analysis (PSA) is the process by which the probability (or frequency of occurrence) of reactor fault conditions which could lead to unacceptable consequences is assessed. The basic objective of a PSA is to allow a judgement to be made as to whether or not the principal probabilistic requirement is satisfied. It also gives insights into the reliability of the plant which can be used to identify possible improvements. This is explained in the article. The scope of a PSA and the PSA performed by the National Nuclear Corporation (NNC) for the Heysham II and Torness AGRs and Sizewell-B PWR are discussed. The NNC methods for hazards, common cause failure and operator error are mentioned. (UK)

  4. Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro

    2015-01-01

    Highlights: • Fuzzy probability based fault tree analysis is developed to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. These new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments might come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis to overcome this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are compared to the results of a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is feasible for propagating and quantifying epistemic uncertainties in fault tree analysis.
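    The fuzzy multiplication and complement rules from the highlights can be sketched with triangular fuzzy numbers (l, m, u). The event values below are assumptions for illustration, not taken from the RPS case study:

    ```python
    # Triangular fuzzy probability (l, m, u): the multiplication rule
    # quantifies an AND gate (minimal cut set); the complement rule,
    # combined with multiplication, gives the OR of cut sets via
    # 1 - prod(1 - q_i).
    def f_mul(x, y):
        return tuple(a * b for a, b in zip(x, y))

    def f_comp(x):                     # 1 - (l, m, u), with bounds reversed
        l, m, u = x
        return (1 - u, 1 - m, 1 - l)

    e1 = (0.008, 0.010, 0.012)         # basic event 1 (assumed)
    e2 = (0.015, 0.020, 0.025)         # basic event 2 (assumed)
    e3 = (0.001, 0.002, 0.003)         # basic event 3, a one-event cut set

    q1 = f_mul(e1, e2)                 # cut set {e1, e2}
    top = f_comp(f_mul(f_comp(q1), f_comp(e3)))   # OR of the two cut sets
    print(q1, top)
    ```

    The spread of the top-event triple (u minus l) is the propagated epistemic uncertainty; a crisp analysis would report only the middle value.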

  5. Indirect adaptive fuzzy fault-tolerant tracking control for MIMO nonlinear systems with actuator and sensor failures.

    Science.gov (United States)

    Bounemeur, Abdelhamid; Chemachema, Mohamed; Essounbouli, Najib

    2018-05-10

    In this paper, an active fuzzy fault tolerant tracking control (AFFTTC) scheme is developed for a class of multi-input multi-output (MIMO) unknown nonlinear systems in the presence of unknown actuator faults, sensor failures and external disturbance. The developed control scheme deals with four kinds of faults for both sensors and actuators. The bias, drift, and loss of accuracy additive faults are considered along with the loss of effectiveness multiplicative fault. A fuzzy adaptive controller based on back-stepping design is developed to deal with actuator failures and unknown system dynamics. However, an additional robust control term is added to deal with sensor faults, approximation errors, and external disturbances. Lyapunov theory is used to prove the stability of the closed loop system. Numerical simulations on a quadrotor are presented to show the effectiveness of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Analysis of Fault Permeability Using Mapping and Flow Modeling, Hickory Sandstone Aquifer, Central Texas

    Energy Technology Data Exchange (ETDEWEB)

    Nieto Camargo, Jorge E., E-mail: jorge.nietocamargo@aramco.com; Jensen, Jerry L., E-mail: jjensen@ucalgary.ca [University of Calgary, Department of Chemical and Petroleum Engineering (Canada)

    2012-09-15

    Reservoir compartments, typical targets for infill well locations, are commonly created by faults that may reduce permeability. A narrow fault may consist of a complex assemblage of deformation elements that result in spatially variable and anisotropic permeabilities. We report on the permeability structure of a km-scale fault sampled through drilling a faulted siliciclastic aquifer in central Texas. Probe and whole-core permeabilities, serial CAT scans, and textural and structural data from the selected core samples are used to understand permeability structure of fault zones and develop predictive models of fault zone permeability. Using numerical flow simulation, it is possible to predict permeability anisotropy associated with faults and evaluate the effect of individual deformation elements in the overall permeability tensor. We found relationships between the permeability of the host rock and those of the highly deformed (HD) fault-elements according to the fault throw. The lateral continuity and predictable permeability of the HD fault elements enhance capability for estimating the effects of subseismic faulting on fluid flow in low-shale reservoirs.

  7. The Robust Control Mixer Module Method for Control Reconfiguration

    DEFF Research Database (Denmark)

    Yang, Z.; Blanke, M.

    1999-01-01

    The control mixer concept is efficient in improving an ordinary control system into a fault tolerant one, especially for those control systems for which the real-time and on-line redesign of the control laws is very difficult. In order to consider the stability, performance and robustness of the reconfigured system simultaneously, and to deal with a more general controller reconfiguration than the static feedback mechanism by using the control mixer approach, the robust control mixer module method is proposed in this paper. The form of the control mixer module extends from a static gain matrix into a LTI dynamical system, and furthermore multiple dynamical control mixer modules can be employed in our consideration. The H_{\infty} control theory is used for the analysis and design of the robust control mixer modules. Finally, one practical robot arm system is used as a benchmark to test the proposed method.

  8. Fault tolerant architecture for artificial olfactory system

    International Nuclear Information System (INIS)

    Lotfivand, Nasser; Hamidon, Mohd Nizar; Abdolzadeh, Vida

    2015-01-01

    In this paper, a novel architecture is proposed to cover and mask faults that occur in the sensing unit of an artificial olfactory system. The proposed architecture is able to tolerate failures in the sensors of the array, and the faults that occur are masked. By extracting correct results from the output of the sensors, the proposed architecture can maintain the quality of service of the data generated by the sensor array. The results of various evaluations and analyses show that the proposed architecture performs acceptably in comparison with the classic form of the sensor array in gas identification. According to the results, achieving high odor discrimination based on the suggested architecture is possible. (paper)

  9. Frequency Based Fault Detection in Wind Turbines

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2014-01-01

    In order to obtain lower cost of energy for wind turbines, fault detection and accommodation is important. Expensive condition monitoring systems are often used to monitor the condition of rotating and vibrating system parts. One example is the gearbox in a wind turbine. This system is operated in parallel to the control system, using different computers and additional, often expensive, sensors. In this paper a simple filter based algorithm is proposed to detect changes in a resonance frequency in a system, exemplified with faults resulting in changes in the resonance frequency in the wind turbine gearbox. Only the generator speed measurement, which is available in even simple wind turbine control systems, is used as input. Consequently this proposed scheme does not need additional sensors and computers for monitoring the condition of the wind gearbox. The scheme is evaluated on a wide-spread wind
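    The core idea above, watching a resonance frequency in the generator-speed signal drift, can be sketched with a plain FFT peak estimate. The 1 Hz nominal and 1.25 Hz shifted resonance values, the sampling rate, and the threshold are illustrative assumptions, not the paper's gearbox parameters:

    ```python
    import numpy as np

    # Estimate the dominant frequency of a signal within a band of interest.
    def dominant_freq(x, fs, band=(0.5, 2.0)):
        f = np.fft.rfftfreq(len(x), 1 / fs)
        mag = np.abs(np.fft.rfft(x - x.mean()))
        sel = (f >= band[0]) & (f <= band[1])
        return f[sel][np.argmax(mag[sel])]

    fs = 50.0
    t = np.arange(0, 40, 1 / fs)                    # 40 s -> 0.025 Hz resolution
    healthy = np.sin(2 * np.pi * 1.00 * t)          # nominal resonance
    faulty = np.sin(2 * np.pi * 1.25 * t)           # shifted resonance (fault)

    f_h = dominant_freq(healthy, fs)
    f_f = dominant_freq(faulty, fs)
    fault_flag = abs(f_f - 1.0) > 0.1               # flag a frequency shift
    print(f_h, f_f, fault_flag)
    ```

    The paper's filter-based scheme tracks this shift online rather than in batch, but the decision variable is the same: distance of the estimated resonance from its nominal value.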

  10. Assuring SS7 dependability: A robustness characterization of signaling network elements

    Science.gov (United States)

    Karmarkar, Vikram V.

    1994-04-01

    Current and evolving telecommunication services will rely on signaling network performance and reliability properties to build competitive call and connection control mechanisms under increasing demands on flexibility without compromising on quality. The dimensions of signaling dependability most often evaluated are the Rate of Call Loss and End-to-End Route Unavailability. A third dimension of dependability that captures the concern about large or catastrophic failures can be termed Network Robustness. This paper is concerned with the dependability aspects of the evolving Signaling System No. 7 (SS7) networks and attempts to strike a balance between the probabilistic and deterministic measures that must be evaluated to accomplish a risk-trend assessment to drive architecture decisions. Starting with high-level network dependability objectives and field experience with SS7 in the U.S., potential areas of growing stringency in network element (NE) dependability are identified to improve against current measures of SS7 network quality, as per-call signaling interactions increase. A sensitivity analysis is presented to highlight the impact due to imperfect coverage of duplex network component or element failures (i.e., correlated failures), to assist in the setting of requirements on NE robustness. A benefit analysis, covering several dimensions of dependability, is used to generate the domain of solutions available to the network architect in terms of network and network element fault tolerance that may be specified to meet the desired signaling quality goals.

  11. Absolute age determination of quaternary faults

    International Nuclear Information System (INIS)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik

    2000-03-01

    To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to fault gouges, fracture infillings and sediments from the Malbang, Ipsil and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area, and the southeastern coastal area. Rb-Sr and K-Ar data imply that fault movement in the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault is around 70 Ma in the Yeongdeog area. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence. Carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results

  12. Absolute age determination of quaternary faults

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik [Korea Basic Science Institute, Seoul (Korea, Republic of)] (and others)

    2000-03-15

    To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to fault gouges, fracture infillings and sediments from the Malbang, Ipsil and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area, and the southeastern coastal area. Rb-Sr and K-Ar data imply that fault movement in the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault is around 70 Ma in the Yeongdeog area. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence. Carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results.

  13. Factors for simultaneous rupture assessment of active fault. Part 1. Fault geometry and slip-distribution based on tectonic geomorphological and paleoseismological investigations

    International Nuclear Information System (INIS)

    Sasaki, Toshinori; Ueta, Keiichi

    2012-01-01

    It is important to evaluate the magnitude of an earthquake caused by multiple active faults, taking simultaneous rupture effects into account. The simultaneity of adjacent active faults is often judged on the basis of geometric distance, except in cases where the paleoseismic records of the faults are well known. We have been studying the step area between the Nukumi fault and the Neodani fault, which ruptured together in the 1891 Nobi earthquake, since 2009. The purpose of this study is to establish improved techniques for evaluating the simultaneity of adjacent active faults, in addition to techniques based on paleoseismic records and geometric distance. The present work is intended to clarify the distribution of tectonic geomorphology along the Nukumi fault and the Neodani fault by high-resolution interpretation of airborne LiDAR DEMs and aerial photographs, together with field surveys of outcrops and location surveys. As a result of the topographic survey, we found continuous tectonic topography, namely left-lateral displacement of ridge and valley lines and reverse scarplets, along these faults in a densely vegetated area. We have found several new outcrops in this area where surface ruptures of the 1891 Nobi earthquake had not previously been known. At several outcrops, a humic layer dated from the 14th to the 19th century by 14C dating was deformed by the active fault. We conclude that the surface rupture of the Nukumi fault in the 1891 Nobi earthquake is continuous to 12 km southeast of Nukumi village. In other words, these findings indicate that there is a 10-12 km parallel overlap zone between the surface rupture of the southeastern end of the Nukumi fault and the northwestern end of the Neodani fault. (author)

  14. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  15. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi

    2018-02-12

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults on the direct current (DC) side of photovoltaic (PV) systems using a statistical approach. Specifically, a simulation model that mimics the theoretical performance of the inspected PV system is designed. Residuals, the differences between measured and estimated output data, serve as fault indicators and are used as the input to the Multivariate CUmulative SUM (MCUSUM) algorithm to detect potential faults. We evaluated the proposed method using data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
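The residual-driven detection scheme described above can be sketched with a standard multivariate CUSUM (Crosier's recursive form). The allowance k, threshold h, and identity residual covariance below are illustrative assumptions, not the paper's actual tuning:

```python
import numpy as np

def mcusum(residuals, sigma_inv, k, h):
    """Crosier's multivariate CUSUM on model residuals.

    residuals : (T, d) array of measured-minus-estimated outputs
    sigma_inv : inverse residual covariance (identity if unknown)
    k, h      : allowance and alarm threshold
    Returns the first sample index at which the statistic exceeds h, or -1.
    """
    s = np.zeros(residuals.shape[1])
    for t, r in enumerate(residuals):
        # Norm of the tentative cumulative sum under the Sigma^-1 metric
        c = np.sqrt((s + r) @ sigma_inv @ (s + r))
        # Shrink toward zero by the allowance k (reset if fully absorbed)
        s = np.zeros_like(s) if c <= k else (s + r) * (1.0 - k / c)
        y = np.sqrt(s @ sigma_inv @ s)  # monitored statistic
        if y > h:
            return t
    return -1
```

With zero-mean residuals the statistic stays near zero; a persistent shift (a fault signature) accumulates until the threshold is crossed.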

  16. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    In this paper, a nonlinear method for fault location in complex power systems is proposed, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data. Using a scaling factor, the derivative of a polynomial kernel is obtained. The contribution of each variable to the T2 statistic is then derived to determine whether a bus is the faulted component. Compared with previous Principal Component Analysis (PCA)-based methods, the new version can handle strong nonlinearity and provides precise identification of the fault location. Computer simulations demonstrate the improved performance of the proposed method in recognizing the faulted component and evaluating the propagation of the fault across the system.
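A simplified sketch of KPCA-based T2 monitoring with per-variable contributions is shown below. It substitutes a finite-difference sensitivity for the paper's analytical polynomial-kernel derivative, and the kernel degree, component count, and data are illustrative assumptions:

```python
import numpy as np

def poly_kernel(X, Y, degree=2, c=1.0):
    """Polynomial kernel (x.y + c)^degree."""
    return (X @ Y.T + c) ** degree

class KPCAFaultLocator:
    def __init__(self, X, n_comp=3, degree=2):
        self.X, self.degree = X, degree
        n = len(X)
        self.K = poly_kernel(X, X, degree)
        one = np.full((n, n), 1.0 / n)
        Kc = self.K - one @ self.K - self.K @ one + one @ self.K @ one
        vals, vecs = np.linalg.eigh(Kc)               # ascending order
        idx = np.argsort(vals)[::-1][:n_comp]         # top components
        self.lam = vals[idx] / n                      # feature-space variances
        self.alpha = vecs[:, idx] / np.sqrt(vals[idx])  # unit-norm directions

    def t2(self, x):
        """Hotelling T2 of one sample in the retained kernel subspace."""
        k = poly_kernel(x[None, :], self.X, self.degree).ravel()
        kc = k - self.K.mean(axis=0) - k.mean() + self.K.mean()  # center
        scores = self.alpha.T @ kc
        return float(scores @ (scores / self.lam))

    def contributions(self, x, eps=1e-4):
        """Finite-difference sensitivity of T2 to each measured variable."""
        base = self.t2(x)
        d = len(x)
        return np.array([(self.t2(x + eps * np.eye(d)[j]) - base) / eps
                         for j in range(d)])
```

The variable with the dominant contribution flags the faulted component; in a PMU setting each variable would correspond to a bus measurement.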

  17. Physical Fault Injection and Monitoring Methods for Programmable Devices

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00510096; Ferencei, Jozef

    A method of detecting faults for evaluating the fault cross section of any field programmable gate array (FPGA) was developed and is described in the thesis. The incidence of single-event effects in FPGAs was studied for different probe particles (protons, neutrons, gammas) using this method. The existing accelerator infrastructure of the Nuclear Physics Institute in Rez was supplemented with a more sensitive beam monitoring system to ensure that the tests are performed under well-defined beam conditions. The bit cross section of single-event effects was measured for different types of configuration memory, clock signal phases, and beam energies and intensities. The extended infrastructure also served for radiation testing of components planned for use in the new Inner Tracking System (ITS) detector of the ALICE experiment, and for selecting optimal fault mitigation techniques to secure the design of the FPGA-based ITS readout unit against faults induced by ionizing radiation.

  18. Subaru FATS (fault tracking system)

    Science.gov (United States)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  19. A compendium of computer codes in fault tree analysis

    International Nuclear Information System (INIS)

    Lydell, B.

    1981-03-01

    In the past ten years principles and methods for a unified system reliability and safety analysis have been developed. Fault tree techniques serve as a central feature of unified system analysis, and there exists a specific discipline within system reliability concerned with the theoretical aspects of fault tree evaluation. Ever since the fault tree concept was established, computer codes have been developed for qualitative and quantitative analyses. In particular the presentation of the kinetic tree theory and the PREP-KITT code package has influenced the present use of fault trees and the development of new computer codes. This report is a compilation of some of the better known fault tree codes in use in system reliability. Numerous codes are available and new codes are continuously being developed. The report is designed to address the specific characteristics of each code listed. A review of the theoretical aspects of fault tree evaluation is presented in an introductory chapter, the purpose of which is to give a framework for the validity of the different codes. (Auth.)
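As a minimal illustration of the qualitative and quantitative evaluation that such codes automate, the sketch below derives minimal cut sets by gate expansion and applies a first-order (rare-event) approximation for the top-event probability. The tree layout and event probabilities are invented examples, not taken from any code in the compendium:

```python
from itertools import product
from math import prod

def cut_sets(node, tree):
    """Minimal cut sets of a coherent fault tree.

    tree maps gate name -> ("AND" | "OR", [child names]);
    any name not in tree is a basic event.
    """
    if node not in tree:                 # basic event: itself is a cut set
        return [frozenset([node])]
    op, kids = tree[node]
    child = [cut_sets(k, tree) for k in kids]
    if op == "OR":                       # union of the children's cut sets
        sets = [cs for group in child for cs in group]
    else:                                # AND: merge every combination
        sets = [frozenset().union(*combo) for combo in product(*child)]
    sets = list(dict.fromkeys(sets))     # deduplicate
    # Qualitative result: keep only minimal sets (drop proper supersets)
    return [s for s in sets if not any(t < s for t in sets)]

def top_probability(cuts, p):
    """Quantitative result, rare-event approximation: sum of cut-set products."""
    return sum(prod(p[e] for e in c) for c in cuts)
```

For TOP = (A AND B) OR C this reproduces the familiar hand calculation P(top) ≈ pA·pB + pC, which is the kind of bookkeeping the listed codes scale up to large trees.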

  20. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing Technique Including Investigation of Robust Analysis Techniques and Estimates of Weather Induced Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    typical diurnal variations of low wind in the early morning and greatest winds in the late afternoon/early evening. Typically about ten tests were performed in each house. To answer the second question, different data analysis techniques were investigated, looking at averaging techniques, elimination of outliers, limiting leak pressures, etc., in order to minimize the influence of changing wind conditions during the test. The objective was to find a reasonable compromise between test precision and robustness, because many of the changes that make the analysis more robust limit its ability to examine wide ranges of pressures and leakage flows. A secondary goal of this study is to show that DeltaQ uncertainties are acceptable for testing low-leakage systems. Therefore, houses with low duct leakage were deliberately chosen for testing.
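One simple robust-averaging strategy of the kind examined here is a trimmed mean over the repeated test results, which discards wind-corrupted extremes before averaging. The trim fraction and the sample values below are illustrative choices, not the report's:

```python
def trimmed_mean(values, trim_frac=0.2):
    """Average repeated DeltaQ leakage estimates after dropping the
    lowest and highest trim_frac of results (e.g. wind-affected outliers)."""
    v = sorted(values)
    k = int(len(v) * trim_frac)
    core = v[k:len(v) - k] if k else v
    return sum(core) / len(core)
```

With about ten repeated tests per house, a 20% trim discards the two most extreme results on each side, trading a narrower usable pressure range for lower sensitivity to wind gusts, which is exactly the precision/robustness compromise discussed above.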