WorldWideScience

Sample records for fault management model

  1. Automated Generation of Fault Management Artifacts from a Simple System Model

    Science.gov (United States)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), by querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and restructured it into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, in order to portray system behavior efficiently and to depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
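
The FMEA-generation idea above can be sketched with a toy model traversal. This is a hedged illustration only: the component names, failure modes, and propagation rule are invented, and the real tool queries a SysML model rather than plain Python objects.

```python
# Hypothetical sketch of FMEA-row generation by traversing a system model.
# Components, failure modes, and the graph are illustrative assumptions,
# not taken from the SMAP model described in the abstract.

class Component:
    def __init__(self, name, failure_modes):
        self.name = name
        self.failure_modes = failure_modes  # mode -> local effect
        self.downstream = []                # components fed by this one

def generate_fmea(components):
    """Emit one FMEA row per (component, failure mode) pair.

    The system-level effect is approximated as the set of downstream
    components reachable from the failing one (simple propagation)."""
    rows = []
    for comp in components:
        # Traverse to find everything the failure can reach.
        reached, frontier = set(), list(comp.downstream)
        while frontier:
            nxt = frontier.pop()
            if nxt.name not in reached:
                reached.add(nxt.name)
                frontier.extend(nxt.downstream)
        for mode, local_effect in comp.failure_modes.items():
            rows.append({
                "component": comp.name,
                "failure_mode": mode,
                "local_effect": local_effect,
                "system_effect": sorted(reached) or ["none"],
            })
    return rows

battery = Component("Battery", {"cell short": "loss of stored energy"})
bus = Component("PowerBus", {"open circuit": "loss of distribution"})
radio = Component("Radio", {"TX failure": "loss of downlink"})
battery.downstream = [bus]
bus.downstream = [radio]

fmea = generate_fmea([battery, bus, radio])
for row in fmea:
    print(row["component"], "|", row["failure_mode"], "->", row["system_effect"])
```

The same traversal could emit Fault Tree leaves instead of FMEA rows, which is the extension the abstract discusses.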

  2. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
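
The metric roll-up described above can be illustrated numerically. The sketch below assumes a simple success chain (detection, isolation, response determination, response) with invented probabilities; it is not the paper's formal metric definitions.

```python
# Illustrative sketch of fault management effectiveness metrics: each FM
# control loop is scored on state-estimation and state-control stages,
# then probability-weighted over the failures it protects against.
# Structure and numbers are assumptions for illustration only.

def loop_effectiveness(detection, isolation, response_determination, response):
    """Effectiveness of one FM control loop for one failure: the loop
    succeeds only if every stage in the chain succeeds."""
    return detection * isolation * response_determination * response

def system_effectiveness(loops):
    """Probability-weighted sum across (failure probability, loop metrics)."""
    total_p = sum(p for p, _ in loops)
    return sum(p * loop_effectiveness(**m) for p, m in loops) / total_p

loops = [
    # (relative failure probability, per-stage success probabilities)
    (0.6, dict(detection=0.99, isolation=0.95,
               response_determination=0.98, response=0.97)),
    (0.4, dict(detection=0.90, isolation=0.85,
               response_determination=0.95, response=0.90)),
]
print(round(system_effectiveness(loops), 3))
```

The multiplicative chain reflects the paper's point that estimation and control metrics are distinct: a perfect response is worthless if detection fails.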

  3. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M; Haenninen, S [VTT Energy, Espoo (Finland)]; Seppaenen, M [North-Carelian Power Co (Finland)]; Antila, E; Markkila, E [ABB Transmit Oy (Finland)]

    1998-08-01

    An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium-voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate of the fault distance is obtained. This information is then combined with the data obtained from fault indicators at the line branching points in order to find the actual fault point. As a third technique, in the absence of better fault location data, statistical information on line-section fault frequencies can also be used. Fuzzy logic is used to combine the different sources of fault location information, yielding probability weights for the fault being located in each line section. Once the faulty section is identified, it is automatically isolated by remote control of line switches, and the supply is then restored to the remaining parts of the network. If needed, reserve connections from adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked, including the load-carrying capacity of line sections, voltage drop, and relay protection settings. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter describes the practical experiences from the trial period, assesses the benefits of this kind of automation, and outlines future developments.
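
The fuzzy combination of fault-location evidence can be sketched as follows. This is an assumption-laden toy: product combination (a common fuzzy-AND) stands in for the model's actual fuzzy logic, and the section names and weights are invented.

```python
# Toy sketch of combining per-section fault-location evidence from three
# sources (fault-current distance estimate, line fault indicators, and
# historical fault statistics) into normalized probability weights.

def normalize(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def combine_evidence(*sources):
    """Fuzzy-AND (product) of evidence, renormalized to probability weights."""
    sections = sources[0].keys()
    combined = {s: 1.0 for s in sections}
    for src in sources:
        for s in sections:
            combined[s] *= src[s]
    return normalize(combined)

distance_estimate = {"S1": 0.2, "S2": 0.7, "S3": 0.1}  # from fault current
indicators       = {"S1": 0.1, "S2": 0.8, "S3": 0.1}  # from line indicators
statistics       = {"S1": 0.3, "S2": 0.4, "S3": 0.3}  # historical frequency

weights = combine_evidence(distance_estimate, indicators, statistics)
faulty = max(weights, key=weights.get)
print(faulty, round(weights[faulty], 2))
```

The highest-weight section would then be isolated by remote switch control, as the abstract describes.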

  4. Layered Fault Management Architecture

    National Research Council Canada - National Science Library

    Sztipanovits, Janos

    2004-01-01

    ... UAVs or Organic Air Vehicles. The approach of this effort was to analyze fault management requirements of formation flight for fleets of UAVs, and develop a layered fault management architecture which demonstrates significant...

  5. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    Science.gov (United States)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of relationships between SE, SHM, and FM provides hints to a modeling approach to
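
The goal/function relationship described above can be made concrete with a toy goal-function tree. The tree, names, and failure rule below are illustrative assumptions, not the structure from the Goal-Function Tree paper.

```python
# Toy goal-function tree: each goal is achieved by supporting functions
# (or subgoals), and FM detects a goal failure whenever a supporting
# function fails. The spacecraft-flavored names are invented.

goals = {
    "maintain_power": ["generate_power", "store_power"],
    "generate_power": [],   # leaf functions have no supports
    "store_power": [],
    "downlink_data": ["maintain_power", "transmit"],
    "transmit": [],
}

def achieved(goal, failed_functions):
    """A goal is achieved iff it has not itself failed and all supports hold."""
    if goal in failed_functions:
        return False
    return all(achieved(g, failed_functions) for g in goals[goal])

# A battery fault defeats storage, hence power, hence the downlink goal.
print(achieved("downlink_data", failed_functions={"store_power"}))  # False
print(achieved("downlink_data", failed_functions=set()))            # True
```

This captures the "dark side of SE" point: every node where `achieved` can return False is a place where FM needs a detection and a response.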

  6. Managing Space System Faults: Coalescing NASA's Views

    Science.gov (United States)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts simply to name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes and practices across the industry.

  7. Model-Based Fault Management Engineering Tool Suite, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's successful development of next generation space vehicles, habitats, and robotic systems will rely on effective Fault Management Engineering. Our proposed...

  8. Fault Management Assistant (FMA), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — S&K Aerospace (SKA) proposes to develop the Fault Management Assistant (FMA) to aid project managers and fault management engineers in developing better and more...

  9. An Overview of Optical Network Bandwidth and Fault Management

    Directory of Open Access Journals (Sweden)

    J.A. Zubairi

    2010-09-01

    This paper discusses optical network management issues and identifies potential areas for focused research. A general outline of the main components of optical network management is given, and specific problems in the GMPLS-based model are explained. Protection and restoration issues are then discussed in the broader context of fault management, and the tools developed for fault detection are listed. Optical networks need efficient and reliable protection schemes that restore communications quickly when faults occur, without causing failure of real-time applications using the network. A holistic approach is required that provides mechanisms for fault detection, rapid restoration, and reversion upon fault resolution. Since the role of SDH/SONET is diminishing, modern optical networks are moving toward an IP-centric model in which high-performance IP-MPLS routers manage a core intelligent network of IP over WDM. Fault management schemes are developed for both the IP layer and the WDM layer. Faults can be detected and repaired locally and also through a centralized network controller. A hybrid approach works best for detecting faults, where the domain controller verifies the established LSPs in addition to the link tests at the node level. On detecting a fault, rapid restoration can perform localized routing of traffic away from the affected port and link. The traffic may be directed to pre-assigned backup paths that are established as shared or dedicated resources. We examine the protection issues in detail, including the choice of layer for protection, implementing protection or restoration, backup path routing, backup resource efficiency, subpath protection, QoS traffic survival, and multilayer protection triggers and alarm propagation. The complete protection cycle is described, and the mechanisms incorporated into RSVP-TE and other protocols for detecting and recording path errors are outlined. In addition, MPLS testbed
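
The localized restoration idea can be sketched as rerouting around a failed link on a toy topology. Real GMPLS restoration involves RSVP-TE signaling, shared-risk link groups, and wavelength constraints that this sketch omits.

```python
# Toy restoration sketch: on a link fault, traffic is rerouted onto a
# path that avoids the failed link. Topology and names are invented.

from collections import deque

def shortest_path(links, src, dst, failed=frozenset()):
    """BFS over bidirectional links, skipping any link in `failed`."""
    adj = {}
    for a, b in links:
        if (a, b) in failed or (b, a) in failed:
            continue
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, prev = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None  # no restoration path exists

links = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]
primary = shortest_path(links, "A", "C")
backup = shortest_path(links, "A", "C", failed={("B", "C")})
print(primary, backup)
```

In a pre-assigned (dedicated or shared) protection scheme, the backup path would be computed and reserved before the fault rather than on demand as here.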

  10. Model-Based Off-Nominal State Isolation and Detection System for Autonomous Fault Management, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed model-based Fault Management system addresses the need for cost-effective solutions that enable higher levels of onboard spacecraft autonomy to reliably...

  11. Model-Based Off-Nominal State Isolation and Detection System for Autonomous Fault Management, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed model-based Fault Management system addresses the need for cost-effective solutions that enable higher levels of onboard spacecraft autonomy to reliably...

  12. NASA Spacecraft Fault Management Workshop Results

    Science.gov (United States)

    Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

    2010-01-01

    Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. The fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and

  13. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    Science.gov (United States)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.
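
A minimal flavor of FFM-based fault isolation can be sketched as reachability over a failure-propagation graph: a candidate fault source must explain every off-nominal measurement and no nominal one. The cryogenic-system names below are invented, not drawn from the actual models.

```python
# Toy fault isolation over a failure-propagation graph. A node is a
# candidate fault source if its downstream reach covers all anomalous
# measurements and touches no nominal measurement.

def reach(graph, node, acc=None):
    """All nodes reachable downstream of `node` (including itself)."""
    acc = acc if acc is not None else set()
    if node not in acc:
        acc.add(node)
        for nxt in graph.get(node, []):
            reach(graph, nxt, acc)
    return acc

def isolate(graph, anomalous, nominal):
    candidates = []
    for node in graph:
        r = reach(graph, node)
        if anomalous <= r and not (nominal & r):
            candidates.append(node)
    return sorted(candidates)

# pump feeds a pressure sensor and (via a valve) a flow sensor
graph = {
    "pump": ["pressure_sensor", "valve"],
    "valve": ["flow_sensor"],
    "pressure_sensor": [],
    "flow_sensor": [],
}
print(isolate(graph, anomalous={"flow_sensor"}, nominal={"pressure_sensor"}))
```

Note that the sensor itself remains a candidate alongside the valve: with only one anomalous reading, a real-time reasoner cannot distinguish a process fault from a sensor fault.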

  14. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    Science.gov (United States)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
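
The mode/process idea in CONFIG can be loosely sketched as a discrete-event simulation in which a fault mode schedules downstream mode changes after a time delay. Component names and delays below are invented for illustration.

```python
# Toy discrete-event sketch in the spirit of CONFIG's modes and processes:
# a mode-change event can schedule downstream mode changes with a delay.

import heapq

def simulate(events, horizon=100.0):
    """Process (time, component, new_mode, consequences) events in order.

    `consequences` maps a downstream component to (delay, mode), so a
    mode change can schedule further mode changes."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        time, comp, mode, consequences = heapq.heappop(queue)
        if time > horizon:
            break
        log.append((time, comp, mode))
        for downstream, (delay, dmode) in consequences.items():
            heapq.heappush(queue, (time + delay, downstream, dmode, {}))
    return log

# Pump fails at t=5; loss of flow starves the heat exchanger 2 s later.
events = [(5.0, "pump", "failed-off", {"heat_exchanger": (2.0, "no-flow")})]
log = simulate(events)
print(log)
```

A diagnostic expert system designed against such a model would be expected to explain the `no-flow` symptom at t=7 by the pump mode change at t=5.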

  15. Managing systems faults on the commercial flight deck: Analysis of pilots' organization and prioritization of fault management information

    Science.gov (United States)

    Rogers, William H.

    1993-01-01

    In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.

  16. Fault management and systems knowledge

    Science.gov (United States)

    2016-12-01

    Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

  17. Decentralized Fault Management for Service Dependability in Ubiquitous Networks

    DEFF Research Database (Denmark)

    Grønbæk, Lars Jesper

    2010-01-01

    ) unobservable and incomplete network state information, ii) unreliable observations, and iii) dynamic environments calling for adaptation in the fault management process. In the study, focus is on potential gains in the interaction between the components of Observation, Diagnosis, Decision and Remediation...... insights on the impact of unavoidable diagnosis imperfections on service reliability. Also, it is studied to what extent good remediation decisions may be applied to mitigate such imperfections. For this purpose a light-weight decision policy evaluation model is proposed and verified in a system level...... simulation model. Some of the main findings are: i) certain imperfection trade-off settings of the Diagnosis component lead to worse end-user service reliability than if no fault management is conducted, ii) using end-user service state information in the decision process can help improve service reliability...

  18. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction; thus, removal of a fault is performed after the fault is detected. In addition, the detection process and correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources under reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.
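
The detection/correction time lag can be illustrated with a tiny discrete-time simulation, assuming an exponential-type detection curve and a fixed correction lag. The rates, budgets, and horizon are invented; the paper's actual model is an optimal-control formulation.

```python
# Toy FDP/FCP simulation: faults are detected at a rate proportional to
# the remaining faults, and each detected fault is corrected only after
# a fixed lag. All parameters are illustrative assumptions.

def simulate_fdp_fcp(total_faults=100, b_detect=0.3, lag=2, steps=12):
    detected_per_step = []
    remaining = float(total_faults)
    for _ in range(steps):
        d = b_detect * remaining          # exponential-type detection
        detected_per_step.append(d)
        remaining -= d
    # Correction at step t removes what was detected at step t - lag.
    corrected = [0.0] * lag + detected_per_step[:steps - lag]
    cum_detected = sum(detected_per_step)
    cum_corrected = sum(corrected)
    return cum_detected, cum_corrected

det, cor = simulate_fdp_fcp()
print(round(det, 1), round(cor, 1))
```

The gap between cumulative detection and cumulative correction is exactly the lag effect the paper models; an effort-allocation policy trades detection effort against correction effort to close it at minimum cost.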

  19. Fault Management Design Strategies

    Science.gov (United States)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM) and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for developing a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for the determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  20. Orion GN&C Fault Management System Verification: Scope And Methodology

    Science.gov (United States)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
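
The Monte Carlo half of this methodology can be sketched in miniature: sample random fault scenarios, run a (here trivial) stand-in for the FDIR response, and estimate a failure probability. The fault model, threshold, and numbers below are placeholders, not Orion parameters.

```python
# Toy Monte Carlo verification sketch: estimate the probability that a
# randomly drawn fault escapes the FDIR response. Seeded for repeatability.

import random

def fdir_catches(fault_magnitude, detection_threshold=0.7):
    """Stand-in FDIR model: faults below the threshold are handled."""
    return fault_magnitude < detection_threshold

def estimate_loss_probability(n_trials=100_000, seed=42):
    rng = random.Random(seed)
    losses = sum(1 for _ in range(n_trials)
                 if not fdir_catches(rng.random()))
    return losses / n_trials

p_loss = estimate_loss_probability()
print(p_loss)  # roughly 0.3 for this toy model
```

For genuinely rare events, plain sampling like this is too slow to resolve small probabilities, which is why the abstract pairs it with a swarm search to find catastrophic cases and sequential Monte Carlo to estimate their likelihood.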

  1. Fault Management of a Cold Dielectric HTS Power Transmission Cable

    International Nuclear Information System (INIS)

    Maguire, J; Allais, A; Yuan, J; Schmidt, F; Hamber, F; Welsh, Tom

    2006-01-01

    High temperature superconductor (HTS) power transmission cables offer significant advantages in power density over conventional copper-based cables. As with conventional cables, HTS cables must be safe and reliable when abnormal conditions, such as local and through faults, occur in the power grid. Due to the unique characteristics of HTS power cables, the fault management of an HTS cable differs from that of a conventional cable. Issues such as nitrogen bubble formation within the lapped dielectric material need to be addressed. This paper reviews the efforts that have been undertaken to study the fault conditions of a cold dielectric HTS power cable. As a result of these efforts, a fault management scheme has been developed that provides system protection for both local and through faults. Details of the fault management scheme are presented with examples.

  2. Fault Management: Degradation Signature Detection, Modeling, and Processing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault to Failure Progression (FFP) signature modeling and processing is a new method for applying condition-based signal data to detect degradation, to identify...

  3. A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat

    Science.gov (United States)

    Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphical User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  4. Architecture Framework for Fault Management Assessment and Design (AFFMAD), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Architecture Framework for Fault Management Assessment And Design(AFFMAD) provides Fault Management (FM) trade space exploration and rigorous performance constraint...

  5. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be entirely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation method (D.I.C.) in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether, and at what stage, the discontinuities (precuts) were reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  6. Dynamic modeling of gearbox faults: A review

    Science.gov (United States)

    Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng

    2018-01-01

    Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults are not detected early, the gear's health will continue to degrade, potentially causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, and consequently results in safer operation and greater cost reduction. Recently, many studies have developed gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and open challenges are reviewed and discussed. The literature review concentrates on the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.
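
The mesh-stiffness concept at the center of this review can be sketched with a toy signal: stiffness alternates between single- and double-tooth contact, and a cracked tooth appears as a periodic stiffness drop. All values below are arbitrary; real evaluations use potential-energy or finite-element methods.

```python
# Toy time-varying mesh stiffness with a crack-induced reduction on one
# tooth. Frequencies, stiffness values, and the 30% loss are invented.

def mesh_stiffness(t, mesh_freq=20.0, k_single=1.0e8, k_double=1.6e8,
                   crack_tooth=0, n_teeth=40, crack_loss=0.3):
    """Square-wave stiffness; reduced when the cracked tooth is meshing."""
    cycle = t * mesh_freq
    k = k_double if (cycle % 1.0) < 0.6 else k_single  # double/single contact
    tooth = int(cycle) % n_teeth
    if tooth == crack_tooth:
        k *= (1.0 - crack_loss)  # stiffness drop from the crack
    return k

healthy = [mesh_stiffness(t / 1000.0, crack_loss=0.0) for t in range(100)]
faulty = [mesh_stiffness(t / 1000.0) for t in range(100)]
print(min(faulty) < min(healthy))  # the crack lowers the stiffness floor
```

Fed into a gear dynamic model, this once-per-revolution stiffness drop produces the periodic impulses and sideband structure that vibration-based diagnosis methods look for.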

  7. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, an essential part of safety design, operation design, and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulation quantitative data onto the qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design, as well as to support recovery and shutdown operations and disaster management.
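
The quantitative-to-qualitative mapping via trend analysis can be sketched as a slope classifier whose output labels could feed a qualitative fault model such as an FSN. The thresholds and sample data below are assumptions.

```python
# Toy trend classifier: a least-squares slope over a window of samples
# is mapped to a qualitative label a fault model can consume.

def classify_trend(samples, tolerance=0.05):
    """Least-squares slope of the samples, mapped to a qualitative label."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope > tolerance:
        return "increasing"
    if slope < -tolerance:
        return "decreasing"
    return "steady"

print(classify_trend([1.0, 1.2, 1.5, 1.7, 2.0]))     # increasing
print(classify_trend([5.0, 5.01, 4.99, 5.0, 5.02]))  # steady
```

A qualitative fault model can then match symbol patterns such as ("tank_level", "decreasing") against fault hypotheses without caring about absolute sensor values.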

  8. FAULT TOLERANCE IN JOB SCHEDULING THROUGH FAULT MANAGEMENT FRAMEWORK USING SOA IN GRID

    Directory of Open Access Journals (Sweden)

    V. Indhumathi

    2017-01-01

    Full Text Available The rapid development of computing resources has enhanced the performance of computers and reduced their costs. This availability of low-cost powerful computers, combined with the popularity of the Internet and high-speed networks, has led the computing environment to evolve from distributed to grid environments. A grid is a kind of distributed system which supports the sharing and coordinated use of geographically dispersed, multi-owner resources, independently of their physical form and location, in dynamic virtual organizations that share the same objective of solving large-scale applications. Any type of failure can thus happen at any point in time, and a job running in a grid environment might fail. Fault tolerance is therefore an important and challenging concern in grid computing, as the reliability of individual grid resources cannot be guaranteed. A fault-tolerant system is required to make computational grids more effective and consistent. To meet user expectations in terms of performance and efficiency, the grid system needs an SOA Fault Management Framework for distributing tasks with fault tolerance. A Fault Management Framework attempts to improve the response time of users' submitted applications by ensuring maximal utilization of available resources. The main aim is to avoid, where possible, the situation in which some processors are overloaded with a set of tasks while others are lightly loaded or even idle.

  9. FSN-based fault modelling for fault detection and troubleshooting in CANDU stations

    Energy Technology Data Exchange (ETDEWEB)

    Nasimi, E., E-mail: elnara.nasimi@brucepower.com [Bruce Power LLP., Tiverton, Ontario(Canada); Gabbar, H.A. [Univ. of Ontario Inst. of Tech., Oshawa, Ontario (Canada)

    2013-07-01

    An accurate fault modelling and troubleshooting methodology is required to aid in making risk-informed decisions related to design and operational activities of the current and future generations of CANDU designs. This paper presents a fault modelling approach using the Fault Semantic Network (FSN) methodology with risk estimation. Its application is demonstrated using a case study of Bruce B zone-control level oscillations. (author)

  10. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  11. Analytical Approaches to Guide SLS Fault Management (FM) Development

    Science.gov (United States)

    Patterson, Jonathan D.

    2012-01-01

    Extensive analysis is needed to determine the right set of FM capabilities to provide the most coverage without significantly increasing cost, false-alarm and missed-detection risk (FP/FN), and the complexity of the overall vehicle systems. Strong collaboration with the stakeholders is required to support the determination of the best triggers and response options. The SLS Fault Management process has been documented in the Space Launch System Program (SLSP) Fault Management Plan (SLS-PLAN-085).

  12. IP, ethernet and MPLS networks resource and fault management

    CERN Document Server

    Perez, André

    2013-01-01

    This book summarizes the key Quality of Service technologies deployed in telecommunications networks: Ethernet, IP, and MPLS. The QoS of the network is made up of two parts: fault and resource management. Network operation quality is among the functions to be fulfilled in order to offer QoS to the end user. It is characterized by four parameters: packet loss, delay, jitter or the variation of delay over time, and availability. Resource management employs mechanisms that enable the first three parameters to be guaranteed or optimized. Fault management aims to ensure continuity of service.
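
    Of the four parameters, jitter is the one usually obtained with a smoothed estimator; RFC 3550 specifies the commonly used form J += (|D| - J)/16, where D is the difference in delay between consecutive packets. A small sketch:

```python
def interarrival_jitter(delays):
    """Smoothed interarrival jitter as in RFC 3550: J += (|D| - J) / 16.

    `delays` are per-packet one-way delays (any consistent time unit);
    D is the delay difference between consecutive packets.
    """
    j = 0.0
    for prev, cur in zip(delays, delays[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

print(interarrival_jitter([10.0, 10.0, 10.0]))      # 0.0 for constant delay
print(interarrival_jitter([10.0, 14.0, 10.0]) > 0)  # True
```

    The 1/16 gain makes the estimator a low-pass filter, so a single delayed packet nudges the jitter estimate rather than dominating it.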

  13. A Method to Quantify Plant Availability and Initiating Event Frequency Using a Large Event Tree, Small Fault Tree Model

    International Nuclear Information System (INIS)

    Kee, Ernest J.; Sun, Alice; Rodgers, Shawn; Popova, Elmira V.; Nelson, Paul; Moiseytseva, Vera; Wang, Eric

    2006-01-01

    South Texas Project uses a large fault tree to produce the scenarios (minimal cut sets) used in quantifying plant availability and event frequency predictions. On the other hand, the South Texas Project probabilistic risk assessment model uses a large event tree, small fault tree model for quantifying core damage and radioactive release frequency predictions. The South Texas Project is converting its availability and event frequency model to a large event tree, small fault tree model in an effort to streamline application support and to provide additional detail in results. The availability and event frequency model, as well as the applications it supports (maintenance and operational risk management, system engineering health assessment, preventive maintenance optimization, and RIAM), are briefly described. A methodology to perform availability modeling in a large event tree, small fault tree framework is described in detail, as is how the methodology can be used to support South Texas Project maintenance and operations risk management. Differences from other fault tree methods and other recently proposed methods are discussed in detail. While the methods described are novel to the South Texas Project Risk Management program and to large event tree, small fault tree models, the concepts in the area of application support and availability modeling have wider applicability to the industry. (authors)
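
    Quantification from minimal cut sets, as used in both models the abstract compares, is commonly approximated by combining per-cut-set probabilities under an independence assumption. A sketch with hypothetical event names and probabilities, not South Texas Project data:

```python
def top_event_probability(cut_sets, p_basic):
    """Approximate top-event probability from minimal cut sets.

    Uses the standard min-cut bound P(top) <= 1 - prod_cuts(1 - prod_e p_e),
    assuming independent basic events.
    """
    complement = 1.0
    for cut in cut_sets:
        p_cut = 1.0
        for event in cut:
            p_cut *= p_basic[event]
        complement *= (1.0 - p_cut)
    return 1.0 - complement

p = {"pump_fail": 1e-3, "valve_stuck": 5e-4, "sensor_bias": 2e-3}
cuts = [{"pump_fail", "valve_stuck"}, {"sensor_bias"}]
print(top_event_probability(cuts, p))  # dominated by the single-event cut
```

    With rare events this bound is close to the exact inclusion-exclusion result, which is why it is the workhorse of fault tree quantification tools.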

  14. A Real-Time Fault Management Software System for Distributed Environments, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — DyMA-FM (Dynamic Multivariate Assessment for Fault Management) is a software architecture for real-time fault management. Designed to run in a distributed...

  15. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  16. Modeling of HVAC operational faults in building performance simulation

    International Nuclear Information System (INIS)

    Zhang, Rongpeng; Hong, Tianzhen

    2017-01-01

    Highlights:
    • Discuss the significance of capturing operational faults in existing buildings.
    • Develop a novel feature in EnergyPlus to model operational faults of HVAC systems.
    • Compare three approaches to fault modeling using EnergyPlus.
    • A case study demonstrates the use of the fault-modeling feature.
    • Future developments of new faults are discussed.

    Abstract: Operational faults are common in the heating, ventilating, and air conditioning (HVAC) systems of existing buildings, leading to a decrease in energy efficiency and occupant comfort. Various fault detection and diagnostic methods have been developed to identify and analyze HVAC operational faults at the component or subsystem level. However, current methods lack a holistic approach to predicting the overall impacts of faults at the building level—an approach that adequately addresses the coupling between various operational components, the synchronized effect between simultaneous faults, and the dynamic nature of fault severity. This study introduces the novel development of a fault-modeling feature in EnergyPlus which fills in the knowledge gap left by previous studies. This paper presents the design and implementation of the new feature in EnergyPlus and discusses in detail the fault-modeling challenges faced. The new fault-modeling feature enables EnergyPlus to quantify the impacts of faults on building energy use and occupant comfort, thus supporting the decision making of timely fault corrections. Including actual building operational faults in energy models also improves the accuracy of the baseline model, which is critical in the measurement and verification of retrofit or commissioning projects. As an example, EnergyPlus version 8.6 was used to investigate the impacts of a number of typical operational faults in an office building across several U.S. climate zones. The results demonstrate that the faults have significant impacts on building energy performance as well as on occupant

  17. A Generic Modeling Process to Support Functional Fault Model Development

    Science.gov (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide diagnostics of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system, including its design, operation, and off-nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and its impact discussed.
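
    The failure-effect propagation an FFM captures can be sketched as reachability over a directed graph from a failure mode to observation points; the graph and node names below are hypothetical, not drawn from the AGSM models:

```python
from collections import deque

def affected_observations(propagation, failure_mode, observation_points):
    """Find which observation points a failure mode's effects can reach.

    `propagation` is a directed graph {node: [downstream nodes]} standing in
    for an FFM's effect-propagation paths.
    """
    seen = {failure_mode}
    queue = deque([failure_mode])
    while queue:                            # breadth-first traversal
        node = queue.popleft()
        for nxt in propagation.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen & set(observation_points))

graph = {
    "valve_stuck": ["low_flow"],
    "low_flow": ["pressure_sensor", "temp_rise"],
    "temp_rise": ["temp_sensor"],
}
print(affected_observations(graph, "valve_stuck",
                            ["pressure_sensor", "temp_sensor", "rpm_sensor"]))
# ['pressure_sensor', 'temp_sensor']
```

    Running this query for every failure mode gives the fault-to-symptom mapping a diagnostic engine inverts at run time.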

  18. Stator Fault Modelling of Induction Motors

    DEFF Research Database (Denmark)

    Thomsen, Jesper Sandberg; Kallesøe, Carsten

    2006-01-01

    In this paper a model of an induction motor affected by stator faults is presented. Two different types of faults are considered: disconnection of a supply phase, and inter-turn and turn-turn short circuits inside the stator. The output of the derived model is compared to real measurements from a specially designed induction motor. With this motor it is possible to simulate both terminal disconnections and inter-turn and turn-turn short circuits. The results show good agreement between the measurements and the simulated signals obtained from the model. In the tests focus...

  19. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    Science.gov (United States)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  20. Integrated 3D Reservoir/Fault Property Modelling Aided Well Planning and Improved Hydrocarbon Recovery in a Niger Delta Field

    International Nuclear Information System (INIS)

    Onyeagoro, U. O.; Ebong, U. E.; Nworie, E. A.

    2002-01-01

    The large and varied portfolio of assets managed by oil companies requires quick decision-making and the deployment of best-in-class technologies in asset management. Timely decision-making and the application of the best technologies in reservoir management are, however, sometimes in conflict because of the large time requirements of the latter. Optimizing the location of development wells is critical in order to account for variable fluid contact movements and pressure interference effects between wells, which can be significant because of the high permeability (Darcy range) of Niger Delta reservoirs. With relatively high drilling costs, optimizing well locations necessitates a realistic static and dynamic 3D reservoir description, especially for the recovery of remaining oil and for oil-rim reservoirs. A detailed 3D reservoir model with fault properties was constructed for a Niger Delta producing field. This involved integrating high-quality 3D seismic, core, petrophysics, reservoir engineering, production, and structural geology data to construct a realistic 3D reservoir/fault property model for the field. The key parameters considered during the construction of the internal architecture of the model were the vertical and horizontal reservoir heterogeneities, which control fluid flow within the reservoir. In the production realm, fault thickness and fault permeabilities are the factors that control the impedance of fluid flow across the fault (fault transmissibility). These key internal and external reservoir/structural variables were explicitly modeled in 3D modeling software to produce different realizations and manage the uncertainties. The resulting 3D reservoir/fault property model was upscaled for simulation purposes such that grid blocks along the fault planes have realistic transmissibility multipliers of 0 to 1 attached to them. The model was also used in the well planner to optimize the positioning of a high angle deviated well that penetrated

  1. Automated fault-management in a simulated spaceflight micro-world

    Science.gov (United States)

    Lorenz, Bernd; Di Nocera, Francesco; Rottger, Stefan; Parasuraman, Raja

    2002-01-01

    BACKGROUND: As human spaceflight missions extend in duration and distance from Earth, a self-sufficient crew will bear far greater onboard responsibility and authority for mission success. This will increase the need for automated fault management (FM). Human factors issues in the use of such systems include maintenance of cognitive skill, situational awareness (SA), trust in automation, and workload. This study examines the human performance consequences of operator use of intelligent FM support in interaction with an autonomous, space-related, atmospheric control system. METHODS: An expert system representing a model-based reasoning agent supported operators at a low level of automation (LOA) by a computerized fault-finding guide, at a medium LOA by an automated diagnosis and recovery advisory, and at a high LOA by automated diagnosis and recovery implementation, subject to operator approval or veto. Ten percent of the experimental trials involved complete failure of FM support. RESULTS: Benefits of automation were reflected in more accurate diagnoses, shorter fault identification times, and reduced subjective operator workload. Unexpectedly, fault identification times deteriorated more at the medium than at the high LOA during automation failure. Analyses of information sampling behavior showed that offloading operators from recovery implementation during reliable automation enabled operators at high LOA to engage in fault assessment activities. CONCLUSIONS: The potential threat to SA imposed by high-level automation, in which decision advisories are automatically generated, need not inevitably be counteracted by choosing a lower LOA. Instead, freeing operator cognitive resources through automatic implementation of recovery plans at a higher LOA can promote better fault comprehension, so long as the automation interface is designed to support efficient information sampling.

  2. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.; Haves, Philip; Sohn, Michael D.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults, and optimizing building control systems. However, in spite of good progress in developing tools for determining HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first principle modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.
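
    The Bayesian updating approach the authors apply can be sketched as repeatedly revising a fault probability with the likelihood of each observed residual under the fault and healthy hypotheses; the likelihood values below are illustrative assumptions, not fitted air-handling-unit models:

```python
def update_fault_belief(prior, likelihood_fault, likelihood_healthy):
    """One Bayesian update of the belief that a fault is present.

    The likelihoods are P(residual | fault) and P(residual | healthy), which
    in practice come from first-principles or statistical models.
    """
    numerator = likelihood_fault * prior
    denominator = numerator + likelihood_healthy * (1.0 - prior)
    return numerator / denominator

belief = 0.01                          # prior probability of the fault
for _ in range(3):                     # three residuals favoring the fault
    belief = update_fault_belief(belief, 0.8, 0.2)
print(round(belief, 3))                # belief grows with each observation
```

    Because evidence accumulates multiplicatively in the odds, a noisy single residual moves the belief only modestly, which is exactly the robustness to data error the abstract is after.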

  3. V&V of Fault Management: Challenges and Successes

    Science.gov (United States)

    Fesq, Lorraine M.; Costello, Ken; Ohi, Don; Lu, Tiffany; Newhouse, Marilyn

    2013-01-01

    This paper describes the results of a special breakout session of the NASA Independent Verification and Validation (IV&V) Workshop held in the fall of 2012, entitled "V&V of Fault Management: Challenges and Successes." The NASA IV&V Program is in a unique position to interact with projects across all of the NASA development domains. Using this unique opportunity, the IV&V program convened a breakout session to enable IV&V teams to share their challenges and successes with respect to the V&V of Fault Management (FM) architectures and software. The presentations and discussions provided practical examples of pitfalls encountered while performing V&V of FM, including the lack of consistent designs for implementing fault monitors and the fact that FM information is not centralized but scattered among many diverse project artifacts. The discussions also solidified the need for an early commitment to developing FM in parallel with the spacecraft systems, as well as for clearly defining FM terminology within a project.

  4. SDEM modelling of fault-propagation folding

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Poulsen, Jane Bang

    2009-01-01

    Understanding the dynamics and kinematics of fault-propagation folding is important for evaluating the associated hydrocarbon play, for accomplishing reliable section balancing (structural reconstruction), and for assessing seismic hazards. Accordingly, the deformation style of fault-propagation ... and variations in Mohr-Coulomb parameters including internal friction. Using SDEM modelling, we have mapped the propagation of the tip-line of the fault, as well as the evolution of the fold geometry across sedimentary layers of contrasting rheological parameters, as a function of the increased offset ... a precise indication of when faults develop and hence also the sequential evolution of secondary faults. Here we focus on the generation of a fault-propagated fold with a reverse sense of motion at the master fault, varying only the dip of the master fault and the mechanical behaviour of the deformed...

  5. Modeling and Fault Simulation of Propellant Filling System

    International Nuclear Information System (INIS)

    Jiang Yunchun; Liu Weidong; Hou Xiaobo

    2012-01-01

    The propellant filling system is one of the key ground plants at the launch site of rockets that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode and Effects Analysis (FMEA) is a good approach to meeting it. Driven by the need to obtain more fault information for FMEA, and because of the high expense of propellant filling, in this paper the working process of the propellant filling system under fault conditions was studied by simulation based on AMESim. Firstly, after analyzing its structure and function, the filling system was decomposed into modules, and the mathematical model of each module was given, from which the whole filling system was modeled in AMESim. Secondly, a general method of injecting faults into a dynamic system was proposed and, as an example, two typical faults - leakage and blockage - were injected into the model of the filling system, yielding two fault models in AMESim. After that, fault simulation was performed and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can effectively simulate the two faults, and it can be used to guide maintenance and improvement of the filling system.
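
    The fault-injection idea, adding a leakage or blockage term to an otherwise nominal dynamic model, can be illustrated outside AMESim with a simple Euler-integrated tank level; the coefficients are illustrative, not from the paper's filling system:

```python
def simulate_filling(t_end=100.0, dt=0.1, inflow=0.5, leak_coeff=0.0,
                     block_factor=1.0):
    """Euler simulation of a toy propellant tank level with injectable faults.

    leak_coeff > 0 injects a level-proportional leakage; block_factor < 1
    throttles the inflow to mimic a blockage.
    """
    level = 0.0
    t = 0.0
    while t < t_end:
        dlevel = inflow * block_factor - leak_coeff * level
        level += dlevel * dt
        t += dt
    return level

nominal = simulate_filling()
leaking = simulate_filling(leak_coeff=0.02)
blocked = simulate_filling(block_factor=0.5)
print(nominal > leaking and nominal > blocked)  # both faults slow the fill
```

    Comparing the faulty trajectories against the nominal one is the kind of information an FMEA draws on when ranking fault effects.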

  6. Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.

    Science.gov (United States)

    Wang, Tao; Xie, Wenfang; Zhang, Youmin

    2012-05-01

    In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known. Different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first one, developed to accommodate uncertainties and faults without exact bounds information. The stability of the overall control systems is proved by using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
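
    The first algorithm's structure, a switching term whose gain dominates a known uncertainty bound, can be sketched for a scalar plant x' = u + d; the gains and the plant are illustrative assumptions, far simpler than the Boeing 747 longitudinal model used in the paper:

```python
import math

def smc_step(x, x_ref, dt=0.01, lam=2.0, eta=5.0, disturbance=0.0):
    """One Euler step of a first-order sliding mode controller for x' = u + d.

    The switching gain eta is chosen to dominate the (assumed known)
    disturbance bound, echoing the paper's first algorithm.
    """
    s = x - x_ref                                  # sliding surface
    u = (-lam * s - eta * math.copysign(1.0, s)) if s != 0 else 0.0
    return x + (u + disturbance) * dt

x = 3.0
for _ in range(500):        # constant disturbance d = 1.5 < eta
    x = smc_step(x, x_ref=0.0, disturbance=1.5)
print(abs(x) < 0.1)         # True: state settles into a band around x_ref
```

    In discrete time the sign term produces the well-known chattering around the surface; the band shrinks with the time step, which is one reason practical designs smooth the switching function.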

  7. Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions

    Science.gov (United States)

    Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens Karen; Fesq, Lorraine M.

    2010-01-01

    Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the

  8. Fault Management Techniques in Human Spaceflight Operations

    Science.gov (United States)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. The fault detection and response capabilities available in the current US human spaceflight programs, the Space Shuttle and International Space Station, are described, with emphasis on how system design impacts operational techniques and constraints. Preflight and inflight processes, along with the products used to anticipate, mitigate, and respond to failures, are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation, are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, the skills needed to understand and operate it, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform.
If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be

  9. Software Tools for Fault Management Technologies, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault Management (FM) is a key requirement for safety, efficient onboard and ground operations, maintenance, and repair. QSI's TEAMS Software suite is a leading...

  10. Bond graph model-based fault diagnosis of hybrid systems

    CERN Document Server

    Borutzky, Wolfgang

    2015-01-01

    This book presents a bond graph model-based approach to fault diagnosis in mechatronic systems appropriately represented by a hybrid model. The book begins by giving a survey of the fundamentals of fault diagnosis and failure prognosis, then recalls state-of-art developments referring to latest publications, and goes on to discuss various bond graph representations of hybrid system models, equations formulation for switched systems, and simulation of their dynamic behavior. The structured text: • focuses on bond graph model-based fault detection and isolation in hybrid systems; • addresses isolation of multiple parametric faults in hybrid systems; • considers system mode identification; • provides a number of elaborated case studies that consider fault scenarios for switched power electronic systems commonly used in a variety of applications; and • indicates that bond graph modelling can also be used for failure prognosis. In order to facilitate the understanding of fault diagnosis and the presented...

  11. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining the psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed from those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues in presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to
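
    Draphys-style reasoning from abnormal sensor readings to candidate causes can be caricatured as scoring faults by how well their predicted symptom sets explain the observed abnormalities; the fault and sensor names here are hypothetical, not from the actual program:

```python
def diagnose(abnormal, fault_symptoms):
    """Rank candidate faults by how well their predicted symptoms match the
    abnormal sensor set (a much-simplified stand-in for model-based
    diagnostic reasoning).

    fault_symptoms maps each fault to the sensors its propagation would
    disturb; faults are rewarded for explained symptoms and penalized for
    predicted symptoms that were not observed.
    """
    abnormal = set(abnormal)
    scored = []
    for fault, predicted in fault_symptoms.items():
        predicted = set(predicted)
        explained = len(abnormal & predicted)
        spurious = len(predicted - abnormal)
        scored.append((explained - spurious, fault))
    scored.sort(reverse=True)
    return [fault for score, fault in scored if score > 0]

symptoms = {
    "hydraulic_leak": ["hyd_pressure_low", "fluid_qty_low"],
    "pump_failure": ["hyd_pressure_low", "pump_rpm_zero"],
    "gauge_fault": ["fluid_qty_low"],
}
print(diagnose(["hyd_pressure_low", "fluid_qty_low"], symptoms))
# ['hydraulic_leak', 'gauge_fault']
```

    Re-running the scoring as new readings arrive mirrors, very loosely, how Draphys prunes candidates over time as more components are affected.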

  12. Structural Health and Prognostics Management for Offshore Wind Turbines: Sensitivity Analysis of Rotor Fault and Blade Damage with O&M Cost Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Myrent, Noah J. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Barrett, Natalie C. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Adams, Douglas E. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Griffith, Daniel Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Wind Energy Technology Dept.

    2014-07-01

    Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs is to implement a structural health and prognostics management (SHPM) system as part of a condition-based maintenance paradigm with smart load management, and to use a state-based cost model to assess the economics associated with the SHPM system. To facilitate the development of such a system, a multi-scale modeling and simulation approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. This methodology was used to investigate two case studies for a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Sensitivity analyses were carried out for the detection strategies of rotor imbalance and shear web disbond developed in prior work by evaluating the robustness of key measurement parameters in the presence of varying wind speeds, horizontal shear, and turbulence. Detection strategies were refined for these fault mechanisms and probabilities of detection were calculated. For all three fault mechanisms, the probability of detection was 96% or higher for the optimized wind speed ranges of the laminar, 30% horizontal shear, and 60% horizontal shear wind profiles. The revised cost model provided insight into the estimated savings in operations and maintenance costs as they relate to the characteristics of the SHPM system. The integration of the health monitoring information with O&M cost versus damage/fault severity information provides the initial steps toward processes that reduce operations and maintenance costs for an offshore wind farm while increasing turbine availability.

  13. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively forms separate models that are only loosely related to the system being designed. Approaches that model FP concerns in the same model as the system hardware and software design enable the establishment of formal relationships, with great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in flight software (FSW) engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.
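
    The paper's SysML models are not reproduced in the abstract, but the kind of formal relationship it advocates can be sketched in a few lines: once failure modes and effect-propagation links live in one machine-readable model, an FMEA-like table falls out of a traversal rather than out of engineering intuition. All element names below are hypothetical:

```python
def end_effect(effect, propagates):
    """Follow the effect-propagation relation to its terminal effect."""
    while effect in propagates:
        effect = propagates[effect]
    return effect

def generate_fmea(modes, propagates):
    """One FMEA row per (component, failure mode), with local and
    system-level effects derived from the model."""
    return [(comp, mode, local, end_effect(local, propagates))
            for comp, items in modes.items()
            for mode, local in items.items()]

# Hypothetical model fragments: failure modes and propagation links
modes = {"valve": {"stuck_closed": "no_flow"},
         "sensor": {"bias": "wrong_reading"}}
propagates = {"no_flow": "overheat", "overheat": "shutdown",
              "wrong_reading": "wrong_control_action"}

rows = generate_fmea(modes, propagates)
```

This mirrors the automated FMEA generation described in record 1 above: the system-level effect is computed from the same model that captures the design, so the analyses stay consistent with it.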

  14. Fault condition stress analysis of NET 16 TF coil model

    International Nuclear Information System (INIS)

    Jong, C.T.J.

    1992-04-01

    As part of the design process of the NET/ITER toroidal field coils (TFCs), the mechanical behaviour of the magnetic system under fault conditions has to be analysed in some detail. Under fault conditions, either electrical or mechanical, the magnetic loading of the coils becomes extreme and further mechanical failure of parts of the overall structure might occur (e.g. failure of the coil, gravitational support, or intercoil structure). The mechanical behaviour of the magnetic system under fault conditions has been analysed with a finite element model of the complete TFC system. The analysed fault conditions consist of a thermal fault, electrical faults and mechanical faults; the mechanical faults have been applied simultaneously with an electrical fault. This report describes the work carried out to create the finite element model of 16 TFCs and contains an extensive presentation of the results obtained with this model for a normal operating condition analysis and 9 fault condition analyses. Chapters 2 to 5 contain a detailed description of the finite element model, boundary conditions and loading conditions of the analyses made; chapters 2 to 4 can be skipped by readers interested only in the results. Chapter 6, which contains a detailed description of all analysed fault conditions, is recommended for understanding the results presented. The dimensions and geometry of the model correspond to the status of the NET/ITER TFC design of May 1990. Compared with previous models of the complete magnetic system, the finite element model of 16 TFCs is 'detailed', and can be used for linear elastic analysis with faulted loads. (author). 8 refs.; 204 figs.; 134 tabs

  15. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    Science.gov (United States)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system, which includes many young faults. The evolution of these faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and for assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper caused by the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in the western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  16. A study on quantification of unavailability of DPPS with fault tolerant techniques considering fault tolerant techniques' characteristics

    International Nuclear Information System (INIS)

    Kim, B. G.; Kang, H. G.; Kim, H. E.; Seung, P. H.; Kang, H. G.; Lee, S. J.

    2012-01-01

    With the improvement of digital technologies, digital I and C systems have come to include a wider variety of fault tolerant techniques than conventional analog I and C systems, in order to increase fault detection and to help the system safely perform the required functions in spite of the presence of faults. Consequently, in the reliability evaluation of digital systems, the fault tolerant techniques (FTTs) and their fault coverage must be considered. To consider the effects of FTTs in a digital system, there have been several studies on reliability models of digital systems. This research, based on a literature survey, attempts to develop a model to evaluate the plant reliability of the digital plant protection system (DPPS) with fault tolerant techniques, considering detection and process characteristics and human errors. Sensitivity analysis is performed to identify the important variables affecting fault management coverage and unavailability in the proposed model.
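
    The paper's full unavailability model is not given in the abstract. As a hedged illustration of how fault coverage enters such a model, a common textbook approximation for a monitored, periodically tested component is sketched below; the parameter values are purely illustrative:

```python
def unavailability(lam, coverage, t_repair, t_test):
    """Approximate steady-state unavailability of a monitored component.

    Faults detected by the fault tolerant technique (fraction `coverage`)
    are repaired within `t_repair` hours; undetected faults persist, on
    average, for half the proof-test interval `t_test`.
    Valid only when lam * t_test << 1.
    """
    return coverage * lam * t_repair + (1 - coverage) * lam * t_test / 2

# Illustrative numbers: failure rate 1e-5/h, 95% coverage,
# 8 h repair time, half-yearly (4380 h) proof test.
u = unavailability(lam=1e-5, coverage=0.95, t_repair=8.0, t_test=4380.0)
```

The sensitivity to `coverage` is immediate: with these numbers the undetected-fault term dominates, which is exactly why fault coverage is a key variable in such analyses.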

  17. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand a strong-fault model, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently generate the best candidates from the reasoning results until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods perform significantly better than best-first and conflict-directed A* search methods.
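
    The LTMS machinery is beyond an abstract, but the distinguishing feature of a strong-fault model, namely fault modes with defined behaviors, can be shown by brute-force consistency checking on a toy two-inverter circuit. This is a sketch of the modelling idea, not the paper's algorithm:

```python
from itertools import product

# Strong-fault model: each fault mode has a defined behavior
# (it pins the gate output), unlike a weak-fault "not ok" mode.
MODES = {"ok": None, "stuck0": 0, "stuck1": 1}

def simulate(modes, x):
    """Two inverters in series; a faulted gate outputs its pinned value."""
    for mode in modes:                       # modes = (mode_A, mode_B)
        x = (1 - x) if mode == "ok" else MODES[mode]
    return x

def diagnoses(x, observed):
    """Mode assignments consistent with the observation, fewest faults first."""
    consistent = [m for m in product(MODES, repeat=2)
                  if simulate(m, x) == observed]
    consistent.sort(key=lambda m: sum(mode != "ok" for mode in m))
    return consistent

# Input 1 should yield output 1; observing 0 implicates specific fault modes.
# Minimal diagnoses: ('ok', 'stuck0') and ('stuck1', 'ok')
print(diagnoses(1, 0)[:2])
```

Note that the defined fault behaviors rule out assignments such as `('stuck0', 'ok')`, which a weak-fault model could not exclude; this extra pruning power is what the non-monotonic reasoning in the paper exploits at scale.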

  18. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    Data.gov (United States)

    National Aeronautics and Space Administration — Sensor faults continue to be a major hurdle for sys- tems health management to reach its full potential. At the same time, few recorded instances of sensor faults...

  19. A 3D modeling approach to complex faults with multi-source data

    Science.gov (United States)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For the faults that cannot be modeled with these data, especially those that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at the locations where faults cut each other. Increasing fault points in poorly sampled areas not only makes fault model construction more efficient, but also reduces manual intervention. By using fault-based interpolation and remeshing of the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.

  20. Fault Diagnosis of Nonlinear Systems Using Structured Augmented State Models

    Institute of Scientific and Technical Information of China (English)

    Jochen Aßfalg; Frank Allgöwer

    2007-01-01

    This paper presents an internal model approach for modeling and diagnostic functionality design for nonlinear systems subject to single and multiple faults. We provide for this purpose the framework of structured augmented state models. Fault characteristics are considered to be generated by dynamical exosystems that are switched via equality constraints to overcome the limits that augmented state observability places on the number of diagnosable faults. Based on the proposed model, the fault diagnosis problem is specified as an optimal hybrid augmented state estimation problem. Sub-optimal solutions are motivated and exemplified for the fault diagnosis of the well-known three-tank benchmark. As the considered class of fault diagnosis problems is large, the suggested approach is not only of theoretical interest but also of high practical relevance.

  1. Diagnosing a Strong-Fault Model by Conflict and Consistency

    Directory of Open Access Journals (Sweden)

    Wenfeng Zhang

    2018-03-01

    Full Text Available The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand a strong-fault model, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently generate the best candidates from the reasoning results until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods perform significantly better than best-first and conflict-directed A* search methods.

  2. Stand-Alone Photovoltaic System Operation with Energy Management and Fault Tolerant

    International Nuclear Information System (INIS)

    Jmashidpour, Ehsan; Poure, Philippe; Gholipour, E.; Saadate, Shahrokh

    2017-01-01

    This paper presents a stand-alone photovoltaic (PV) system with fault tolerant operation capability. An energy management method is provided to keep the balance between produced and consumed energy at every instant. As the storage element, an Ultra-Capacitor (UC) pack is used to handle high-frequency variations of the load/source, while batteries are in charge of slow load/source variations. A Maximum Power Point Tracking (MPPT) algorithm is applied to control the boost converter of the PV source to achieve the maximum power. In order to improve the micro-grid service continuity and reliability, a fast fault diagnosis method for the PV source, based on the converter current shape, is applied. Finally, the validity of the proposed energy management and fault diagnosis methods is confirmed by simulation and experimental results. (author)

  3. Computer modelling of superconductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)

    1998-05-01

    Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs achieve a solution by iterative methods based around real measured data rather than theoretical models, in order to achieve accuracy at high current densities. (orig.) 5 refs.
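
    The measured data the report iterates over are not published in the abstract. As a stand-in, the sketch below uses an assumed piecewise resistance curve to show the basic resistive-limiter simulation loop; the circuit parameters and the resistance model are illustrative only:

```python
import math

def r_sc(i, i_crit=1000.0, r_normal=2.0):
    """Assumed superconductor resistance: zero below the critical
    current, rising toward the normal-state value above it (a crude
    stand-in for measured quench data)."""
    if abs(i) <= i_crit:
        return 0.0
    return r_normal * (1.0 - i_crit / abs(i))

def peak_fault_current(limiter=True, v_peak=6000.0, freq=50.0,
                       L=0.01, r_load=0.1, dt=1e-5, cycles=2):
    """Explicit-Euler solution of L di/dt = v(t) - i (R_load + R_sc(i)),
    returning the peak current over the first `cycles` AC cycles."""
    i, t, peak = 0.0, 0.0, 0.0
    while t < cycles / freq:
        v = v_peak * math.sin(2.0 * math.pi * freq * t)
        r = r_load + (r_sc(i) if limiter else 0.0)
        i += dt / L * (v - i * r)
        peak = max(peak, abs(i))
        t += dt
    return peak
```

Running `peak_fault_current` with and without the limiter shows the expected clipping of the prospective fault current once the critical current is exceeded; a production code would replace `r_sc` with interpolated measured data, as the report describes.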

  4. Nonlinear Model-Based Fault Detection for a Hydraulic Actuator

    NARCIS (Netherlands)

    Van Eykeren, L.; Chu, Q.P.

    2011-01-01

    This paper presents a model-based fault detection algorithm for a specific fault scenario of the ADDSAFE project. The fault considered is the disconnection of a control surface from its hydraulic actuator. Detecting this type of fault as fast as possible helps to operate an aircraft more cost

  5. Operations management system advanced automation: Fault detection isolation and recovery prototyping

    Science.gov (United States)

    Hanson, Matt

    1990-01-01

    The purpose of this project is to address the global fault detection, isolation and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies in addition to traditional software methods to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.

  6. Product quality management based on CNC machine fault prognostics and diagnosis

    Science.gov (United States)

    Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.

    2018-03-01

    This paper presents a new fault classification model and an integrated approach to fault diagnosis which combines ideas from Neuro-Fuzzy networks (NF), Dynamic Bayesian Networks (DBN) and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two classes, namely first and second degree faults. First degree faults are instantaneous in nature, while second degree faults are evolutionary and appear as a developing phenomenon which starts at an initial stage, goes through a development stage and finally ends at a mature stage. These categories of faults have a lifetime which is inversely proportional to a machine tool's life according to a modified version of Taylor's equation. For fault diagnosis, the framework consists of two phases: the first focuses on fault prognosis, which is done online, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to embark on condition-based maintenance (CBM) or on fault diagnosis, based on the severity of a fault. The second phase comes into action only when an evolving fault exceeds a critical threshold, called the CBM limit, at which a command is issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.

  7. Mechanical Models of Fault-Related Folding

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A. M.

    2003-01-09

    The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding of damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically active areas. The objectives of the proposed research were to provide a unified mechanical infrastructure for studies of fault-related folding and to present the results in computer programs with graphical user interfaces (GUIs) so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

  8. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, provided that a comprehensive component database is developed. (author)
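
    The paper's function tables and state transition tables are richer than an abstract can show. The sketch below reduces them to a single cause table per event and traces back from a hypothetical top event, which is enough to illustrate the trace-back idea behind algorithmic fault tree synthesis:

```python
def build_fault_tree(tables, event, seen=None):
    """Trace an event back through simplified component cause tables.

    `tables` maps each event to the list of basic failures or upstream
    events that can cause it (an implicit OR gate); events absent from
    `tables` are basic events. Returns a nested tree (event, [subtrees]),
    cutting loops so shared support paths cannot recurse forever.
    """
    seen = set() if seen is None else seen
    if event in seen or event not in tables:   # basic event or loop cut
        return (event, [])
    return (event, [build_fault_tree(tables, cause, seen | {event})
                    for cause in tables[event]])

# Hypothetical cooling-loop cause tables
tables = {
    "no_coolant_flow": ["pump_stopped", "valve_closed"],
    "pump_stopped": ["pump_motor_failure", "no_power"],
}
tree = build_fault_tree(tables, "no_coolant_flow")
```

A real implementation would, per the paper, derive these cause lists automatically from per-component function tables and would distinguish AND from OR gates; this sketch hard-codes OR gates for brevity.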

  9. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. The summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is given to the analytical model-based fault-detection and fault...

  10. Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems

    Science.gov (United States)

    Ferrell, Bob; Oostdyk, Rebecca

    2010-01-01

    The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown, by using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.

  11. Geometric analysis of alternative models of faulting at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Young, S.R.; Stirewalt, G.L.; Morris, A.P.

    1993-01-01

    Realistic cross section tectonic models must be retrodeformable to geologically reasonable pre-deformation states. Furthermore, it must be shown that geologic structures depicted on cross section tectonic models can have formed by kinematically viable deformation mechanisms. Simple shear (i.e., listric fault models) is consistent with the extensional geologic structures and fault patterns described at Yucca Mountain, Nevada. Flexural slip models yield results similar to oblique simple shear mechanisms, although there is no strong geological evidence for flexural slip deformation. Slip-line deformation is shown to generate fault block geometries that are a close approximation to observed fault block structures. However, slip-line deformation implies a degree of general ductility for which there is no direct geological evidence. Simple and hybrid 'domino' (i.e., planar fault) models do not adequately explain observed variations of fault block dip or the development of 'rollover' folds adjacent to major bounding faults. Overall tectonic extension may be underestimated because of syn-tectonic deposition (growth faulting) of the Tertiary pyroclastic rocks that comprise Yucca Mountain. A strong diagnostic test of the applicability of the domino model may be provided by improved knowledge of the Tertiary volcanic stratigraphy.

  12. Model-based fault diagnosis in PEM fuel cell systems

    Energy Technology Data Exchange (ETDEWEB)

    Escobet, T; de Lira, S; Puig, V; Quevedo, J [Automatic Control Department (ESAII), Universitat Politecnica de Catalunya (UPC), Rambla Sant Nebridi 10, 08222 Terrassa (Spain); Feroldi, D; Riera, J; Serra, M [Institut de Robotica i Informatica Industrial (IRI), Consejo Superior de Investigaciones Cientificas (CSIC), Universitat Politecnica de Catalunya (UPC) Parc Tecnologic de Barcelona, Edifici U, Carrer Llorens i Artigas, 4-6, Planta 2, 08028 Barcelona (Spain)

    2009-07-01

    In this work, a model-based fault diagnosis methodology for PEM fuel cell systems is presented. The methodology is based on computing residuals: indicators that are obtained by comparing measured inputs and outputs with analytical relationships obtained by system modelling. The innovation of this methodology lies in the characterization of the relative residual fault sensitivity. To illustrate the results, a non-linear fuel cell simulator proposed in the literature is used, with modifications, to include a set of fault scenarios proposed in this work. Finally, the diagnosis results corresponding to these fault scenarios are presented. It is remarkable that with this methodology it is possible to diagnose and isolate all the faults in the proposed set, in contrast with other well-known methodologies which use the binary signature matrix of analytical residuals and faults. (author)
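
    The relative-sensitivity refinement is the paper's contribution; for orientation, the classic binary signature-matrix scheme it improves on can be sketched as follows. The residual definitions, fault names and threshold are hypothetical:

```python
# Which residuals each fault is expected to trigger (binary signatures).
# Fault names and patterns are illustrative, not from the paper.
SIGNATURES = {
    "compressor_fault": (1, 0, 1),
    "stack_flooding":   (0, 1, 1),
    "sensor_bias":      (1, 1, 0),
}

def isolate(residuals, threshold=0.1):
    """Threshold residuals into a binary pattern and match it against
    the signature matrix. The paper's method grades each residual by
    its relative fault sensitivity instead of this 0/1 test, which is
    what lets it separate faults with identical binary signatures."""
    pattern = tuple(int(abs(r) > threshold) for r in residuals)
    return [fault for fault, sig in SIGNATURES.items() if sig == pattern]

print(isolate([0.5, 0.02, 0.3]))   # -> ['compressor_fault']
```

When two faults share a binary signature, this scheme cannot distinguish them; replacing the 0/1 entries with relative sensitivities restores isolability, which is the point the abstract makes.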

  13. Development of a fault test experimental facility model using Matlab

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez; Moraes, Davi Almeida, E-mail: martinez@ipen.br, E-mail: dmoraes@dk8.com.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The Fault Test Experimental Facility was developed to simulate a PWR nuclear power plant and is instrumented with temperature, level and pressure sensors. The Fault Test Experimental Facility can be operated to generate normal and fault data; faults can be introduced with initially small magnitudes that increase gradually. This work presents the Fault Test Experimental Facility model developed using the Matlab GUIDE (Graphical User Interface Development Environment) toolbox, which consists of a set of functions designed to create interfaces in an easy and fast way. The system model is based on the mass and energy inventory balance equations. Physical as well as operational aspects are taken into consideration. The interface layout looks like a process flowchart and the user can set the input variables. Besides the normal operation conditions, there is the possibility of choosing a faulty variable from a list. The program also allows the user to set the noise level for the input variables. Using the model, data were generated for different operational conditions, both normal and faulty, with different noise levels added to the input variables. Data generated by the model will be compared with Fault Test Experimental Facility data. The Fault Test Experimental Facility theoretical model results will be used for the development of a Monitoring and Fault Detection System. (author)
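
    The Matlab model itself is not listed in the abstract; the sketch below shows, in Python, the same pattern of a balance-equation model with a gradually growing fault and adjustable sensor noise. All coefficients and the leak-fault form are illustrative assumptions, not the facility's equations:

```python
import random

def simulate_tank(steps=200, inflow=2.0, outflow_k=0.1,
                  fault_step=None, fault_ramp=0.01, noise=0.0, seed=0):
    """Single-tank mass balance dV/dt = q_in - k*V, Euler-stepped.

    An optional leak fault starts small at `fault_step` and grows
    linearly (magnitude increasing gradually, as in the facility);
    Gaussian noise emulates sensor error on the recorded level."""
    rng = random.Random(seed)
    v, levels = 0.0, []
    for t in range(steps):
        leak = 0.0
        if fault_step is not None and t >= fault_step:
            leak = fault_ramp * (t - fault_step)
        v += inflow - outflow_k * v - leak
        levels.append(v + rng.gauss(0.0, noise))
    return levels

normal = simulate_tank()
faulty = simulate_tank(fault_step=100)
```

Comparing the two traces gives exactly the kind of labelled normal/fault data set, at chosen noise levels, that the abstract says is needed to develop a monitoring and fault detection system.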

  14. Development of a fault test experimental facility model using Matlab

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Moraes, Davi Almeida

    2015-01-01

    The Fault Test Experimental Facility was developed to simulate a PWR nuclear power plant and is instrumented with temperature, level and pressure sensors. The Fault Test Experimental Facility can be operated to generate normal and fault data; faults can be introduced with initially small magnitudes that increase gradually. This work presents the Fault Test Experimental Facility model developed using the Matlab GUIDE (Graphical User Interface Development Environment) toolbox, which consists of a set of functions designed to create interfaces in an easy and fast way. The system model is based on the mass and energy inventory balance equations. Physical as well as operational aspects are taken into consideration. The interface layout looks like a process flowchart and the user can set the input variables. Besides the normal operation conditions, there is the possibility of choosing a faulty variable from a list. The program also allows the user to set the noise level for the input variables. Using the model, data were generated for different operational conditions, both normal and faulty, with different noise levels added to the input variables. Data generated by the model will be compared with Fault Test Experimental Facility data. The Fault Test Experimental Facility theoretical model results will be used for the development of a Monitoring and Fault Detection System. (author)

  15. Model-Based Methods for Fault Diagnosis: Some Guide-Lines

    DEFF Research Database (Denmark)

    Patton, R.J.; Chen, J.; Nielsen, S.B.

    1995-01-01

    This paper provides a review of model-based fault diagnosis techniques. Starting from basic principles, the properties...

  16. A Lateral Tensile Fracturing Model for Listric Fault

    Science.gov (United States)

    Qiu, Z.

    2007-12-01

    The new discovery of a major seismic fault of the great 1976 Tangshan earthquake suggests a lateral tensile fracturing process at the seismic source. The fault has a listric shape but cannot be explained with the prevailing model of listric faulting. A double-couple of forces without moment is demonstrated to be applicable for simulating the source mechanism. Based on fracture mechanics, laboratory experiments and numerical simulations, the model argues against the assumption of stick-slip on an existing fault as the cause of the earthquake, but is not in conflict with seismological observations. Global statistics of CMT solutions of great earthquakes lend significant support to the idea that lateral tensile fracturing might account for not only the Tangshan earthquake but also others.

  17. Fuzzy delay model based fault simulator for crosstalk delay fault test ...

    Indian Academy of Sciences (India)

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design trends move towards nanometer technologies, more new parameters affect the delay of a component. Fuzzy delay models are ideal for modelling the uncertainty found in the design and manufacturing steps.
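
    A fuzzy delay can be represented as a triangular number (lower, mode, upper) and propagated along a path with interval-style arithmetic. The sketch below is a generic illustration of the idea, not the paper's simulator: the gate and crosstalk values are made up, and the componentwise max is a common conservative approximation of the fuzzy max.

```python
def tri_add(a, b):
    """Add two triangular fuzzy delays (lower, mode, upper)."""
    return tuple(x + y for x, y in zip(a, b))

def tri_max(a, b):
    """Componentwise max: a conservative approximation of the fuzzy max
    used where two signal paths converge."""
    return tuple(max(x, y) for x, y in zip(a, b))

# Hypothetical path: two gates in series; crosstalk on the victim net
# adds an extra fuzzy delay term.
gate1 = (1.0, 1.2, 1.5)
gate2 = (0.8, 1.0, 1.3)
crosstalk = (0.0, 0.3, 0.6)
victim_path = tri_add(tri_add(gate1, gate2), crosstalk)
```

    A delay fault test would then compare the upper bound of `victim_path` against the clock period instead of a single nominal delay.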

  18. Modeling Fluid Flow in Faulted Basins

    Directory of Open Access Journals (Sweden)

    Faille I.

    2014-07-01

    This paper presents a basin simulator designed to better take faults into account, either as conduits or as barriers to fluid flow. It computes hydrocarbon generation, fluid flow and heat transfer on the 4D (space and time) geometry obtained by 3D volume restoration. Contrary to classical basin simulators, this calculator requires neither a structured mesh based on vertical pillars nor a multi-block structure associated with the fault network. The mesh follows the sediments during the evolution of the basin: it deforms continuously with respect to time to account for sedimentation, erosion, compaction and kinematic displacements. The simulation domain is structured in layers in order to handle the corresponding heterogeneities properly and to follow the sedimentation processes (thickening of the layers). In each layer, the mesh is unstructured: it may include several types of cells such as tetrahedra, hexahedra, pyramids and prisms. However, a mesh composed mainly of hexahedra is preferred, as they are well suited to the layered structure of the basin. Faults are handled as internal boundaries across which the mesh is non-matching. Different models are proposed for fault behaviour, such as an impervious fault, flow across the fault, or a conductive fault. The calculator is based on a cell-centred Finite Volume discretisation, which ensures conservation of physical quantities (mass of fluid, heat) at the discrete level and which accounts properly for heterogeneities. The numerical scheme handles the non-matching meshes and guarantees appropriate connection of cells across faults. Results on a synthetic basin demonstrate the capabilities of this new simulator.
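
    The "fault as internal boundary" idea can be illustrated with a minimal one-dimensional cell-centred finite-volume solver in which the fault is a face whose transmissibility is scaled down (sealing) or up (conductive). This is a toy sketch, not the basin simulator described above; the geometry, units and Jacobi iteration are simplifications.

```python
def steady_pressure(n, k=1.0, fault_cell=None, fault_mult=1e-3,
                    p_left=1.0, p_right=0.0, iters=20000):
    """Steady 1D incompressible flow by cell-centred finite volumes.

    The fault is an internal face whose transmissibility is scaled by
    `fault_mult`: ~0 for a sealing fault, >1 for a conductive one.
    Solved with plain Jacobi iteration for simplicity.
    """
    trans = [k] * (n + 1)                 # face transmissibilities
    if fault_cell is not None:
        trans[fault_cell] *= fault_mult   # fault as internal boundary
    p = [0.5 * (p_left + p_right)] * n
    for _ in range(iters):
        new = p[:]
        for i in range(n):
            tl, tr = trans[i], trans[i + 1]
            left = p_left if i == 0 else p[i - 1]
            right = p_right if i == n - 1 else p[i + 1]
            new[i] = (tl * left + tr * right) / (tl + tr)  # flux balance
        p = new
    return p

p_open = steady_pressure(10)                 # no fault: linear profile
p_sealed = steady_pressure(10, fault_cell=5)  # near-sealing fault mid-domain
```

    With the near-sealing fault, almost the entire pressure drop concentrates at the fault face, which is the barrier behaviour the simulator models; `fault_mult > 1` would instead make the fault a preferential conduit.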

  19. Verification of Fault Tree Models with RBDGG Methodology

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2010-01-01

    Currently, fault tree analysis is widely used in the field of probabilistic safety assessment (PSA) of nuclear power plants (NPPs). To guarantee the correctness of fault tree models, which are usually constructed manually by analysts, review by other analysts is widely used for verification. Recently, an extension of the reliability block diagram named RBDGG (reliability block diagram with general gates) was developed. The advantage of the RBDGG methodology is that the structure of an RBDGG model is very similar to the actual structure of the analyzed system; modeling a system for reliability and unavailability analysis therefore becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that of the RGGG (Reliability Graph with General Gates) methodology. The difference is that the RBDGG methodology focuses on block failures, while the RGGG methodology focuses on connection-line failures. It is also known that an RGGG model can be converted into an RBDGG model and vice versa. In this paper, a new method for verifying constructed fault tree models using the RBDGG methodology is proposed and demonstrated.
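
    Classical reliability block diagrams, the structure that RBDGG generalizes, reduce to two combinators: blocks in series fail the system if any block fails, while redundant (parallel) blocks fail it only if all fail. A minimal sketch, assuming independent block unavailabilities (RBDGG's general gates are more expressive than this):

```python
def series(*unavail):
    """System of blocks in series: it fails if ANY block fails."""
    ok = 1.0
    for q in unavail:
        ok *= (1.0 - q)
    return 1.0 - ok

def parallel(*unavail):
    """Redundant (parallel) blocks: the system fails only if ALL fail."""
    q_all = 1.0
    for q in unavail:
        q_all *= q
    return q_all

# Hypothetical train: two redundant pumps in series with a single valve.
q_train = series(parallel(0.01, 0.01), 0.001)
```

    The correspondence to fault trees is direct: `series` is an OR gate on failures and `parallel` is an AND gate, which is why block-diagram and fault-tree models of the same system can be converted into one another.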

  20. LQCD workflow execution framework: Models, provenance and fault-tolerance

    International Nuclear Information System (INIS)

    Piccoli, Luciano; Simone, James N; Kowalkowlski, James B; Dubey, Abhishek

    2010-01-01

    Large computing clusters used for scientific processing suffer from systemic failures when operated over long continuous periods for executing workflows. Diagnosing job problems and faults leading to eventual failures in this complex environment is difficult, specifically when the success of an entire workflow might be affected by a single job failure. In this paper, we introduce a model-based, hierarchical, reliable execution framework that encompasses workflow specification, data provenance, execution tracking and online monitoring of each workflow task, also referred to as a participant. The sequence of participants is described in an abstract parameterized view, which is translated into a concrete data-dependency-based sequence of participants with defined arguments. As participants belonging to a workflow are mapped onto machines and executed, periodic and on-demand monitoring of vital health parameters on allocated nodes is enabled according to pre-specified rules. These rules specify conditions that must be true pre-execution, during execution and post-execution. Monitoring information for each participant is propagated upwards through the reflex and healing architecture, which consists of a hierarchical network of decentralized fault management entities called reflex engines. They are instantiated as state machines or timed automata that change state and initiate reflexive mitigation action(s) upon the occurrence of certain faults. We describe how this cluster reliability framework is combined with the workflow execution framework using formal rules and actions specified within a structure of first-order predicate logic, enabling a dynamic management design that reduces manual administrative workload and increases cluster productivity.
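
    A reflex engine of the kind described, a state machine that changes state on monitored events and initiates a mitigation action on certain faults, can be sketched minimally. The states, events and action names below are hypothetical, not taken from the LQCD framework:

```python
class ReflexEngine:
    """Minimal fault-management state machine (hypothetical rules).

    Monitoring events drive state transitions; entering the faulted
    state triggers a reflexive mitigation action.
    """
    def __init__(self):
        self.state = "HEALTHY"
        self.actions = []

    def observe(self, event):
        if self.state == "HEALTHY" and event == "high_load":
            self.state = "DEGRADED"
        elif self.state == "DEGRADED" and event == "heartbeat_missed":
            self.state = "FAULTED"
            self.actions.append("restart_participant")  # reflexive mitigation
        elif event == "recovered":
            self.state = "HEALTHY"
```

    In the paper's hierarchy, many such engines run per node and per workflow, and escalate events they cannot mitigate to the engine above them.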

  1. Study on seismic hazard assessment of large active fault systems. Evolution of fault systems and associated geomorphic structures: fault model test and field survey

    International Nuclear Information System (INIS)

    Ueta, Keichi; Inoue, Daiei; Miyakoshi, Katsuyoshi; Miyagawa, Kimio; Miura, Daisuke

    2003-01-01

    Sandbox experiments and field surveys were performed to investigate fault system evolution and fault-related deformation of the ground surface, the Quaternary deposits and rocks. The results are summarized below. 1) In the case of strike-slip faulting, the basic fault sequence runs from early en echelon faults and pressure ridges to a linear trough. The fault systems associated with the 2000 western Tottori earthquake show the en echelon pattern that characterizes the early stage of wrench tectonics; therefore, no thoroughgoing surface faulting was found above the rupture as defined by the main shock and aftershocks. 2) Low-angle and high-angle reverse faults commonly migrate basinward with time. With increasing normal fault displacement in bedrock, a normal fault develops within the range after a reverse fault has formed along the range front. 3) The horizontal distance of the surface rupture from the bedrock fault, normalized by the height of the Quaternary deposits, agrees well with that of the model tests. 4) An upward-widening damage zone, where secondary fractures develop, forms in the hanging wall side of the high-angle reverse fault at the Kamioka mine. (author)

  2. Alternative model of thrust-fault propagation

    Science.gov (United States)

    Eisenstadt, Gloria; de Paor, Declan G.

    1987-07-01

    A widely accepted explanation for the geometry of thrust faults is that initial failures occur on deeply buried planes of weak rock and that thrust faults propagate toward the surface along a staircase trajectory. We propose an alternative model that applies Gretener's beam-failure mechanism to a multilayered sequence. Invoking compatibility conditions, which demand that a thrust propagate both upsection and downsection, we suggest that ramps form first, at shallow levels, and are subsequently connected by flat faults. This hypothesis also explains the formation of many minor structures associated with thrusts, such as backthrusts, wedge structures, pop-ups, and duplexes, and provides a unified conceptual framework in which to evaluate field observations.

  3. Modeling fault rupture hazard for the proposed repository at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Coppersmith, K.J.; Youngs, R.R.

    1992-01-01

    In this paper, as part of the Electric Power Research Institute's High Level Waste program, the authors have developed a preliminary probabilistic model for assessing the hazard of fault rupture to the proposed high-level waste repository at Yucca Mountain. The model is composed of two parts: the earthquake occurrence model, which describes the three-dimensional geometry of earthquake sources and the earthquake recurrence characteristics for all sources in the site vicinity; and the rupture model, which describes the probability of coseismic fault rupture of various lengths and amounts of displacement within the repository horizon 350 m below the surface. The latter uses empirical data from normal-faulting earthquakes to relate the rupture dimensions and fault displacement amounts to the magnitude of the earthquake. Using a simulation procedure, we allow for earthquake occurrence on all of the earthquake sources in the site vicinity, model the location of and displacement due to primary faults, and model the occurrence of secondary faulting in conjunction with primary faulting.
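
    The simulation procedure can be illustrated with a toy Monte Carlo: sample magnitudes from a truncated Gutenberg-Richter distribution, convert them to rupture lengths with a log-linear scaling, and count how often a randomly placed rupture overlaps the site. All coefficients below (the scaling constants, the 20 km fault, the site half-width) are placeholders, not the calibrated values of the EPRI model:

```python
import random, math

def simulate_rupture_hazard(n_sims=100000, m_min=5.0, m_max=7.5, b=1.0,
                            site_halfwidth_km=0.5, seed=1):
    """Monte Carlo sketch of fault-rupture hazard (illustrative numbers).

    Magnitudes follow a truncated Gutenberg-Richter law; rupture length
    uses a generic log-linear scaling (placeholder coefficients); the
    site is 'hit' when a randomly placed rupture span covers it.
    """
    rng = random.Random(seed)
    hits = 0
    beta = b * math.log(10.0)
    norm = 1.0 - math.exp(-beta * (m_max - m_min))
    for _ in range(n_sims):
        # inverse-CDF sample of the truncated exponential magnitude law
        m = m_min - math.log(1.0 - rng.random() * norm) / beta
        length_km = 10.0 ** (-2.5 + 0.6 * m)   # placeholder scaling law
        fault_km = 20.0                        # hypothetical fault length
        start = rng.uniform(0.0, max(fault_km - length_km, 0.0))
        site = fault_km / 2.0                  # site at mid-fault
        if start - site_halfwidth_km <= site <= start + length_km + site_halfwidth_km:
            hits += 1
    return hits / n_sims
```

    The returned fraction, combined with a recurrence rate, would give an annual probability of rupture through the site; the real model additionally samples displacement and secondary faulting.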

  4. Comparing Two Different Approaches to the Modeling of the Common Cause Failures in Fault Trees

    International Nuclear Information System (INIS)

    Vukovic, I.; Mikulicic, V.; Vrbanic, I.

    2002-01-01

    The potential for common cause failures in systems that perform critical functions has been recognized as a very important contributor to the risk associated with operation of nuclear power plants. Consequently, modeling of common cause failures (CCF) in fault trees has become one of the essential elements in any probabilistic safety assessment (PSA). Detailed and realistic representation of CCF potential in a fault tree structure is sometimes a very challenging task. This is especially so in cases where a common cause group involves more than two components. During the last ten years the difficulties associated with this kind of modeling have been overcome to some degree by the development of integral PSA tools with high capabilities. Some of them allow for the definition of CCF groups and their automated expansion in the process of Boolean resolution and generation of minimal cutsets. On the other hand, in PSA models developed and run with more traditional tools, CCF potential had to be modeled in the fault trees explicitly. With explicit CCF modeling, fault trees can grow very large, especially when they involve CCF groups with three or more members, which can become an issue for the management of fault trees and basic events in traditional non-integral PSA models. For these reasons various simplifications had to be made. Speaking in terms of an overall PSA model, there are also other issues that need to be considered, such as maintainability and accessibility of the model. In this paper a comparison is made between the two approaches to CCF modeling. The analysis is based on a full-scope Level 1 PSA model for internal initiating events that was originally developed with a traditional PSA tool and later transferred to a new-generation PSA tool with automated CCF modeling capabilities. Related aspects and issues mentioned above are discussed in the paper. (author)
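
    A common parametric treatment behind explicit CCF modeling is the beta-factor model: a fraction beta of each component's unavailability is attributed to a common cause that fails every member of the group at once, and the rest is independent. A minimal sketch for a two-train system (illustrative numbers only; multi-component groups use richer models such as MGL or alpha factors):

```python
def two_train_unavailability(q_total, beta):
    """Beta-factor CCF sketch for a two-train (both-must-fail) system.

    A fraction `beta` of each train's unavailability is a common cause
    basic event that fails both trains together.
    """
    q_ind = (1.0 - beta) * q_total   # independent part, per train
    q_ccf = beta * q_total           # common cause part, fails the group
    return q_ind ** 2 + q_ccf

# With illustrative numbers, the CCF term dominates the double failure:
q_sys = two_train_unavailability(1e-3, 0.1)
```

    This is exactly the structure an explicit fault tree encodes: an AND gate over the trains' independent events, OR-ed with a single CCF basic event; the automated tools generate that expansion instead of the analyst.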

  5. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    Science.gov (United States)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning (C&W) system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  6. Fault Management Architectures and the Challenges of Providing Software Assurance

    Science.gov (United States)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    The satellite systems Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed these issues in terms of V&V for a representative set of architectures. NASA's IV&V is funded by NASA's Software Assurance Research Program (SARP) in partnership with NASA's Jet Propulsion Laboratory (JPL) to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook, providing dissemination across NASA, other agencies and the satellite community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the

  7. Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan

    Directory of Open Access Journals (Sweden)

    Yu-Wen Chang

    2017-01-01

    Given the probability distribution function relating the recurrence interval to the occurrence time of the previous event on a fault, a time-dependent model of a particular fault for seismic hazard assessment was developed that takes into account the cyclic rupture characteristics of the active fault during a particular lifetime up to the present time. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source. It serves as a reference for developing a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe the fault's characteristic behavior. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain the annual total lifetime hazard curves. The effects of time-dependent and time-independent fault models [e.g., Brownian passage time (BPT) and Poisson, respectively] on the hazard calculations are also discussed. The proposed fault model shows that the seismic demands of near-fault areas are lower than the current hazard estimates when the time-dependent model is used on those faults, particularly where the elapsed time since the last event on a fault (such as the Chelungpu fault) is short.
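
    The BPT-versus-Poisson comparison can be sketched numerically: the conditional probability of rupture in the next ΔT years, given quiescence for T years, is (F(T+ΔT) − F(T))/(1 − F(T)) for the BPT cumulative distribution F, while the Poisson alternative depends only on the mean recurrence interval. The parameter values below are illustrative, not the Taiwan fault data:

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density with mean `mu`
    and aperiodicity `alpha`."""
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
           math.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

def conditional_prob(mu, alpha, elapsed, window, dt=0.01):
    """P(event in next `window` years | quiet for `elapsed` years),
    with the CDF computed by midpoint-rule integration."""
    def cdf(T):
        n = int(T / dt)
        return sum(bpt_pdf((i + 0.5) * dt, mu, alpha) for i in range(n)) * dt
    f_t = cdf(elapsed)
    return (cdf(elapsed + window) - f_t) / (1.0 - f_t)

def poisson_prob(mu, window):
    """Time-independent comparison: constant rate 1/mu."""
    return 1.0 - math.exp(-window / mu)
```

    Shortly after the last rupture the BPT hazard is far below the Poisson value, which is the effect the abstract describes for faults such as the Chelungpu fault; late in the cycle the ordering reverses.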

  8. Systematic evaluation of fault trees using real-time model checker UPPAAL

    International Nuclear Information System (INIS)

    Cha, Sungdeok; Son, Hanseong; Yoo, Junbeom; Jee, Eunkyung; Seong, Poong Hyun

    2003-01-01

    Fault tree analysis, the most widely used safety analysis technique in industry, is often applied manually. Although techniques such as cutset analysis or probabilistic analysis can be applied to a fault tree to derive further insights, they are inadequate for locating flaws when failure modes in fault tree nodes are incorrectly identified or when causal relationships among failure modes are inaccurately specified. In this paper, we demonstrate that model checking is a powerful technique that can formally validate the accuracy of fault trees. We used the real-time model checker UPPAAL because the system used as the case study, the nuclear power emergency shutdown software named Wolsong SDS2, has real-time requirements. By translating functional requirements written in SCR-style tabular notation into timed automata, two types of properties were verified: (1) whether the failure mode described in a fault tree node is consistent with the system's behavioral model; and (2) whether or not a fault tree node has been accurately decomposed. A group of domain engineers with detailed technical knowledge of Wolsong SDS2 and safety analysis techniques developed the fault tree used in the case study. Nevertheless, the model checking technique detected subtle ambiguities present in the fault tree.

  9. Determination of the relationship between major fault and zinc mineralization using fractal modeling in the Behabad fault zone, central Iran

    Science.gov (United States)

    Adib, Ahmad; Afzal, Peyman; Mirzaei Ilani, Shapour; Aliyari, Farhang

    2017-10-01

    The aim of this study is to determine the relationship between zinc mineralization and a major fault in the Behabad area, central Iran, using the Concentration-Distance to Major Fault (C-DMF), Area of Mineralized Zone-Distance to Major Fault (AMZ-DMF), and Concentration-Area (C-A) fractal models to classify Zn deposits/mines according to their distance from the Behabad fault. Application of the C-DMF and AMZ-DMF models to the classification of Zn mineralization in the Behabad fault zone reveals that the main Zn deposits correlate well with the major fault in the area. The distance from the known zinc deposits/mines with Zn values higher than 29% and mineralized-zone areas of more than 900 m² to the major fault is less than 1 km, which shows a positive correlation between Zn mineralization and the structural zone. As a result, the AMZ-DMF and C-DMF fractal models can be utilized for the delineation and recognition of different mineralized zones in different types of magmatic and hydrothermal deposits.
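
    Fractal models such as C-DMF rest on power-law segments, straight lines in log-log space, fitted to concentration versus distance-to-fault data; breaks in slope separate populations of deposits. A minimal least-squares slope estimator in log-log space (a generic sketch, not the paper's calculation):

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope in log-log space: the power-law exponent of
    y = c * x**slope, i.e. the fractal-dimension proxy for one segment."""
    lx = [math.log10(x) for x in xs]
    ly = [math.log10(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic check: data on a perfect power law y = x**-2 has slope -2.
slope = loglog_slope([1.0, 2.0, 4.0, 8.0], [1.0, 0.25, 0.0625, 0.015625])
```

    In a C-DMF analysis the slope is fitted piecewise, and the distance at which the slope breaks (about 1 km here) is read off as the threshold separating fault-controlled from background mineralization.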

  10. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, a digital protection system (DPS) has multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT in reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.
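
    The role of fault coverage can be sketched with a simple unavailability split: faults caught by the FTT are announced and repaired quickly, while undetected faults stay latent until a periodic test, for half a test interval on average. The formula and numbers below are a generic PSA-style approximation, not the paper's integrated-coverage model:

```python
def unavailability(lam, mttr, coverage, test_interval):
    """Mean unavailability sketch of a repairable digital component.

    Detected faults (fraction `coverage`) cause downtime `mttr`;
    undetected faults remain latent for `test_interval / 2` on average.
    """
    q_detected = coverage * lam * mttr
    q_latent = (1.0 - coverage) * lam * test_interval / 2.0
    return q_detected + q_latent

# Illustrative numbers: failure rate 1e-5/h, 8 h repair, yearly test.
q_90 = unavailability(1e-5, 8.0, 0.90, 8760.0)
q_99 = unavailability(1e-5, 8.0, 0.99, 8760.0)
```

    Because the latent term dominates, even modest coverage improvements cut unavailability substantially, which is why coverage (and, in the paper, the whole detection-to-fail-safe process) matters so much in the reliability model.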

  11. Guidelines for system modeling: fault tree analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related content to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of systems modeled by Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors and outages for testing and maintenance. This document identifies and describes the definitions, the general procedures of FTA, and the essential and basic guidelines for revising fault trees. Accordingly, these guidelines will be able to guide the FTA to the level of capability category II of the ASME PRA standard.

  12. Guidelines for system modeling: fault tree analysis

    International Nuclear Information System (INIS)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related content to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of systems modeled by Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors and outages for testing and maintenance. This document identifies and describes the definitions, the general procedures of FTA, and the essential and basic guidelines for revising fault trees. Accordingly, these guidelines will be able to guide the FTA to the level of capability category II of the ASME PRA standard.
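
    The deductive AND/OR failure logic described in these guidelines can be evaluated directly for point-estimate probabilities, assuming independent basic events that each appear only once (shared events and CCF groups need minimal-cutset treatment instead of this naive recursion):

```python
def ft_prob(node):
    """Point-estimate top-event probability of a fault tree.

    A node is either a basic-event probability (float) or a tuple
    (gate, children) with gate in {"AND", "OR"}; events are assumed
    independent and non-repeated.
    """
    if isinstance(node, float):
        return node
    gate, children = node
    probs = [ft_prob(c) for c in children]
    p = 1.0
    if gate == "AND":
        for q in probs:
            p *= q
        return p
    for q in probs:            # OR gate: 1 - product of complements
        p *= (1.0 - q)
    return 1.0 - p

# Toy tree: top fails on a single failure (0.1) OR a double failure.
top = ft_prob(("OR", [0.1, ("AND", [0.2, 0.5])]))
```

    Real PSA tools work on the same logical structure but first reduce it to minimal cutsets, precisely so that repeated events and CCF basic events are counted once.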

  13. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of components and different types of flows propagating through them. Each component has a function table that describes its input-output relations. For components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented.
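
    The trace-back idea, expanding an undesired output deviation through each component's input-output relations until basic events are reached, can be sketched as a recursive walk over a cause table. The mini-model below is a hypothetical fragment of a fire sprinkler system, not the paper's function-table formalism:

```python
# Hypothetical cause table: each deviation maps to the deviations or
# basic failures that can produce it (a stand-in for function tables).
CAUSES = {
    "no_water_at_nozzle": ["valve_stuck_closed", "pump_no_output"],
    "pump_no_output": ["pump_mechanical_failure", "loss_of_power"],
}

def build_tree(event):
    """OR-gate trace-back: expand a deviation into its causes until
    basic events (no entry in the cause table) are reached."""
    causes = CAUSES.get(event)
    if not causes:                       # basic event: stop tracing
        return event
    return (event, "OR", [build_tree(c) for c in causes])

tree = build_tree("no_water_at_nozzle")
```

    The paper's method additionally consults state transition tables and flow types, which is what lets it emit AND gates (e.g. for redundant trains) rather than the purely OR-shaped expansion shown here.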

  14. Neotectonics of Asia: Thin-shell finite-element models with faults

    Science.gov (United States)

    Kong, Xianghong; Bird, Peter

    1994-01-01

    As India pushed into and beneath the south margin of Asia in Cenozoic time, it added a great volume of crust, which may have been (1) emplaced locally beneath Tibet, (2) distributed as regional crustal thickening of Asia, (3) converted to mantle eclogite by high-pressure metamorphism, or (4) extruded eastward to increase the area of Asia. The amount of eastward extrusion is especially controversial: plane-stress computer models of finite strain in a continuum lithosphere show minimal escape, while laboratory and theoretical plane-strain models of finite strain in a faulted lithosphere show escape as the dominant mode. We suggest computing the present (or neo)tectonics by use of the known fault network and available data on fault activity, geodesy, and stress to select the best model. We apply a new thin-shell method which can represent a faulted lithosphere of realistic rheology on a sphere, providing predictions of present velocities, fault slip rates, and stresses for various trial rheologies and boundary conditions. To minimize artificial boundaries, the models include all of Asia east of 40 deg E and span 100 deg on the globe. The primary unknowns are the friction coefficient of faults within Asia and the amounts of shear traction applied to Asia in the Himalayan and oceanic subduction zones at its margins. Data on Quaternary fault activity prove to be most useful in rating the models. Best results are obtained with a very low fault friction of 0.085. This major heterogeneity shows that unfaulted continuum models cannot be expected to give accurate simulations of the orogeny. But even with such weak faults, only a fraction of the internal deformation is expressed as fault slip; this means that rigid microplate models cannot represent the kinematics either. A universal feature of the better models is that eastern China and southeast Asia flow rapidly eastward with respect to Siberia. The rate of escape is very sensitive to the level of shear traction in the

  15. High level organizing principles for display of systems fault information for commercial flight crews

    Science.gov (United States)

    Rogers, William H.; Schutte, Paul C.

    1993-01-01

    Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.

  16. A Structural Model Decomposition Framework for Systems Health Management

    Science.gov (United States)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  17. A structural model decomposition framework for systems health management

    Science.gov (United States)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  18. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...
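
    At its core, model-based FDI compares measurements against model predictions and evaluates the residual against a threshold. A minimal sketch with a fixed threshold (the book's treatment covers systematic threshold determination and stochastic residual evaluation, which this toy omits):

```python
def detect_faults(measured, predicted, threshold):
    """Model-based FDI in one line of logic: residual = measurement minus
    model prediction; flag samples whose |residual| exceeds the threshold."""
    return [i for i, (y, y_hat) in enumerate(zip(measured, predicted))
            if abs(y - y_hat) > threshold]

# Sample 2 carries a fault signature; the rest stays below the threshold.
alarms = detect_faults([1.0, 1.1, 2.5, 1.0], [1.0, 1.0, 1.0, 1.0], 0.5)
```

    In practice the predictions come from an observer or parity-space scheme designed so that the residual is insensitive to unknown inputs but sensitive to the faults of interest.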

  19. Source characterization and dynamic fault modeling of induced seismicity

    Science.gov (United States)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years, there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which remains elusive at present. Furthermore, an improved understanding of induced-earthquake physics is pivotal to assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region; outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve the spontaneous long-term slip history on a fault segment at all stages of the seismic cycle. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to an event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced-earthquake hazard assessment.

  20. Modeling of a Switched Reluctance Motor under Stator Winding Fault Condition

    DEFF Research Database (Denmark)

    Chen, Hao; Han, G.; Yan, Wei

    2016-01-01

    A new method for modeling a stator winding fault with one shorted coil in a switched reluctance motor (SRM) is presented in this paper. The method is based on an artificial neural network (ANN), incorporated with a simple analytical model in electromagnetic analysis to estimate the flux-linkage characteristics of the SRM under the stator winding fault. The magnetic equivalent circuit method with the ANN is applied to calculate the nonlinear flux-linkage characteristics under the stator winding fault condition. A 12/8 SRM prototype system with a stator winding fault is developed to verify the effectiveness of the proposed method. The results for a stator winding fault with one shorted coil are obtained both from the proposed method and from experimental work on the developed prototype. It is shown that the simulation results are close to the test results.

  1. A way to synchronize models with seismic faults for earthquake forecasting

    DEFF Research Database (Denmark)

    González, Á.; Gómez, J.B.; Vázquez-Prada, M.

    2006-01-01

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual fault. Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the synchronization of the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault models between themselves and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture area of the synthetic earthquakes of this model on other models, the latter become partially synchronized.

  2. Fuzzy delay model based fault simulator for crosstalk delay fault test ...

    Indian Academy of Sciences (India)

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed.

  3. Natural Environment Modeling and Fault-Diagnosis for Automated Agricultural Vehicle

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens

    2008-01-01

    This paper presents results for an automatic navigation system for agricultural vehicles. The system uses stereo vision, inertial sensors and GPS. Special emphasis has been placed on modeling the natural environment in conjunction with a fault-tolerant navigation system. The results are exemplified by an agricultural vehicle following cut grass (swath). It is demonstrated how faults in the system can be detected and diagnosed using state-of-the-art techniques from the fault-tolerant control literature. Results in performing fault diagnosis and fault accommodation are presented using real data.

  4. Detecting Faults By Use Of Hidden Markov Models

    Science.gov (United States)

    Smyth, Padhraic J.

    1995-01-01

    Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete time intervals and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).
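
    The hidden-Markov monitoring idea in this record can be sketched without any dynamics model of the plant. The toy example below is illustrative only: the two-state model, symbol alphabet, and all probabilities are invented, not taken from the NASA method. It runs the standard forward recursion over discretized sensor symptoms and tracks the posterior probability that the system is in the faulty state.

    ```python
    TRANS = {            # P(next_state | state): faults persist, rarely begin
        "normal": {"normal": 0.99, "faulty": 0.01},
        "faulty": {"normal": 0.02, "faulty": 0.98},
    }
    EMIT = {             # P(observed symptom | state)
        "normal": {"ok": 0.95, "alarm": 0.05},
        "faulty": {"ok": 0.20, "alarm": 0.80},
    }

    def fault_posterior(symbols, p_fault0=0.01):
        """Forward algorithm: P(faulty | observations so far) after each sample."""
        belief = {"normal": 1.0 - p_fault0, "faulty": p_fault0}
        history = []
        for sym in symbols:
            # predict: propagate the belief through the transition model
            pred = {s: sum(belief[r] * TRANS[r][s] for r in belief) for s in belief}
            # update: weight by the emission likelihood, then normalize
            upd = {s: pred[s] * EMIT[s][sym] for s in pred}
            z = sum(upd.values())
            belief = {s: upd[s] / z for s in upd}
            history.append(belief["faulty"])
        return history

    post = fault_posterior(["ok", "ok", "alarm", "alarm", "alarm"])
    ```

    A run of "alarm" symbols drives the faulty-state posterior up quickly, while an isolated alarm barely moves it, which is the mechanism by which such a monitor suppresses false alarms.
    
    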

  5. Insights in Fault Flow Behaviour from Onshore Nigeria Petroleum System Modelling

    Directory of Open Access Journals (Sweden)

    Woillez Marie-Noëlle

    2017-09-01

    Faults are complex geological features acting either as a permeability barrier, baffle, or drain to fluid flow in sedimentary basins. Their role can be crucial for overpressure building and hydrocarbon migration; therefore they have to be properly integrated in basin modelling. The ArcTem basin simulator included in the TemisFlow software has been specifically designed to improve the modelling of faulted geological settings and to get a numerical representation of fault zones closer to the geological description. Here we present new developments in the simulator to compute fault properties through time as a function of available geological parameters, for single-phase 2D simulations. We have used this new prototype to model pressure evolution on a siliciclastic 2D section located onshore in the Niger Delta. The section is crossed by several normal growth faults which subdivide the basin into several sedimentary units and appear to be lateral limits of strongly overpressured zones. Faults are also thought to play a crucial role in hydrocarbon migration from the deep source rocks to shallow reservoirs. We automatically compute the Shale Gouge Ratio (SGR) along the fault planes through time, as well as the fault displacement velocity. The fault core permeability is then computed as a function of the SGR, including threshold values to account for shale smear formation. Longitudinal fault fluid flow is enhanced during periods of high fault slip velocity. The method allows us to simulate both along-fault drainage during the basin history and overpressure building at present day. The simulated pressures are, to first order, within the range of the observed pressures we had at our disposal.
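
    The abstract does not spell out the SGR formula, but the standard definition (the shale volume fraction averaged over the stratigraphic interval that has slipped past a point on the fault) is simple to state. The bed geometry below is a hypothetical example, not data from the Niger Delta section.

    ```python
    def shale_gouge_ratio(beds, throw):
        """Shale Gouge Ratio at a point on a fault plane.

        `beds` is a list of (thickness_m, v_shale) tuples describing the
        interval, of total thickness equal to the fault throw, that has
        slipped past the point.  Returns a fraction in [0, 1].
        """
        assert abs(sum(t for t, _ in beds) - throw) < 1e-9, "beds must span the throw"
        return sum(t * v for t, v in beds) / throw

    # hypothetical 60 m throw: 20 m clean sand, 30 m shaly sand, 10 m shale
    sgr = shale_gouge_ratio([(20, 0.05), (30, 0.40), (10, 0.95)], throw=60)
    ```

    A workflow like the one in the paper would then map this value, through empirical thresholds, to a fault core permeability (e.g., treating high SGR as indicating continuous shale smear and hence a seal).
    
    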

  6. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Fault diagnosis in rotating machinery is significant for avoiding serious accidents; thus, an accurate and timely diagnosis method is necessary. With breakthroughs in deep learning algorithms, intelligent methods such as the deep belief network (DBN) and the deep convolutional neural network (DCNN) have been developed and perform satisfactorily in machinery fault diagnosis. However, only a few of these methods properly deal with the noise present in practical situations, and the denoising methods they use demand extensive professional experience. Accordingly, rethinking fault diagnosis methods based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. An integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE), trained in a greedy layer-wise fashion, is applied both to denoise random noise in the raw signals and to represent fault features in fault pattern diagnosis, for both rolling bearing and gearbox faults. Finally, the experimental validation demonstrates that the proposed method has better diagnosis accuracy than the DBN, particularly in the presence of noise, with a superiority of approximately 7% in fault diagnosis accuracy.
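
    As a rough illustration of the denoising-autoencoder principle the recognizer builds on (not the paper's architecture: the layer sizes, noise level, and training schedule below are arbitrary), a single tied-weight denoising autoencoder can be trained in a few lines of NumPy: corrupt the input, reconstruct the clean signal, and descend the mean-squared-error gradient.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # toy "vibration" windows: clean sinusoids of a few different frequencies
    t = np.linspace(0, 1, 32, endpoint=False)
    clean = np.stack([np.sin(2 * np.pi * (5 + k % 3) * t) for k in range(256)])

    W = rng.normal(0.0, 0.1, (32, 16))     # encoder weights; decoder is tied (W.T)
    b, c = np.zeros(16), np.zeros(32)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    losses, lr = [], 0.1
    for _ in range(200):
        noisy = clean + rng.normal(0.0, 0.3, clean.shape)  # corrupt the input
        h = sigmoid(noisy @ W + b)                         # encode
        out = h @ W.T + c                                  # decode (linear output)
        err = out - clean                                  # target is the CLEAN signal
        losses.append(float((err ** 2).mean()))
        # gradients of the MSE loss for the tied-weight layer
        dh = (err @ W) * h * (1.0 - h)
        W -= lr * (noisy.T @ dh + err.T @ h) / len(clean)
        b -= lr * dh.mean(0)
        c -= lr * err.mean(0)
    ```

    Stacking several such layers and fine-tuning with a classifier head is what turns this building block into a fault recognizer.
    
    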

  7. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    Science.gov (United States)

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a dual-redundancy technique, which cannot always determine which channel has failed, while adding a third hardware channel would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, both built via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.
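
    The triplex-channel logic described here (two hardware channels plus the model's analytical channel) amounts to pairwise comparison and a majority vote. A minimal sketch of that voting, with a hypothetical tolerance and channel names (not the paper's implementation), might look like:

    ```python
    def diagnose(sensor_a, sensor_b, model_value, tol):
        """Compare the dual hardware channels against the analytical (model)
        channel and vote out the odd one.  Returns (fault, substitute_value)."""
        ab = abs(sensor_a - sensor_b) <= tol
        am = abs(sensor_a - model_value) <= tol
        bm = abs(sensor_b - model_value) <= tol
        if ab and am and bm:
            return None, (sensor_a + sensor_b) / 2        # all channels agree
        if bm and not ab and not am:
            return "sensor_a", (sensor_b + model_value) / 2
        if am and not ab and not bm:
            return "sensor_b", (sensor_a + model_value) / 2
        if ab and not am and not bm:
            return "model", (sensor_a + sensor_b) / 2     # model itself drifted
        return "undetermined", model_value                # multiple disagreements
    ```

    Returning a substitute value from the two surviving channels is the "redundancy recovery" step: the control system keeps running on a reconstructed measurement after the faulty channel is voted out.
    
    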

  8. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    Directory of Open Access Journals (Sweden)

    Yaodong Xing

    2012-08-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a dual-redundancy technique, which cannot always determine which channel has failed, while adding a third hardware channel would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, both built via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  9. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis.

    Science.gov (United States)

    Castro, Alfonso; Sedano, Andrés A; García, Fco Javier; Villoslada, Eduardo; Villagrá, Víctor A

    2017-12-28

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management that links a fault diagnosis system with business processes for Telefónica's global video service. The main contribution of this paper is the proposal of an extended service management architecture, based on multi-agent systems, able to integrate fault diagnosis with other service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam.

  10. Using Magnetics and Topography to Model Fault Splays of the Hilton Creek Fault System within the Long Valley Caldera

    Science.gov (United States)

    De Cristofaro, J. L.; Polet, J.

    2017-12-01

    The Hilton Creek Fault (HCF) is a range-bounding extensional fault that forms the eastern escarpment of California's Sierra Nevada mountain range, near the town of Mammoth Lakes. The fault is well mapped along its main trace to the south of the Long Valley Caldera (LVC), but the location and nature of its northern terminus are poorly constrained. The fault terminates as a series of left-stepping splays within the LVC, an area of active volcanism that most notably erupted at 760 ka and currently experiences continuous geothermal activity and sporadic earthquake swarms. The timing of the most recent motion on these fault splays is debated, as is the threat posed by this section of the Hilton Creek Fault. The Third Uniform California Earthquake Rupture Forecast (UCERF3) model depicts the HCF as a single strand projecting up to 12 km into the LVC. However, Bailey (1989) and Hill and Montgomery-Brown (2015) have argued against this model, suggesting that extensional faulting within the caldera has been accommodated by the ongoing volcanic uplift and thus the intracaldera section of the HCF has not experienced motion since 760 ka. We intend to map the intracaldera fault splays and model their subsurface characteristics to better assess their rupture history and potential. This will be accomplished using high-resolution topography and subsurface geophysical methods, including ground-based magnetics. Preliminary work was performed using high-precision Nikon Nivo 5.C total stations to generate elevation profiles and a backpack-mounted GEM GS-19 proton precession magnetometer. The initial results reveal a correlation between magnetic anomalies and topography: east-west topographic profiles show terrace-like steps, sub-meter in height, which correlate with changes in the magnetic data. Continued study of the magnetic data using Oasis Montaj 3D modeling software is planned. Additionally, we intend to prepare a high-resolution terrain model using structure-from-motion techniques.

  11. Open-Switch Fault Diagnosis and Fault Tolerant for Matrix Converter with Finite Control Set-Model Predictive Control

    DEFF Research Database (Denmark)

    Peng, Tao; Dan, Hanbing; Yang, Jian

    2016-01-01

    To improve the reliability of the matrix converter (MC), a fault diagnosis method to identify a single open-switch fault is proposed in this paper. The fault diagnosis method is based on finite control set-model predictive control (FCS-MPC), which employs a time-discrete model of the MC topology and a cost function to select the best switching state for the next sampling period. The proposed fault diagnosis method is realized by monitoring the load currents and judging the switching state to locate the faulty switch. Compared to conventional modulation strategies such as carrier-based modulation, indirect space vector modulation and optimum Alesina-Venturini, FCS-MPC applies a known and unchanged switching state in each sampling period, which makes it simpler to diagnose the exact location of the open switch in an MC controlled by FCS-MPC.
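
    FCS-MPC's defining step, evaluating every admissible switching state against a cost function and applying the minimizer, can be sketched for a simple RL load. The converter model, parameters, and finite control set below are invented for illustration; they are not the matrix converter model of the paper.

    ```python
    # hypothetical RL load and finite set of three output voltage levels
    R, L, Ts = 1.0, 10e-3, 100e-6           # ohms, henries, seconds
    V_LEVELS = {0: -200.0, 1: 0.0, 2: +200.0}

    def predict(i, v):
        """One-step Euler discretization of L di/dt = v - R i."""
        return i + Ts / L * (v - R * i)

    def choose_state(i_now, i_ref):
        """FCS-MPC step: evaluate every admissible switching state and pick
        the one whose predicted current is closest to the reference (the cost)."""
        return min(V_LEVELS, key=lambda s: abs(predict(i_now, V_LEVELS[s]) - i_ref))
    ```

    Because the controller knows exactly which state it applied, a diagnosis layer can compare the measured current against `predict(...)` for that state; a persistent mismatch tied to particular states points at the open switch.
    
    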

  12. Modeling of the fault-controlled hydrothermal ore-forming systems

    International Nuclear Information System (INIS)

    Pek, A.A.; Malkovsky, V.I.

    1993-07-01

    A necessary precondition for the formation of hydrothermal ore deposits is a strong focusing of hydrothermal flow as fluids move from the fluid source to the site of ore deposition. The spatial distribution of hydrothermal deposits favors the concept that such fluid flow focusing is controlled, for the most part, by regional faults which provide a low-resistance path for hydrothermal solutions. Results of electric analog simulations, analytical solutions, and computer simulations of the fluid flow in a fault-controlled single-pass advective system confirm this concept. The influence of the fluid flow focusing on the heat and mass transfer in a single-pass advective system was investigated for a simplified version of the metamorphic model for the genesis of greenstone-hosted gold deposits. The spatial distribution of ore mineralization predicted by computer simulation is in reasonable agreement with geological observations. Computer simulations of the fault-controlled thermoconvective system revealed a complex pattern of mixing hydrothermal solutions in the model, which also simulates the development of modern hydrothermal systems on the ocean floor. The specific feature of the model considered is the development, under certain conditions, of an intra-fault convective cell that operates essentially independently of the large-scale circulation. These and other results obtained during the study indicate that modeling of natural fault-controlled hydrothermal systems is instructive for the analysis of transport processes in man-made hydrothermal systems that could develop in geologic high-level nuclear waste repositories.

  13. Model Based Fault Detection in a Centrifugal Pump Application

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh

    2006-01-01

    A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. Structural considerations...

  14. Investigation of faulted tunnel models by combined photoelasticity and finite element analysis

    International Nuclear Information System (INIS)

    Ladkany, S.G.; Huang, Yuping

    1994-01-01

    Models of square and circular tunnels with short faults cutting through their surfaces are investigated by photoelasticity. These models, when duplicated by finite element analysis, can adequately predict the stress states of square or circular faulted tunnels. Finite element analysis, using gap elements, may be used to investigate a full-size faulted tunnel system.

  15. Phase response curves for models of earthquake fault dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Franović, Igor, E-mail: franovic@ipb.ac.rs [Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Kostić, Srdjan [Institute for the Development of Water Resources “Jaroslav Černi,” Jaroslava Černog 80, 11226 Belgrade (Serbia); Perc, Matjaž [Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, SI-2000 Maribor (Slovenia); CAMTP—Center for Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor (Slovenia); Klinshov, Vladimir [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); Nekorkin, Vladimir [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); University of Nizhny Novgorod, 23 Prospekt Gagarina, 603950 Nizhny Novgorod (Russian Federation); Kurths, Jürgen [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)

    2016-06-15

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  16. Phase response curves for models of earthquake fault dynamics

    International Nuclear Information System (INIS)

    Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen

    2016-01-01

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  17. Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems

    Science.gov (United States)

    Fitz, Rhonda

    2017-01-01

    As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessments of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification and Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing this repository.

  18. Model-Based Fault Diagnosis in Electric Drive Inverters Using Artificial Neural Network

    National Research Council Canada - National Science Library

    Masrur, Abul; Chen, ZhiHang; Zhang, Baifang; Jia, Hongbin; Murphey, Yi-Lu

    2006-01-01

    A normal model and various faulted models of the inverter-motor combination were developed, and voltage and current signals were generated from those models to train an artificial neural network for fault diagnosis.

  19. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip

    Science.gov (United States)

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-01-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290

  20. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip.

    Science.gov (United States)

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-06-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.

  1. Automatic Fault Characterization via Abnormality-Enhanced Classification

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    2010-12-20

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
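
    The paper's core idea, gating a fault-type classifier on an abnormality measure so that normal behavior is never force-classified into a fault class, can be illustrated on synthetic data. The per-process metrics, the z-score threshold, and the nearest-centroid classifier below are stand-ins chosen for brevity, not the authors' algorithms.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # synthetic per-process metrics (e.g., CPU load, network latency) under
    # normal operation and under two injected fault types
    normal = rng.normal([1.0, 0.5], 0.05, (200, 2))
    cpu_fault = rng.normal([2.0, 0.5], 0.05, (50, 2))
    net_fault = rng.normal([1.0, 1.5], 0.05, (50, 2))

    mu, sigma = normal.mean(0), normal.std(0)
    centroids = {"cpu": cpu_fault.mean(0), "net": net_fault.mean(0)}

    def characterize(sample, z_thresh=4.0):
        """Abnormality-enhanced classification: only samples whose z-score
        marks them abnormal are passed to the fault-type classifier."""
        z = np.abs((sample - mu) / sigma)
        if z.max() < z_thresh:
            return "normal"
        return min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))
    ```

    The abnormality gate is what lifts accuracy: a plain classifier trained only on fault classes would assign every normal sample to some fault type, which is one source of the poor accuracy the paper reports for naive application.
    
    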

  2. Observer and data-driven model based fault detection in Power Plant Coal Mills

    DEFF Research Database (Denmark)

    Fogh Odgaard, Peter; Lin, Bao; Jørgensen, Sten Bay

    2008-01-01

This paper presents and compares model-based and data-driven fault detection approaches for coal mill systems. The first approach detects faults with an optimal unknown input observer developed from a simplified energy balance model. Due to the time-consuming effort in developing a first principles model with motor power as the controlled variable, data-driven methods for fault detection are also investigated. Regression models that represent normal operating conditions (NOCs) are developed with both static and dynamic principal component analysis and partial least squares methods. The residual between process measurement and the NOC model prediction is used for fault detection. A hybrid approach, where a data-driven model is employed to derive an optimal unknown input observer, is also implemented. The three methods are evaluated with case studies on coal mill data, which includes a fault…
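The residual-thresholding scheme behind the NOC models can be sketched as follows; the linear "NOC" model, threshold, and data are illustrative assumptions, not the coal-mill model from the paper:

```python
# Minimal sketch of residual-based fault detection against a
# normal-operating-condition (NOC) model.

def noc_predict(load):
    """Assumed NOC regression: motor power as a linear function of load."""
    return 2.0 * load + 5.0

def detect(samples, threshold=3.0):
    """Flag samples whose residual |measured - predicted| exceeds threshold."""
    alarms = []
    for t, (load, power) in enumerate(samples):
        residual = power - noc_predict(load)
        if abs(residual) > threshold:
            alarms.append(t)
    return alarms

# (load, measured power); the last two points simulate a fault
data = [(10, 25.1), (11, 27.0), (12, 29.2), (13, 36.5), (14, 39.0)]
print(detect(data))
```

In the paper's setting the NOC model comes from PCA or PLS rather than a hand-written line, but the detection logic is the same: a fault manifests as a residual that persistently leaves the band expected under normal operation.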

  3. Dynamics Modeling and Analysis of Local Fault of Rolling Element Bearing

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2015-01-01

Full Text Available This paper presents a nonlinear vibration model of rolling element bearings with 5 degrees of freedom based on Hertz contact theory and relevant bearing knowledge of kinematics and dynamics. The slipping of the balls, the oil film stiffness, and the nonlinear time-varying stiffness of the bearing are taken into consideration in the proposed model. The single-point local fault model of a rolling element bearing is introduced into the nonlinear model with 5 degrees of freedom according to the loss of contact deformation of a ball as it rolls into and out of the local fault location. The functions of spall depth corresponding to defects of different shapes are discussed separately in this paper. The ODE solver in MATLAB is then used to solve the nonlinear vibration model numerically and simulate the vibration response of rolling element bearings with a local fault. In both the time and frequency domains, the simulated signals show behavior and patterns similar to those observed in processed experimental signals of rolling element bearings, which validates the ability of the proposed nonlinear vibration model to generate typical local-fault signals of rolling element bearings for research on effective fault diagnostic algorithms.
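The frequency-domain signatures such simulations aim to reproduce are tied to the standard kinematic defect frequencies of a rolling element bearing; these classical formulas are sketched below with illustrative geometry, not the bearing from the paper:

```python
import math

# Characteristic defect frequencies of a rolling element bearing
# (standard kinematic formulas); parameter values are illustrative.

def bearing_fault_freqs(fr, n, d, D, phi=0.0):
    """fr: shaft speed [Hz], n: number of balls, d: ball diameter,
    D: pitch diameter, phi: contact angle [rad]."""
    r = (d / D) * math.cos(phi)
    return {
        "BPFO": 0.5 * n * fr * (1 - r),            # outer-race defect
        "BPFI": 0.5 * n * fr * (1 + r),            # inner-race defect
        "FTF":  0.5 * fr * (1 - r),                # cage (fundamental train)
        "BSF":  0.5 * (D / d) * fr * (1 - r * r),  # ball spin
    }

freqs = bearing_fault_freqs(fr=30.0, n=9, d=7.9e-3, D=39.0e-3)
for name, f in sorted(freqs.items()):
    print(f"{name}: {f:.1f} Hz")
```

A spectrum of the simulated (or measured) vibration signal showing peaks at one of these frequencies and its harmonics points to the corresponding defect location.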

  4. Workflow Fault Tree Generation Through Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2014-01-01

    We present a framework for the automated generation of fault trees from models of realworld process workflows, expressed in a formalised subset of the popular Business Process Modelling and Notation (BPMN) language. To capture uncertainty and unreliability in workflows, we extend this formalism...

  5. Certain Type Turbofan Engine Whole Vibration Model with Support Looseness Fault and Casing Response Characteristics

    Directory of Open Access Journals (Sweden)

    H. F. Wang

    2014-01-01

Full Text Available Support looseness is a common fault type in aeroengines. Serious looseness faults emerge under larger unbalanced forces, causing excessive vibration and even leading to rubbing faults, so it is important to analyze and recognize looseness faults effectively. In this paper, based on the structural features of a certain type of turbofan engine, a whole rotor-support-casing model for that engine is established. The rotor and casing systems are modeled by the finite element beam method; the support systems are modeled by a lumped-mass model; the support looseness fault model is also introduced. The coupled system response is obtained by a numerical integration method. Based on the casing acceleration signals, the impact characteristics of the symmetrical-stiffness and asymmetric-stiffness models are analyzed, finding that the looseness fault leads to longitudinally asymmetrical characteristics of the acceleration waveform in the time domain and to multiple-frequency characteristics, which is consistent with vibration signals from real trial runs. The asymmetric-stiffness looseness model is thereby verified to be suitable for modeling aeroengine looseness faults.

  6. A Weighted Deep Representation Learning Model for Imbalanced Fault Diagnosis in Cyber-Physical Systems

    Science.gov (United States)

    Guo, Yang; Lin, Wenfang; Yu, Shuyang; Ji, Yang

    2018-01-01

    Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on the traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expertise knowledge; the other is that imbalance pervasively exists among faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-Layer outer LSTMs, with under-sampling policy and weighted cost-sensitive loss function. Experiments are conducted on PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods. PMID:29621131

  7. A Weighted Deep Representation Learning Model for Imbalanced Fault Diagnosis in Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Zhenyu Wu

    2018-04-01

Full Text Available Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on the traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expertise knowledge; the other is that imbalance pervasively exists among faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-layer outer LSTMs, with under-sampling policy and weighted cost-sensitive loss function. Experiments are conducted on PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods.
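The two imbalance countermeasures named above, an under-sampling policy and a weighted cost-sensitive loss, can be sketched in isolation; the weights, sampling ratio, and data are illustrative assumptions, not the wLRCL-D configuration:

```python
import math, random

def undersample(samples, majority_label, keep_ratio, seed=0):
    """Keep all minority samples and a random fraction of the majority class."""
    rng = random.Random(seed)
    return [(x, y) for x, y in samples
            if y != majority_label or rng.random() < keep_ratio]

def weighted_bce(y_true, p_pred, w_pos, w_neg):
    """Cost-sensitive binary cross-entropy: faults (y=1) weighted by w_pos."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        w = w_pos if y == 1 else w_neg
        total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# 8 normal samples, 2 faulty ones: keep the minority class whole
samples = [(i, 0) for i in range(8)] + [(8, 1), (9, 1)]
kept = undersample(samples, majority_label=0, keep_ratio=0.5)

# Misclassifying a fault costs more than misclassifying a normal sample
loss = weighted_bce([1, 0, 0, 0], [0.6, 0.1, 0.2, 0.1], w_pos=4.0, w_neg=1.0)
print(round(loss, 3))
```

In a deep model the weighted loss plays the same role, scaling the gradient contribution of the rare fault class so it is not drowned out by normal samples.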

  8. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi

    2018-02-12

Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a statistical approach. Specifically, a simulation model that mimics the theoretical performance of the inspected PV system is designed. Residuals, the differences between the measured and estimated output data, are used as fault indicators and serve as the input to the Multivariate CUmulative SUM (MCUSUM) algorithm for detecting potential faults. We evaluated the proposed method using data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
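A scalar one-sided CUSUM on residuals illustrates the detection principle behind the multivariate MCUSUM used in the paper; the drift parameter k, threshold h, and residual sequence below are illustrative design choices:

```python
# One-sided CUSUM: accumulates only persistent positive drift in the
# residuals, so isolated noise spikes do not trigger an alarm.

def cusum_alarm(residuals, k=0.5, h=4.0):
    """Return the first index where the cumulative sum exceeds h, else None."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)
        if s > h:
            return i
    return None

# Residuals hover near zero, then a DC-side fault shifts them upward.
residuals = [0.1, -0.2, 0.0, 0.3, 2.0, 2.2, 1.9, 2.1]
print(cusum_alarm(residuals))
```

CUSUM-type charts are well suited to this application because they detect small but sustained shifts in the residual mean faster than a fixed threshold on individual samples.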

  9. Comprehensive Fault Tolerance and Science-Optimal Attitude Planning for Spacecraft Applications

    Science.gov (United States)

    Nasir, Ali

Spacecraft operate in a harsh environment, are costly to launch, and experience unavoidable communication delay and bandwidth constraints. These factors motivate the need for effective onboard mission and fault management. This dissertation presents an integrated framework to optimize science goal achievement while identifying and managing encountered faults. Goal-related tasks are defined by pointing the spacecraft instrumentation toward distant targets of scientific interest. The relative value of science data collection is traded with risk of failures to determine an optimal policy for mission execution. Our major innovation in fault detection and reconfiguration is to incorporate fault information obtained from two types of spacecraft models: one based on the dynamics of the spacecraft and the second based on the internal composition of the spacecraft. For fault reconfiguration, we consider possible changes in both dynamics-based control law configuration and the composition-based switching configuration. We formulate our problem as a stochastic sequential decision problem or Markov Decision Process (MDP). To avoid the computational complexity involved in a fully-integrated MDP, we decompose our problem into multiple MDPs. These MDPs include planning MDPs for different fault scenarios, a fault detection MDP based on a logic-based model of spacecraft component and system functionality, an MDP for resolving conflicts between fault information from the logic-based model and the dynamics-based spacecraft models, and the reconfiguration MDP that generates a policy optimized over the relative importance of the mission objectives versus spacecraft safety. Approximate Dynamic Programming (ADP) methods for the decomposition of the planning and fault detection MDPs are applied. To show the performance of the MDP-based frameworks and ADP methods, a suite of spacecraft attitude planning case studies are described. These case studies are used to analyze the content and
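The science-versus-safety trade-off formulated as an MDP can be sketched with textbook value iteration on a toy two-state problem; the states, rewards, and transition probabilities are invented for illustration and bear no relation to the dissertation's actual models:

```python
# Toy MDP: the spacecraft either collects science (risking a fault)
# or runs a safe diagnostic; value iteration finds the optimal values.

def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-6):
    V = {s: 0.0 for s in states}
    while True:
        delta, newV = 0.0, {}
        for s in states:
            newV[s] = max(
                sum(p * (R[s][a] + gamma * V[s2]) for s2, p in P[s][a].items())
                for a in actions
            )
            delta = max(delta, abs(newV[s] - V[s]))
        V = newV
        if delta < tol:
            return V

states, actions = ["healthy", "faulty"], ["science", "diagnose"]
P = {  # transition probabilities P[s][a] = {next_state: prob}
    "healthy": {"science": {"healthy": 0.9, "faulty": 0.1},
                "diagnose": {"healthy": 1.0}},
    "faulty":  {"science": {"faulty": 1.0},
                "diagnose": {"healthy": 0.8, "faulty": 0.2}},
}
R = {  # immediate reward R[s][a]
    "healthy": {"science": 10.0, "diagnose": 1.0},
    "faulty":  {"science": -5.0, "diagnose": 0.0},
}
V = value_iteration(states, actions, P, R)
print(V["healthy"] > V["faulty"])
```

The decomposition described in the abstract exists precisely because this kind of exact solution does not scale: a fully integrated spacecraft MDP has far too many states, so approximate dynamic programming is applied to the sub-MDPs instead.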

  10. A Real-Time Fault Management Software System for Distributed Environments, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault Management (FM) is critical to mission operations and particularly so for complex instruments – such as those used for aircraft and spacecraft. FM software and...

  11. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    Science.gov (United States)

    Crowell, B.; Melgar, D.

    2017-12-01

The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.

  12. Approximate dynamic fault tree calculations for modelling water supply risks

    International Nuclear Information System (INIS)

    Lindhe, Andreas; Norberg, Tommy; Rosén, Lars

    2012-01-01

    Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome the limitations dynamic fault tree (DFT) analysis is suggested in the literature as well as different approaches for how to solve DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
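The standard Monte Carlo gate calculations described above can be sketched for a tiny water-supply-style fault tree; the component failure probabilities and tree layout are illustrative, and the compensation gates from the paper are not reproduced here:

```python
import random

# Sample component up/down states and propagate them through OR- and
# AND-gates to estimate the top-event failure probability.

def or_gate(*inputs):   # fails if any input fails
    return any(inputs)

def and_gate(*inputs):  # fails only if all inputs fail
    return all(inputs)

def top_event_probability(p_pump1, p_pump2, p_power, n=100_000, seed=1):
    """P(top): power failure OR both redundant pumps failing."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        pump1 = rng.random() < p_pump1
        pump2 = rng.random() < p_pump2
        power = rng.random() < p_power
        if or_gate(power, and_gate(pump1, pump2)):
            failures += 1
    return failures / n

est = top_event_probability(0.1, 0.1, 0.01)
exact = 1 - (1 - 0.01) * (1 - 0.1 * 0.1)  # closed form for this small tree
print(abs(est - exact) < 0.005)
```

For static trees the closed form is available, as here; the value of the simulation approach is that the same sampling loop extends to dynamic gates, repair times, and compensation behavior where no closed form exists.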

  13. Experimental Modeling of Dynamic Shallow Dip-Slip Faulting

    Science.gov (United States)

    Uenishi, K.

    2010-12-01

In our earlier study (AGU 2005, SSJ 2005, JPGU 2006), using a finite difference technique, we conducted numerical simulations related to the source dynamics of shallow dip-slip earthquakes, and suggested the possible existence of corner waves, i.e., shear waves that carry concentrated kinematic energy and generate extremely strong particle motions on the hanging wall of a nonvertical fault. In the numerical models, a dip-slip fault is located in a two-dimensional, monolithic linear elastic half space, and the fault plane dips either vertically or 45 degrees. We have investigated the seismic wave field radiated by crack-like rupture of this straight fault. If the fault rupture, initiated at depth, arrests just below or reaches the free surface, four Rayleigh-type pulses are generated: two propagating along the free surface into the opposite directions to the far field, the other two moving back along the ruptured fault surface (interface) downwards into depth. These downward interface pulses may largely control the stopping phase of the dynamic rupture, and when the fault plane is inclined, the interface pulse and the outward-moving Rayleigh surface pulse interact with each other on the hanging wall and the corner wave is induced. On the footwall, the ground motion is dominated simply by the weaker Rayleigh pulse propagating along the free surface because of the much smaller interaction between this Rayleigh pulse and the interface pulse. The generation of the downward interface pulses and corner wave may play a crucial role in understanding the effects of geometrical asymmetry on the strong motion induced by shallow dip-slip faulting, but it has not been well recognized so far, partly because those waves are not expected for a fault that is located and ruptures only at depth. However, the seismological recordings of the 1999 Chi-Chi (Taiwan) and 2004 Niigata-ken Chuetsu (Japan) earthquakes, as well as a more recent one in Iwate-Miyagi Inland

  14. Fault Rupture Model of the 2016 Gyeongju, South Korea, Earthquake and Its Implication for the Underground Fault System

    Science.gov (United States)

    Uchide, Takahiko; Song, Seok Goo

    2018-03-01

    The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.
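Stress drops of the magnitude quoted above can be sanity-checked against the standard circular-crack relation Δσ = (7/16)·M0/r³ together with the Hanks-Kanamori moment-magnitude definition; the magnitude and rupture radius below are illustrative inputs, not the paper's inversion results:

```python
# Order-of-magnitude check: a compact rupture of a moderate event
# yields a stress drop in the tens of MPa.

def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def stress_drop(mw, radius_m):
    """Static stress drop [MPa] for a circular rupture of given radius."""
    m0 = moment_from_mw(mw)
    return (7.0 / 16.0) * m0 / radius_m ** 3 / 1e6

print(round(stress_drop(5.4, 1000.0), 1))
```

The point the formula makes explicit is that for a fixed moment the stress drop scales as r⁻³, so a small rupture area, plausible for an event confined near a fault jog, drives the high values reported.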

  15. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, and fault correction process is assumed to be a delayed process. On the other hand, the artificial neural networks model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic networks configuration approach is developed with genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are developed with respect to a real data set
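The analytical baseline the neural networks are compared against can be sketched with a Goel-Okumoto software reliability growth model, with fault correction treated as the detection process delayed by a fixed lag, one simple instance of the "delayed process" assumption mentioned above; parameters are illustrative:

```python
import math

# Goel-Okumoto mean value function for detection, with correction
# modeled as the same curve shifted by a constant delay.

def detected(t, a=100.0, b=0.1):
    """Expected cumulative faults detected by time t."""
    return a * (1.0 - math.exp(-b * t))

def corrected(t, delay=5.0, **kw):
    """Correction assumed to trail detection by a fixed delay."""
    return detected(t - delay, **kw) if t > delay else 0.0

t = 30.0
print(round(detected(t), 1), round(corrected(t), 1))
```

The gap between the two curves is the expected backlog of detected-but-unfixed faults, exactly the quantity a data-driven model such as a recurrent network tries to predict without committing to a fixed delay assumption.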

  16. Ductile bookshelf faulting: A new kinematic model for Cenozoic deformation in northern Tibet

    Science.gov (United States)

    Zuza, A. V.; Yin, A.

    2013-12-01

It has been long recognized that the most dominant features on the northern Tibetan Plateau are the >1000 km left-slip strike-slip faults (e.g., the Altyn Tagh, Kunlun, and Haiyuan faults). Early workers used the presence of these faults, especially the Kunlun and Haiyuan faults, as evidence for eastward lateral extrusion of the plateau, but their low documented offsets--100s of km or less--cannot account for the 2500 km of convergence between India and Asia. Instead, these faults may result from north-south right-lateral simple shear due to the northward indentation of India, which leads to the clockwise rotation of the strike-slip faults and left-lateral slip (i.e., bookshelf faulting). With this idea, deformation is still localized on discrete fault planes, and 'microplates' or blocks rotate and/or translate with little internal deformation. As significant internal deformation occurs across northern Tibet within strike-slip-bounded domains, there is need for a coherent model to describe all of the deformational features. We also note the following: (1) geologic offsets and Quaternary slip rates of both the Kunlun and Haiyuan faults vary along strike and appear to diminish to the east, (2) the faults appear to kinematically link with thrust belts (e.g., Qilian Shan, Liupan Shan, Longmen Shan, and Qimen Tagh) and extensional zones (e.g., Shanxi, Yinchuan, and Qinling grabens), and (3) temporal relationships between the major deformation zones and the strike-slip faults (e.g., simultaneous enhanced deformation and offset in the Qilian Shan and Liupan Shan, and the Haiyuan fault, at 8 Ma). We propose a new kinematic model to describe the active deformation in northern Tibet: a ductile-bookshelf-faulting model. With this model, right-lateral simple shear leads to clockwise vertical axis rotation of the Qaidam and Qilian blocks, and left-slip faulting. This motion creates regions of compression and extension, dependent on the local boundary conditions (e.g., rigid

  17. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    Directory of Open Access Journals (Sweden)

    Kaijuan Yuan

    2016-01-01

Full Text Available Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient for combining evidence from different sensors. However, in situations where the evidence highly conflicts, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability, is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to existing methods.
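Dempster's rule of combination, together with reliability-weighted averaging of conflicting reports, can be sketched for the simple case of mass functions over singleton fault hypotheses; the masses, weights, and hypotheses F1/F2 are illustrative, and the paper's distance- and entropy-based reliability computation is not reproduced:

```python
# Weighted averaging of sensor reports followed by Dempster's rule,
# restricted to singleton hypotheses for brevity.

def dempster(m1, m2):
    """Combine two mass functions over singleton hypotheses."""
    hypotheses = set(m1) | set(m2)
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict")
    return {h: m1.get(h, 0.0) * m2.get(h, 0.0) / (1.0 - conflict)
            for h in hypotheses}

def weighted_average(masses, weights):
    """Discount each report by its sensor reliability before combining."""
    total = sum(weights)
    hyps = set().union(*masses)
    return {h: sum(w * m.get(h, 0.0) for m, w in zip(masses, weights)) / total
            for h in hyps}

m1 = {"F1": 0.8, "F2": 0.2}   # reliable sensor
m2 = {"F1": 0.4, "F2": 0.6}   # less reliable, conflicting sensor
avg = weighted_average([m1, m2], weights=[0.9, 0.3])
combined = dempster(avg, avg)  # reinforce the averaged evidence
print(max(combined, key=combined.get))
```

Averaging before combination is what tames the counterintuitive behavior of raw Dempster fusion under high conflict: the unreliable sensor's dissenting mass is down-weighted instead of vetoing the reliable sensor.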

  18. 3D Strain Modelling of Tear Fault Analogues

    Science.gov (United States)

    Hindle, D.; Vietor, T.

    2005-12-01

Tear faults can be described as vertical discontinuities, with near fault parallel displacements terminating on some sort of shallow detachment. As such, they are difficult to study in "cross section", i.e., in 2 dimensions, as is often the case for fold-thrust systems. Hence, little attempt has been made to model the evolution of strain around tear faults and the processes of strain localization in such structures, due to the necessity of describing these systems in 3 dimensions and the problems this poses for both numerical and analogue modelling. Field studies suggest that strain in such regions can be distributed across broad zones on minor tear systems, which are often not easily mappable. Such strain is probably due to distributed deformation and to displacement gradients which are themselves necessary for the initiation of the tear itself. We present a numerical study of the effects of a sharp, basal discontinuity parallel to the transport direction in a shortening wedge of material. The discontinuity is represented by two adjacent basal surfaces with strongly contrasting (0.5 and 0.05) friction coefficients. The material is modelled using PFC3D distinct element software for simulating granular material, whose properties are chosen to simulate upper crustal, sedimentary rock. The model geometry is a rectangular bounding box, 2 km × 1 km, and 0.35-0.5 km deep, with a single, driving wall of constant velocity. We show the evolution of strain in the model in horizontal and vertical sections, and interpret strain localization as showing the spontaneous development of tear fault like features. The strain field in the model is asymmetrical, rotated towards the strong side of the model. Strain increments seem to oscillate in time, suggesting achievement of a steady state. We also note that our model cannot be treated as a critical wedge, since the 3rd dimension and the lateral variations of strength rule out this type of 2D approximation.

  19. Dynamic Models of Earthquake Rupture along branch faults of the Eastern San Gorgonio Pass Region in CA using Complex Fault Structure

    Science.gov (United States)

    Douilly, R.; Oglesby, D. D.; Cooke, M. L.; Beyer, J. L.

    2017-12-01

Compilation of geomorphic and paleoseismic data have illustrated that the right-lateral Coachella segment of the southern San Andreas Fault is past its average recurrence time period. On its western edge, this fault segment is split into two branches: the Mission Creek strand, and the Banning fault strand, of the San Andreas. Depending on how rupture propagates through this region, there is the possibility of a through-going rupture that could lead to the channeling of damaging seismic energy into the Los Angeles Basin. The fault structures and rupture scenarios on these two strands are potentially very different, so it is important to determine which strand is a more likely rupture path, and under which circumstances rupture will take either one. In this study, we focus on the effect of different assumptions about fault geometry and stress pattern on the rupture process to test those scenarios and thus investigate the most likely path of a rupture that starts on the Coachella segment. We consider two types of fault geometry based on the SCEC Community Fault Model and create a 3D finite element mesh. These two meshes are then incorporated into the finite element method code FaultMod to compute a physical model for the rupture dynamics. We use the slip-weakening friction law, and we consider different assumptions of background stress such as constant tractions, regional stress regimes of different orientations, heterogeneous off-fault stresses and the results of long-term stressing rates from quasi-static crustal deformation models that consider time since last event on each fault segment. Both the constant and regional stress distribution show that it is more likely for the rupture to branch from the Coachella segment to the Mission Creek compared to the Banning fault segment. For the regional stress distribution, we encounter cases of super-shear rupture for one type of fault geometry and sub-shear rupture for the other one. The fault connectivity at this branch…

  20. Fuzzy model-based observers for fault detection in CSTR.

    Science.gov (United States)

    Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan

    2015-11-01

Given the vast variety of fuzzy model-based observers reported in the literature, which would be the proper one to use for fault detection in a class of chemical reactor? In this study four fuzzy model-based observers for sensor fault detection of a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) an Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
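The observer-based detection principle shared by all four designs can be sketched with a linear, discrete-time Luenberger observer on a scalar plant: the observer tracks the plant from its input and output, and a persistent residual flags a sensor fault. The plant, gain, and bias fault below are illustrative stand-ins for the fuzzy CSTR observers, not their actual dynamics:

```python
# Scalar Luenberger observer with a simulated sensor bias fault at k=40.

def simulate(n_steps=60, fault_at=40, bias=0.5):
    a, b, L = 0.9, 0.1, 0.5       # plant dynamics and observer gain
    x, x_hat = 1.0, 0.0           # true state, observer estimate
    residuals = []
    for k in range(n_steps):
        u = 1.0                                   # constant input
        y = x + (bias if k >= fault_at else 0.0)  # sensor bias fault
        x_hat = a * x_hat + b * u + L * (y - x_hat)  # observer update
        residuals.append(abs(y - x_hat))
        x = a * x + b * u                         # true plant update
    return residuals

r = simulate()
print(r[30] < 1e-3, r[59] > 0.05)  # near zero before the fault, nonzero after
```

The fuzzy versions replace the single (a, b) pair with a blend of local linear models, and the sliding-mode variants add robustness terms, but the residual-generation structure is the same.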

  1. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

Deformation along faults in the shallow crust (<1 km depth) introduces permeability heterogeneity and anisotropy, and understanding its hydrogeological impact requires a combined research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  2. DYNAMIC SOFTWARE TESTING MODELS WITH PROBABILISTIC PARAMETERS FOR FAULT DETECTION AND ERLANG DISTRIBUTION FOR FAULT RESOLUTION DURATION

    Directory of Open Access Journals (Sweden)

    A. D. Khomonenko

    2016-07-01

Full Text Available Subject of Research. Software reliability and test planning models are studied taking into account the probabilistic nature of error detection and discovery. Modeling of software testing enables planning of resources and final quality at early stages of project execution. Methods. Two dynamic models of testing processes (strategies) are suggested for software testing, using an error detection probability for each software module. The Erlang distribution is used to approximate an arbitrary distribution of the fault resolution duration, and the exponential distribution to approximate the fault detection process. For each strategy, modified labeled graphs are built, along with differential equation systems and their numerical solutions. The latter make it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Evaluation of Results. Probabilistic characteristics for software development projects were calculated using the suggested models. The strategies were compared by their quality indexes, and the debugging time required to achieve the specified quality goals was calculated. The calculation results are used for time and resource planning for new projects. Practical Relevance. The proposed models make it possible to use reliability estimates for each individual module. The Erlang approximation removes restrictions on the use of an arbitrary time distribution for the fault resolution duration. It improves the accuracy of software test process modeling and helps to take into account the viability (power) of the tests. With these models we can search for ways to improve software reliability by generating tests which detect errors with the highest probability.
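The Erlang approximation mentioned above can be sketched directly: an Erlang(k, λ) resolution time is the sum of k independent exponential stages, which lets a Markovian model approximate a general (non-exponential) duration. The shape and rate values below are illustrative:

```python
import math, random

# Sample Erlang(k, lam) durations as sums of k exponential stages and
# check the empirical mean against the analytical value k/lam.

def erlang_sample(k, lam, rng):
    """Sum of k independent exponential(lam) stages."""
    return sum(-math.log(1.0 - rng.random()) / lam for _ in range(k))

def erlang_mean(k, lam):
    return k / lam

rng = random.Random(42)
k, lam = 3, 0.5                       # illustrative shape and rate
samples = [erlang_sample(k, lam, rng) for _ in range(20000)]
empirical = sum(samples) / len(samples)
print(abs(empirical - erlang_mean(k, lam)) < 0.2)
```

Because each stage is exponential, the staged representation can be embedded in the labeled-graph (Markov) models of the paper while still capturing a resolution-time distribution that is far from memoryless.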

  3. TWT transmitter fault prediction based on ANFIS

    Science.gov (United States)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the travelling wave tube (TWT) cathode is a common transmitter fault. In this paper, a model based on a set of key TWT parameters is proposed. By choosing proper parameters and training an adaptive neuro-fuzzy inference system (ANFIS), this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.
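The AHP step can be sketched with the standard row-geometric-mean approximation for priority weights from a pairwise comparison matrix. The three health indicators and the comparison values below are hypothetical, chosen only to show the mechanics; the paper's actual parameter set is not reproduced here.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric mean,
    a common surrogate for the principal-eigenvector method."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical comparison of three TWT health indicators:
# cathode current vs. helix current vs. output power.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)
print([round(x, 3) for x in w])   # cathode current dominates the judgment
```

The weights then combine the per-parameter ANFIS outputs into a single health score.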

  4. Finite element models of earthquake cycles in mature strike-slip fault zones

    Science.gov (United States)

    Lynch, John Charles

The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, two main questions are posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely-slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10^18 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a
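The quoted lower-crust viscosity of 10^18 Pa s implies a short Maxwell relaxation time, which is why postseismic stress transfer between the faults is fast in these models. The shear modulus below is an assumed typical crustal value, not one taken from the dissertation.

```python
# Maxwell relaxation time tau = eta / mu for a viscoelastic lower crust.
eta = 1e18   # viscosity, Pa s (value used in the models described above)
mu = 3e10    # shear modulus, Pa -- assumed typical crustal value
tau = eta / mu                            # seconds
tau_years = tau / (365.25 * 24 * 3600)
print(round(tau_years, 2))                # 1.06 -- stresses relax in about a year
```

With relaxation on the order of a year, stress changes from one great earthquake are communicated to the neighboring fault well within an earthquake cycle, consistent with the synchronization the models report.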

  5. Research on Fault Diagnosis of HTR-PM Based on Multilevel Flow Model

    International Nuclear Information System (INIS)

    Zhang Yong; Zhou Yangping

    2014-01-01

In this paper, we focus on the application of the Multilevel Flow Model (MFM) to automatic real-time fault diagnosis of High Temperature Gas-cooled Reactor Pebble-bed Module (HTR-PM) accidents. In MFM, the plant process is described abstractly at the functional level by mass, energy and information flows, which reveal the interactions between different components and enable causal reasoning between functions according to the flow properties. Thus, in an abnormal status, a goal-function-component oriented fault diagnosis can be performed very quickly with the model, and abnormal alarms can also be precisely explained by the reasoning relationships of the model. Using MFM, a fault diagnosis model of the HTR-PM plant is built, and the detailed fault diagnosis process is shown in flowcharts. Owing to the lack of simulation data for HTR-PM, experiments were not conducted to evaluate the fault diagnosis performance, but analysis of the algorithm's feasibility and complexity shows that the diagnosis system should be able to detect and diagnose accidents in a timely manner. (author)
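The causal reasoning over flow functions can be sketched as a graph search: trace an alarmed function upstream through its flow dependencies to the source functions that could explain it. The graph below is an invented toy, not the authors' HTR-PM model, and shows only the spirit of MFM-style root-cause reasoning.

```python
# Hypothetical flow-function dependency graph (toy, not the HTR-PM model):
# each function lists the upstream functions it depends on.
upstream = {
    "core_heat_removal": ["helium_transport"],
    "helium_transport":  ["blower_energy", "helium_inventory"],
    "blower_energy":     [],
    "helium_inventory":  [],
}

def root_causes(alarm, graph):
    """Depth-first search from the alarmed function to source functions."""
    stack, seen, roots = [alarm], set(), []
    while stack:
        f = stack.pop()
        if f in seen:
            continue
        seen.add(f)
        deps = graph.get(f, [])
        if not deps:
            roots.append(f)          # no upstream dependency: candidate cause
        stack.extend(deps)
    return sorted(roots)

print(root_causes("core_heat_removal", upstream))
# ['blower_energy', 'helium_inventory']
```

Because the search is linear in the number of flow relations, this kind of reasoning is cheap enough for the real-time use the abstract describes.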

  6. A Model of Intelligent Fault Diagnosis of Power Equipment Based on CBR

    Directory of Open Access Journals (Sweden)

    Gang Ma

    2015-01-01

Full Text Available The demand for power supply reliability has increased strongly as the power industry develops rapidly, and meeting such demand requires a substantial power grid. Power equipment running and testing data contain vast amounts of information that underpin online monitoring and fault diagnosis, and ultimately state-based maintenance. In this paper, an intelligent fault diagnosis model for power equipment based on case-based reasoning (IFDCBR) is proposed. The model discovers potential rules of equipment faults by data mining. It constructs a condition case base for the equipment by analyzing four categories of data: online recording data, history data, basic test data, and environmental data. SVM regression analysis is also applied in mining the case base so as to further establish an equipment condition fingerprint. The running data of equipment can then be diagnosed against this condition fingerprint to detect whether there is a fault. Finally, the paper validates the intelligent model and the three-ratio method on a set of practical data. The results demonstrate that the intelligent model is more effective and accurate in fault diagnosis.
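The core "retrieve" step of case-based reasoning can be sketched as nearest-neighbor matching of new equipment readings against the condition case base. The feature names, values, and fault labels below are hypothetical; the paper's actual case representation and SVM-regression fingerprint are richer than this.

```python
import math

# Hypothetical condition case base: (readings, diagnosed condition).
case_base = [
    ({"temp": 65.0, "h2_ppm": 12.0,  "load": 0.8}, "normal"),
    ({"temp": 95.0, "h2_ppm": 150.0, "load": 0.9}, "thermal fault"),
    ({"temp": 60.0, "h2_ppm": 300.0, "load": 0.5}, "partial discharge"),
]

def retrieve(query, cases):
    """Return the diagnosis of the most similar stored case (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return min(cases, key=lambda c: dist(query, c[0]))[1]

print(retrieve({"temp": 92.0, "h2_ppm": 140.0, "load": 0.85}, case_base))
# thermal fault
```

In practice the features would be normalized and weighted before distance computation, since raw units (degrees, ppm, per-unit load) are not commensurate.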

  7. Assurance of Fault Management: Risk-Significant Adverse Condition Awareness

    Science.gov (United States)

    Fitz, Rhonda

    2016-01-01

Fault Management (FM) systems are ranked high in risk-based assessments of criticality within flight software, emphasizing the importance of establishing highly competent domain expertise to provide assurance for NASA projects, especially as spaceflight systems continue to increase in complexity. Insight into specific characteristics of FM architectures seen embedded within safety- and mission-critical software systems analyzed by the NASA Independent Verification and Validation (IV&V) Program has been enhanced with an FM Technical Reference (TR) suite. The benefits are aimed beyond the IV&V community, at those who seek efficient and effective ways to provide software assurance and to reduce the FM risk posture of NASA and other space missions. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. The role FM plays in the overall asset protection of flight software systems is being addressed through the development of an adverse condition (AC) database encompassing flight software vulnerabilities. Identification of potential off-nominal conditions and analysis to determine how a system responds to these conditions are important aspects of hazard analysis and fault management. Understanding what ACs the mission may face, and ensuring they are prevented or addressed, is the responsibility of the assurance team, which necessarily should have insight into ACs beyond those defined by the project itself. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs, and allowing queries based on project, mission

  8. Vibration model of rolling element bearings in a rotor-bearing system for fault diagnosis

    Science.gov (United States)

    Cong, Feiyun; Chen, Jin; Dong, Guangming; Pecht, Michael

    2013-04-01

    Rolling element bearing faults are among the main causes of breakdown in rotating machines. In this paper, a rolling bearing fault model is proposed based on the dynamic load analysis of a rotor-bearing system. The rotor impact factor is taken into consideration in the rolling bearing fault signal model. The defect load on the surface of the bearing is divided into two parts, the alternate load and the determinate load. The vibration response of the proposed fault signal model is investigated and the fault signal calculating equation is derived through dynamic and kinematic analysis. Outer race and inner race fault simulations are realized in the paper. The simulation process includes consideration of several parameters, such as the gravity of the rotor-bearing system, the imbalance of the rotor, and the location of the defect on the surface. The simulation results show that different amplitude contributions of the alternate load and determinate load will cause different envelope spectrum expressions. The rotating frequency sidebands will occur in the envelope spectrum in addition to the fault characteristic frequency. This appearance of sidebands will increase the difficulty of fault recognition in intelligent fault diagnosis. The experiments given in the paper have successfully verified the proposed signal model simulation results. The test rig design of the rotor bearing system simulated several operating conditions: (1) rotor bearing only; (2) rotor bearing with loader added; (3) rotor bearing with loader and rotor disk; and (4) bearing fault simulation without rotor influence. The results of the experiments have verified that the proposed rolling bearing signal model is important to the rolling bearing fault diagnosis of rotor-bearing systems.
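The fault characteristic frequencies referred to above follow from standard bearing kinematics: the ball-pass frequencies for outer and inner race defects depend on the ball count, the ball-to-pitch diameter ratio, and the shaft speed. The sketch below uses textbook formulas with a hypothetical bearing geometry, not the parameters of the paper's test rig.

```python
import math

def bearing_frequencies(n_balls, d_ball, d_pitch, f_rot, phi_deg=0.0):
    """Kinematic defect frequencies for a rolling element bearing with a
    stationary outer race (standard textbook formulas)."""
    r = (d_ball / d_pitch) * math.cos(math.radians(phi_deg))
    bpfo = 0.5 * n_balls * f_rot * (1 - r)   # ball pass frequency, outer race
    bpfi = 0.5 * n_balls * f_rot * (1 + r)   # ball pass frequency, inner race
    ftf  = 0.5 * f_rot * (1 - r)             # cage (fundamental train) frequency
    return bpfo, bpfi, ftf

# Hypothetical bearing: 9 balls, d/D = 0.2, shaft speed 30 Hz (1800 rpm).
bpfo, bpfi, ftf = bearing_frequencies(9, 2.0, 10.0, 30.0)
print(round(bpfo, 1), round(bpfi, 1), round(ftf, 1))   # 108.0 162.0 12.0
```

In the envelope spectrum one looks for these lines; per the paper's result, rotating-frequency sidebands around them (here at multiples of 30 Hz away) appear when the rotor load contribution is significant, complicating automated recognition.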

  9. Systems analysis approach to probabilistic modeling of fault trees

    International Nuclear Information System (INIS)

    Bartholomew, R.J.; Qualls, C.R.

    1985-01-01

    A method of probabilistic modeling of fault tree logic combined with stochastic process theory (Markov modeling) has been developed. Systems are then quantitatively analyzed probabilistically in terms of their failure mechanisms including common cause/common mode effects and time dependent failure and/or repair rate effects that include synergistic and propagational mechanisms. The modeling procedure results in a state vector set of first order, linear, inhomogeneous, differential equations describing the time dependent probabilities of failure described by the fault tree. The solutions of this Failure Mode State Variable (FMSV) model are cumulative probability distribution functions of the system. A method of appropriate synthesis of subsystems to form larger systems is developed and applied to practical nuclear power safety systems
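The state-vector formulation described above reduces, in the simplest case, to a two-state (up/down) Markov model whose linear ODE can be integrated numerically and checked against the closed-form solution. The failure and repair rates below are hypothetical; real fault trees couple many such states.

```python
import math

# Two-state Markov model with failure rate lam and repair rate mu:
#   dP_down/dt = lam * P_up - mu * P_down
# Forward-Euler integration, compared with the closed-form unavailability
#   P_down(t) = lam/(lam+mu) * (1 - exp(-(lam+mu) * t)).
lam, mu = 1e-3, 1e-1          # per hour (hypothetical component)
dt, t_end = 0.1, 100.0
p_down = 0.0
for _ in range(int(t_end / dt)):
    p_up = 1.0 - p_down
    p_down += dt * (lam * p_up - mu * p_down)

exact = lam / (lam + mu) * (1.0 - math.exp(-(lam + mu) * t_end))
print(round(p_down, 6), round(exact, 6))   # numeric vs. analytic unavailability
```

The full FMSV model in the abstract is the same construction scaled up: one coupled linear ODE per fault-tree state, with time-dependent and common-cause rates entering the coefficient matrix.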

  10. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    Science.gov (United States)

    Thomas, M. Y.; Bhat, H. S.

    2017-12-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alter the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for dynamic evolution of elastic moduli at high-strain rates. We consider 2D in-plane models, with a 1D right lateral fault featuring slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could be potentially seen as a geological signature of this transition. These scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  12. Transposing an active fault database into a seismic hazard fault model for nuclear facilities. Pt. 1. Building a database of potentially active faults (BDFA) for metropolitan France

    Energy Technology Data Exchange (ETDEWEB)

    Jomard, Herve; Cushing, Edward Marc; Baize, Stephane; Chartier, Thomas [IRSN - Institute of Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); Palumbo, Luigi; David, Claire [Neodyme, Joue les Tours (France)

    2017-07-01

The French Institute for Radiation Protection and Nuclear Safety (IRSN), with the support of the Ministry of Environment, compiled a database (BDFA) to define and characterize known potentially active faults of metropolitan France. The general structure of BDFA is presented in this paper. To date, BDFA includes 136 faults and represents a first step toward the implementation of seismic source models that can be used for both deterministic and probabilistic seismic hazard calculations. A robustness index was introduced, highlighting that less than 15% of the database is controlled by reasonably complete data sets. An example of transposing BDFA into a fault source model for PSHA (probabilistic seismic hazard analysis) calculation is presented for the Upper Rhine Graben (eastern France) and exploited in the companion paper (Chartier et al., 2017, hereafter Part 2) in order to illustrate ongoing challenges for probabilistic fault-based seismic hazard calculations.

  13. Modeling, control and fault diagnosis of an isolated wind energy conversion system with a self-excited induction generator subject to electrical faults

    International Nuclear Information System (INIS)

    Attoui, Issam; Omeiri, Amar

    2014-01-01

Highlights: • A new model of the SEIG is developed to simulate both rotor and stator faults. • The model takes iron loss, main flux and cross flux saturation into account. • A new control strategy based on a Fractional-Order Controller (FOC) is proposed. • The control strategy is developed for control of the wind turbine speed. • An on-line diagnostic procedure based on stator current analysis is presented. - Abstract: In this paper, a contribution to the modeling and fault diagnosis of rotor and stator faults of a Self-Excited Induction Generator (SEIG) in an Isolated Wind Energy Conversion System (IWECS) is proposed. In order to control the speed of the wind turbine, a new Fractional-Order Controller (FOC) with a simple and practical design method is proposed, based on a linearized model of the wind turbine system about a specified operating point. The FOC ensures the stability of the nonlinear system in both healthy and faulty conditions. Furthermore, in order to detect stator and rotor faults in the squirrel-cage self-excited induction generator, an on-line fault diagnostic technique based on spectral analysis of the stator currents of the squirrel-cage SEIG by a Fast Fourier Transform (FFT) algorithm is used. Additionally, a generalized model of the squirrel-cage SEIG is developed to simulate both rotor and stator faults, taking iron loss, main flux and cross flux saturation into account. The effectiveness of the generalized model, the control strategy and the diagnostic procedure is illustrated with simulation results.
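FFT-based stator-current signature analysis can be sketched on a synthetic signal: a healthy supply-frequency line plus the (1 ± 2s)f sidebands that a rotor fault induces. The slip value and sideband amplitudes below are hypothetical, chosen so the three lines fall exactly on FFT bins.

```python
import numpy as np

# Synthetic stator current: 50 Hz fundamental plus (1 +/- 2s)*50 Hz
# rotor-fault sidebands (hypothetical slip s = 0.04 -> 46 and 54 Hz).
fs, dur, f0, s = 1000.0, 5.0, 50.0, 0.04
t = np.arange(0, dur, 1 / fs)
i_stator = (np.sin(2 * np.pi * f0 * t)
            + 0.05 * np.sin(2 * np.pi * (1 - 2 * s) * f0 * t)
            + 0.05 * np.sin(2 * np.pi * (1 + 2 * s) * f0 * t))

spec = np.abs(np.fft.rfft(i_stator))
freqs = np.fft.rfftfreq(len(i_stator), 1 / fs)
peaks = freqs[np.argsort(spec)[-3:]]              # three strongest spectral lines
print(sorted(np.round(peaks, 1).tolist()))        # [46.0, 50.0, 54.0]
```

The 5 s record gives 0.2 Hz resolution; distinguishing sidebands only 4 Hz from a dominant fundamental is exactly why record length and leakage control matter in this diagnostic technique.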

  14. Analysis of Fault Spacing in Thrust-Belt Wedges Using Numerical Modeling

    Science.gov (United States)

    Regensburger, P. V.; Ito, G.

    2017-12-01

Numerical modeling is invaluable in studying the mechanical processes governing the evolution of geologic features such as thrust-belt wedges. The mechanisms controlling thrust fault spacing in wedges are not well understood. Our numerical model treats the thrust belt as a visco-elastic-plastic continuum and uses a finite-difference, marker-in-cell method to solve for conservation of mass and momentum. From these conservation laws, stress is calculated, and Byerlee's law is used to determine the shear stress required for a fault to form. Each model consists of a layer of crust, initially 3 km thick, carried on top of a basal décollement, which moves at a constant speed towards a rigid backstop. A series of models were run with varied material properties, focusing on the angle of basal friction at the décollement, the angle of friction within the crust, and the cohesion of the crust. We investigate how these properties affect the spacing between thrusts that have the greatest time-integrated history of slip and therefore the greatest effect on the large-scale undulations in surface topography. The surface positions of these faults, which extend through most of the crustal layer, are identifiable as local maxima in positive curvature of surface topography. Tracking the temporal evolution of faults, we find that thrust blocks are widest when they first form at the front of the wedge, and they then tend to contract over time as more crustal material is carried to the wedge. Within each model, thrust blocks form with similar initial widths, but individual thrust blocks develop differently and may approach an asymptotic width over time. The median of thrust block widths across the whole wedge tends to decrease with time. Median fault spacing shows a positive correlation with both wedge cohesion and internal friction. In contrast, median fault spacing exhibits a negative correlation at small angles of basal friction (laws that can be used to predict fault spacing in
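The Byerlee's-law failure criterion used in the model can be written out directly: an empirical, nearly lithology-independent bound on the shear stress needed for frictional sliding. The density used to estimate normal stress at the base of the 3 km layer is an assumed typical value, not a parameter quoted in the abstract.

```python
def byerlee_shear_strength(sigma_n_mpa):
    """Byerlee's empirical law for frictional sliding on faults:
    tau = 0.85*sigma_n for sigma_n < 200 MPa,
    tau = 50 + 0.6*sigma_n at higher normal stress (all in MPa)."""
    if sigma_n_mpa < 200.0:
        return 0.85 * sigma_n_mpa
    return 50.0 + 0.6 * sigma_n_mpa

# Rough normal stress near the base of a 3-km crustal layer:
# rho*g*h ~ 2700 kg/m^3 * 9.81 m/s^2 * 3000 m ~ 79 MPa (assumed density).
print(byerlee_shear_strength(79.0))    # shear stress needed to nucleate a thrust
```

Since 79 MPa sits in the low-stress branch, fault strength in such a thin wedge scales almost linearly with depth, which is one reason layer thickness and friction angles control thrust spacing so strongly.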

  15. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    Science.gov (United States)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic set of equations describing all relevant physical processes in an earthquake fault system is likely not useful, since it comes with a large number of degrees of freedom, poorly constrained model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim to provide a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators, we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as to complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  16. Fault Modeling and Testing for Analog Circuits in Complex Space Based on Supply Current and Output Voltage

    Directory of Open Access Journals (Sweden)

    Hongzhi Hu

    2015-01-01

Full Text Available This paper deals with fault modeling for analog circuits. A two-dimensional (2D) fault model is first proposed, based on a collaborative analysis of supply current and output voltage. This model is a family of circle loci on the complex plane, and it greatly simplifies the algorithms for test point selection and potential fault simulation, which are primary difficulties in the fault diagnosis of analog circuits. Furthermore, in order to reduce the difficulty of fault location, an improved fault model in three-dimensional (3D) complex space is proposed, which achieves a far better fault detection ratio (FDR) against measurement error and parametric tolerance. To address the problem of fault masking in both the 2D and 3D fault models, this paper proposes an effective design-for-testability (DFT) method. By adding redundant bypassing components to the circuit under test (CUT), this method achieves an excellent fault isolation ratio (FIR) in ambiguity group isolation. The efficacy of the proposed model and testing method is validated through the experimental results provided in this paper.

  17. Fault Tolerant Control Using Gaussian Processes and Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Yang Xiaoke

    2015-03-01

    Full Text Available Essential ingredients for fault-tolerant control are the ability to represent system behaviour following the occurrence of a fault, and the ability to exploit this representation for deciding control actions. Gaussian processes seem to be very promising candidates for the first of these, and model predictive control has a proven capability for the second. We therefore propose to use the two together to obtain fault-tolerant control functionality. Our proposal is illustrated by several reasonably realistic examples drawn from flight control.
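The Gaussian-process ingredient can be sketched with a minimal posterior-mean regressor built from an RBF kernel: the kind of data-driven model of post-fault behaviour that could then feed a predictive controller. Training data, kernel length-scale, and noise level below are hypothetical, and only the mean (not the predictive variance MPC would also exploit) is computed.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between 1-D input vectors."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# Hypothetical logged response of the faulty system (stand-in: sin).
x_train = np.linspace(0.0, 5.0, 20)
y_train = np.sin(x_train)
noise = 1e-6                                   # jitter / measurement noise

K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)            # K^{-1} y, precomputed once

def predict(x_new):
    """GP posterior mean at new inputs: k(x*, X) @ K^{-1} y."""
    return rbf(np.atleast_1d(np.asarray(x_new, dtype=float)), x_train) @ alpha

print(round(float(predict(2.5)[0]), 3))        # close to sin(2.5) ~ 0.599
```

In the fault-tolerant-control setting, the GP would be refit from data gathered after the fault, and the MPC optimizer would query `predict` (plus its uncertainty) inside its prediction horizon.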

  18. Wind turbine fault detection and fault tolerant control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Johnson, Kathryn

    2013-01-01

    In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes...

  19. Dynamic rupture models of subduction zone earthquakes with off-fault plasticity

    Science.gov (United States)

    Wollherr, S.; van Zelst, I.; Gabriel, A. A.; van Dinther, Y.; Madden, E. H.; Ulrich, T.

    2017-12-01

Modeling tsunami-genesis based on purely elastic seafloor displacement typically underpredicts tsunami sizes. Dynamic rupture simulations make it possible to analyse whether plastic energy dissipation is a missing rheological component, by capturing the complex interplay of the rupture front, emitted seismic waves, and the free surface in the accretionary prism. Strike-slip models with off-fault plasticity suggest decreasing rupture speed and extensive plastic yielding mainly at shallow depths. For simplified subduction geometries, inelastic deformation on the verge of Coulomb failure may enhance vertical displacement, which in turn favors the generation of large tsunamis (Ma, 2012). However, constraining appropriate initial conditions in terms of fault geometry, initial fault stress and strength remains challenging. Here, we present dynamic rupture models of subduction zones constrained by long-term seismo-thermo-mechanical (STM) modeling, without any a priori assumption of regions of failure. The STM model provides self-consistent slab geometries, as well as stress and strength initial conditions, which evolve in response to tectonic stresses, temperature, gravity, plasticity and pressure (van Dinther et al. 2013). Coseismic slip and coupled seismic wave propagation are modelled using the software package SeisSol (www.seissol.org), which is suited to complex fault zone structures and topography/bathymetry. SeisSol allows for local time-stepping, which drastically reduces the time-to-solution (Uphoff et al., 2017). This is particularly important in large-scale scenarios that resolve small-scale features, such as the shallow angle between the megathrust fault and the free surface. Our dynamic rupture model uses a Drucker-Prager plastic yield criterion and accounts for thermal pressurization around the fault, mimicking the effect of pore pressure changes due to frictional heating. We first analyze the influence of this rheology on rupture dynamics and tsunamigenic properties, i.e. seafloor

  20. Stochastic Modeling and Simulation of Near-Fault Ground Motions for Performance-Based Earthquake Engineering

    OpenAIRE

    Dabaghi, Mayssa

    2014-01-01

    A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as ...

  1. Modeling and Simulation of Transient Fault Response at Lillgrund Wind Farm when Subjected to Faults in the Connecting 130 kV Grid

    Energy Technology Data Exchange (ETDEWEB)

    Eliasson, Anders; Isabegovic, Emir

    2009-07-01

The purpose of this thesis was to investigate what type of faults in the connecting grid should be dimensioning for future wind farms. An investigation of overvoltages and undervoltages at the main transformer and the turbines inside Lillgrund wind farm was the main goal. The results will be used in the planning stage of future wind farms when performing insulation coordination and determining the protection settings. A model of the Lillgrund wind farm and a part of the connecting 130 kV grid were built in PSCAD/EMTDC. The farm consists of 48 Siemens SWT-2.3-93 2.3 MW wind turbines with full power converters. The turbines were modeled as controllable current sources providing a constant active power output up to the current limit of 1.4 pu. The transmission lines and cables were modeled as frequency-dependent (phase) models. The load flows and bus voltages were verified against a PSS/E model, and the transient response was verified against measurement data from two faults: a line-to-line fault in the vicinity of Barsebaeck (BBK) and a single line-to-ground fault close to the Bunkeflo (BFO) substation. For the simulation, three-phase-to-ground, single line-to-ground and line-to-line faults were applied at different locations in the connecting grid, and the phase-to-ground voltages at different buses in the connecting grid and at the turbines were studied. These faults were applied for different configurations of the farm. For single line-to-ground faults, the highest overvoltage on a turbine was 1.22 pu (32.87 kV), due to clearing of a fault at BFO (the PCC). For line-to-line faults, the highest overvoltage on a turbine was 1.59 pu (42.83 kV), at the beginning of a fault at KGE, one bus away from BFO. Both these cases occurred when all radials were connected and the turbines ran at full power. The highest overvoltage observed at Lillgrund was 1.65 pu (44.45 kV). This overvoltage was caused by a three-phase-to-ground fault applied at KGE and occurred at the beginning of the fault and when

  2. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

Full Text Available Faulting prediction is at the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. The paper then suggests that the way forward for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
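The Markov chain (MC) approach can be sketched in a few lines: condition states with a yearly transition matrix, propagated forward to get the state distribution at any age. The four states and the transition probabilities below are hypothetical stand-ins for values an agency would estimate from visual inspection records.

```python
import numpy as np

# Hypothetical yearly transition matrix over condition states
# 0 (good) .. 3 (failed); failed is absorbing.
T = np.array([
    [0.85, 0.15, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],
])

p = np.array([1.0, 0.0, 0.0, 0.0])   # new pavement starts in state 0
for year in range(10):
    p = p @ T                         # one year of deterioration

print(np.round(p, 3))                 # state distribution after 10 years
print(round(float(p[3]), 3))          # probability the section has failed
```

This illustrates both strengths and the limitation the paper notes: the chain needs only inspection-state counts to calibrate, but its states are ordinal ratings, not physical faulting magnitudes.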

  3. Bond graphs for modelling, control and fault diagnosis of engineering systems

    CERN Document Server

    2017-01-01

    This book presents theory and latest application work in Bond Graph methodology with a focus on: • Hybrid dynamical system models, • Model-based fault diagnosis, model-based fault tolerant control, fault prognosis • and also addresses • Open thermodynamic systems with compressible fluid flow, • Distributed parameter models of mechanical subsystems. In addition, the book covers various applications of current interest ranging from motorised wheelchairs, in-vivo surgery robots, walking machines to wind-turbines.The up-to-date presentation has been made possible by experts who are active members of the worldwide bond graph modelling community. This book is the completely revised 2nd edition of the 2011 Springer compilation text titled Bond Graph Modelling of Engineering Systems – Theory, Applications and Software Support. It extends the presentation of theory and applications of graph methodology by new developments and latest research results. Like the first edition, this book addresses readers in a...

  4. Logical Specification and Analysis of Fault Tolerant Systems through Partial Model Checking

    NARCIS (Netherlands)

    Gnesi, S.; Etalle, Sandro; Mukhopadhyay, S.; Lenzini, Gabriele; Lenzini, G.; Martinelli, F.; Roychoudhury, A.

    2003-01-01

    This paper presents a framework for a logical characterisation of fault tolerance and its formal analysis based on partial model checking techniques. The framework requires a fault tolerant system to be modelled using a formal calculus, here the CCS process algebra. To this aim we propose a uniform

  5. Model based Fault Detection and Isolation for Driving Motors of a Ground Vehicle

    Directory of Open Access Journals (Sweden)

    Young-Joon Kim

    2016-04-01

Full Text Available This paper proposes a model-based current-sensor and position-sensor fault detection and isolation algorithm for the driving motors of an in-wheel independent-drive electric vehicle. From a low-level perspective, fault diagnosis is conducted and analyzed to enhance robustness and stability. By composing the state equation of the interior permanent magnet synchronous motor (IPMSM), current sensor and position sensor faults are diagnosed with parity equations. The validity and usefulness of the algorithm are confirmed with simulation data of IPMSM fault occurrences.
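A parity equation exploits an analytical redundancy relation: a combination of measurements that must vanish for a healthy system, so a persistent residual flags a faulty sensor. The sketch below uses the simplest such relation for machine phase currents (ia + ib + ic = 0 in a balanced wye connection); the paper's parity equations are built from the full IPMSM state equation, and the signals and threshold here are hypothetical.

```python
import numpy as np

def residuals(ia, ib, ic):
    """Parity residual: the three phase currents of a balanced wye-connected
    machine must sum to zero, so any persistent imbalance is a sensor fault."""
    return np.abs(ia + ib + ic)

# Hypothetical balanced 50 Hz phase currents over one cycle.
t = np.linspace(0.0, 0.02, 200)
ia = 10 * np.sin(2 * np.pi * 50 * t)
ib = 10 * np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = 10 * np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)

print(residuals(ia, ib, ic).max() < 1e-9)        # healthy: True
print(residuals(ia + 2.0, ib, ic).max() > 1.0)   # ia sensor offset fault: True
```

Isolation then comes from which residuals in a bank of such relations fire; position-sensor faults need model-based relations (the state equation) rather than this purely algebraic one.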

  6. Waste Management Fault Tree Data Bank (WM): 1992 status report

    International Nuclear Information System (INIS)

    Baughman, D.F.; Hang, P.; Townsend, C.S.

    1993-01-01

    The Risk Assessment Methodology Group (RAM) of the Nuclear Process Safety Research Section (NPSR) maintains a compilation of incidents that have occurred in the Waste Management facilities. The Waste Management Fault Tree Data Bank (WM) contains more than 35,000 entries ranging from minor equipment malfunctions to incidents with significant potential for injury or contamination of personnel. This report documents the status of the WM data bank, including availability, training, sources of data, search options, and the uses to which these data have been applied. Periodic updates to this memorandum are planned as additional data or applications are acquired.

  7. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

    A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ), a fault zone exhumed from ca. 10 km depth, hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. By using high-resolution photographs the DOM can reach a much higher resolution than with LIDAR surveys, up to 0.2 mm/pixel. Then, image processing is performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be quite easily separated from the host rock (tonalite) using spectral analysis; in particular, band ratio and principal component analysis have been tested successfully. The mapping of black pseudotachylyte veins is trickier because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we have tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous.
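
    The size-and-circularity filtering step can be sketched in a few lines. This is an illustrative reimplementation, not the authors' ImageJ-Fiji workflow; the `min_size` and `min_circ` thresholds are arbitrary placeholders:

```python
import numpy as np
from collections import deque

def filter_blobs(binary, min_size=20, min_circ=0.5):
    """Keep connected components that are both large and roughly circular.
    Circularity is the common 4*pi*area/perimeter**2 measure, with the
    perimeter approximated by counting component pixels that touch the
    background (4-connectivity)."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros((h, w), dtype=bool)
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for sy in range(h):
        for sx in range(w):
            if not binary[sy, sx] or seen[sy, sx]:
                continue
            comp, queue = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:  # BFS over one connected component
                y, x = queue.popleft()
                comp.append((y, x))
                for dy, dx in nbrs:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            area = len(comp)
            perim = sum(1 for y, x in comp
                        if any(not (0 <= y + dy < h and 0 <= x + dx < w
                                    and binary[y + dy, x + dx]) for dy, dx in nbrs))
            circ = 4 * np.pi * area / max(perim, 1) ** 2
            if area >= min_size and circ >= min_circ:
                for y, x in comp:
                    out[y, x] = True
    return out

# A filled disc (compact blob) survives; a 1-pixel-wide line (elongated,
# like a thin vein or biotite streak) is rejected.
img = np.zeros((30, 40), dtype=bool)
yy, xx = np.ogrid[:30, :40]
img[(yy - 12) ** 2 + (xx - 10) ** 2 <= 36] = True   # disc, radius 6
img[25, 5:35] = True                                # thin line
kept = filter_blobs(img)
```

    The combination of an area threshold with a circularity threshold is what lets compact grains be separated from thin, discontinuous veins, as the abstract describes.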

  8. A rheologically layered three-dimensional model of the San Andreas fault in central and southern California

    Science.gov (United States)

    Williams, Charles A.; Richardson, Randall M.

    1991-01-01

    The effects of rheological parameters and the fault slip distribution on the horizontal and vertical deformation in the vicinity of the fault are investigated using 3D kinematic finite element models of the San Andreas fault in central and southern California. It is shown that fault models with different rheological stratification schemes and slip distributions predict characteristic deformation patterns. Models that do not include aseismic slip below the fault locking depth predict deformation patterns that are strongly dependent on time since the last earthquake, while models that incorporate the aseismic slip below the locking depth depend on time to a significantly lesser degree.

  9. Electric machines modeling, condition monitoring, and fault diagnosis

    CERN Document Server

    Toliyat, Hamid A; Choi, Seungdeog; Meshgin-Kelk, Homayoun

    2012-01-01

    With countless electric motors being used in daily life, in everything from transportation and medical treatment to military operation and communication, unexpected failures can lead to the loss of valuable human life or a costly standstill in industry. To prevent this, it is important to precisely detect or continuously monitor the working condition of a motor. Electric Machines: Modeling, Condition Monitoring, and Fault Diagnosis reviews diagnosis technologies and provides an application guide for readers who want to research, develop, and implement more effective fault diagnosis and condition monitoring.

  10. A Fault Diagnosis Approach for Gears Based on IMF AR Model and SVM

    Directory of Open Access Journals (Sweden)

    Yu Yang

    2008-05-01

    An accurate autoregressive (AR) model can reflect the characteristics of a dynamic system, and from it the fault features of a gear vibration signal can be extracted without constructing a mathematical model or studying the fault mechanism of the gear vibration system, as time-frequency analysis methods require. However, an AR model can only be applied to stationary signals, while gear fault vibration signals usually present nonstationary characteristics. Therefore, empirical mode decomposition (EMD), which can decompose the vibration signal into a finite number of intrinsic mode functions (IMFs), is introduced into the feature extraction of gear vibration signals as a preprocessor before the AR models are generated. In addition, to address the difficulty of obtaining sufficient fault samples in practice, the support vector machine (SVM) is introduced into gear fault pattern recognition. In the proposed method, vibration signals are first decomposed into a finite number of intrinsic mode functions; then the AR model of each IMF component is established; finally, the corresponding autoregressive parameters and the variance of the residual are regarded as fault characteristic vectors and used as input parameters of the SVM classifier to classify the working condition of gears. The experimental analysis shows that the proposed approach, in which the IMF AR model and SVM are combined, can identify the working condition of gears with a success rate of 100% even with a small number of samples.
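
    The AR-parameter feature extraction applied to each IMF can be sketched as follows. EMD itself is omitted (the IMFs are assumed to be already available), and the AR fit here is a plain least-squares regression on lagged samples, a stand-in for whatever estimator the authors used:

```python
import numpy as np

def ar_features(x, order=4):
    """Fit an AR(p) model x[t] ~ a1*x[t-1] + ... + ap*x[t-p] by least squares
    and return the feature vector [a1, ..., ap, residual variance]."""
    x = np.asarray(x, dtype=float)
    n, p = len(x), order
    X = np.column_stack([x[p - j : n - j] for j in range(1, p + 1)])  # lagged samples
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    var = np.mean((y - X @ a) ** 2)  # variance of the fit residual
    return np.concatenate([a, [var]])

# Sanity check on a synthetic AR(2) process with known coefficients.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + 0.1 * rng.standard_normal()
feats = ar_features(x, order=2)
```

    In the full pipeline the abstract describes, `ar_features` would be computed per IMF and the concatenated vectors fed to an SVM classifier.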

  11. Adaptive Fault-Tolerant Routing in 2D Mesh with Cracky Rectangular Model

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2014-01-01

    This paper focuses on routing in two-dimensional mesh networks. We propose a novel faulty-block model, the cracky rectangular block, for fault-tolerant adaptive routing. All faulty nodes and faulty links are enclosed in this type of block, which is a convex structure, in order to avoid routing livelock. Additionally, the model constructs an interior spanning forest for each block in order to maintain communication with the nodes inside the block. The block-construction procedure is dynamic and fully distributed, and the construction algorithm is simple and easy to implement. The block is fully adaptive: it dynamically adjusts its scale according to the state of the network, whether faults emerge or recover, without shutting down the system. Based on this model, we also develop a distributed fault-tolerant routing algorithm and give a formal proof that messages always reach their destinations if and only if the destination nodes remain connected to the mesh network. The new model and routing algorithm thus maximize the availability of nodes in the network, a noticeable overall improvement in the fault tolerance of the system.

  12. Evolution of strike-slip fault systems and associated geomorphic structures. Model test

    International Nuclear Information System (INIS)

    Ueta, Keichi

    2003-01-01

    Sandbox experiments were performed to investigate the evolution of fault systems and the associated geomorphic structures caused by strike-slip motion on basement faults. A 200 cm long, 40 cm wide, 25 cm high sandbox was used in a strike-slip fault model test. Computerized X-ray tomography applied to the sandbox experiments made it possible to analyze the kinematic evolution, as well as the three-dimensional geometry, of the faults. The deformation of the sand pack surface was analyzed using a laser 3D scanner, a three-dimensional noncontact surface profiling instrument. A comparison of the experimental results with natural cases of active faults reveals the following. In the left-lateral strike-slip fault experiments, the deformation of the sand pack with increasing basement displacement is observed as follows. 1) In three dimensions, right-stepping shears with a 'cirque'/'shell'/'shipbody' shape develop on both sides of the basement fault. The shears on one side of the basement fault join those on the other side, resulting in helicoidal shear surfaces. Shears reach the surface of the sand near or above the basement fault, and en echelon Riedel shears are observed at the surface of the sand. The region between two Riedels is always an up-squeezed block. 2) Lower-angle shears generally branch off from the first Riedel shears. 3) Pressure ridges develop within the zone defined by the right-stepping helicoidal lower-angle shears. 4) Grabens develop between the pressure ridges. 5) Y-shears offset the pressure ridges. 6) With displacement concentrated on the central throughgoing fault zone, a linear trough develops directly above the basement fault; R1 shears and P foliation are observed in the linear trough. This evolution of the shears and associated structures in the fault model tests agrees well with that of natural strike-slip fault systems and their associated geomorphic structures. (author)

  13. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Directory of Open Access Journals (Sweden)

    Emmanuel Mazars

    2008-01-01

    This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI) problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors, and the deviation of the fault-to-residual dynamics from a fault-to-residual reference model, using the ℋ∞ norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI) optimization problem. Since the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI) optimization problem. A jet engine example demonstrates the effectiveness of the proposed approach.
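
    The weighted design objective this record describes can be written schematically in one common ℋ∞ form (the notation here is generic, not the authors' exact symbols): minimize, over admissible filters F,

```latex
\min_{F}\;\; \alpha \,\bigl\lVert T_{rd}(F) \bigr\rVert_{\infty}
\;+\; \beta \,\bigl\lVert T_{rf}(F) - T_{\mathrm{ref}} \bigr\rVert_{\infty}
```

    where $T_{rd}$ maps disturbances and modeling errors to the residual, $T_{rf}$ maps faults to the residual, $T_{\mathrm{ref}}$ is the fault-to-residual reference model, and $\alpha$, $\beta$ are weights. Per the abstract, choosing the optimal $T_{\mathrm{ref}}$ leads to a QMI, which the authors relax to an LMI for tractability.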

  14. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step beyond the traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts such as the path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability.
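
    The cut-set computation such codes perform can be illustrated on a tiny AND/OR failure graph. This sketch handles only acyclic graphs; the article's Fault Graphs additionally allow cycles, which require the path-set and reachability machinery described in the abstract:

```python
def minimal_cut_sets(node, graph):
    """Minimal cut sets of an acyclic AND/OR failure graph.
    graph maps node -> ("basic", []) | ("AND", children) | ("OR", children)."""
    kind, children = graph[node]
    if kind == "basic":
        return [{node}]
    child_sets = [minimal_cut_sets(c, graph) for c in children]
    if kind == "OR":   # any one child's cut set fails an OR node
        combined = [s for cs in child_sets for s in cs]
    else:              # AND: need one cut set from every child
        combined = [set()]
        for cs in child_sets:
            combined = [a | b for a in combined for b in cs]
    # keep only minimal sets: drop duplicates and proper supersets
    minimal = []
    for s in combined:
        if s not in minimal and not any(t < s for t in combined):
            minimal.append(s)
    return minimal

# TOP fails when both subsystems A and B fail; q is a shared component.
g = {"TOP": ("AND", ["A", "B"]),
     "A": ("OR", ["p", "q"]),
     "B": ("OR", ["q", "r"]),
     "p": ("basic", []), "q": ("basic", []), "r": ("basic", [])}
cuts = minimal_cut_sets("TOP", g)
```

    Here the shared component q alone fails the system, which is exactly the kind of weak link the abstract says cut-set analysis exposes.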

  15. How do normal faults grow?

    OpenAIRE

    Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette

    2018-01-01

    Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...

  16. Real-Time Risk and Fault Management in the Mission Evaluation Room for the International Space Station

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.; Novack, S.D.

    2003-05-30

    Effective anomaly resolution in the Mission Evaluation Room (MER) of the International Space Station (ISS) requires consideration of risk in the process of identifying faults and developing corrective actions. Risk models such as fault trees from the ISS Probabilistic Risk Assessment (PRA) can be used to support anomaly resolution, but the functionality required goes significantly beyond what the PRA could provide. Methods and tools are needed that can systematically guide the identification of root causes for on-orbit anomalies, and to develop effective corrective actions that address the event and its consequences without undue risk to the crew or the mission. In addition, an overall information management framework is needed so that risk can be systematically incorporated in the process, and effectively communicated across all the disciplines and levels of management within the space station program. The commercial nuclear power industry developed such a decision making framework, known as the critical safety function approach, to guide emergency response following the accident at Three Mile Island in 1979. This report identifies new methods, tools, and decision processes that can be used to enhance anomaly resolution in the ISS Mission Evaluation Room. Current anomaly resolution processes were reviewed to identify requirements for effective real-time risk and fault management. Experience gained in other domains, especially the commercial nuclear power industry, was reviewed to identify applicable methods and tools. Recommendations were developed for next-generation tools to support MER anomaly resolution, and a plan for implementing the recommendations was formulated. The foundation of the proposed tool set will be a "Mission Success Framework" designed to integrate and guide the anomaly resolution process, and to facilitate consistent communication across disciplines while focusing on the overriding importance of mission success.

  18. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang, L Y; Wang, J G; Hu, J

    2017-01-01

    For a fault classification model based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter: the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine is proposed to determine this tunable parameter. First, the vibration signals are measured and decomposed into different fault feature modes by variational mode decomposition. The fault features of each mode are then assembled into a high-dimensional feature vector set using permutation entropy. Second, the ELM output function is expressed by the inner product of a Gaussian kernel function to adaptively determine the number of hidden-layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling-bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
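
    Permutation entropy, used in this record to turn each decomposed mode into features, is straightforward to compute. A minimal sketch of the Bandt–Pompe ordinal-pattern measure, normalized to [0, 1]:

```python
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of a 1-D signal: build ordinal patterns
    of length m (lag tau), take the Shannon entropy of their frequencies,
    and normalize by log(m!) so the result lies in [0, 1]."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i : i + m * tau : tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

# A monotone ramp has a single ordinal pattern (entropy 0), while white
# noise visits all m! patterns almost uniformly (entropy near 1).
pe_ramp = permutation_entropy(np.arange(200.0))
pe_noise = permutation_entropy(np.random.default_rng(1).standard_normal(5000))
```

    Low values thus indicate regular vibration and high values indicate disorder, which is why the measure discriminates between bearing fault states.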

  19. Analysis and implementation of power management and control strategy for six-phase multilevel ac drive system in fault condition

    Directory of Open Access Journals (Sweden)

    Sanjeevikumar Padmanaban

    2016-03-01

    This research article presents a power management algorithm for post-fault operation of a six-phase (quad) multilevel inverter. The drive circuit consists of four 2-level, three-phase voltage source inverters (VSIs) supplying a six-phase open-end-winding motor or impedance load, and the failure of one VSI is investigated. A simplified level-shifted pulse-width modulation (PWM) algorithm is developed to modulate each pair of three-phase VSIs as a 3-level output voltage generator in normal operation. The total power of the whole ac drive is shared equally among the four isolated DC sources. The developed post-fault algorithm is applied when one VSI fails and the load is fed from the remaining three healthy VSIs. In faulty conditions the multilevel output is reduced from 3-level to 2-level, but the system still operates at degraded power. Numerical simulation and experimental tests have been carried out with the proposed post-fault control algorithm on a three-phase open-end (asymmetrical) induction motor/R-L impedance load. A complete set of simulation and experimental results provided in this paper shows close agreement with the developed theoretical background.

  20. Standards for Documenting Finite‐Fault Earthquake Rupture Models

    KAUST Repository

    Mai, Paul Martin; Shearer, Peter; Ampuero, Jean‐Paul; Lay, Thorne

    2016-04-06

    In this article, we propose standards for documenting and disseminating finite‐fault earthquake rupture models, and related data and metadata. A comprehensive documentation of the rupture models, a detailed description of the data processing steps, and facilitating the access to the actual data that went into the earthquake source inversion are required to promote follow‐up research and to ensure interoperability, transparency, and reproducibility of the published slip‐inversion solutions. We suggest a formatting scheme that describes the kinematic rupture process in an unambiguous way to support subsequent research. We also provide guidelines on how to document the data, metadata, and data processing. The proposed standards and formats represent a first step to establishing best practices for comprehensively documenting input and output of finite‐fault earthquake source studies.

  2. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so under the same boundary conditions and assumptions the two methods express equivalent logic for system reliability. Based on this, and combined with the characteristics of MFM, a method for mapping an MFM to a fault tree was put forward, thus providing a way to establish a fault tree rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to a fault tree is correct. An MFM is easy to understand, create and modify. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)

  3. DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS

    Science.gov (United States)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, the fault tree structure allows fast processing using efficient techniques developed for tree data structures. The similarities between digraphs and fault trees permit the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is a cut set such that, if any failure in the set were removed, the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal

  4. Simulation model of a transient fault controller for an active-stall wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Jauch, C.; Soerensen, P.; Bak Jensen, B.

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operation in case of such faults. The design of the controller is described and its performance assessed by simulations. The control strategies are explained and the behaviour of the turbine discussed. (author)

  5. Triggered dynamics in a model of different fault creep regimes.

    Science.gov (United States)

    Kostić, Srđan; Franović, Igor; Perc, Matjaž; Vasović, Nebojša; Todorović, Kristina

    2014-06-23

    The study is focused on the effect of transient external force induced by a passing seismic wave on fault motion in different creep regimes. Displacement along the fault is represented by the movement of a spring-block model, whereby the uniform and oscillatory motion correspond to the fault dynamics in post-seismic and inter-seismic creep regime, respectively. The effect of the external force is introduced as a change of block acceleration in the form of a sine wave scaled by an exponential pulse. Model dynamics is examined for variable parameters of the induced acceleration changes in reference to periodic oscillations of the unperturbed system above the supercritical Hopf bifurcation curve. The analysis indicates the occurrence of weak irregular oscillations if external force acts in the post-seismic creep regime. When fault motion is exposed to external force in the inter-seismic creep regime, one finds the transition to quasiperiodic- or chaos-like motion, which we attribute to the precursory creep regime and seismic motion, respectively. If the triggered acceleration changes are of longer duration, a reverse transition from inter-seismic to post-seismic creep regime is detected on a larger time scale.

  6. Modeling earthquake magnitudes from injection-induced seismicity on rough faults

    Science.gov (United States)

    Maurer, J.; Dunham, E. M.; Segall, P.

    2017-12-01

    It is an open question whether perturbations to the in-situ stress field due to fluid injection affect the magnitudes of induced earthquakes. It has been suggested that characteristics such as the total injected fluid volume control the size of induced events (e.g., Baisch et al., 2010; Shapiro et al., 2011). On the other hand, Van der Elst et al. (2016) argue that the size distribution of induced earthquakes follows Gutenberg-Richter, the same as tectonic events. Numerical simulations support the idea that ruptures nucleating inside regions with high shear-to-effective normal stress ratio may not propagate into regions with lower stress (Dieterich et al., 2015; Schmitt et al., 2015), however, these calculations are done on geometrically smooth faults. Fang & Dunham (2013) show that rupture length on geometrically rough faults is variable, but strongly dependent on background shear/effective normal stress. In this study, we use a 2-D elasto-dynamic rupture simulator that includes rough fault geometry and off-fault plasticity (Dunham et al., 2011) to simulate earthquake ruptures under realistic conditions. We consider aggregate results for faults with and without stress perturbations due to fluid injection. We model a uniform far-field background stress (with local perturbations around the fault due to geometry), superimpose a poroelastic stress field in the medium due to injection, and compute the effective stress on the fault as inputs to the rupture simulator. Preliminary results indicate that even minor stress perturbations on the fault due to injection can have a significant impact on the resulting distribution of rupture lengths, but individual results are highly dependent on the details of the local stress perturbations on the fault due to geometric roughness.
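
    The effective-stress bookkeeping underlying such simulations follows the standard Coulomb failure criterion: injection raises pore pressure, which reduces effective normal stress and pushes the fault toward failure. A generic sketch of that bookkeeping (not the authors' rupture simulator; sign conventions vary between communities):

```python
def coulomb_stress_change(d_tau, d_sigma_n, d_pore, mu=0.6):
    """Change in Coulomb failure stress, dCFF = d_tau - mu * (d_sigma_n - d_pore).
    Positive values move the fault toward failure. Compression is taken as
    positive; d_tau is the shear-stress change resolved in the slip direction."""
    return d_tau - mu * (d_sigma_n - d_pore)

# A pure pore-pressure increase of 1 MPa, with no change in total stress,
# still promotes failure because it unclamps the fault.
dcff = coulomb_stress_change(d_tau=0.0, d_sigma_n=0.0, d_pore=1.0)
```

    In the study's setup, such injection-induced perturbations are superimposed on the geometrically rough background stress field before each rupture simulation.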

  7. Fault diagnosis and fault-tolerant finite control set-model predictive control of a multiphase voltage-source inverter supplying BLDC motor.

    Science.gov (United States)

    Salehifar, Mehdi; Moreno-Equilaz, Manuel

    2016-01-01

    Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet high reliability demand for application in electric vehicles. The voltage-source inverter (VSI) supplying the motor is subjected to open circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. A FD method based on available information from the control block is proposed; this method is simple, robust to common transients in motor and able to localize multiple open circuit faults. The proposed FD and FT control algorithm are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulation and experimental results are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
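
    The FCS-MPC idea this record relies on is simple to state: at each sampling instant, predict the next state for every admissible switching state and apply the one minimizing a cost. A single-step sketch on a generic R-L phase model (illustrative only; the paper's five-phase predictive model, cost terms, and FD logic are more involved, and the parameter values here are arbitrary):

```python
import numpy as np

def fcs_mpc_step(i_now, i_ref, v_levels, R=0.5, L=1e-3, Ts=1e-5):
    """Finite-control-set MPC: enumerate the candidate inverter voltages,
    predict the next current with a forward-Euler R-L model, and return the
    index of the candidate minimizing the squared current-tracking error."""
    i_pred = i_now + (Ts / L) * (np.asarray(v_levels) - R * i_now)
    cost = (i_ref - i_pred) ** 2
    return int(np.argmin(cost))

# Current below its reference: the controller picks the most positive voltage.
best = fcs_mpc_step(i_now=0.0, i_ref=1.0, v_levels=[-100.0, 0.0, 100.0])
```

    Because the control law is just an enumeration over a finite set, removing the voltage vectors of a failed inverter leg from `v_levels` is all that post-fault reconfiguration requires, which is why the abstract calls the method fast, simple, and flexible.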

  8. FAULT DIAGNOSIS OF AN AIRCRAFT CONTROL SURFACES WITH AN AUTOMATED CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    Blessing D. Ogunvoul

    2017-01-01

    This article is devoted to the fault diagnosis of aircraft control surfaces using fault models to identify specific causes. Failures such as jamming, vibration, extreme runout and performance degradation are covered. It is shown that in case of an actuator failure or structural damage to a flight control surface, aircraft performance decreases significantly. Commercial aircraft frequently operate in areas of military conflict and terrorist activity, where the risk of a shooting attack is high, for example in Syria, Iraq and South Sudan. Accordingly, it is necessary to create and assess fault models to identify flight control failures. The research results demonstrate that an adequate fault model is the first step towards managing the challenges of loss of aircraft controllability; such a model is also an element of an adaptive failure-resistant management model. The research considers the relationship between the parameters of the i-th state of a control surface and its angular rate, and classifies parameters associated with specific control surfaces in order to avoid inconsistency in determining a faulty control surface and its condition. The results of the method presented in this article can be used in the design of an aircraft automated control system for timely identification of a fault or failure of a specific control surface, which would contribute to the survivability of the aircraft and to maintaining an acceptable level of safety after loss of control.

  9. Modeling of periodic great earthquakes on the San Andreas fault: Effects of nonlinear crustal rheology

    Science.gov (United States)

    Reches, Ze'ev; Schubert, Gerald; Anderson, Charles

    1994-01-01

    We analyze the cycle of great earthquakes along the San Andreas fault with a finite element numerical model of deformation in a crust with a nonlinear viscoelastic rheology. The viscous component of deformation has an effective viscosity that depends exponentially on the inverse absolute temperature and nonlinearly on the shear stress; the elastic deformation is linear. Crustal thickness and temperature are constrained by seismic and heat flow data for California. The models are for antiplane strain in a 25-km-thick crustal layer having a very long, vertical strike-slip fault; the crustal block extends 250 km to either side of the fault. During the earthquake cycle that lasts 160 years, a constant plate velocity v(sub p)/2 = 17.5 mm/yr is applied to the base of the crust and to the vertical end of the crustal block 250 km away from the fault. The upper half of the fault is locked during the interseismic period, while its lower half slips at the constant plate velocity. The locked part of the fault is moved abruptly 2.8 m every 160 years to simulate great earthquakes. The results are sensitive to crustal rheology. Models with quartzite-like rheology display profound transient stages in the velocity, displacement, and stress fields. The predicted transient zone extends about 3-4 times the crustal thickness on each side of the fault, significantly wider than the zone of deformation in elastic models. Models with diabase-like rheology behave similarly to elastic models and exhibit no transient stages. The model predictions are compared with geodetic observations of fault-parallel velocities in northern and central California and local rates of shear strain along the San Andreas fault. The observations are best fit by models which are 10-100 times less viscous than a quartzite-like rheology. 
Since the lower crust in California is composed of intermediate to mafic rocks, the present result suggests that the in situ viscosity of the crustal rock is orders of magnitude

  10. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers for continuous processes that are described by analytical models and for discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant contro...

  11. Fault Management Guiding Principles

    Science.gov (United States)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type: deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of the supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  12. ISHM-oriented adaptive fault diagnostics for avionics based on a distributed intelligent agent system

    Science.gov (United States)

    Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei

    2015-10-01

    In this paper, an integrated system health management (ISHM)-oriented adaptive fault diagnostics model for avionics is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnostics has become an extremely complicated task. For the proposed fault diagnostic system, specific approaches, such as the artificial immune system, the intelligent agent system, and Dempster-Shafer evidence theory, are used to conduct deep avionics fault diagnostics. Through this proposed fault diagnostic system, efficient and accurate diagnostics can be achieved. A numerical example applies the proposed hybrid diagnostics to a set of radar transmitters in an avionics system and illustrates that the proposed system and model have the ability to achieve efficient and accurate fault diagnostics. By analyzing the diagnostic system's feasibility and practicality, the advantages of this system are demonstrated.
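The Dempster-Shafer evidence-fusion step this abstract mentions can be sketched with Dempster's rule of combination: two basic probability assignments over fault hypotheses are merged, with conflicting mass renormalized away. The hypotheses and mass values below are toy examples, not the paper's data.

```python
# Sketch of Dempster's rule of combination over frozenset hypotheses.
# Masses m1, m2 map each hypothesis set to its basic probability assignment.

def dempster_combine(m1, m2):
    """Combine two basic probability assignments; conflicting (disjoint)
    mass is discarded and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict          # note: total conflict (k == 0) is undefined
    return {h: v / k for h, v in combined.items()}

F1, F2 = frozenset({"fault1"}), frozenset({"fault2"})
# Two "sensors" (agents) report beliefs over the same two fault hypotheses.
m = dempster_combine({F1: 0.8, F2: 0.2}, {F1: 0.6, F2: 0.4})
```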

  13. Developing a Fault Management Guidebook for Nasa's Deep Space Robotic Missions

    Science.gov (United States)

    Fesq, Lorraine M.; Jacome, Raquel Weitl

    2015-01-01

    NASA designs and builds systems that achieve incredibly ambitious goals, as evidenced by the Curiosity rover traversing on Mars, the highly complex International Space Station orbiting our Earth, and the compelling plans for capturing, retrieving and redirecting an asteroid into a lunar orbit to create a nearby target to be investigated by astronauts. In order to accomplish these feats, the missions must be imbued with sufficient knowledge and capability not only to realize the goals, but also to identify and respond to off-nominal conditions. Fault Management (FM) is the discipline of establishing how a system will respond to preserve its ability to function even in the presence of faults. In 2012, NASA released a draft FM Handbook in an attempt to coalesce the field by establishing a unified terminology and a common process for designing FM mechanisms. However, FM approaches are very diverse across NASA, especially between the different mission types such as Earth orbiters, launch vehicles, deep space robotic vehicles and human spaceflight missions, and the authors were challenged to capture and represent all of these views. The authors recognized that a necessary precursor step is for each sub-community to codify its FM policies, practices and approaches in individual, focused guidebooks. Then, the sub-communities can look across NASA to better understand the different ways off-nominal conditions are addressed, and to seek commonality or at least an understanding of the multitude of FM approaches. This paper describes the development of the "Deep Space Robotic Fault Management Guidebook," which is intended to be the first of NASA's FM guidebooks. Its purpose is to be a field-guide for FM practitioners working on deep space robotic missions, as well as a planning tool for project managers. Publication of this Deep Space Robotic FM Guidebook is expected in early 2015. The guidebook will be posted on NASA's Engineering Network on the FM Community of Practice.

  14. Fault Tolerance Assistant (FTA): An Exception Handling Programming Model for MPI Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Aiman [Univ. of Chicago, IL (United States). Dept. of Computer Science; Laguna, Ignacio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sato, Kento [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Islam, Tanzima [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-23

    Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.
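FTA-MPI itself targets C/MPI codes; the try/catch failure-handling pattern the abstract describes can nonetheless be sketched language-neutrally: the main loop traps a process failure, rolls back to the last checkpoint, and retries. The failure injection and in-memory checkpoint below are stand-ins, not the FTA-MPI API.

```python
# Language-neutral sketch of try/catch-based recovery from a (simulated)
# process failure. ProcessFailure, the checkpoint scheme, and the fail_at
# injection point are illustrative stand-ins.

class ProcessFailure(Exception):
    pass

def run_timesteps(state, n_steps, fail_at=None):
    """Advance the computation; optionally raise at a chosen step."""
    for step in range(state["step"], n_steps):
        if step == fail_at:
            raise ProcessFailure(f"rank lost at step {step}")
        state["step"] = step + 1
        state["value"] += 1
    return state

def resilient_run(n_steps, fail_at):
    state = {"step": 0, "value": 0}
    checkpoint = dict(state)                   # lightweight checkpoint
    try:
        state = run_timesteps(state, n_steps, fail_at)
    except ProcessFailure:
        state = dict(checkpoint)               # roll back to checkpoint
        state = run_timesteps(state, n_steps)  # retry the lost work
    return state

result = resilient_run(n_steps=5, fail_at=3)
```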

  15. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Muhammad Sohaib

    2017-12-01

    Full Text Available Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs).

  16. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis.

    Science.gov (United States)

    Sohaib, Muhammad; Kim, Cheol-Hong; Kim, Jong-Myon

    2017-12-11

    Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs).
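The "hybrid feature pool" in this abstract draws on classic time-domain vibration statistics. A minimal sketch of such a pool (RMS, kurtosis, crest factor) is shown below; the paper's actual feature set and its SAE/DNN classification stage are not reproduced here.

```python
# Small time-domain feature pool for a vibration signal, of the kind a
# hybrid feature pool typically includes. Feature choice is illustrative.
import math

def features(signal):
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n      # population variance
    rms = math.sqrt(sum(x * x for x in signal) / n)      # root mean square
    # kurtosis (non-excess): 4th central moment over squared variance
    kurt = (sum((x - mean) ** 4 for x in signal) / n) / (var ** 2) if var else 0.0
    crest = max(abs(x) for x in signal) / rms if rms else 0.0
    return {"rms": rms, "kurtosis": kurt, "crest": crest}

f = features([1.0, -1.0, 1.0, -1.0])   # a pure square wave
```

For a ±1 square wave the RMS and crest factor are both 1, and impulsive bearing defects would show up as a crest factor and kurtosis well above this baseline.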

  17. Artificial neural network application for space station power system fault diagnosis

    Science.gov (United States)

    Momoh, James A.; Oliver, Walter E.; Dias, Lakshman G.

    1995-01-01

    This study presents a methodology for fault diagnosis using a Two-Stage Artificial Neural Network Clustering Algorithm. Previously, SPICE models of a 5-bus DC power distribution system with assumed constant output power during contingencies from the DDCU were used to evaluate the ANN's fault diagnosis capabilities. This on-going study uses EMTP models of the components (distribution lines, SPDU, TPDU, loads) and power sources (DDCU) of Space Station Alpha's electrical Power Distribution System as a basis for the ANN fault diagnostic tool. The results from the two studies are contrasted. In the event of a major fault, ground controllers need the ability to identify the type of fault, isolate the fault to the orbital replaceable unit level, and provide the necessary information for the power management expert system to optimally determine a degraded-mode load schedule. To accomplish these goals, the electrical power distribution system's architecture can be subdivided into three major classes: DC-DC converter to loads, DC Switching Unit (DCSU) to Main Bus Switching Unit (MBSU), and power sources to DCSU. Each class, which has its own electrical characteristics and operations, requires a unique fault analysis philosophy. This study identifies these philosophies as Riddles 1, 2 and 3, respectively. The on-going study addresses Riddle-1. It is concluded that the combination of EMTP models of the DDCU, distribution cables, and electrical loads yields a more accurate model of system behavior, and in addition yielded more accurate ANN fault diagnosis than the results obtained with the SPICE models.

  18. Model based fault diagnosis in a centrifugal pump application using structural analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  19. Model Based Fault Diagnosis in a Centrifugal Pump Application using Structural Analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  20. Model-based fault diagnosis approach on external short circuit of lithium-ion battery used in electric vehicles

    International Nuclear Information System (INIS)

    Chen, Zeyu; Xiong, Rui; Tian, Jinpeng; Shang, Xiong; Lu, Jiahuan

    2016-01-01

    Highlights: • The characteristics of the ESC fault of lithium-ion batteries are investigated experimentally. • The proposed method to simulate the electrical behavior of the ESC fault is viable. • Ten parameters in the presented fault model were optimized using a DPSO algorithm. • A two-layer model-based fault diagnosis approach for battery ESC is proposed. • The effectiveness and robustness of the proposed algorithm have been evaluated. - Abstract: This study investigates the external short circuit (ESC) fault characteristics of lithium-ion batteries experimentally. An experiment platform is established and ESC tests are implemented on ten 18650-type lithium cells at different state-of-charges (SOCs). Based on the experiment results, several efforts have been made. (1) The ESC process can be divided into two periods, and the electrical and thermal behaviors within these two periods are analyzed. (2) A modified first-order RC model is employed to simulate the electrical behavior of the lithium cell during the ESC fault process. The model parameters are re-identified by a dynamic-neighborhood particle swarm optimization algorithm. (3) A two-layer model-based ESC fault diagnosis algorithm is proposed. The first layer conducts preliminary fault detection and the second layer gives a precise model-based diagnosis. Four new cells are short-circuited to evaluate the proposed algorithm. The results show that the ESC fault can be diagnosed within 5 s, and the error between the model and measured data is less than 0.36 V. The effectiveness of the fault diagnosis algorithm is not sensitive to the precision of the battery SOC estimate. The proposed algorithm can still make the correct diagnosis even if there is 10% error in the SOC estimation.
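The model-based layer in this abstract compares measured terminal voltage against a first-order RC equivalent-circuit prediction and flags a fault when the residual is large. The sketch below uses that structure with illustrative parameter values; the threshold reuses the abstract's 0.36 V figure only as an example, and the paper's DPSO re-identification step is not reproduced.

```python
# First-order RC battery model prediction plus a residual threshold check.
# R0, R1, C1, dt, OCV and the load current are hypothetical values.
import math

def rc_model_voltage(ocv, i_load, u_rc_prev, R0=0.05, R1=0.02, C1=2000.0, dt=1.0):
    """Terminal voltage V = OCV - R0*i - u_RC, with the RC branch voltage
    u_RC relaxing toward R1*i with time constant R1*C1."""
    tau = R1 * C1
    decay = math.exp(-dt / tau)
    u_rc = u_rc_prev * decay + R1 * (1.0 - decay) * i_load
    return ocv - R0 * i_load - u_rc, u_rc

def esc_fault(v_measured, v_model, threshold=0.36):
    """Flag an external short circuit when the residual exceeds threshold."""
    return abs(v_measured - v_model) > threshold

v_pred, _ = rc_model_voltage(ocv=3.7, i_load=2.0, u_rc_prev=0.0)
```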

  1. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2010-02-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize fault identification of the thrusters. The fault accommodation unit is based on direct calculation of moments, and the result of fault identification is used to solve the control allocation problem. The approach enables continuous fault identification for the underwater vehicle (UV). Results from the experiment are provided to illustrate the performance of the proposed method in uncertain, continuous fault situations.

  2. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2009-12-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize fault identification of the thrusters. The fault accommodation unit is based on direct calculation of moments, and the result of fault identification is used to solve the control allocation problem. The approach enables continuous fault identification for the underwater vehicle (UV). Results from the experiment are provided to illustrate the performance of the proposed method in uncertain, continuous fault situations.
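The fault-accommodation step can be illustrated with a toy control-allocation rule: once diagnosis assigns each thruster a health level, the demanded force is redistributed over the remaining effectiveness. The geometry-free proportional rule below is an illustrative stand-in for the paper's direct moment calculation.

```python
# Toy control allocation after a thruster fault: healthy thrusters share
# the commanded force in proportion to their remaining effectiveness.

def allocate(total_force, health):
    """health[i] in [0, 1]: 1 = fully healthy, 0 = failed.
    Returns the per-thruster force commands."""
    s = sum(health)
    if s == 0:
        raise ValueError("no working thrusters")
    return [total_force * h / s for h in health]

# Thruster 1 has failed completely; thruster 3 is at half effectiveness.
cmds = allocate(100.0, [1.0, 0.0, 1.0, 0.5])
```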

  3. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Full Text Available Sensor data-based test selection optimization is the basis for designing test work, ensuring that the system is tested under constraints on conventional indexes such as fault detection rate (FDR) and fault isolation rate (FIR). From the perspective of equipment maintenance support, the ambiguity of fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that considers the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and the test set. The objective function of the proposed model minimizes test cost under FDR and FIR constraints. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real, complicated engineering systems. The experimental results verify the effectiveness of the proposed method.
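The role of the fault-test dependency matrix can be illustrated with a simple cost-aware greedy cover: pick tests until every fault is detected by at least one selected test. This heuristic is only a stand-in for the paper's chaotic discrete PSO, and it handles only the detection (FDR-style) constraint, not isolation ambiguity.

```python
# Greedy test selection from a fault-test dependency matrix D, where
# D[f][t] = 1 if test t detects fault f. Matrix and costs are toy values.

def select_tests(D, costs):
    """Return indices of chosen tests covering every fault at least once,
    greedily maximizing (newly covered faults) / cost at each step."""
    n_faults, n_tests = len(D), len(D[0])
    uncovered, chosen = set(range(n_faults)), []
    while uncovered:
        best, best_ratio = None, 0.0
        for t in range(n_tests):
            if t in chosen:
                continue
            gain = sum(1 for f in uncovered if D[f][t])
            if gain and gain / costs[t] > best_ratio:
                best, best_ratio = t, gain / costs[t]
        if best is None:
            raise ValueError("some faults are undetectable by any test")
        chosen.append(best)
        uncovered -= {f for f in uncovered if D[f][best]}
    return chosen

D = [[1, 0, 1],      # fault 0 detected by tests 0 and 2
     [0, 1, 1],      # fault 1 detected by tests 1 and 2
     [0, 1, 0]]      # fault 2 detected only by test 1
picked = select_tests(D, costs=[1.0, 1.0, 1.0])
```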

  4. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the foundation and accuracy of the proposed model.

  5. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. 
This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  6. Three-dimensional numerical modeling of the influence of faults on groundwater flow at Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Andrew J.B. [Univ. of California, Berkeley, CA (United States)

    1999-06-01

    Numerical simulations of groundwater flow at Yucca Mountain, Nevada are used to investigate how the faulted hydrogeologic structure influences groundwater flow from a proposed high-level nuclear waste repository. Simulations are performed using a 3-D model that has a unique grid block discretization to accurately represent the faulted geologic units, which have variable thicknesses and orientations. Irregular grid blocks enable explicit representation of these features. Each hydrogeologic layer is discretized into a single layer of irregular and dipping grid blocks, and faults are discretized such that they are laterally continuous and displacement varies along strike. In addition, the presence of altered fault zones is explicitly modeled, as appropriate. The model has 23 layers and 11 faults, and approximately 57,000 grid blocks and 200,000 grid block connections. In the past, field measurement of upward vertical head gradients and high water table temperatures near faults were interpreted as indicators of upwelling from a deep carbonate aquifer. Simulations show, however, that these features can be readily explained by the geometry of hydrogeologic layers, the variability of layer permeabilities and thermal conductivities, and by the presence of permeable fault zones or faults with displacement only. In addition, a moderate water table gradient can result from fault displacement or a laterally continuous low permeability fault zone, but not from a high permeability fault zone, as others postulated earlier. Large-scale macrodispersion results from the vertical and lateral diversion of flow near the contact of high and low permeability layers at faults, and from upward flow within high permeability fault zones. Conversely, large-scale channeling can occur due to groundwater flow into areas with minimal fault displacement. 
Contaminants originating at the water table can flow in a direction significantly different than that of the water table gradient, and isolated

  7. Three-dimensional numerical modeling of the influence of faults on groundwater flow at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Cohen, Andrew J.B.

    1999-01-01

    Numerical simulations of groundwater flow at Yucca Mountain, Nevada are used to investigate how the faulted hydrogeologic structure influences groundwater flow from a proposed high-level nuclear waste repository. Simulations are performed using a 3-D model that has a unique grid block discretization to accurately represent the faulted geologic units, which have variable thicknesses and orientations. Irregular grid blocks enable explicit representation of these features. Each hydrogeologic layer is discretized into a single layer of irregular and dipping grid blocks, and faults are discretized such that they are laterally continuous and displacement varies along strike. In addition, the presence of altered fault zones is explicitly modeled, as appropriate. The model has 23 layers and 11 faults, and approximately 57,000 grid blocks and 200,000 grid block connections. In the past, field measurement of upward vertical head gradients and high water table temperatures near faults were interpreted as indicators of upwelling from a deep carbonate aquifer. Simulations show, however, that these features can be readily explained by the geometry of hydrogeologic layers, the variability of layer permeabilities and thermal conductivities, and by the presence of permeable fault zones or faults with displacement only. In addition, a moderate water table gradient can result from fault displacement or a laterally continuous low permeability fault zone, but not from a high permeability fault zone, as others postulated earlier. Large-scale macrodispersion results from the vertical and lateral diversion of flow near the contact of high and low permeability layers at faults, and from upward flow within high permeability fault zones. Conversely, large-scale channeling can occur due to groundwater flow into areas with minimal fault displacement. 
Contaminants originating at the water table can flow in a direction significantly different than that of the water table gradient, and isolated

  8. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    Science.gov (United States)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows the design of residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
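Qualitative fault isolation of the kind this abstract describes can be sketched as signature matching: each fault has an expected pattern of residual deviations, and each observed deviation prunes the candidate set. The signatures and symbol convention ('+', '-', '0', with '?' for not-yet-observed) below are made up for illustration.

```python
# Qualitative fault isolation by signature matching. Each fault maps to an
# expected residual-deviation pattern over three residuals; all signatures
# here are hypothetical.

def isolate(signatures, observed):
    """Keep faults whose expected signature is consistent with every
    observed residual deviation ('?' means the residual has not deviated
    or has not been observed yet)."""
    return [f for f, sig in signatures.items()
            if all(o == '?' or o == s for s, o in zip(sig, observed))]

signatures = {"valve_stuck": "+0-",
              "sensor_bias": "+--",
              "leak":        "0+-"}
# Residual 1 deviated '+', residual 2 unknown, residual 3 deviated '-'.
candidates = isolate(signatures, observed="+?-")
```

As more residuals deviate (or observation delays resolve), the candidate list shrinks toward a single fault or a remaining ambiguity group.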

  9. Model-Based Fault Detection and Isolation of a Liquid-Cooled Frequency Converter on a Wind Turbine

    DEFF Research Database (Denmark)

    Li, Peng; Odgaard, Peter Fogh; Stoustrup, Jakob

    2012-01-01

    ... advanced fault detection and isolation schemes. In this paper, an observer-based fault detection and isolation method is presented for the cooling system in a liquid-cooled frequency converter on a wind turbine, which is built up in a scaled version in the laboratory. A dynamic model of the scaled cooling ... system is derived based on an energy balance equation. A fault analysis is conducted to determine the severity and occurrence rate of possible component faults and their end effects in the cooling system. A method using an unknown input observer is developed in order to detect and isolate the faults based ... on the developed dynamic model. The designed fault detection and isolation algorithm is applied to a set of measured experimental data in which different faults are artificially introduced into the scaled cooling system. The experimental results conclude that the different faults are successfully detected...
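The observer-based residual generation this abstract describes can be illustrated with a scalar toy: a Luenberger-style observer tracks a first-order plant (a stand-in for the thermal model), and the output residual stays at zero nominally but grows once an additive fault is injected. All model numbers are illustrative, and this simple observer is a stand-in for the paper's unknown input observer.

```python
# Scalar sketch of observer-based residual generation. Plant x and
# observer x_hat share the same nominal first-order dynamics; a constant
# additive fault d enters the plant only. a, b, L_gain, fault are toy values.

def simulate(n_steps, a=0.9, b=0.1, L_gain=0.5, fault_at=None, fault=2.0):
    x, x_hat, residuals = 0.0, 0.0, []
    for k in range(n_steps):
        y = x                                   # measurement
        residuals.append(abs(y - x_hat))        # output residual
        d = fault if (fault_at is not None and k >= fault_at) else 0.0
        x = a * x + b * 1.0 + d                 # plant (constant input, fault d)
        x_hat = a * x_hat + b * 1.0 + L_gain * (y - x_hat)  # observer update
    return residuals

r_nominal = simulate(20)                # residual stays at zero
r_faulty = simulate(20, fault_at=10)    # residual grows after the fault
```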

  10. Data-Reconciliation Based Fault-Tolerant Model Predictive Control for a Biomass Boiler

    Directory of Open Access Journals (Sweden)

    Palash Sarkar

    2017-02-01

    Full Text Available This paper presents a novel, effective method to handle critical sensor faults affecting a control system devised to operate a biomass boiler. In particular, the proposed method consists of integrating a data reconciliation algorithm in a model predictive control loop, so as to annihilate the effects of faults occurring in the sensor of the flue gas oxygen concentration by feeding the controller with the reconciled measurements. Indeed, the oxygen content in flue gas is a key variable in the control of biomass boilers due to its close connections with both combustion efficiency and polluting emissions. The main benefit of including the data reconciliation algorithm in the loop as a fault-tolerant component, with respect to applying standard fault-tolerant methods, is that controller reconfiguration is no longer required, since the original controller operates on the restored, reliable data. The integrated data reconciliation–model predictive control (MPC) strategy has been validated by running simulations on a specific type of biomass boiler, the KPA Unicon BioGrate boiler.
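    The core of a data reconciliation step can be illustrated with a single linear balance constraint: measurements are adjusted, weighted by their variances, so that the reconciled values satisfy the constraint exactly. The flow values, constraint, and variances below are made up for illustration:

```python
def reconcile(y, a, var):
    """Variance-weighted least-squares reconciliation of measurements y
    subject to the linear balance constraint sum(a_i * x_i) = 0.
    Closed form: x_i = y_i - var_i * a_i * lambda, with lambda chosen so
    the constraint holds; noisier sensors absorb more of the adjustment."""
    num = sum(ai * yi for ai, yi in zip(a, y))        # constraint residual A*y
    den = sum(ai * ai * vi for ai, vi in zip(a, var)) # A * V * A^T
    lam = num / den
    return [yi - vi * ai * lam for yi, ai, vi in zip(y, a, var)]

# Illustrative mass balance f1 + f2 = f3 with a small inconsistency:
flows = reconcile([10.0, 5.2, 14.0], [1.0, 1.0, -1.0], [1.0, 1.0, 1.0])
```

    In a fault-tolerant loop, inflating the variance assigned to a suspected faulty sensor pushes most of the correction onto that sensor, restoring consistent data for the controller.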

  11. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    Science.gov (United States)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly, the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always sufficient to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g., InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California, earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred; slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional kinematic smoothing constraints on slip and a simple physical condition of uniform stress.
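    The mixed linear/nonlinear strategy (solve the linear slip parameter analytically by least squares inside a Monte Carlo walk over the nonlinear geometry parameter) can be sketched on a toy 1-D problem. The exponential "geometry" response, the noise level, and all numbers are illustrative, not the paper's fault model:

```python
import math
import random

def forward(decay, xs):
    # Toy response: nonlinear in the "geometry" parameter (a decay rate).
    return [math.exp(-decay * x) for x in xs]

def profile_loglike(decay, xs, ys, sigma=0.05):
    """Solve the linear slip amplitude analytically by least squares,
    then evaluate the (profile) log-likelihood of the geometry parameter."""
    g = forward(decay, xs)
    slip = sum(gi * yi for gi, yi in zip(g, ys)) / sum(gi * gi for gi in g)
    sse = sum((yi - slip * gi) ** 2 for gi, yi in zip(g, ys))
    return -sse / (2 * sigma ** 2), slip

random.seed(0)
xs = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(-0.9 * x) + random.gauss(0.0, 0.02) for x in xs]  # truth: slip=2, decay=0.9

decay = 0.5                       # deliberately poor starting geometry
ll, _ = profile_loglike(decay, xs, ys)
samples = []
for _ in range(3000):             # Metropolis random walk on the geometry
    prop = decay + random.gauss(0.0, 0.05)
    if 0.0 < prop < 3.0:          # flat prior on an assumed admissible range
        llp, _ = profile_loglike(prop, xs, ys)
        if math.log(random.random()) < llp - ll:
            decay, ll = prop, llp
    samples.append(decay)
post = samples[1000:]             # discard burn-in
mean_decay = sum(post) / len(post)
```

    The spread of `post` plays the role of the geometry uncertainty (e.g., the >20 degree strike/dip range reported for San Simeon), while the slip is recovered analytically for each sampled geometry.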

  12. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for coseismic ΔCFS values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS values from the coefficient of variation (CV) and deem ΔCFS values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS values to change sign. The most reliable ΔCFS values are found away from the source fault in the middle of positive and negative ΔCFS lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies that did not consider source model uncertainties may have led to a biased interpretation of the importance of static stress triggering.
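    The CV-based reliability criterion is simple to state in code; the tiny two-cell ensemble below is invented for illustration:

```python
import math

def reliability_mask(ensemble, cv_max=0.5):
    """Cell-by-cell reliability over an ensemble of stress-change maps.
    A cell is deemed reliable where |mean| is at least twice the sample
    standard deviation, i.e. CV = std/|mean| <= 0.5 (written without
    division so a zero mean is handled cleanly)."""
    n = len(ensemble)
    mask = []
    for j in range(len(ensemble[0])):
        vals = [m[j] for m in ensemble]
        mu = sum(vals) / n
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / (n - 1))
        mask.append(sd <= cv_max * abs(mu))
    return mask
```

    Cells near sign changes of the stress field (mean near zero, spread large) fail the criterion, exactly the behavior described for the lobes' boundaries.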

  13. Analysis of Fault Permeability Using Mapping and Flow Modeling, Hickory Sandstone Aquifer, Central Texas

    Energy Technology Data Exchange (ETDEWEB)

    Nieto Camargo, Jorge E., E-mail: jorge.nietocamargo@aramco.com; Jensen, Jerry L., E-mail: jjensen@ucalgary.ca [University of Calgary, Department of Chemical and Petroleum Engineering (Canada)

    2012-09-15

    Reservoir compartments, typical targets for infill well locations, are commonly created by faults that may reduce permeability. A narrow fault may consist of a complex assemblage of deformation elements that result in spatially variable and anisotropic permeabilities. We report on the permeability structure of a km-scale fault sampled by drilling through a faulted siliciclastic aquifer in central Texas. Probe and whole-core permeabilities, serial CAT scans, and textural and structural data from selected core samples are used to understand the permeability structure of fault zones and develop predictive models of fault zone permeability. Using numerical flow simulation, it is possible to predict the permeability anisotropy associated with faults and evaluate the effect of individual deformation elements on the overall permeability tensor. We found relationships between the permeability of the host rock and those of the highly deformed (HD) fault elements according to the fault throw. The lateral continuity and predictable permeability of the HD fault elements enhance the capability for estimating the effects of subseismic faulting on fluid flow in low-shale reservoirs.

  14. Scalable Fault-Tolerant Location Management Scheme for Mobile IP

    Directory of Open Access Journals (Sweden)

    JinHo Ahn

    2001-11-01

    Full Text Available As the number of mobile nodes registering with a network rapidly increases in Mobile IP, multiple mobility agents (home or foreign agents) can be allocated to a network in order to improve performance and availability. Previous fault-tolerant schemes (denoted PRT schemes) for masking failures of the mobility agents use passive replication techniques. However, they result in high failure-free latency during the registration process if the number of mobility agents in the same network increases, and they force each mobility agent to manage bindings of all the mobile nodes registering with its network. In this paper, we present a new fault-tolerant scheme (denoted CML scheme) using checkpointing and message logging techniques. The CML scheme achieves low failure-free latency even if the number of mobility agents in a network increases, and improves scalability to a large number of mobile nodes registering with each network compared with the PRT schemes. Additionally, the CML scheme allows each failed mobility agent, when it is repaired, to recover the bindings of the mobile nodes registered with it even if all the other mobility agents in the same network fail concurrently.
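    The checkpointing-plus-message-logging idea behind such a scheme can be sketched for a single mobility agent: bindings are periodically checkpointed, registration messages since the last checkpoint are logged, and recovery replays the log on top of the checkpoint. The class, its checkpoint interval, and the node names are illustrative, not the paper's protocol:

```python
class MobilityAgent:
    """Toy mobility agent: bindings map mobile node -> care-of address."""

    def __init__(self, checkpoint_every=3):
        self.bindings = {}          # volatile state, lost on failure
        self.checkpoint = {}        # stable-storage snapshot
        self.log = []               # registrations since last checkpoint
        self.checkpoint_every = checkpoint_every
        self._since = 0

    def register(self, node, coa):
        self.log.append((node, coa))      # log before applying (pessimistic)
        self.bindings[node] = coa
        self._since += 1
        if self._since >= self.checkpoint_every:
            self.checkpoint = dict(self.bindings)   # take checkpoint
            self.log.clear()
            self._since = 0

    def recover(self):
        """Rebuild bindings from the last checkpoint plus the message log."""
        self.bindings = dict(self.checkpoint)
        for node, coa in self.log:
            self.bindings[node] = coa

# Usage: register four nodes, simulate a crash, then recover.
agent = MobilityAgent(checkpoint_every=3)
for i in range(4):
    agent.register(f"mn{i}", f"coa{i}")
saved = dict(agent.bindings)
agent.bindings = {}    # crash loses volatile state
agent.recover()
```
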

  15. Improving fault image by determination of optimum seismic survey parameters using ray-based modeling

    Science.gov (United States)

    Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali

    2018-06-01

    In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave-field behavior involved in such structures. In the petroleum industry, detecting faults has become crucial for assessing reservoir potential, since faults can act as traps for hydrocarbons. In this regard, seismic survey modeling is employed to construct a model close to the real structure and obtain realistic synthetic seismic data. Seismic modeling software, the velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of the structures can be constructed by integrating 2D seismic data, geological reports and well information. The effects of various survey designs can be investigated by the analysis of illumination maps and flower plots, and seismic processing of the synthetic data output can describe the target image under different survey parameters. Seismic modeling is therefore one of the most economical ways to establish and test the optimum acquisition parameters for obtaining the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation to image fault zone structures through ray-tracing seismic modeling. The results prove that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.

  16. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  17. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    Science.gov (United States)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta, using 3D seismic reflection data and new…

  18. Guaranteed Cost Fault-Tolerant Control for Networked Control Systems with Sensor Faults

    Directory of Open Access Journals (Sweden)

    Qixin Zhu

    2015-01-01

    Full Text Available Owing to the large scale and complicated structure of networked control systems, time-varying sensor faults can inevitably occur when the system works in a poor environment. A guaranteed cost fault-tolerant controller for networked control systems with time-varying sensor faults is designed in this paper. Based on the time delay of the network transmission environment, the networked control systems with sensor faults are modeled as a discrete-time system with uncertain parameters, and the model is related to the boundary values of the sensor faults. Moreover, using Lyapunov stability theory and the linear matrix inequality (LMI) approach, the guaranteed cost fault-tolerant controller is shown to render such networked control systems asymptotically stable. Finally, simulations are included to demonstrate the theoretical results.

  19. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas-path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the clean-engine performance parameters, free of any engine faults, calculated by a base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems due to their good learning performance, but they suffer from low accuracy and long learning times when a large learning database must be built, and they require a very complex structure to effectively identify single or multiple gas-path component faults. This work inversely builds a base performance model of a turboprop engine for a high-altitude operation UAV using measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods such as Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained with a fault learning database obtained from the developed base performance model. In training the NN, the Feed-Forward Back-Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  20. Stresses in faulted tunnel models by photoelasticity and adaptive finite element

    International Nuclear Information System (INIS)

    Ladkany, S.G.; Huang, Y.

    1995-01-01

    Research efforts in this area continue to investigate the development of a proper technique to analyze the stresses in the Ghost Dance fault and the effect of the fault on the stability of drifts in the proposed repository. Results from two parallel techniques are being compared to each other: photoelastic models and finite element (FE) models. The photoelastic plexiglass model (88.89 mm thick, 256.1 mm long and wide) has two adjacent square openings (57.95 mm long and wide) and a central round opening (57.95 mm diameter) placed at a clear distance approximately equal to its diameter from the square openings. The vertical loading on top of the model is 2269 N (500 lb). Saw cuts (0.5388 mm wide), representing a fault, are propagated from the tunnels outward, with stress measurements taken at predefined locations as the saw cuts increase in length. The FE model duplicates the photoelastic model exactly. The adaptive mesh generation method is used to refine the FE grid at every step of the analysis. This nonlinear iterative computational technique uses various percent-tolerance errors in the convergence of stress values as the criterion for ending the iterative process.

  1. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    Science.gov (United States)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The SVM is extended to a nonlinear, multi-class classifier, PSO is used to optimize the parameters of this multi-class SVM model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard support vector machine and the genetic-algorithm-optimized support vector machine, proving that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
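    A generic PSO loop of the kind used to tune SVM hyperparameters can be sketched as follows. The surrogate cross-validation-error function, its optimum near (C, gamma) = (10, 0.1), and all swarm constants are invented stand-ins for an actual SVM training run:

```python
import math
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm minimizer over box-bounded parameters."""
    random.seed(1)                       # fixed seed for reproducibility
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def surrogate_cv_error(p):
    """Hypothetical stand-in for SVM cross-validation error over (C, gamma);
    in the real model this would train and score the multi-class SVM."""
    C, gamma = p
    return (math.log10(C) - 1.0) ** 2 + (math.log10(gamma) + 1.0) ** 2

best, best_val = pso(surrogate_cv_error, [(0.1, 1000.0), (1e-4, 10.0)])
```

    In the actual method, `surrogate_cv_error` would be replaced by the cross-validated misclassification rate of the SVM trained with candidate (C, gamma).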

  2. Application of improved degree of grey incidence analysis model in fault diagnosis of steam generator

    International Nuclear Information System (INIS)

    Zhao Xinwen; Ren Xin

    2014-01-01

    In order to further reduce misoperation after faults occur in a marine nuclear-powered system, a model based on an entropy-optimized weighted degree of grey incidence and a corresponding fault diagnosis system are proposed, and simulation experiments on typical faults of the steam generator of a marine nuclear-powered system are conducted. The results show that the diagnosis system based on the improved degree-of-grey-incidence model is stable and draws correct conclusions, satisfies real-time diagnosis requirements, and achieves higher resolving power for fault subjection degrees. (authors)
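    Deng's degree of grey incidence, the quantity underlying such diagnosis models, can be computed in a few lines: the observed symptom sequence is compared against each fault-mode template, and the mode with the highest grade is diagnosed. The symptom vectors and fault labels below are hypothetical:

```python
def grey_grades(reference, candidates, rho=0.5):
    """Deng's degree of grey incidence between a reference (observed) symptom
    sequence and each candidate fault-mode sequence. rho is the distinguishing
    coefficient; dmin/dmax are taken globally over all candidates, as in the
    standard formulation."""
    all_diffs = {name: [abs(r - c) for r, c in zip(reference, seq)]
                 for name, seq in candidates.items()}
    flat = [d for diffs in all_diffs.values() for d in diffs]
    dmin, dmax = min(flat), max(flat)
    if dmax == 0.0:                      # all sequences identical to reference
        return {name: 1.0 for name in candidates}
    return {name: sum((dmin + rho * dmax) / (d + rho * dmax) for d in diffs) / len(diffs)
            for name, diffs in all_diffs.items()}

# Hypothetical normalized symptom vectors for two steam-generator fault modes:
observed = [0.9, 0.1, 0.4]
library = {"tube_rupture": [0.88, 0.12, 0.41], "feedwater_loss": [0.2, 0.9, 0.7]}
grades = grey_grades(observed, library)
```

    A weighted variant (as in the paper) replaces the plain mean of the incidence coefficients with entropy-optimized weights per symptom.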

  3. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling: what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including…

  4. Fault diagnosis of locomotive electro-pneumatic brake through uncertain bond graph modeling and robust online monitoring

    Science.gov (United States)

    Niu, Gang; Zhao, Yajun; Defoort, Michael; Pecht, Michael

    2015-01-01

    To improve reliability, safety and efficiency, advanced methods of fault detection and diagnosis become increasingly important for many technical fields, especially for safety related complex systems like aircraft, trains, automobiles, power plants and chemical plants. This paper presents a robust fault detection and diagnostic scheme for a multi-energy domain system that integrates a model-based strategy for system fault modeling and a data-driven approach for online anomaly monitoring. The developed scheme uses LFT (linear fractional transformations)-based bond graph for physical parameter uncertainty modeling and fault simulation, and employs AAKR (auto-associative kernel regression)-based empirical estimation followed by SPRT (sequential probability ratio test)-based threshold monitoring to improve the accuracy of fault detection. Moreover, pre- and post-denoising processes are applied to eliminate the cumulative influence of parameter uncertainty and measurement uncertainty. The scheme is demonstrated on the main unit of a locomotive electro-pneumatic brake in a simulated experiment. The results show robust fault detection and diagnostic performance.
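    The SPRT stage of such a scheme reduces to a running log-likelihood ratio with two decision thresholds. A minimal sketch for Gaussian residuals with assumed healthy and faulty means (the means, variance, and error rates below are illustrative, not the paper's values):

```python
import math

def sprt(residuals, m0=0.0, m1=1.0, var=0.25, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for Gaussian residuals with
    known variance: decide between healthy mean m0 and faulty mean m1.
    Returns (decision, index at which the decision was reached)."""
    A = math.log((1 - beta) / alpha)    # upper (fault) threshold
    B = math.log(beta / (1 - alpha))    # lower (healthy) threshold
    llr = 0.0
    for k, r in enumerate(residuals):
        # Log-likelihood ratio increment for one Gaussian sample.
        llr += (m1 - m0) * (r - (m0 + m1) / 2.0) / var
        if llr >= A:
            return "fault", k
        if llr <= B:
            return "healthy", k
    return "undecided", len(residuals) - 1
```

    In the paper's pipeline the residuals would come from the AAKR estimator; here any standardized residual stream works.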

  5. Exploring tectonomagmatic controls on mid-ocean ridge faulting and morphology with 3-D numerical models

    Science.gov (United States)

    Howell, S. M.; Ito, G.; Behn, M. D.; Olive, J. A. L.; Kaus, B.; Popov, A.; Mittelstaedt, E. L.; Morrow, T. A.

    2016-12-01

    Previous two-dimensional (2-D) modeling studies of abyssal-hill scale fault generation and evolution at mid-ocean ridges have predicted that M, the ratio of magmatic to total extension, strongly influences the total slip, spacing, and rotation of large faults, as well as the morphology of the ridge axis. Scaling relations derived from these 2-D models broadly explain the globally observed decrease in abyssal hill spacing with increasing ridge spreading rate, as well as the formation of large-offset faults close to the ends of slow-spreading ridge segments. However, these scaling relations do not explain some higher resolution observations of segment-scale variability in fault spacing along the Chile Ridge and the Mid-Atlantic Ridge, where fault spacing shows no obvious correlation with M. This discrepancy between observations and 2-D model predictions illuminates the need for three-dimensional (3-D) numerical models that incorporate the effects of along-axis variations in lithospheric structure and magmatic accretion. To this end, we use the geodynamic modeling software LaMEM to simulate 3-D tectono-magmatic interactions in a visco-elasto-plastic lithosphere under extension. We model a single ridge segment subjected to an along-axis gradient in the rate of magma injection, which is simulated by imposing a mass source in a plane of model finite volumes beneath the ridge axis. Outputs of interest include characteristic fault offset, spacing, and along-axis gradients in seafloor morphology. We also examine the effects of along-axis variations in lithospheric thickness and off-axis thickening rate. The main objectives of this study are to quantify the relative importance of the amount of magmatic extension and the local lithospheric structure at a given along-axis location, versus the importance of along-axis communication of lithospheric stresses on the 3-D fault evolution and morphology of intermediate-spreading-rate ridges.

  6. Degradation Assessment and Fault Diagnosis for Roller Bearing Based on AR Model and Fuzzy Cluster Analysis

    Directory of Open Access Journals (Sweden)

    Lingli Jiang

    2011-01-01

    Full Text Available This paper proposes a new approach combining an autoregressive (AR) model and fuzzy cluster analysis for bearing fault diagnosis and degradation assessment. The AR model is an effective approach to extract fault features, but it is generally applied to stationary signals, whereas the fault vibration signals of a roller bearing are non-stationary and non-Gaussian. To address this problem, the parameters of the AR model are estimated based on higher-order cumulants. The AR parameters are then taken as feature vectors, and fuzzy cluster analysis is applied to perform classification and pattern recognition. Experimental results show that the proposed method can identify various types and severities of bearing faults. This study is significant for non-stationary and non-Gaussian signal analysis, fault diagnosis and degradation assessment.
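    Using AR coefficients as features can be sketched with the ordinary (second-order) Yule-Walker equations; the paper's higher-order-cumulant estimator is more involved, so this is only the second-order analogue, demonstrated on synthetic AR(2) data:

```python
import random

def autocov(x, lag):
    """Biased sample autocovariance at the given lag."""
    n = len(x)
    mu = sum(x) / n
    return sum((x[i] - mu) * (x[i + lag] - mu) for i in range(n - lag)) / n

def ar2_features(x):
    """Yule-Walker estimate of AR(2) coefficients (a1, a2), usable as a
    feature vector for clustering: [r0 r1; r1 r0] [a1; a2] = [r1; r2]."""
    r0, r1, r2 = autocov(x, 0), autocov(x, 1), autocov(x, 2)
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

# Demo: synthesize an AR(2) vibration-like signal and recover its coefficients.
random.seed(2)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.6 * x[-1] - 0.2 * x[-2] + random.gauss(0.0, 1.0))
a1, a2 = ar2_features(x[100:])
```

    In the proposed method, vectors like (a1, a2) extracted from many signal segments would then be fed to the fuzzy clustering step.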

  7. Wilshire fault: Earthquakes in Hollywood?

    Science.gov (United States)

    Hummon, Cheryl; Schneider, Craig L.; Yeats, Robert S.; Dolan, James F.; Sieh, Kerry E.; Huftile, Gary J.

    1994-04-01

    The Wilshire fault is a potentially seismogenic, blind thrust fault inferred to underlie and cause the Wilshire arch, a Quaternary fold in the Hollywood area, just west of downtown Los Angeles, California. Two inverse models, based on the Wilshire arch, allow us to estimate the location and slip rate of the Wilshire fault, which may be illuminated by a zone of microearthquakes. A fault-bend fold model indicates a reverse-slip rate of 1.5-1.9 mm/yr, whereas a three-dimensional elastic-dislocation model indicates a right-reverse slip rate of 2.6-3.2 mm/yr. The Wilshire fault is a previously unrecognized seismic hazard directly beneath Hollywood and Beverly Hills, distinct from the faults under the nearby Santa Monica Mountains.

  8. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2017-01-01

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining the flexibility of the single diode model (SDM) with the efficiency of the cumulative sum (CUSUM) chart to detect incipient faults. The unknown electrical parameters of the SDM are first identified using an efficient heuristic, the Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated from both the measured and the simulated Pmpp values. The residuals are used as the input to the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
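    The CUSUM stage applied to the Pmpp residuals reduces to a one-sided recursion; the slack and threshold values below are conventional defaults, not those of the paper:

```python
def cusum(residuals, k=0.5, h=5.0):
    """One-sided CUSUM on (already standardized) power residuals.
    k is the slack (allowance) and h the decision threshold, both in
    standard-deviation units; returns the alarm index or None."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)   # accumulate only upward excursions
        if s > h:
            return i              # incipient fault flagged here
    return None

# Illustrative residual stream: healthy, then a 2-sigma shift at sample 20.
alarm = cusum([0.0] * 20 + [2.0] * 10)
```

    Because the statistic accumulates small consistent shifts, the chart flags incipient degradation that a fixed per-sample threshold would miss.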

  10. Ball bearing defect models: A study of simulated and experimental fault signatures

    Science.gov (United States)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2017-07-01

    A numerical-model-based virtual prototype of a system can serve as a tool to generate large amounts of data, reducing dependence on expensive and often difficult-to-conduct experiments. However, the model must be accurate enough to substitute for the experiments. The abstraction level and details considered during model development depend on the purpose for which simulated data are to be generated. This article concerns the development of simulation models for deep groove ball bearings, which are used in a variety of rotating machinery. The purpose of the models is to generate vibration signatures which contain the features of bearing defects. Three models with increasing levels of complexity are considered: a bearing-kinematics-based planar motion block diagram model developed in MATLAB Simulink, which does not explicitly consider cage and traction dynamics; a planar motion model with cage, traction and contact dynamics developed using the multi-energy-domain bond graph formalism in SYMBOLS software; and a detailed spatial multi-body dynamics model with complex contact and traction mechanics developed using ADAMS software. Experiments are conducted using a Spectra Quest machine fault simulator with different prefabricated faulted bearings. The frequency-domain characteristics of simulated and experimental vibration signals for different bearing faults are compared, and conclusions are drawn regarding the usefulness of the developed models.
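    The kinematic characteristic fault frequencies that such models aim to reproduce follow from bearing geometry alone. A sketch using the standard formulas, with illustrative geometry values (not those of the tested bearings):

```python
import math

def defect_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Kinematic characteristic fault frequencies of a rolling-element
    bearing with a stationary outer race: cage (FTF), outer-race defect
    (BPFO), inner-race defect (BPFI), and ball spin (BSF)."""
    r = ball_d / pitch_d * math.cos(math.radians(contact_deg))
    ftf = shaft_hz / 2.0 * (1.0 - r)                      # fundamental train
    bpfo = n_balls * ftf                                  # outer-race defect
    bpfi = n_balls * shaft_hz / 2.0 * (1.0 + r)           # inner-race defect
    bsf = shaft_hz * pitch_d / (2.0 * ball_d) * (1.0 - r * r)  # ball spin
    return {"FTF": ftf, "BPFO": bpfo, "BPFI": bpfi, "BSF": bsf}

# Illustrative geometry: 30 Hz shaft, 9 balls, 7.94 mm balls, 38.5 mm pitch.
freqs = defect_frequencies(30.0, 9, 7.94, 38.5)
```

    These are the lines that should dominate the envelope spectrum of the simulated and measured vibration signals for each defect type; note the kinematic identity BPFO + BPFI = n_balls x shaft frequency.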

  11. Width and dip of the southern San Andreas Fault at Salt Creek from modeling of geophysical data

    Science.gov (United States)

    Langenheim, Victoria; Athens, Noah D.; Scheirer, Daniel S.; Fuis, Gary S.; Rymer, Michael J.; Goldman, Mark R.; Reynolds, Robert E.

    2014-01-01

    We investigate the geometry and width of the southernmost stretch of the San Andreas Fault zone using new gravity and magnetic data along line 7 of the Salton Seismic Imaging Project. In the Salt Creek area of Durmid Hill, the San Andreas Fault coincides with a complex magnetic signature, with high-amplitude, short-wavelength magnetic anomalies superposed on a broader magnetic anomaly that is at least 5 km wide centered 2–3 km northeast of the fault. Marine magnetic data show that high-frequency magnetic anomalies extend more than 1 km west of the mapped trace of the San Andreas Fault. Modeling of magnetic data is consistent with a moderate to steep (> 50 degrees) northeast dip of the San Andreas Fault, but also suggests that the sedimentary sequence is folded west of the fault, causing the short wavelength of the anomalies west of the fault. Gravity anomalies are consistent with the previously modeled seismic velocity structure across the San Andreas Fault. Modeling of gravity data indicates a steep dip for the San Andreas Fault, but does not resolve unequivocally the direction of dip. Gravity data define a deeper basin, bounded by the Powerline and Hot Springs Faults, than imaged by the seismic experiment. This basin extends southeast of Line 7 for nearly 20 km, with linear margins parallel to the San Andreas Fault. These data suggest that the San Andreas Fault zone is wider than indicated by its mapped surface trace.

  12. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    Science.gov (United States)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  13. Fault detection in IRIS reactor secondary loop using inferential models

    International Nuclear Information System (INIS)

    Perillo, Sergio R.P.; Upadhyaya, Belle R.; Hines, J. Wesley

    2013-01-01

    The development of fault detection algorithms is well suited to the remote deployment of small and medium reactors, such as the IRIS, and to the development of new small modular reactors (SMRs). However, an extensive number of tests still has to be performed for new engineering aspects and components that are not yet proven technology in current PWRs; these present technological challenges for deployment, since many of the design's features cannot be proven until a prototype plant is built. In this work, an IRIS plant simulation platform was developed using a Simulink® model. The dynamic simulation was used to obtain inferential models, which were then used to detect faults artificially added to the secondary system simulations. The implementation of the data-driven models and the results are discussed. (author)

  14. A Fault Diagnosis Model of Surface to Air Missile Equipment Based on Wavelet Transformation and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Zhheng Ni

    2016-01-01

    At present, the fault signals of surface-to-air missile equipment are hard to collect and the accuracy of fault diagnosis is very low. To solve these problems, and building on the strength of wavelet transformation in processing non-stationary signals and the advantage of the support vector machine (SVM) in pattern classification, this paper proposes a fault diagnosis model based on wavelet transformation and SVM, and takes the diagnosis of a typical analog circuit in a power distribution system as an example to verify it. The simulation results show that the model is able to achieve fault diagnosis from a small number of training samples, which improves the accuracy of fault diagnosis.
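The pipeline described here, wavelet-based feature extraction followed by a trained classifier, can be sketched in miniature. The sketch below uses a hand-rolled Haar transform for the wavelet stage and a nearest-centroid rule as a stand-in for the SVM (a real implementation would use an SVM library); the signals and fault signature are synthetic and purely illustrative:

```python
import math, random

def haar_dwt(signal):
    """One level of the Haar wavelet transform: approximation and detail coefficients."""
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return a, d

def features(signal, levels=3):
    """Feature vector: energy of the detail coefficients at each decomposition level."""
    feats, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(sum(c * c for c in detail))
    return feats

def classify(x, centroids):
    """Nearest-centroid rule standing in for the SVM decision stage."""
    return min(centroids, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

def make_signal(faulty, n=256):
    sig = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
    if faulty:  # the synthetic fault injects periodic high-frequency spikes
        sig = [s + (0.8 if i % 16 == 0 else 0.0) for i, s in enumerate(sig)]
    return sig

random.seed(0)
train = {"normal": features(make_signal(False)), "fault": features(make_signal(True))}
noisy_fault = [s + random.gauss(0, 0.05) for s in make_signal(True)]
print(classify(features(noisy_fault), train))
```

The spikes concentrate energy in the detail coefficients, so even a crude classifier separates the two classes from a single training example per class, echoing the paper's small-sample claim.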

  15. San Onofre/Zion auxiliary feedwater system seismic fault tree modeling

    International Nuclear Information System (INIS)

    Najafi, B.; Eide, S.

    1982-02-01

    As part of the study for the seismic evaluation of the San Onofre Unit 1 Auxiliary Feedwater System (AFWS), a fault tree model was developed capable of handling the effect of structural failure of the plant (in the event of an earthquake) on the availability of the AFWS. A compatible fault tree model was developed for the Zion Unit 1 AFWS in order to compare the results of the two systems. It was concluded that if a single failure of the San Onofre Unit 1 AFWS is to be prevented, some of the existing locally operated, locked-open manual valves have to be used for isolation of a rupture in specific parts of the AFWS piping.

  16. Novel neural networks-based fault tolerant control scheme with fault alarm.

    Science.gov (United States)

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control (FTC) for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active FTC scheme with fault alarm is proposed. Compared with results in the literature, the scheme can minimize the time delay between fault occurrence and accommodation (the time delay due to fault diagnosis) and reduce the adverse effect on system performance. In addition, the scheme has the advantages of a passive FTC scheme as well as the properties of the traditional active FTC scheme. Furthermore, it requires no additional fault detection and isolation model, which is necessary in the traditional active FTC scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.
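The radial basis function activations mentioned above are standard building blocks; a minimal sketch of a Gaussian RBF network forward pass (generic, not the paper's specific controller construction) looks like this:

```python
import math

def rbf(x, center, width=1.0):
    """Gaussian radial basis function: bounded in (0, 1], maximal at its center."""
    dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist2 / (2 * width ** 2))

def rbf_net(x, centers, weights, width=1.0):
    """Single-output RBF network: a weighted sum of basis activations."""
    return sum(w * rbf(x, c, width) for w, c in zip(weights, centers))

# Toy network: two hidden units with hand-picked weights (illustrative only)
centers = [[0.0, 0.0], [1.0, 1.0]]
weights = [1.0, -1.0]
print(rbf_net([0.0, 0.0], centers, weights))  # 1*phi(0) - 1*phi(sqrt(2)) = 1 - e**-1 ~ 0.632
```

The boundedness and locality of each basis function are what make such networks convenient for the online function approximation used in adaptive FTC designs.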

  17. Robust MPC for Actuator-Fault Tolerance Using Set-Based Passive Fault Detection and Active Fault Isolation

    Directory of Open Access Journals (Sweden)

    Xu Feng

    2017-03-01

    In this paper, a fault-tolerant control (FTC) scheme is proposed for actuator faults, built upon tube-based model predictive control (MPC) as well as set-based fault detection and isolation (FDI). In the class of MPC techniques, tube-based MPC can effectively deal with system constraints and uncertainties at relatively low computational complexity compared with other robust MPC techniques such as min-max MPC. Set-based FDI, which generally considers the worst case of uncertainties, can robustly detect and isolate actuator faults. In the proposed FTC scheme, fault detection (FD) is passive, using invariant sets, while fault isolation (FI) is active, by means of MPC and tubes. The active FI method proposed in this paper is implemented by making use of the constraint-handling ability of MPC to manipulate the bounds of the inputs.

  18. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Science.gov (United States)

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  19. Dynamic rupture models of earthquakes on the Bartlett Springs Fault, Northern California

    Science.gov (United States)

    Lozos, Julian C.; Harris, Ruth A.; Murray, Jessica R.; Lienkaemper, James J.

    2015-01-01

    The Bartlett Springs Fault (BSF), the easternmost branch of the northern San Andreas Fault system, creeps along much of its length. Geodetic data for the BSF are sparse, and surface creep rates are generally poorly constrained. The two existing geodetic slip rate inversions resolve at least one locked patch within the creeping zones. We use the 3-D finite element code FaultMod to conduct dynamic rupture models based on both geodetic inversions, in order to determine the ability of rupture to propagate into the creeping regions, as well as to assess possible magnitudes for BSF ruptures. For both sets of models, we find that the distribution of aseismic creep limits the extent of coseismic rupture, due to the contrast in frictional properties between the locked and creeping regions.

  20. A Study on Landslide Risk Management by Applying Fault Tree Logics

    Directory of Open Access Journals (Sweden)

    Kazmi Danish

    2017-01-01

    Slope stability is a focal area of interest for geotechnical designers and a natural candidate for probabilistic approaches, since the analysis leads to a "probability of failure". Risk-based assessment of existing slopes is particularly meaningful where landslides are concerned, and probabilistic slope stability analysis (PSSA) is well suited to covering landslide events. The intent here is to offer a probabilistic framework for quantified risk analysis that includes human uncertainties. To this end, Fault Tree Analysis is utilized, and risk levels are predicted from the consequences of failure of reference landslides. It is concluded that fault tree logic is well suited to capturing additional categories of uncertainty, such as human, organizational, and knowledge-related uncertainty. In practice, the approach brings together engineering and management practices and personnel to improve reliability in slope engineering.
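The quantitative core of a fault tree analysis combines basic-event probabilities through AND/OR gates up to the top event. A minimal sketch, in which the landslide branch structure and all probabilities are hypothetical illustrations rather than values from the study:

```python
def p_or(probs):
    """OR gate: probability that at least one independent input event occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(probs):
    """AND gate: probability that all independent input events occur."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical top event: slope failure if (heavy rainfall AND blocked drainage)
# OR (uncontrolled excavation) OR (missed warning AND no stabilization).
p_top = p_or([
    p_and([0.3, 0.2]),   # physical branch: rainfall & blocked drainage
    0.05,                # organizational branch: uncontrolled excavation
    p_and([0.1, 0.5]),   # human/knowledge branch: missed warning & no action
])
print(f"top-event probability: {p_top:.4f}")
```

Because the human and organizational branches enter the same gate structure as the physical ones, their uncertainties propagate to the top-event probability in exactly the way the abstract advocates.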

  1. Disease management programmes in Germany: a fundamental fault.

    Science.gov (United States)

    Felder, Stefan

    2006-12-01

    In 2001 Germany introduced disease management programmes (DMPs) in order to give sick funds an incentive to improve the treatment of the chronically ill. By 1 March 2005, a total of 3275 programmes had been approved, 2760 for diabetes, 390 for breast cancer and 125 for coronary heart disease, covering roughly 1 million patients. German DMPs show a major fault regarding financial incentives. Sick funds increase their transfers from the risk adjustment scheme when their clients enroll in DMPs. Since this money is a lump sum, sick funds do not necessarily foster treatment of the chronically ill. Similarly, reimbursement of physicians is also not well targeted to the needs of DMPs. Preliminary evidence points to poor performance of German DMPs.

  2. Deformation associated with continental normal faults

    Science.gov (United States)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data, and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation similar to those observed by interferometric synthetic aperture radar (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ˜20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates the advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  3. Modelling the Replication Management in Information Systems

    Directory of Open Access Journals (Sweden)

    Cezar TOADER

    2017-01-01

    In the modern economy, the benefits of Web services are significant because they facilitate the automation of activities in Internet-distributed businesses, as well as cooperation between organizations through interconnection processes running in their computer systems. This paper presents the development stages of a model for a reliable information system. It describes the communication between the processes within the distributed system, based on message exchange, and also presents the problem of distributed agreement among processes. A list of objectives for fault-tolerant systems is defined and a framework model for distributed systems is proposed. This framework distinguishes between management operations and execution operations. The proposed model promotes the use of a central process especially designed for the coordination and control of other application processes. The execution phases and the protocols for the management and execution components are presented. This model of a reliable system could be a foundation for an entire class of distributed system models based on the management of the replication process.

  4. One-dimensional modeling of thermal energy produced in a seismic fault

    Science.gov (United States)

    Konga, Guy Pascal; Koumetio, Fidèle; Yemele, David; Olivier Djiogang, Francis

    2017-12-01

    Generally, one observes a temperature anomaly before a big earthquake. In this paper, we establish the expression for the thermal energy produced by friction forces between the walls of a seismic fault, considering the dynamics of a one-dimensional spring-block model. It is noted that, before the rupture of a seismic fault, displacements are caused by microseisms. The curves of variation of this thermal energy with time show that, for oscillatory and aperiodic displacement, the thermal energy accumulates in the same way. The study reveals that the thermal energy, as well as the temperature, increases abruptly after a certain amount of time. We suggest that the corresponding time marks the start of the observed temperature anomaly, which can be considered a precursory effect of a big seism. We suggest that the thermal energy can heat gases and dilate rocks until they crack; the warm gases can then pass through the cracks towards the surface. The cracks created by thermal energy can also contribute to the rupture of the seismic fault. We also suggest that the theoretical model of thermal energy produced in a seismic fault, combined with a large quantity of experimental data, may help in the prediction of earthquakes.
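The abrupt accumulation of frictional heat described above can be illustrated with a toy stick-slip slider in the spirit of a one-dimensional spring-block model: heat is generated only during slip events, so the cumulative thermal energy rises in steps. All parameters below are illustrative assumptions, not the authors' formulation:

```python
# Toy 1-D stick-slip slider with frictional heating (illustrative parameters).
K = 50.0                 # spring stiffness, N/m
V = 0.01                 # slow driver (loading) velocity, m/s
MU_S, MU_D = 0.6, 0.4    # static / dynamic friction coefficients
N_FORCE = 100.0          # normal force on the block, N
DT = 0.01                # sampling time step, s

def run(t_end=500.0):
    x, heat, t = 0.0, 0.0, 0.0
    history = []
    while t < t_end:
        pull = K * (V * t - x)            # spring force from slow loading
        if pull > MU_S * N_FORCE:         # static friction exceeded: a slip event
            slip = (pull - MU_D * N_FORCE) / K  # slide until force balances dynamic friction
            heat += MU_D * N_FORCE * slip       # frictional work converted to heat
            x += slip
        history.append((t, heat))
        t += DT
    return history

hist = run()
print(f"heat after {hist[-1][0]:.0f} s: {hist[-1][1]:.1f} J (accumulated in abrupt bursts)")
```

Plotting `heat` against `t` would show the staircase-like rise the abstract describes: long quiet intervals of loading punctuated by sudden jumps in thermal energy at each slip event.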

  5. Diagnosis and Fault-tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Kinnaert, Michel; Lunze, Jan

    The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process … the applicability of the presented methods. The theoretical results are illustrated by two running examples which are used throughout the book. The book addresses engineering students, engineers in industry and researchers who wish to get a survey over the variety of approaches to process diagnosis and fault-tolerant control.

  6. Fault tolerant operation of switched reluctance machine

    Science.gov (United States)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. An adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system for replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low-cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, the SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of the SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, the maximum torque per ampere excitation are conceptualized and validated through theoretical analysis and

  7. Fault-tolerant reference generation for model predictive control with active diagnosis of elevator jamming faults

    NARCIS (Netherlands)

    Ferranti, L.; Wan, Y.; Keviczky, T.

    2018-01-01

    This paper focuses on the longitudinal control of an Airbus passenger aircraft in the presence of elevator jamming faults. In particular, in this paper, we address permanent and temporary actuator jamming faults using a novel reconfigurable fault-tolerant predictive control design. Due to their

  8. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

    Faults in the Earth's crust occur over a large range of scales, from the microscale through the mesoscopic to large basin-scale faults. Frequently, deformation associated with faulting is not limited to the fault plane alone, but rather combines with continuous near-field deformation in the wall rock, a phenomenon generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and the determination of fault kinematics, as well as for prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. For an outcrop-scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip domain technique was employed. The results indicate that for this normal fault a listric shape can be excluded, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed.
    A second part of this thesis investigates a large scale normal fault

  9. Fault detection in processes represented by PLS models using an EWMA control scheme

    KAUST Repository

    Harrou, Fouzi

    2016-10-20

    Fault detection is important for effective and safe process operation. Partial least squares (PLS) has been used successfully in fault detection for multivariate processes with highly correlated variables. However, the conventional PLS-based detection metrics, such as Hotelling's T2 and the Q statistics, are not well suited to detect small faults because they only use information about the process in the most recent observation. The exponentially weighted moving average (EWMA), however, has been shown to be more sensitive to small shifts in the mean of process variables. In this paper, a PLS-based EWMA fault detection method is proposed for monitoring processes represented by PLS models. The performance of the proposed method is compared with that of the traditional PLS-based fault detection method through a simulated example involving various fault scenarios that could be encountered in real processes. The simulation results clearly show the effectiveness of the proposed method over the conventional PLS method.
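The EWMA statistic referred to above filters each new residual into an exponentially weighted average and raises an alarm when the average leaves its control limits. A minimal univariate sketch on a synthetic residual stream, standing in for the PLS model residuals, with an assumed smoothing constant and limit width:

```python
import random

def ewma_monitor(residuals, lam=0.2, L=3.0, sigma=1.0):
    """EWMA control chart on a residual stream: returns the index of the first
    out-of-control sample, or None. Limits use the asymptotic EWMA variance."""
    z = 0.0
    limit = L * sigma * (lam / (2 - lam)) ** 0.5
    for i, r in enumerate(residuals):
        z = lam * r + (1 - lam) * z       # exponentially weighted average of residuals
        if abs(z) > limit:
            return i
    return None

random.seed(1)
# In-control residuals (zero mean), then a small persistent fault of +0.5 sigma
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(0.5, 1.0) for _ in range(200)]
alarm = ewma_monitor(data)
print("first alarm at sample:", alarm)
```

Because the EWMA accumulates evidence across observations, a sustained 0.5-sigma shift that a single-sample test would rarely flag is detected; the exact alarm index depends on the noise realization.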

  10. Event and fault tree model for reliability analysis of the greek research reactor

    International Nuclear Information System (INIS)

    Albuquerque, Tob R.; Guimaraes, Antonio C.F.; Moreira, Maria de Lourdes

    2013-01-01

    Fault trees and event trees are widely used in industry to model and to evaluate the reliability of safety systems. Detailed analyses in nuclear installations require the combination of these two techniques. This work uses the methods of fault tree (FT) and event tree (ET) analysis to perform a Probabilistic Safety Assessment (PSA) of research reactors. A PSA, according to the IAEA (International Atomic Energy Agency), is divided into Level 1, Level 2 and Level 3. At Level 1, safety systems act to prevent the accident; at Level 2, the accident has occurred and the aim is to minimize its consequences, a stage known as accident management; and at Level 3, the consequences are determined. This paper focuses on Level 1 studies and seeks, through knowledge acquisition, to consolidate methodologies for future reliability studies. The Greek Research Reactor, GRR-1, was used as a case example. A LOCA (Loss of Coolant Accident) was chosen as the initiating event, and from there the possible accident sequences that could lead to damage to the core were developed using an event tree. Furthermore, for each affected system in the accident sequences, a fault tree was constructed and the probability of its top event evaluated. The studies were conducted using the commercial computational tool SAPHIRE. The results thus obtained for the performance or failure to act of the analyzed systems were considered satisfactory. This work is directed to the Greek Research Reactor due to data availability. (author)
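The event tree quantification described here multiplies the initiating-event frequency by the branch probabilities along each sequence, with each branch probability typically supplied by a system fault tree. A minimal sketch with hypothetical numbers, not GRR-1 data:

```python
# Minimal event-tree quantification sketch (all values hypothetical).
LOCA_FREQ = 1e-3    # initiating-event frequency, per year (assumed)
P_FAIL_ECC = 2e-2   # emergency core cooling fails (from its fault tree)
P_FAIL_HR = 5e-2    # residual heat removal fails (from its fault tree)

# Each sequence frequency = initiator frequency x product of branch probabilities
sequences = {
    "ok":        LOCA_FREQ * (1 - P_FAIL_ECC) * (1 - P_FAIL_HR),  # both systems succeed
    "hr_fails":  LOCA_FREQ * (1 - P_FAIL_ECC) * P_FAIL_HR,        # cooling ok, heat removal fails
    "ecc_fails": LOCA_FREQ * P_FAIL_ECC,                          # core damage regardless of HR
}
core_damage_freq = sequences["hr_fails"] + sequences["ecc_fails"]
print(f"core damage frequency: {core_damage_freq:.2e} per year")
```

Tools such as SAPHIRE automate exactly this bookkeeping at scale, solving each system fault tree for its top-event probability and then summing the frequencies of the damage-state sequences.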

  11. Event and fault tree model for reliability analysis of the greek research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Albuquerque, Tob R.; Guimaraes, Antonio C.F.; Moreira, Maria de Lourdes, E-mail: atalbuquerque@ien.gov.br, E-mail: btony@ien.gov.br, E-mail: malu@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    Fault trees and event trees are widely used in industry to model and to evaluate the reliability of safety systems. Detailed analyses in nuclear installations require the combination of these two techniques. This work uses the methods of fault tree (FT) and event tree (ET) analysis to perform a Probabilistic Safety Assessment (PSA) of research reactors. A PSA, according to the IAEA (International Atomic Energy Agency), is divided into Level 1, Level 2 and Level 3. At Level 1, safety systems act to prevent the accident; at Level 2, the accident has occurred and the aim is to minimize its consequences, a stage known as accident management; and at Level 3, the consequences are determined. This paper focuses on Level 1 studies and seeks, through knowledge acquisition, to consolidate methodologies for future reliability studies. The Greek Research Reactor, GRR-1, was used as a case example. A LOCA (Loss of Coolant Accident) was chosen as the initiating event, and from there the possible accident sequences that could lead to damage to the core were developed using an event tree. Furthermore, for each affected system in the accident sequences, a fault tree was constructed and the probability of its top event evaluated. The studies were conducted using the commercial computational tool SAPHIRE. The results thus obtained for the performance or failure to act of the analyzed systems were considered satisfactory. This work is directed to the Greek Research Reactor due to data availability. (author)

  12. Advanced cloud fault tolerance system

    Science.gov (United States)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  13. Advanced Model of Squirrel Cage Induction Machine for Broken Rotor Bars Fault Using Multi Indicators

    Directory of Open Access Journals (Sweden)

    Ilias Ouachtouk

    2016-01-01

    Squirrel cage induction machines are the most commonly used electrical drives, but like any other machine they are vulnerable to faults. Among the widespread failures of the induction machine are rotor faults. This paper focuses on the detection of the broken rotor bar fault using multiple indicators. Diagnostics of asynchronous machine rotor faults can be accomplished by analysing anomalies in local machine variables such as torque, magnetic flux, stator current and the neutral voltage signature. The aim of this research is to summarize the existing models and to develop new models of squirrel cage induction motors that take the neutral voltage into consideration, and to study the effect of broken rotor bars on the different electrical quantities such as the Park currents, torque, stator currents and neutral voltage. The performance of the model was assessed by comparing simulation and experimental results. The obtained results show the effectiveness of the model and allow detection and diagnosis of these defects.
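A standard indicator for broken rotor bars in stator current signature analysis is the pair of sidebands at (1 ± 2ks)·f_s around the supply frequency, where s is the per-unit slip. A small sketch of this textbook relation, with illustrative machine data not taken from the paper:

```python
def slip(sync_rpm, rotor_rpm):
    """Per-unit slip of an induction machine."""
    return (sync_rpm - rotor_rpm) / sync_rpm

def broken_bar_sidebands(supply_hz, s, k_max=2):
    """Characteristic sideband frequencies (1 +/- 2ks)*f_s that broken rotor
    bars introduce around the supply frequency in the stator current spectrum."""
    return sorted((1 + sign * 2 * k * s) * supply_hz
                  for k in range(1, k_max + 1) for sign in (1, -1))

# Example: 4-pole machine on a 50 Hz supply (sync speed 1500 rpm) running at 1440 rpm
s = slip(1500, 1440)                   # per-unit slip of 0.04
print(broken_bar_sidebands(50.0, s))   # sidebands near 42, 46, 54 and 58 Hz
```

In a multi-indicator scheme like the one described, the growth of these spectral lines in the stator current is cross-checked against the corresponding anomalies in torque, flux and neutral voltage.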

  14. Crustal Density Variation Along the San Andreas Fault Controls Its Secondary Faults Distribution and Dip Direction

    Science.gov (United States)

    Yang, H.; Moresi, L. N.

    2017-12-01

    The San Andreas fault forms a dominant component of the transform boundary between the Pacific and the North American plate. The density and strength of the complex accretionary margin is very heterogeneous. Based on the density structure of the lithosphere in the SW United States, we utilize the 3D finite element thermomechanical, viscoplastic model (Underworld2) to simulate deformation in the San Andreas Fault system. The purpose of the model is to examine the role of a big bend in the existing geometry; in particular, the big bend of the fault is an initial condition in our model. We first test the strength of the fault by comparing the surface principal stresses from our numerical model with the in situ tectonic stress. The best-fit model is one with an extremely weak fault and a Great Valley block whose lower crust is denser (by roughly 200 kg/m3) than that of the surrounding blocks. In contrast, the Mojave block is found by other geophysical surveys to have lost its mafic lower crust. Our model indicates strong strain localization at the boundary between the two blocks, which is an analogue for the Garlock fault. The high-density lower crustal material of the Great Valley tends to under-thrust beneath the Transverse Range near the big bend. This motion is likely to rotate the fault plane from the initial vertical direction to dip to the southwest. For the straight section, north of the big bend, the fault is nearly vertical. The geometry of the fault plane is consistent with field observations.

  15. A Self-Consistent Fault Slip Model for the 2011 Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Yamazaki, Yoshiki; Cheung, Kwok Fai; Lay, Thorne

    2018-02-01

    The unprecedented geophysical and hydrographic data sets from the 2011 Tohoku earthquake and tsunami have facilitated numerous modeling and inversion analyses for a wide range of dislocation models. Significant uncertainties remain in the slip distribution as well as the possible contribution of tsunami excitation from submarine slumping or anelastic wedge deformation. We seek a self-consistent model for the primary teleseismic and tsunami observations through an iterative approach that begins with downsampling of a finite fault model inverted from global seismic records. Direct adjustment of the fault displacement, guided by high-resolution forward modeling of near-field tsunami waveform and runup measurements, improves the features that are not satisfactorily accounted for by the seismic wave inversion. The results show acute sensitivity of the runup to impulsive tsunami waves generated by near-trench slip. The adjusted finite fault model is able to reproduce the DART records across the Pacific Ocean in forward modeling of the far-field tsunami as well as the global seismic records through a finer-scale subfault moment- and rake-constrained inversion, thereby validating its ability to account for the tsunami and teleseismic observations without requiring an exotic source. The upsampled final model gives reasonably good fits to onshore and offshore geodetic observations, albeit with early afterslip effects and wedge faulting that cannot be reliably accounted for. The large predicted slip of over 20 m at shallow depth extending northward to 39.7°N indicates extensive rerupture and reduced seismic hazard of the 1896 tsunami earthquake zone, as inferred to varying extents by several recent joint and tsunami-only inversions.

  16. Economic modeling of fault tolerant flight control systems in commercial applications

    Science.gov (United States)

    Finelli, G. B.

    1982-01-01

    This paper describes the current development of a comprehensive model that will supply the assessment and analysis capability to investigate the economic viability of Fault Tolerant Flight Control Systems (FTFCS) for commercial aircraft of the 1990s and beyond. An introduction to the unique attributes of fault tolerance and how they will influence aircraft operations and the consequent airline costs and benefits is presented. Specific modeling issues and elements necessary for accurate assessment of all costs affected by ownership and operation of FTFCS are delineated. Trade-off factors are presented, aimed at exposing economically optimal realizations of system implementations, resource allocation, and operating policies. A trade-off example is furnished to graphically display some of the analysis capabilities of the comprehensive simulation model now being developed.

  17. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    Science.gov (United States)

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator, containing a PMDW generator, a correlation filtering analysis module, and a signal resampler, is designed to eliminate the Doppler effect embedded in the acoustic signal recorded from a moving bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified from the signal itself; the signal resampler then eliminates the Doppler effect using the identified parameters. After the embedded Doppler effect is eliminated, the transient model analysis method, which is able to detect early bearing faults, is employed to detect localized bearing defects. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnosing locomotive roller bearing defects. PMID:24803197
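
    The Doppler-correction step described above can be sketched as a resampling problem: once the kinematic parameters (source speed, closest-approach distance, initial position) are identified, each reception time can be mapped back to an emission time and the signal interpolated onto a uniform emission-time grid. The sketch below assumes a simple constant-speed, straight-track kinematic model; all names and parameter values are illustrative, not the paper's.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def emission_times(t_rec, v, d, x0):
    """Map reception times to emission times for a source moving at constant
    speed v along a straight track, with closest distance d to the microphone
    and position x0 at t=0 (fixed-point iteration; converges since v << C)."""
    t_emit = t_rec.copy()
    for _ in range(50):
        x = x0 + v * t_emit               # source position at emission
        r = np.sqrt(d ** 2 + x ** 2)      # source-microphone range
        t_emit = t_rec - r / C            # subtract propagation delay
    return t_emit

def remove_doppler(signal, fs, v, d, x0):
    """Resample a wayside recording onto a uniform emission-time grid,
    eliminating the Doppler frequency shift."""
    t_rec = np.arange(len(signal)) / fs
    t_emit = emission_times(t_rec, v, d, x0)
    t_uniform = np.linspace(t_emit[0], t_emit[-1], len(signal))
    return np.interp(t_uniform, t_emit, signal), t_uniform

# synthetic check: a pure 100 Hz tone emitted by a passing source
fs, v, d, x0 = 8000.0, 30.0, 3.0, -60.0
t_rec = np.arange(int(4 * fs)) / fs
received = np.sin(2 * np.pi * 100.0 * emission_times(t_rec, v, d, x0))
restored, t_u = remove_doppler(received, fs, v, d, x0)

# the dominant frequency of the restored signal should sit back near 100 Hz
spec = np.abs(np.fft.rfft(restored * np.hanning(len(restored))))
f_peak = np.fft.rfftfreq(len(restored), d=t_u[1] - t_u[0])[np.argmax(spec)]
```

    In the paper the kinematic parameters are identified from the signal itself via correlation filtering against the PMDW dictionary; here they are simply given.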

  18. VEHIL: a test facility for validation of fault management systems for advanced driver assistance systems

    NARCIS (Netherlands)

    Gietelink, O.J.; Ploeg, J.; Schutter, de B.; Verhaegen, M.H.

    2004-01-01

    We present a methodological approach for the validation of fault management systems for Advanced Driver Assistance Systems (ADAS). For the validation process the unique VEHIL facility, developed by TNO Automotive and currently situated in Helmond, The Netherlands, is applied. The VEHIL facility

  19. Newport-Inglewood-Carlsbad-Coronado Bank Fault System Nearshore Southern California: Testing models for Quaternary deformation

    Science.gov (United States)

    Bennett, J. T.; Sorlien, C. C.; Cormier, M.; Bauer, R. L.

    2011-12-01

    The San Andreas fault system is distributed across hundreds of kilometers in southern California. This transform system includes offshore faults along the shelf, slope, and basin, comprising part of the Inner California Continental Borderland. Previously, offshore faults have been interpreted as being discontinuous and striking parallel to the coast between Long Beach and San Diego. Our recent work, based on several thousand kilometers of deep-penetration industry multi-channel seismic reflection (MCS) data as well as high-resolution U.S. Geological Survey MCS data, indicates that many of the offshore faults are more geometrically continuous than previously reported. Stratigraphic interpretations of MCS profiles included the ca. 1.8 Ma Top Lower Pico, which was correlated from wells located offshore Long Beach (Sorlien et al., 2010). Based on this age constraint, four younger (Late) Quaternary unconformities are interpreted through the slope and basin. The right-lateral Newport-Inglewood fault continues offshore near Newport Beach. We map a single fault for 25 kilometers that continues to the southeast along the base of the slope. There, the Newport-Inglewood fault splits into the San Mateo-Carlsbad fault, which is mapped for 55 kilometers along the base of the slope to a sharp bend. This bend is the northern end of a right step-over of 10 kilometers to the Descanso fault and about 17 km to the Coronado Bank fault. We map these faults for 50 kilometers as they continue across the Mexican border. The San Mateo-Carlsbad and Newport-Inglewood faults, and the Coronado Bank and Descanso faults, are paired faults that form flower structures (positive and negative, respectively) in cross section. Preliminary kinematic models indicate ~1 km of right-lateral slip since ~1.8 Ma at the north end of the step-over. We are modeling the slip on the southern segment to test our hypothesis of a kinematically continuous right-lateral fault system. We are correlating four

  20. Predictive modelling of fault related fracturing in carbonate damage-zones: analytical and numerical models of field data (Central Apennines, Italy)

    Science.gov (United States)

    Mannino, Irene; Cianfarra, Paola; Salvini, Francesco

    2010-05-01

    Permeability in carbonates is strongly influenced by the presence of brittle deformation patterns, i.e. pressure-solution surfaces, extensional fractures, and faults. Carbonate rocks fracture both during diagenesis and during tectonic processes. The attitude, spatial distribution, and connectivity of brittle deformation features rule the secondary permeability of carbonate rocks and therefore the accumulation and pathways of deep fluids (groundwater, hydrocarbons). This is particularly true in fault zones, where the damage zone and the fault core have hydraulic properties that differ from those of the pristine rock as well as from each other. To improve knowledge of fault architecture and fault hydraulic properties, we study the brittle deformation patterns related to fault kinematics in carbonate successions, focusing on the evolution of damage-zone fracturing. Fieldwork was performed in Meso-Cenozoic carbonate units of the Latium-Abruzzi Platform, Central Apennines, Italy. These units are field analogues of reservoir rocks in the Southern Apennines. We combine the study of the rock physical characteristics of 22 faults with quantitative analyses of brittle deformation for the same faults, including bedding attitudes, fracture type, attitudes, and spatial intensity distribution, using the dimension/spacing ratio, namely the H/S ratio, where H is the dimension of the fracture and S is the spacing between two analogous fractures of the same set. Statistical analyses of structural data (stereonets, contouring, and H/S transects) were performed to infer a general algorithm that describes the expected intensity of the fracturing process. The analytical model was fit to field measurements by a Monte Carlo convergent approach. This method proved a useful tool to quantify complex relations with a high number of variables. It creates a large sequence of possible solution parameters and the results are compared with field data. For each item an error mean value is
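
    The Monte Carlo fitting described above can be illustrated with a simple random search: draw many candidate parameter sets, score each against the field transect, and keep the best. The fracture-intensity model below (exponential decay of H/S with distance from the fault core) is a hypothetical stand-in for the authors' algorithm, with invented parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity(d, a, lam, b):
    """Hypothetical damage-zone model: H/S fracture intensity decaying
    exponentially with distance d (m) from the fault core."""
    return a * np.exp(-d / lam) + b

# synthetic "field" transect with known parameters plus measurement noise
d_obs = np.linspace(0.0, 50.0, 40)
hs_obs = intensity(d_obs, a=4.0, lam=8.0, b=0.5) + rng.normal(0.0, 0.05, 40)

# Monte Carlo search: draw many candidate parameter sets and keep the one
# with the smallest mean absolute error against the transect data
best, best_err = None, np.inf
for _ in range(20000):
    a, lam, b = rng.uniform(0.1, 10.0), rng.uniform(1.0, 30.0), rng.uniform(0.0, 2.0)
    err = np.mean(np.abs(intensity(d_obs, a, lam, b) - hs_obs))
    if err < best_err:
        best, best_err = (a, lam, b), err

a_fit, lam_fit, b_fit = best  # should land near the true (4.0, 8.0, 0.5)
```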

  1. All Roads Lead to Fault Diagnosis: Model-Based Reasoning with LYDIA

    NARCIS (Netherlands)

    Feldman, A.B.; Pietersma, J.; Van Gemund, A.J.C.

    2006-01-01

    Model-Based Reasoning (MBR) over qualitative models of complex, real-world systems has proven successful for automated fault diagnosis, control, and repair. Expressing a system under diagnosis in a formal model and inferring a diagnosis given observations are both challenging problems. In this paper

  2. Modeling caprock fracture, CO2 migration and time dependent fault healing: A numerical study.

    Science.gov (United States)

    MacFarlane, J.; Mukerji, T.; Vanorio, T.

    2017-12-01

    The Campi Flegrei caldera, located near Naples, Italy, is one of the highest risk volcanoes on Earth due to its recent unrest and urban setting. A unique history of surface uplift within the caldera is characterized by long duration uplift and subsidence cycles which are periodically interrupted by rapid, short period uplift events. Several models have been proposed to explain this history; in this study we will present a hydro-mechanical model that takes into account the caprock that seismic studies show to exist at 1-2 km depth. Specifically, we develop a finite element model of the caldera and use a modified version of fault-valve theory to represent fracture within the caprock. The model accounts for fault healing using a simplified, time-dependent fault sealing model. Multiple fracture events are incorporated by using previous solutions to test prescribed conditions and determine changes in rock properties, such as porosity and permeability. Although fault-valve theory has been used to model single fractures and recharge, this model is unique in its ability to model multiple fracture events. By incorporating multiple fracture events we can assess changes in both long and short-term reservoir behavior at Campi Flegrei. By varying the model inputs, we model the poro-elastic response to CO2 injection at depth and the resulting surface deformation. The goal is to enable geophysicists to better interpret surface observations and predict outcomes from observed changes in reservoir conditions.

  3. Applying a Cerebellar Model Articulation Controller Neural Network to a Photovoltaic Power Generation System Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Kuei-Hsiang Chao

    2013-01-01

    Full Text Available This study employed a cerebellar model articulation controller (CMAC) neural network to conduct fault diagnosis on photovoltaic power generation systems. We composed a module array using 9 series and 2 parallel connections of SHARP NT-R5E3E 175 W photovoltaic modules. In addition, we used data outputted under various fault conditions as the training samples for the CMAC and used this model to conduct the module array fault diagnosis after completing the training. The results of the training process and simulations indicate that the proposed method requires fewer training iterations than other methods. In addition to significantly increasing the accuracy rate of the fault diagnosis, this model features a short training duration because the training process only tunes the weights of the excited memory addresses. Therefore, the fault diagnosis is rapid, and the detection tolerance of the diagnosis system is enhanced.
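
    The property that training only tunes the weights of the excited memory addresses is the defining feature of a CMAC. A minimal one-dimensional CMAC sketch (illustrative tiling sizes and learning rate, not the paper's configuration) shows the idea:

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC: several offset tilings quantize the input, the output
    is the sum of the weights at the excited memory addresses, and training
    updates only those weights (which is why training is fast)."""
    def __init__(self, n_tilings=8, n_tiles=32, x_min=0.0, x_max=1.0):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.x_min, self.x_max = x_min, x_max
        self.w = np.zeros((n_tilings, n_tiles + 1))

    def _addresses(self, x):
        u = (x - self.x_min) / (self.x_max - self.x_min) * self.n_tiles
        offsets = np.arange(self.n_tilings) / self.n_tilings
        return np.clip((u + offsets).astype(int), 0, self.n_tiles)

    def predict(self, x):
        idx = self._addresses(x)
        return self.w[np.arange(self.n_tilings), idx].sum()

    def train(self, x, y, lr=0.3):
        idx = self._addresses(x)
        err = y - self.predict(x)                          # delta rule on the
        self.w[np.arange(self.n_tilings), idx] += lr * err / self.n_tilings
                                                           # excited cells only

net = CMAC()
xs = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * xs)                            # toy target map
for _ in range(50):                                        # a few epochs
    for x, y in zip(xs, target):
        net.train(x, y)
mse = np.mean([(net.predict(x) - y) ** 2 for x, y in zip(xs, target)])
```

    A diagnosis application would map measured electrical quantities to fault-class scores with the same mechanism, in more input dimensions.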

  4. A universal, fault-tolerant, non-linear analytic network for modeling and fault detection

    International Nuclear Information System (INIS)

    Mott, J.E.; King, R.W.; Monson, L.R.; Olson, D.L.; Staffon, J.D.

    1992-01-01

    The similarities and differences of a universal network to normal neural networks are outlined. The description and application of a universal network is discussed by showing how a simple linear system is modeled by normal techniques and by universal network techniques. A full implementation of the universal network as universal process modeling software on a dedicated computer system at EBR-II is described and example results are presented. It is concluded that the universal network provides different feature recognition capabilities than a neural network and that the universal network can provide extremely fast, accurate, and fault-tolerant estimation, validation, and replacement of signals in a real system

  5. A universal, fault-tolerant, non-linear analytic network for modeling and fault detection

    Energy Technology Data Exchange (ETDEWEB)

    Mott, J.E. [Advanced Modeling Techniques Corp., Idaho Falls, ID (United States); King, R.W.; Monson, L.R.; Olson, D.L.; Staffon, J.D. [Argonne National Lab., Idaho Falls, ID (United States)

    1992-03-06

    The similarities and differences of a universal network to normal neural networks are outlined. The description and application of a universal network is discussed by showing how a simple linear system is modeled by normal techniques and by universal network techniques. A full implementation of the universal network as universal process modeling software on a dedicated computer system at EBR-II is described and example results are presented. It is concluded that the universal network provides different feature recognition capabilities than a neural network and that the universal network can provide extremely fast, accurate, and fault-tolerant estimation, validation, and replacement of signals in a real system.

  6. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    Science.gov (United States)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. Research results are being disseminated across NASA, other agencies, and the

  7. Implementation of a Wind Farm Turbine Control System with Short-Term Grid Faults Management

    DEFF Research Database (Denmark)

    Marra, Francesco; Rasmussen, Tonny Wederberg; Garcia-Valle, Rodrigo

    2010-01-01

    The increased penetration of wind power in the grid has led to important technical barriers that limit its development, where the stability of the system plays a key role. Grid operators in different countries are issuing new grid requirements, the so-called grid codes, that impose more restrictions on wind turbine behaviour, especially under grid faults: wind turbines are requested to stay connected even during faults. These new requirements are challenging the control of wind turbines, and new control strategies are required to meet the target. This paper deals with the implementation of a control strategy for staying connected under grid faults. The method aims to ensure that a wind farm turbine remains connected and that no electric power is delivered to the grid during the fault period. The overall system was modelled and simulated using the software Matlab/Simulink.

  8. Singular limit analysis of a model for earthquake faulting

    DEFF Research Database (Denmark)

    Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall

    2017-01-01

    In this paper we consider the one-dimensional spring-block model describing earthquake faulting. By using geometric singular perturbation theory and the blow-up method we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from...
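
    The spring-block model referred to above reduces, in its simplest caricature, to a single slider driven through a spring with static/dynamic friction, which already produces periodic stick-slip "earthquake" cycles. The sketch below uses invented parameters and a plain static/dynamic friction law rather than the paper's singularly perturbed formulation.

```python
import numpy as np

# Single spring-block slider with static/dynamic friction: the simplest
# caricature of the earthquake-faulting model (illustrative parameters).
k, m = 5.0, 1.0          # spring stiffness and block mass
v_plate = 1e-3           # slow driving (plate) velocity
F_s, F_d = 2.0, 1.0      # static and dynamic friction forces

def simulate(t_end, dt=1e-2):
    x, v = 0.0, 0.0      # block position and velocity
    load_pt = 0.0        # position of the loading point
    sticking = True
    events = []          # times at which the block unsticks ("earthquakes")
    t = 0.0
    while t < t_end:
        load_pt += v_plate * dt
        spring = k * (load_pt - x)
        if sticking and spring > F_s:     # static friction exceeded: slip starts
            sticking = False
            events.append(t)
        if not sticking:
            v += (spring - F_d) / m * dt  # dynamic friction resists sliding
            x += v * dt
            if v <= 0.0:                  # block re-arrests: stick phase resumes
                v, sticking = 0.0, True
        t += dt
    return events

events = simulate(t_end=2500.0)
intervals = np.diff(events)   # recurrence times between slip events
```

    With these parameters the slow loading and fast slip phases separate cleanly, so the events recur with a nearly constant period, the periodicity the paper analyzes rigorously.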

  9. Supervisory Fault Tolerant Control of the GTM UAV Using LPV Methods

    Directory of Open Access Journals (Sweden)

    Péni Tamás

    2015-03-01

    Full Text Available A multi-level reconfiguration framework is proposed for fault tolerant control of over-actuated aerial vehicles, where the levels indicate how much authority is given to the reconfiguration task. On the lowest, first level the fault is accommodated by modifying only the actuator/sensor configuration, so the fault remains hidden from the baseline controller. A dynamic reallocation scheme is applied on this level. The allocation mechanism exploits the actuator/sensor redundancy available on the aircraft. When the fault cannot be managed at the actuator/sensor level, the reconfiguration process has access to the baseline controller. Based on the LPV control framework, this is done by introducing fault-specific scheduling parameters. The baseline controller is designed to provide an acceptable performance level along all fault scenarios coded in these scheduling variables. The decision on which reconfiguration level has to be initiated in response to a fault is determined by a supervisor unit. The method is demonstrated on a full six-degrees-of-freedom nonlinear simulation model of the GTM UAV.

  10. An Active Fault-Tolerant Control Method of Unmanned Underwater Vehicles with Continuous and Uncertain Faults

    Directory of Open Access Journals (Sweden)

    Daqi Zhu

    2008-11-01

    Full Text Available This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and the weighting matrix computation. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to find the solution of the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given.
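
    The weighted pseudo-inverse allocation mentioned above has a standard closed form: for desired generalized forces tau = B u, the minimizer of u^T W u subject to B u = tau is u = W^(-1) B^T (B W^(-1) B^T)^(-1) tau, where the weighting matrix W penalizes faulty thrusters. A sketch with an invented thruster geometry (not the paper's vehicle):

```python
import numpy as np

def allocate(B, tau, health):
    """Weighted pseudo-inverse control allocation: minimizes u^T W u subject
    to B @ u = tau, with W = diag(1/health) penalizing faulty thrusters.

    B      : (m, n) thruster configuration matrix
    tau    : (m,)  desired generalized forces
    health : (n,)  thruster health in (0, 1]; small values shift load away
             from (partially) failed thrusters.
    """
    W_inv = np.diag(health)
    return W_inv @ B.T @ np.linalg.solve(B @ W_inv @ B.T, tau)

# 3 generalized forces (surge, sway, yaw) and 4 thrusters; geometry invented
B = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.5, -0.5, 1.0, -1.0]])
tau = np.array([2.0, 1.0, 0.3])

u_healthy = allocate(B, tau, health=np.ones(4))
u_faulty = allocate(B, tau, health=np.array([1.0, 0.2, 1.0, 1.0]))  # thruster 2 degraded
# both allocations still realize tau exactly, but the degraded thruster
# carries less of the load in the faulty case
```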

  11. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Panda, Dhabaleswar Kumar [The Ohio State University; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. 
This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of
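
    The publish-subscribe pattern at the heart of the FTB can be sketched in a few lines. The topic names, events, and component reactions below are invented for illustration; they are not the actual FTB API.

```python
from collections import defaultdict

class FaultBackplane:
    """Toy publish-subscribe backplane: components publish fault events on
    named topics; every subscriber to a topic receives them, enabling a
    coordinated response (pattern sketch only, not the actual FTB API)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        for callback in self._subs[topic]:
            callback(event)

log = []
bus = FaultBackplane()
# a job scheduler and an MPI runtime both listen for node faults
bus.subscribe("node.fault", lambda e: log.append(("scheduler: drain", e["node"])))
bus.subscribe("node.fault", lambda e: log.append(("mpi: checkpoint", e["node"])))
# a hardware monitor publishes one fault; both subscribers react to it
bus.publish("node.fault", {"node": "n042", "kind": "ecc-error"})
```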

  12. Model-based fault detection for proton exchange membrane fuel cell ...

    African Journals Online (AJOL)

    In this paper, an intelligent model-based fault detection (FD) is developed for proton exchange membrane fuel cell (PEMFC) dynamic systems using an independent radial basis function (RBF) networks. The novelty is that this RBF networks is used to model the PEMFC dynamic systems and residuals are generated based ...

  13. Geomechanical Modeling of Fault Responses and the Potential for Notable Seismic Events during Underground CO2 Injection

    Science.gov (United States)

    Rutqvist, J.; Cappa, F.; Mazzoldi, A.; Rinaldi, A.

    2012-12-01

    The importance of geomechanics associated with large-scale geologic carbon storage (GCS) operations is now widely recognized. There are concerns related to the potential for triggering notable (felt) seismic events and to how such events could impact the long-term integrity of a CO2 repository (as well as the public perception of GCS). In this context, we review a number of modeling studies and field observations related to the potential for injection-induced fault reactivations and seismic events. We present recent model simulations of CO2 injection and fault reactivation, including both aseismic and seismic fault responses. The model simulations were conducted using a slip-weakening fault model enabling sudden (seismic) fault rupture, and some of the numerical analyses were extended to fully dynamic modeling of the seismic source, wave propagation, and ground motion. The model simulations illustrate what it would take to create a magnitude 3 or 4 earthquake that would not result in any significant damage at the ground surface, but could raise concerns in the local community and could also affect the deep containment of the stored CO2. The analyses show that the local in situ stress field, fault orientation, fault strength, and injection-induced overpressure are critical factors in determining the likelihood and magnitude of such an event. We would like to clarify, though, that in our modeling we had to apply very high injection pressure to be able to intentionally induce any fault reactivation. Consequently, our model simulations represent extreme cases, which in a real GCS operation could be avoided by estimating the maximum sustainable injection pressure and carefully controlling the injection pressure. In fact, no notable seismic event has been reported from any of the current CO2 storage projects, although some unfelt microseismic activity has been detected by geophones.
On the other hand, potential future commercial GCS operations from large power plants

  14. Cellular modeling of fault-tolerant multicomputers

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, G

    1987-01-01

    The work described was concerned with a novel method for investigating fault tolerance in large regular networks of computers. The motivation was to provide a technique useful for the rapid evaluation of highly reliable systems that exploit the low cost and ease of volume production of simple microcomputer components. First, a system model and simulator based upon cellular automata are developed. This model is characterized by its simplicity and ease of modification when adapting to new types of network. Second, in order to test and verify the predictive capabilities of the cellular system, a more detailed simulation is performed based upon an existing computational model, that of the Transputer. An example application is used to exercise various systems designed using the cellular model. Using this simulator, experimental results are obtained both for existing well-understood configurations and for more novel types also developed here. In all cases it was found that the cellular model and simulator successfully predicted the ranking in reliability improvement of the systems studied.

  15. Study of fault diagnosis software design for complex system based on fault tree

    International Nuclear Information System (INIS)

    Yuan Run; Li Yazhou; Wang Jianye; Hu Liqin; Wang Jiaqun; Wu Yican

    2012-01-01

    Complex systems always have high-level reliability and safety requirements, and so does their diagnosis. As a great number of fault tree models are acquired during the design and operation phases, a fault diagnosis method that combines fault tree analysis with knowledge-based technology has been proposed. A prototype of the fault diagnosis software has been realized and applied to a mobile LIDAR system. (authors)
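
    Diagnosis over fault tree models rests on evaluating AND/OR trees of basic events. A toy sketch (hypothetical pump/power tree; brute-force minimal cut sets, adequate only for small design-phase trees):

```python
from itertools import product

# A fault tree as nested tuples: ("AND" | "OR", child, ...); leaves are basic
# events. The tree below is a small invented pump/power example.
TREE = ("OR",
        ("AND", "pump_A_fails", "pump_B_fails"),   # loss of redundant pumps
        "control_power_lost")

def leaves(node):
    if isinstance(node, str):
        return {node}
    return set().union(*(leaves(child) for child in node[1:]))

def evaluate(node, state):
    """Truth value of the top event given basic-event truth values."""
    if isinstance(node, str):
        return state[node]
    values = [evaluate(child, state) for child in node[1:]]
    return all(values) if node[0] == "AND" else any(values)

def minimal_cut_sets(tree):
    """Brute force over basic-event combinations: the minimal sets of basic
    events whose joint occurrence triggers the top event."""
    basics = sorted(leaves(tree))
    cuts = []
    for bits in product([False, True], repeat=len(basics)):
        state = dict(zip(basics, bits))
        if evaluate(tree, state):
            cuts.append({b for b, v in state.items() if v})
    return [c for c in cuts if not any(other < c for other in cuts)]

mcs = minimal_cut_sets(TREE)
```

    A knowledge-based layer, as in the paper, would then rank these cut sets against observed symptoms to propose the most plausible diagnosis.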

  16. Coordinated Fault Tolerance for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as jobs schedulers, resource managers, and monitoring tools.

  17. Stochastic Model Predictive Fault Tolerant Control Based on Conditional Value at Risk for Wind Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Yun-Tao Shi

    2018-01-01

    Full Text Available Wind energy has been drawing considerable attention in recent years. However, due to the random nature of wind and the high failure rate of wind energy conversion systems (WECSs), how to implement fault-tolerant WECS control is becoming a significant issue. This paper addresses the fault-tolerant control problem of a WECS with a probable actuator fault. A new stochastic model predictive control (SMPC) fault-tolerant controller with a Conditional Value at Risk (CVaR) objective function is proposed in this paper. First, the Markov jump linear model is used to describe the WECS dynamics, which are affected by many stochastic factors, like the wind; the Markov jump linear model can precisely capture these random WECS properties. Second, scenario-based SMPC is used as the controller to address the control problem of the WECS. With this controller, all the possible realizations of the disturbance in the prediction horizon are enumerated by scenario trees, so that an uncertain SMPC problem can be transformed into a deterministic model predictive control (MPC) problem. Finally, the CVaR objective function is adopted to improve the fault-tolerant control performance of the SMPC controller; CVaR provides a balance between performance and the random failure risks of the system. The Min-Max performance index is introduced to compare fault-tolerant control performance with the proposed controller. The comparison results show that the proposed method has better fault-tolerant control performance.
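
    CVaR at level alpha is the expected cost in the worst (1 - alpha) fraction of scenarios, so it always upper-bounds both VaR and the mean cost; this is what lets the controller trade nominal performance against rare failure risk. A sketch of the empirical computation on scenario costs (illustrative data, not the paper's WECS model):

```python
import numpy as np

def var_cvar(costs, alpha=0.9):
    """Empirical Value at Risk and Conditional Value at Risk of a set of
    equally probable scenario costs."""
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)       # alpha-quantile of the cost
    cvar = costs[costs >= var].mean()     # expected cost in the worst tail
    return var, cvar

rng = np.random.default_rng(1)
costs = rng.exponential(scale=1.0, size=10000)  # illustrative scenario costs
var90, cvar90 = var_cvar(costs, alpha=0.9)
# for Exp(1), analytically VaR_0.9 = ln 10 ~ 2.30 and CVaR_0.9 = ln 10 + 1 ~ 3.30
```

    In scenario-based SMPC this quantity is evaluated over the costs of the scenario-tree branches and minimized over the control inputs.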

  18. A numerical model for modeling microstructure and THM couplings in fault gouges

    Science.gov (United States)

    Veveakis, M.; Rattez, H.; Stefanou, I.; Sulem, J.; Poulet, T.

    2017-12-01

    When materials are subjected to large deformations, most of them experience inelastic deformation, accompanied by a localization of this deformation into a narrow zone leading to failure. Localization is seen as an instability of the homogeneous state of deformation. A first approach to studying it therefore consists in looking for the critical conditions under which the constitutive equations of the material allow a bifurcation point (Rudnicki & Rice 1975). In some cases, however, we would like to know the evolution of the material after the onset of localization. For example, a fault in the crustal part of the lithosphere is a shear band, and the study of this localized zone makes it possible to extract information about seismic slip. For that, we need to approximate the solution of a nonlinear boundary value problem numerically. This is a challenging task due to the complications that arise when dealing with softening behavior. Indeed, the classical continuum theory cannot be used, because the governing system of equations is ill-posed (Vardoulakis 1985). This ill-posedness can be traced back to the fact that the constitutive models do not contain material parameters with the dimension of a length. It leads to what is called "mesh dependency" in numerical simulations, as the deformation localizes in only one element of the mesh and the behavior of the system thus depends on the mesh size. A way to regularize the problem is to resort to continuum models with microstructure, such as Cosserat continua (Sulem et al. 2011). Cosserat theory is particularly interesting as it can explicitly take into account the size of the microstructure in a fault gouge. Basically, it introduces 3 rotational degrees of freedom on top of the 3 translations (Godio et al. 2016). The original work of Mühlhaus & Vardoulakis (1987) is extended to 3D, and thermo-hydro-mechanical couplings are added to the model to study fault systems in the crustal part of the lithosphere. The system of equations is

  19. Fault Isolation for Shipboard Decision Support

    DEFF Research Database (Denmark)

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2010-01-01

    Fault detection and fault isolation for in-service decision support systems for marine surface vehicles will be presented in this paper. The stochastic wave elevation and the associated ship responses are modeled in the frequency domain. The paper takes as an example fault isolation of a containe... ... to the quality of decisions given to navigators....

  20. Design for interaction between humans and intelligent systems during real-time fault management

    Science.gov (United States)

    Malin, Jane T.; Schreckenghost, Debra L.; Thronesbery, Carroll G.

    1992-01-01

    Initial results are reported to provide guidance and assistance for designers of intelligent systems and their human interfaces. The objective is to achieve more effective human-computer interaction (HCI) for real-time fault management support systems. Studies of the development of intelligent fault management systems within NASA have resulted in a new perspective on the user. If the user is viewed as one of the subsystems in a heterogeneous, distributed system, system design becomes the design of a flexible architecture for accomplishing system tasks with both human and computer agents. HCI requirements and design should be distinguished from user interface (displays and controls) requirements and design. Effective HCI design for multi-agent systems requires explicit identification of the activities and information that support coordination and communication between agents. The effects of HCI design on overall system design are characterized, and approaches to addressing HCI requirements in system design are identified. The results include definition of (1) guidance based on information-level requirements analysis of HCI, (2) high-level requirements for a design methodology that integrates the HCI perspective into system design, and (3) requirements for embedding HCI design tools into intelligent system development environments.

  1. Modeling and Fault Diagnosis of Interturn Short Circuit for Five-Phase Permanent Magnet Synchronous Motor

    Directory of Open Access Journals (Sweden)

    Jian-wei Yang

    2015-01-01

    Full Text Available Taking advantage of their high reliability, multiphase permanent magnet synchronous motors (PMSMs), such as five-phase and six-phase PMSMs, are widely used in fault-tolerant control applications, and fault diagnosis is one of the important fault-tolerant control problems. Most existing literature focuses on fault diagnosis for the three-phase PMSM. In this paper, in contrast to most existing fault diagnosis approaches, a fault diagnosis method for the interturn short circuit (ITSC) fault of a five-phase PMSM based on the trust region algorithm is presented. This paper makes two contributions: (1) by analyzing the physical parameters of the motor, such as resistances and inductances, a novel mathematical model for the ITSC fault of a five-phase PMSM is established; (2) by introducing an objective function related to the interturn short circuit ratio, the fault parameter identification problem is reformulated as an extremum-seeking problem. A trust region algorithm based parameter estimation method is proposed for tracking the actual interturn short circuit ratio. Simulation and experimental results have validated the effectiveness of the proposed parameter estimation method.
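
    The trust-region idea in this record can be sketched in a few lines: the short-circuit ratio is the single unknown of a least-squares objective, and a trust region bounds each Newton step. The forward model, coefficients and the ratio value below are illustrative stand-ins, not the paper's five-phase PMSM equations.

```python
import math

def measured_current(mu_true, freqs):
    # Toy forward model standing in for the paper's PMSM model: a current
    # signature whose amplitude grows with the short-circuit ratio mu.
    return [0.1 + 2.0 * mu_true * math.sin(f) ** 2 for f in freqs]

def objective(mu, data, freqs):
    model = [0.1 + 2.0 * mu * math.sin(f) ** 2 for f in freqs]
    return sum((m - d) ** 2 for m, d in zip(model, data))

def trust_region_minimize(f, x0, radius=0.5, max_radius=2.0, tol=1e-10, iters=100):
    x, h = x0, 1e-5
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)           # gradient (finite diff.)
        H = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2  # curvature (finite diff.)
        step = -g / H if H > 0 else -math.copysign(radius, g)
        step = max(-radius, min(radius, step))         # stay inside trust region
        pred = -(g * step + 0.5 * H * step ** 2)       # predicted decrease
        actual = f(x) - f(x + step)
        rho = actual / pred if pred > 0 else 0.0
        if rho > 0.75 and abs(step) >= 0.99 * radius:
            radius = min(2 * radius, max_radius)       # model trusted: expand
        elif rho < 0.25:
            radius *= 0.25                             # model poor: shrink
        if rho > 0.1:
            x += step                                  # accept the step
        if abs(g) < tol:
            break
    return x

freqs = [0.1 * k for k in range(1, 50)]
data = measured_current(0.15, freqs)                   # "true" ITSC ratio = 0.15
mu_hat = trust_region_minimize(lambda mu: objective(mu, data, freqs), x0=0.5)
```

    Because the toy objective is quadratic in the ratio, the bounded Newton step recovers it in a couple of iterations; the same loop structure applies to the multi-parameter motor model.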

  2. A Ship Propulsion System Model for Fault-tolerant Control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.

    This report presents a propulsion system model for a low speed marine vehicle, which can be used as a test benchmark for Fault-Tolerant Control purposes. The benchmark serves the purpose of offering realistic and challenging problems relevant in both FDI and (autonomous) supervisory control area...

  3. Fault-tolerant computing systems

    International Nuclear Information System (INIS)

    Dal Cin, M.; Hohl, W.

    1991-01-01

    Tests, Diagnosis and Fault Treatment were chosen as the guiding themes of the conference. However, the scope of the conference also included reliability, availability, safety and security issues in software and hardware systems. The conference was organized into the following sessions, complemented by an industrial presentation: Keynote Address, Reconfiguration and Recovery, System Level Diagnosis, Voting and Agreement, Testing, Fault-Tolerant Circuits, Array Testing, Modelling, Applied Fault Tolerance, Fault-Tolerant Arrays and Systems, Interconnection Networks, and Fault-Tolerant Software. One paper has been indexed separately in the database. (orig./HP)

  4. Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions

    Science.gov (United States)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
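
    The backward-induction setup described here is easy to sketch: a dice process for fault growth, a ±1 random walk for the exogenous load, and finite-horizon dynamic programming over the joint state. All rewards, damage increments and state-space sizes below are invented for illustration, not taken from the paper.

```python
from functools import lru_cache

D = 12          # damage level at which the component fails (absorbing state)
LOADS = 3       # exogenous load states visited by the random walk
T = 8           # finite planning horizon

def transitions(d, l, a):
    """Enumerate (prob, d', l'): dice-driven fault growth combined with a
    +/-1 random walk on the environmental load state."""
    out = []
    for u in range(1, 7):                        # fair dice: physics uncertainty
        inc = u * (1 + a) * (1 + l)              # aggressive use + high load wear faster
        nd = min(d + inc, D)
        for dl in (-1, 1):                       # load random walk step
            nl = min(max(l + dl, 0), LOADS - 1)
            out.append((1.0 / 12, nd, nl))
    return out

@lru_cache(maxsize=None)
def V(t, d, l):
    """Optimal expected remaining reward (backward induction)."""
    if d >= D or t >= T:
        return 0.0                               # failed, or end of horizon
    best = float("-inf")
    for a in (0, 1):                             # 0 = derated, 1 = aggressive
        reward = 1.0 + a                         # aggressive action earns more now
        ev = sum(p * V(t + 1, nd, nl) for p, nd, nl in transitions(d, l, a))
        best = max(best, reward + ev)
    return best

def policy(t, d, l):
    vals = {a: (1.0 + a) + sum(p * V(t + 1, nd, nl)
                               for p, nd, nl in transitions(d, l, a))
            for a in (0, 1)}
    return max(vals, key=vals.get)
```

    The value function is computed lazily by memoized recursion, which is equivalent to the backward sweep of finite-horizon DP; the policy simply picks the action maximizing immediate reward plus expected continuation value.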

  5. Hydrogeological measurements and modelling of the Down Ampney Fault Research site

    International Nuclear Information System (INIS)

    Brightman, M.A.; Sen, M.A.; Abbott, M.A.W.

    1991-01-01

    The British Geological Survey, in cooperation with ISMES of Italy, is carrying out a research programme into the properties of faults cutting clay formations. The programme has two major aims: firstly, to develop geophysical techniques to locate and measure the geophysical properties of a fault in clay; secondly, to measure the hydrogeological properties of the fault and its effect on the groundwater flow pattern through a sequence of clays and aquifers. Analysis of pulse tests performed in the clays at the Down Ampney Research site gave values of hydraulic conductivity ranging from 5 × 10⁻¹² to 2 × 10⁻⁸ m s⁻¹. Numerical modelling of the effects of groundwater abstraction from nearby wells on the site was performed using the finite element code FEMWATER. The results are discussed. (Author)

  6. Modeling the effect of preexisting joints on normal fault geometries using a brittle and cohesive material

    Science.gov (United States)

    Kettermann, M.; van Gent, H. W.; Urai, J. L.

    2012-04-01

    Brittle rocks, such as those hosting many carbonate or sandstone reservoirs, are often affected by different kinds of fractures that influence each other. Understanding the effects of these interactions on fault geometries and on the formation of cavities and potential fluid pathways might be useful for reservoir quality prediction and production. Analogue modeling has proven to be a useful tool to study faulting processes, although the materials usually employed provide no cohesion or tensile strength, both of which are essential to create open fractures. Therefore, a very fine-grained, cohesive hemihydrate powder was used for our experiments. The mechanical properties of the material scale well to natural prototypes, and due to the fine grain size, structures are preserved in great detail. The deformation box used allows the formation of a half-graben and has initial dimensions of 30 cm width, 28 cm length and 20 cm height. The maximum dip-slip along the 60°-dipping predefined basement fault is 4.5 cm and was fully used in all experiments. To set up open joints prior to faulting, sheets of paper were placed vertically within the box to a depth of about 5 cm from the top. The powder was then sieved into the box, embedding the paper almost entirely. Finally, strings were used to remove the paper carefully, leaving open voids. This method allows the creation of cohesionless open joints while ensuring a minimum impact on the sensitive surrounding material. The presented series of experiments aims to investigate the effect of different angles between the strike of a rigid basement fault and a distinct joint set. All experiments were performed with a joint spacing of 2.5 cm, and the fault-joint angles incrementally covered 0°, 4°, 8°, 12°, 16°, 20° and 25°. During deformation, time-lapse photography from the top and side captured every structural change and provided data for post-processing analysis using particle imaging velocimetry (PIV). Additionally ...

  7. A stacking-fault based microscopic model for platelets in diamond

    Science.gov (United States)

    Antonelli, Alex; Nunes, Ricardo

    2005-03-01

    We propose a new microscopic model for the {001} planar defects in diamond commonly called platelets. This model is based on the formation of a metastable stacking fault, which can occur because of the ability of carbon to stabilize in different bonding configurations. In our model, the core of the planar defect is essentially a double layer of three-fold-coordinated sp^2 carbon atoms embedded in the common sp^3 diamond structure. The properties of the model were determined using ab initio total-energy calculations. All significant experimental signatures attributed to the platelets, namely the lattice displacement along the [001] direction, the asymmetry between the [110] and [1̄10] directions, the infrared absorption peak B′, and broad luminescence lines that indicate the introduction of levels in the band gap, are naturally accounted for in our model. The model is also very appealing from the point of view of kinetics, since naturally occurring shearing processes will lead to the formation of the metastable fault. The authors acknowledge financial support from the Brazilian agencies FAPESP, CNPq, FAEP-UNICAMP, FAPEMIG, and Instituto do Milênio em Nanociências-MCT.

  8. Fault diagnostics in power transformer model winding for different alpha values

    Directory of Open Access Journals (Sweden)

    G.H. Kusumadevi

    2015-09-01

    Full Text Available Transient overvoltages appearing at the line terminal of power transformer HV windings can cause failure of the winding insulation. The failure can be from winding to ground or between turns or sections of the winding. In most cases, failure from winding to ground can be detected by changes in the wave shape of the surge voltage appearing at the line terminal. However, detection of insulation failure between turns may be difficult due to the intricacies involved in the identification of faults. In this paper, simulation investigations carried out on a power transformer model winding for identifying faults between turns of the winding are reported. The power transformer HV winding has been represented by 8 sections, 16 sections and 24 sections, and the neutral current waveform has been analyzed for the same model winding represented by these different numbers of sections. The values of α considered for the windings are 5, 10 and 20, where α is the square root of the ratio of total ground capacitance to total series capacitance of the winding. A standard lightning impulse voltage (1.2/50 μs) wave shape has been considered for the analysis. Computer simulations have been carried out using the software PSPICE version 10.0. Neutral current and frequency response analysis methods have been used for identification of faults within sections of the transformer model winding.

  9. Three-dimensional cellular automata as a model of a seismic fault

    International Nuclear Information System (INIS)

    Gálvez, G; Muñoz, A

    2017-01-01

    The Earth's crust is broken into a series of plates whose borders are the seismic fault lines, and it is where most earthquakes occur. This plate system can in principle be described by a set of nonlinear coupled equations describing the motion of the plates and their stresses, strains and other characteristics. Such a system of equations is very difficult to solve, and the nonlinear parts lead to chaotic behavior, which is not predictable. In 1989, Bak and Tang presented an earthquake model based on the sandpile cellular automaton. The model, though simple, provides results similar to those observed in actual earthquakes. In this work, a three-dimensional cellular automaton is proposed as a better model to approximate a seismic fault. It is noted that the three-dimensional model reproduces properties similar to those observed in real seismicity, especially the Gutenberg-Richter law. (paper)
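
    A minimal Bak-Tang-style automaton extended to three dimensions can be written directly: each lattice site topples when it holds 2·dim = 6 grains, shedding one grain to each of its six neighbours, and avalanche sizes play the role of earthquake magnitudes. The lattice size and grain count below are arbitrary choices, not values from the paper.

```python
import random

def sandpile_3d(n=8, grains=4000, seed=1):
    """Sandpile cellular automaton on an n^3 lattice; grains falling off the
    boundary leave the system (open boundary conditions)."""
    random.seed(seed)
    h = {}
    sizes = []                                    # avalanche sizes ("magnitudes")
    for _ in range(grains):
        site = (random.randrange(n), random.randrange(n), random.randrange(n))
        h[site] = h.get(site, 0) + 1
        unstable = [site] if h[site] >= 6 else []
        topples = 0
        while unstable:
            x, y, z = unstable.pop()
            if h.get((x, y, z), 0) < 6:
                continue
            h[(x, y, z)] -= 6                     # topple: shed 6 grains
            topples += 1
            if h[(x, y, z)] >= 6:
                unstable.append((x, y, z))        # still over threshold
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                nx, ny, nz = x + dx, y + dy, z + dz
                if 0 <= nx < n and 0 <= ny < n and 0 <= nz < n:
                    h[(nx, ny, nz)] = h.get((nx, ny, nz), 0) + 1
                    if h[(nx, ny, nz)] >= 6:
                        unstable.append((nx, ny, nz))
        if topples:
            sizes.append(topples)
    return sizes

sizes = sandpile_3d()
```

    A Gutenberg-Richter-like signature shows up as a heavy-tailed avalanche-size distribution: small events vastly outnumber large ones.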

  10. Modeling of flow in faulted and fractured media

    Energy Technology Data Exchange (ETDEWEB)

    Oeian, Erlend

    2004-03-01

    The work on this thesis has been done as part of a collaborative and inter disciplinary effort to improve the understanding of oil recovery mechanisms in fractured reservoirs. This project has been organized as a Strategic University Program (SUP) at the University of Bergen, Norway. The complex geometries of fractured reservoirs combined with flow of several fluid phases lead to difficult mathematical and numerical problems. In an effort to try to decrease the gap between the geological description and numerical modeling capabilities, new techniques are required. Thus, the main objective has been to improve the ATHENA flow simulator and utilize it within a fault modeling context. Specifically, an implicit treatment of the advection dominated mass transport equations within a domain decomposition based local grid refinement framework has been implemented. Since large computational tasks may arise, the implicit formulation has also been included in a parallel version of the code. Within the current limits of the simulator, appropriate up scaling techniques has also been considered. Part I of this thesis includes background material covering the basic geology of fractured porous media, the mathematical model behind the in-house flow simulator ATHENA and the additions implemented to approach simulation of flow through fractured and faulted porous media. In Part II, a set of research papers stemming from Part I is presented. A brief outline of the thesis follows below. In Chapt. 1 important aspects of the geological description and physical parameters of fractured and faulted porous media is presented. Based on this the scope of this thesis is specified having numerical issues and consequences in mind. Then, in Chapt. 2, the mathematical model and discretizations in the flow simulator is given followed by the derivation of the implicit mass transport formulation. In order to be fairly self-contained, most of the papers in Part II also includes the mathematical model

  12. Modelling of Surface Fault Structures Based on Ground Magnetic Survey

    Science.gov (United States)

    Michels, A.; McEnroe, S. A.

    2017-12-01

    The island of Leka hosts the exposure of the Leka Ophiolite Complex (LOC), which contains mantle and crustal rocks and provides a rare opportunity to study the magnetic properties and response of these formations. The LOC is comprised of five rock units: (1) harzburgite that is strongly deformed, shifting into an increasingly olivine-rich dunite; (2) ultramafic cumulates with layers of olivine, chromite, clinopyroxene and orthopyroxene; these cumulates are overlain by (3) metagabbros, which are cut by (4) metabasaltic dykes and (5) pillow lavas (Furnes et al. 1988). Over the course of three field seasons, a detailed ground-magnetic survey was made over the island, covering all units of the LOC, and samples were collected from 109 sites for magnetic measurements. NRM, susceptibility, density and hysteresis properties were measured. In total, 66% of samples have a Q value > 1, which suggests that the magnetic anomalies should include both induced and remanent components in the model. This ophiolite originated from a suprasubduction zone near the coast of Laurentia (497±2 Ma), was obducted onto Laurentia (≈460 Ma) and was then transferred to Baltica during the Caledonide Orogeny (≈430 Ma). The LOC was faulted, deformed and serpentinized during these events. The gabbro and ultramafic rocks are separated by a normal fault, and the dominant magnetic anomaly that crosses the island correlates with this normal fault. There are a series of smaller-scale faults parallel to it, and some correspond to local highs that can be highlighted by a tilt derivative of the magnetic data. These fault boundaries, which are well delineated by the distinct magnetic anomalies in both ground and aeromagnetic survey data, are likely caused by an increased amount of serpentinization of the ultramafic rocks in the fault areas.
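
    The Koenigsberger ratio Q used above is the ratio of remanent to induced magnetization, Q = NRM / (k·H); a hedged sketch with invented sample values (not the Leka measurements) follows. The ambient field strength is an assumed round number.

```python
# Koenigsberger ratio Q = NRM / (k * H): remanent over induced magnetization.
# All sample values below are synthetic illustrations, not survey data.
H_AMBIENT = 40.0  # assumed geomagnetic field strength in A/m (~50 uT)

def koenigsberger(nrm_a_per_m, susceptibility_si, field_a_per_m=H_AMBIENT):
    induced = susceptibility_si * field_a_per_m   # induced magnetization, A/m
    return nrm_a_per_m / induced

samples = [  # (NRM [A/m], SI volume susceptibility) -- hypothetical values
    (2.5, 0.020), (0.8, 0.050), (4.0, 0.015), (0.1, 0.060), (1.2, 0.010),
]
q_values = [koenigsberger(nrm, k) for nrm, k in samples]
frac_q_above_1 = sum(q > 1 for q in q_values) / len(q_values)
```

    Samples with Q > 1 are remanence-dominated, which is why anomaly models for such rocks must carry both induced and remanent components.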

  13. Frictional-faulting model for harmonic tremor before Redoubt Volcano eruptions

    Science.gov (United States)

    Dmitrieva, Ksenia; Hotovec-Ellis, Alicia J.; Prejean, Stephanie G.; Dunham, Eric M.

    2013-01-01

    Seismic unrest, indicative of subsurface magma transport and pressure changes within fluid-filled cracks and conduits, often precedes volcanic eruptions. An intriguing form of volcano seismicity is harmonic tremor, that is, sustained vibrations in the range of 0.5–5 Hz. Many source processes can generate harmonic tremor. Harmonic tremor in the 2009 eruption of Redoubt Volcano, Alaska, has been linked to repeating earthquakes of magnitudes around 0.5–1.5 that occur a few kilometres beneath the vent. Before many explosions in that eruption, these small earthquakes occurred in such rapid succession—up to 30 events per second—that distinct seismic wave arrivals blurred into continuous, high-frequency tremor. Tremor abruptly ceased about 30 s before the explosions. Here we introduce a frictional-faulting model to evaluate the credibility and implications of this tremor mechanism. We find that the fault stressing rates rise to values ten orders of magnitude higher than in typical tectonic settings. At that point, inertial effects stabilize fault sliding and the earthquakes cease. Our model of the Redoubt Volcano observations implies that the onset of volcanic explosions is preceded by active deformation and extreme stressing within a localized region of the volcano conduit, at a depth of several kilometres.

  14. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-01

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.
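
    The hybrid idea — a physical model supplying residuals and a statistical threshold learned from fault-free data — can be sketched as follows. The energy model, its coefficients, and the training data are all hypothetical placeholders, not the paper's algorithms.

```python
import statistics

def predicted_energy(t_out, occupied):
    """Stand-in for the building energy model: expected hourly kWh as a simple
    function of outdoor temperature and occupancy (hypothetical coefficients)."""
    return 5.0 + 0.3 * max(t_out - 18.0, 0.0) + (4.0 if occupied else 0.0)

# "Training" period assumed fault-free: learn a statistical residual threshold.
train = [(20, True, 9.7), (25, True, 11.3), (15, False, 4.8), (30, True, 13.1),
         (22, True, 10.4), (18, False, 5.2), (27, True, 12.0), (16, False, 4.9)]
residuals = [meas - predicted_energy(t, occ) for t, occ, meas in train]
mu, sigma = statistics.mean(residuals), statistics.stdev(residuals)
threshold = 3.0 * sigma                          # flag residuals beyond 3-sigma

def detect_fault(t_out, occupied, measured_kwh):
    """Model-based residual + statistically learned threshold."""
    r = measured_kwh - predicted_energy(t_out, occupied)
    return abs(r - mu) > threshold
```

    Diagnosis (as opposed to detection) would then attribute an out-of-band residual to a subsystem, e.g. by comparing residual signatures across multiple submetered channels.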

  15. Intelligent fault diagnosis and failure management of flight control actuation systems

    Science.gov (United States)

    Bonnice, William F.; Baker, Walter

    1988-01-01

    The real-time fault diagnosis and failure management (FDFM) of current operational and experimental dual tandem aircraft flight control system actuators was investigated. Dual tandem actuators were studied because of the active FDFM capability required to manage the redundancy of these actuators. The FDFM methods used on current dual tandem actuators were determined by examining six specific actuators. The FDFM capability on these six actuators was also evaluated. One approach for improving the FDFM capability on dual tandem actuators may be through the application of artificial intelligence (AI) technology. Existing AI approaches and applications of FDFM were examined and evaluated. Based on the general survey of AI FDFM approaches, the potential role of AI technology for real-time actuator FDFM was determined. Finally, FDFM and maintainability improvements for dual tandem actuators were recommended.

  16. Fuzzy fault diagnosis system of MCFC

    Institute of Scientific and Technical Information of China (English)

    Wang Zhenlei; Qian Feng; Cao Guangyi

    2005-01-01

    A fault diagnosis system for a molten carbonate fuel cell (MCFC) stack is proposed in this paper. It is composed of a fuzzy neural network (FNN) and a fault decision element. The FNN deals efficiently with expert knowledge and experimental data, and it has the ability to approximate any smooth system. The FNN is used to identify the fault diagnosis model of the MCFC stack. The fuzzy fault decision element can diagnose the state of the MCFC generating system, normal or faulty, and can decide the type of fault based on the outputs of the FNN model and the MCFC system. Some simulation experiment results are demonstrated in this paper.
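
    The fuzzy decision stage described above can be illustrated with triangular membership functions over a voltage residual; the categories and breakpoints below are invented for the sketch and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(voltage_residual):
    """Fuzzy decision on the residual between the FNN-predicted and the measured
    stack voltage (volts; ranges are illustrative, and negative residuals are
    simplified away here)."""
    memberships = {
        "normal":       tri(voltage_residual, -0.5, 0.0, 0.5),
        "mild fault":   tri(voltage_residual, 0.3, 1.0, 1.7),
        "severe fault": tri(voltage_residual, 1.4, 2.5, 3.6),
    }
    return max(memberships, key=memberships.get)   # winner-take-all defuzzification
```

    In a full system the FNN supplies the predicted voltage; here only the decision logic is sketched.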

  17. Tsunamigenic earthquakes in the Gulf of Cadiz: fault model and recurrence

    Directory of Open Access Journals (Sweden)

    L. M. Matias

    2013-01-01

    Full Text Available The Gulf of Cadiz, as part of the Azores-Gibraltar plate boundary, is recognized as a potential source of big earthquakes and tsunamis that may affect the bordering countries, as occurred on 1 November 1755. Preparing for the future, Portugal is establishing a national tsunami warning system in which the threat caused by any large-magnitude earthquake in the area is estimated from a comprehensive database of scenarios. In this paper we summarize the knowledge about the active tectonics in the Gulf of Cadiz and integrate the available seismological information in order to propose the generation model of destructive tsunamis to be applied in tsunami warnings. The fault model derived is then used to estimate the recurrence of large earthquakes using the fault slip rates obtained by Cunha et al. (2012 from thin-sheet neotectonic modelling. Finally we evaluate the consistency of seismicity rates derived from historical and instrumental catalogues with the convergence rates between Eurasia and Nubia given by plate kinematic models.

  18. Analogue Modeling of Oblique Convergent Strike-Slip Faulting and Application to The Seram Island, Eastern Indonesia

    Directory of Open Access Journals (Sweden)

    Benyamin Sapiie

    2014-12-01

    Full Text Available DOI:10.17014/ijog.v1i3.189Sandbox experiment is one of the types of analogue modeling in geological sciences in which the main purpose is simulating deformation style and structural evolution of the sedimentary basin.  Sandbox modeling is one of the effective ways in conducting physically modeling and evaluates complex deformation of sedimentary rocks. The main purpose of this paper is to evaluate structural geometry and deformation history of oblique convergent deformation using of integrated technique of analogue sandbox modeling applying to deformation of Seram Fold-Thrust-Belt (SFTB in the Seram Island, Eastern Indonesia. Oblique convergent strike-slip deformation has notoriously generated area with structural complex geometry and pattern resulted from role of various local parameters that control stress distributions. Therefore, a special technique is needed for understanding and solving such problem in particular to relate 3D fault geometry and its evolution. The result of four case (Case 1 to 4 modeling setting indicated that two of modeling variables clearly affected in our sandbox modeling results; these are lithological variation (mainly stratigraphy of Seram Island and pre-existing basement fault geometry (basement configuration. Lithological variation was mainly affected in the total number of faults development.  On the other hand, pre-existing basement fault geometry was highly influenced in the end results particularly fault style and pattern as demonstrated in Case 4 modeling.  In addition, this study concluded that deformation in the Seram Island is clearly best described using oblique convergent strike-slip (transpression stress system.

  19. Fault Detection for Automotive Shock Absorber

    Science.gov (United States)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be of a multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
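
    A parameter-identification detector of the kind compared here can be sketched with recursive least squares and a forgetting factor: the estimated pole of a first-order damper model drifts when a (multiplicative) damping fault is injected. The model order and all numbers are illustrative, not the commercial vehicle model used in the paper.

```python
import math

def simulate(a, b, u_seq, v0=0.0):
    """First-order discrete damper model v[k+1] = a*v[k] + b*u[k]."""
    v, out = v0, []
    for u in u_seq:
        out.append(v)
        v = a * v + b * u
    out.append(v)
    return out

def rls_track(vs, us, lam=0.95):
    """Recursive least squares with forgetting factor, estimating [a, b]."""
    theta = [0.0, 0.0]                    # parameter estimate [a_hat, b_hat]
    P = [[100.0, 0.0], [0.0, 100.0]]      # large initial covariance
    a_history = []
    for k in range(len(us)):
        phi = [vs[k], us[k]]              # regressor
        y = vs[k + 1]
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1], P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]          # gain
        err = y - (theta[0]*phi[0] + theta[1]*phi[1])   # prediction error
        theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
        P = [[(P[i][j] - K[i]*Pphi[j]) / lam for j in range(2)] for i in range(2)]
        a_history.append(theta[0])
    return a_history

u = [math.sin(0.3 * k) for k in range(400)]             # persistent excitation
healthy = simulate(0.80, 0.5, u[:200])                  # nominal damping
faulty = simulate(0.95, 0.5, u[200:], v0=healthy[-1])   # degraded damper: pole moves
vs = healthy[:-1] + faulty                              # concatenated response
a_hat = rls_track(vs, u)
```

    A simple detector then thresholds the drift of `a_hat` away from its nominal value; the observer-based alternative would instead threshold a state-estimation residual.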

  20. A New Paradigm For Modeling Fault Zone Inelasticity: A Multiscale Continuum Framework Incorporating Spontaneous Localization and Grain Fragmentation.

    Science.gov (United States)

    Elbanna, A. E.

    2015-12-01

    The brittle portion of the crust contains structural features such as faults, jogs, joints, bends and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation and arrest. Incorporating these existing features in models, together with the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, the distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the structure of the seismogenic zone. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) grain comminution at the submeter scale, and (2) secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane-strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art. These include: (1) self-consistent rate-dependent plasticity with a physically-motivated set of internal variables, (2) non-locality that alleviates mesh dependence of shear band formation, (3) spontaneous evolution of fault roughness and strike, which affects ground motion generation and the local stress fields, and (4) spontaneous evolution of grain size and fault zone fabric.

  1. V and V-based remaining fault estimation model for safety–critical software of a nuclear power plant

    International Nuclear Information System (INIS)

    Eom, Heung-seop; Park, Gee-yong; Jang, Seung-cheol; Son, Han Seong; Kang, Hyun Gook

    2013-01-01

    Highlights: ► A software fault estimation model based on Bayesian Nets and V and V. ► Use of quantified data derived from qualitative V and V results. ► Fault insertion and elimination processes were modeled in the context of probability. ► Systematically estimates the expected number of remaining faults. -- Abstract: Quantitative software reliability measurement approaches have some limitations in demonstrating the proper level of reliability in cases of safety-critical software. One of the more promising alternatives is the use of software development quality information. Particularly in the nuclear industry, regulatory bodies in most countries use both probabilistic and deterministic measures for ensuring the reliability of safety-grade digital computers in NPPs. The point of deterministic criteria is to assess the whole development process and its related activities during the software development life cycle for the acceptance of safety-critical software, and software Verification and Validation (V and V) plays an important role in this process. In this light, we propose a V and V-based fault estimation method using Bayesian Nets to estimate the faults remaining in safety-critical software after the software development life cycle is completed. By modeling the fault insertion and elimination processes during the whole development phases, the proposed method systematically estimates the expected number of remaining faults.
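
    A toy Bayesian stand-in for the fault insertion/elimination idea: with a Poisson prior on inserted faults and a per-fault V and V detection probability, the posterior over the number of remaining faults can be computed on a grid and checked against the known Poisson-thinning result. This is not the paper's Bayesian-network model, just the underlying probability logic; all parameter values are invented.

```python
import math

def posterior_remaining(lam, p_detect, k_found, max_n=120):
    """Posterior over faults remaining after V&V: Poisson(lam) prior on inserted
    faults, each detected independently with probability p_detect, with k_found
    faults found and fixed. (Illustrative stand-in for a Bayesian-net model.)"""
    post, norm = {}, 0.0
    for n in range(k_found, max_n):
        prior = math.exp(-lam) * lam ** n / math.factorial(n)
        like = (math.comb(n, k_found) * p_detect ** k_found
                * (1 - p_detect) ** (n - k_found))
        post[n - k_found] = prior * like         # remaining = inserted - found
        norm += prior * like
    return {m: w / norm for m, w in post.items()}

post = posterior_remaining(lam=20.0, p_detect=0.8, k_found=15)
expected_remaining = sum(m * w for m, w in post.items())
# Conjugacy check: remaining faults ~ Poisson(lam * (1 - p_detect)), mean 4.0.
```

    The closed-form check (Poisson thinning) makes this a useful sanity test before replacing the simple prior and likelihood with a full Bayesian network over development phases.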

  2. A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    Science.gov (United States)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2010-01-01

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.
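
    The probabilistic inversion can be illustrated with a much simpler forward model than the paper's S-parameter approach: an ideal step reflected by a small fault, a Gaussian likelihood, and a brute-force grid posterior over fault distance and reflection coefficient. Propagation velocity, noise level and grid resolution are all assumed values.

```python
import math, random

V_PROP = 2.0e8                  # assumed propagation velocity in the cable, m/s
TS = 1.0e-9                     # sample interval, s

def trace(dist, gamma, n=200):
    """Idealized TDR trace: unit step launched at t=0, a scaled copy returning
    after the two-way travel time to a (soft) fault at `dist` metres."""
    delay = int(round(2 * dist / V_PROP / TS))
    return [1.0 + (gamma if k >= delay else 0.0) for k in range(n)]

random.seed(3)
true_d, true_g = 7.5, 0.08      # a subtle chafe: small reflection coefficient
measured = [s + random.gauss(0.0, 0.01) for s in trace(true_d, true_g)]

# Grid "posterior" (flat prior): log-likelihood under i.i.d. Gaussian noise.
best, best_ll = None, float("-inf")
for d10 in range(0, 151):                       # distances 0.0 .. 15.0 m
    for g100 in range(1, 21):                   # gamma 0.01 .. 0.20
        d, g = d10 / 10.0, g100 / 100.0
        model = trace(d, g)
        ll = -sum((m - s) ** 2 for m, s in zip(measured, model)) / (2 * 0.01 ** 2)
        if ll > best_ll:
            best, best_ll = (d, g), ll

map_d, map_g = best                             # maximum a posteriori estimate
```

    Even with a reflection only eight times the per-sample noise, the many post-fault samples make the posterior sharply peaked near the true distance and severity, which is the core argument for a Bayesian treatment of chafing faults.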

  3. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    Science.gov (United States)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures has been impacted by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely-deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest to forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  4. Low-frequency scaling applied to stochastic finite-fault modeling

    Science.gov (United States)

    Crane, Stephen; Motazedian, Dariush

    2014-01-01

    Stochastic finite-fault modeling is an important tool for simulating moderate to large earthquakes. It has proven useful in applications that require a reliable estimation of ground motions, mostly in the spectral frequency range of 1 to 10 Hz, the range of most interest to engineers. However, since there can be little resemblance between the low-frequency spectra of large and small earthquakes, this portion of the spectrum can be difficult to simulate using stochastic finite-fault techniques. This paper introduces two methods for scaling low-frequency spectra in stochastic finite-fault modeling. One method multiplies the subfault source spectrum by an empirical function with three parameters: the level of scaling and the start and end frequencies of the taper. This empirical function adjusts the earthquake spectrum only between the desired frequencies, conserving seismic moment in the simulated spectra. The other method adds an empirical low-frequency coefficient to the subfault corner frequency, changing the ratio between high and low frequencies. In this case the entire earthquake spectrum is adjusted for each simulation, so seismic moment may not be conserved for a simulated earthquake. These low-frequency scaling methods were used to reproduce the spectra of several earthquakes recorded in the Pacific Earthquake Engineering Research Center (PEER) Next Generation Attenuation (NGA) database. Two methods were used to determine the best-fit stochastic parameters for each earthquake: a general residual analysis and an earthquake-specific residual analysis. Both resulted in comparable values for stress drop and the low-frequency scaling parameters; however, the earthquake-specific residual analysis obtained a more accurate distribution of the averaged residuals.
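    The first method can be sketched as a frequency-dependent multiplier that equals the scaling level below the start frequency and unity above the end frequency. The cosine taper used here is an assumption; the abstract does not give the paper's actual functional form.

```python
import math

def lowfreq_taper(f, level, f1, f2):
    """Hypothetical empirical scaling function: multiplies the spectrum
    by `level` below f1, by 1 above f2, with a smooth cosine taper in
    between, so frequencies above f2 are left untouched."""
    if f <= f1:
        return level
    if f >= f2:
        return 1.0
    # cosine taper from `level` at f1 to 1.0 at f2
    t = (f - f1) / (f2 - f1)
    return 1.0 + (level - 1.0) * 0.5 * (1.0 + math.cos(math.pi * t))

def scale_spectrum(freqs, spec, level, f1, f2):
    """Apply the low-frequency scaling to a sampled source spectrum."""
    return [s * lowfreq_taper(f, level, f1, f2) for f, s in zip(freqs, spec)]
```

For example, with `level=2.0`, `f1=0.2` Hz and `f2=1.0` Hz, a spectral value at 0.1 Hz is doubled, a value at 5 Hz is unchanged, and the midpoint of the taper gets a factor of 1.5.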

  5. Ring-fault activity at subsiding calderas studied from analogue experiments and numerical modeling

    Science.gov (United States)

    Liu, Y. K.; Ruch, J.; Vasyura-Bathke, H.; Jonsson, S.

    2017-12-01

    Several subsiding calderas, such as those in the Galápagos archipelago and the Axial Seamount in the Pacific Ocean, have shown a complex but similar ground deformation pattern, composed of a broad deflation signal affecting the entire volcanic edifice and a localized subsidence signal focused within the caldera. However, it is still debated how deep processes at subsiding calderas, including magmatic pressure changes, source locations and ring-faulting, relate to this observed surface deformation pattern. We combine analogue sandbox experiments with numerical modeling to study the processes involved from initial subsidence to later collapse of calderas. The sandbox apparatus is composed of a motor-driven subsiding half-piston connected to the bottom of a glass box. During the experiments, observations are made with five digital cameras photographing from various perspectives. We use Photoscan, a photogrammetry software, and PIVLab, a time-resolved digital image correlation tool, to retrieve time series of digital elevation models and velocity fields from the acquired photographs. This setup allows tracking the processes acting both at depth and at the surface, and assessing their relative importance as the subsidence evolves to a collapse. We also use the Boundary Element Method to build a numerical model of the experimental setup, comprising a contracting sill-like source interacting with a ring-fault in an elastic half-space. We then compare the results of these two approaches with examples observed in nature. Our preliminary experimental and numerical results show that at the initial stage of magmatic withdrawal, when the ring-fault is not yet well formed, broad and smooth deflation dominates at the surface. As the withdrawal increases, a narrower subsidence bowl develops, accompanied by upward propagation of the ring-faulting. This indicates that the broad deflation, affecting the entire volcano edifice, is primarily driven by the contraction of the

  6. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    Science.gov (United States)

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

    The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This question is complicated by many issues, one of which is the potential correlation between the injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models that analyze the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependence and non-normality in the residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data, where exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas in 4% of the cases the not-at-fault driver sustained a serious/fatal injury with no injury at all to the at-fault driver. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While the not-at-fault vehicle's speed affects the injury severity of the at-fault driver, the effect is smaller than that of the at-fault vehicle's speed on the at-fault driver's injury outcome. Conversely, and importantly, the effect of the at-fault vehicle's speed on the injury severity of the not-at-fault driver is almost equal to the effect of the not-at-fault vehicle's speed on the injury outcome of the not-at-fault driver. Compared to traditional ordered probability

  7. The contribution to distribution network fault levels from the connection of distributed generation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    This report summarises the findings of a study investigating the potential impact of distributed generation (DG) on the UK distribution network fault levels up to the year 2010, and examining ways of managing the fault levels so that they do not become a barrier to increased penetration of DG. The project focuses on the circumstances and scenarios that give rise to the fault levels. The background to the study is traced, and a technical review is presented covering the relationship between DG and fault levels, and the likely impact in the period to 2010. Options for managing increased fault levels, and fault level management and costs are outlined, and a case study is given. The measurement and calculation of fault level values are described along with constraints to DG penetration due to fault level limitations, characteristics of DG machines, and long term perspectives to 2020-2030.

  8. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model.

    Science.gov (United States)

    Daout, S; Barbot, S; Peltzer, G; Doin, M-P; Liu, Z; Jolivet, R

    2016-11-16

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence, with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, and 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology for estimating the tectonic loading and seismic potential of buried fault networks.

  9. Application of damping mechanism model and stacking fault probability in Fe-Mn alloy

    International Nuclear Information System (INIS)

    Huang, S.K.; Wen, Y.H.; Li, N.; Teng, J.; Ding, S.; Xu, Y.G.

    2008-01-01

    In this paper, the damping mechanism model of Fe-Mn alloy was analyzed using dislocation theory. Moreover, as an important parameter in Fe-Mn based alloys, the effect of stacking fault probability on the damping capacity of Fe-19.35Mn alloy after deep-cooling or tensile deformation was also studied. The damping capacity was measured using a reversal torsion pendulum. The stacking fault probability of γ-austenite and ε-martensite was determined by means of X-ray diffraction (XRD) profile analysis, and the microstructure was observed using a scanning electron microscope (SEM). The results indicated that as the strain amplitude increased above a critical value, the damping capacity of the Fe-19.35Mn alloy increased rapidly, which could be explained using the breakaway model of Shockley partial dislocations. Deep-cooling and suitable tensile deformation could improve the damping capacity owing to the increased stacking fault probability of the Fe-19.35Mn alloy.

  10. Stress near geometrically complex strike-slip faults - Application to the San Andreas fault at Cajon Pass, southern California

    Science.gov (United States)

    Saucier, Francois; Humphreys, Eugene; Weldon, Ray, II

    1992-01-01

    A model is presented to rationalize the state of stress near a geometrically complex major strike-slip fault. Slip on such a fault creates residual stresses that, with the occurrence of several slip events, can dominate the stress field near the fault. The model is applied to the San Andreas fault near Cajon Pass. The results are consistent with the geological features, the seismicity, the existence of left-lateral stress on the Cleghorn fault, and the in situ stress orientation in the scientific well, which is found to be sinistral when resolved on a plane parallel to the San Andreas fault. It is suggested that the creation of residual stresses caused by slip on a wiggly San Andreas fault is the dominant process there.

  11. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Full Text Available Because of the interlinking of process equipment in the process industries, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are helpful not only for finding the root cause when a plant-wide disturbance occurs, but also for revealing the evolution of an abnormal event propagating through the plant. This paper addresses the information flow directionality and time-delay estimation problems in the process industries and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. A general fault prognosis strategy is then developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring “nitrogen-block” fault.
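    The time-delay estimation step can be sketched with a plug-in (histogram) mutual-information estimator over discretized signals, scanning candidate lags and keeping the maximizer. The synthetic signals and alphabet size below are invented; the paper's estimator and plant data are not reproduced here.

```python
import math
import random
from collections import Counter

def mutual_info(x, y):
    """Plug-in mutual information estimate (nats) for two equal-length
    discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        pab = c / n
        # p(a,b) * log( p(a,b) / (p(a) p(b)) )
        mi += pab * math.log(pab * n * n / (px[a] * py[b]))
    return mi

def tdmi_delay(x, y, max_lag):
    """Return the lag k in [0, max_lag] maximizing I(x[t], y[t+k])."""
    return max(range(max_lag + 1),
               key=lambda k: mutual_info(x[:len(x) - k], y[k:]))

# Synthetic example: y is x delayed by 7 samples
random.seed(0)
x = [random.randint(0, 3) for _ in range(2000)]
delay = 7
y = [0] * delay + x[:-delay]
```

Here `tdmi_delay(x, y, 15)` recovers the true delay of 7 samples, since the mutual information peaks where the sequences align.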

  12. Heterogeneous slip and rupture models of the San Andreas fault zone based upon three-dimensional earthquake tomography

    Energy Technology Data Exchange (ETDEWEB)

    Foxall, William [Univ. of California, Berkeley, CA (United States)

    1992-11-01

    Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the central creeping section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models and by surface creep data to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.

  13. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    Science.gov (United States)

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology that contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for actuator bias faults, partial loss-of-effectiveness actuation faults, communication link faults, model uncertainty, and external disturbances. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple robot-arm cooperative control system was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  14. Empirical Relationships Among Magnitude and Surface Rupture Characteristics of Strike-Slip Faults: Effect of Fault (System) Geometry and Observation Location, Derived From Numerical Modeling

    Science.gov (United States)

    Zielke, O.; Arrowsmith, J.

    2007-12-01

    In order to determine the magnitude of prehistoric earthquakes, surface rupture length and average and maximum surface displacement are utilized, assuming that an earthquake of a specific size will cause surface features of correlated size. The well-known Wells and Coppersmith (1994) paper and other studies defined empirical relationships between these and other parameters, based on historic events with independently known magnitude and rupture characteristics. However, these relationships show relatively large standard deviations, and they are based on only a small number of events. To improve these first-order empirical relationships, the observation location relative to the rupture extent within the regional tectonic framework should be accounted for. This, however, cannot be done from natural seismicity alone because of the limited size of datasets on large earthquakes. We have developed the numerical model FIMozFric, based on derivations by Okada (1992), to create synthetic seismic records for a given fault or fault system under the influence of either slip or stress boundary conditions. Our model features A) the introduction of an upper and lower aseismic zone, B) a simple Coulomb friction law, C) bulk parameters simulating fault heterogeneity, and D) a fault interaction algorithm handling the large number of fault patches (typically 5,000-10,000). The joint implementation of these features produces well-behaved synthetic seismic catalogs and realistic relationships among magnitude and surface rupture characteristics, well within the error of the results of Wells and Coppersmith (1994). Furthermore, we use the synthetic seismic records to show that the relationships between magnitude and rupture characteristics are a function of the observation location within the regional tectonic framework. The model presented here can provide paleoseismologists with a tool to improve magnitude estimates from surface rupture characteristics, by incorporating the

  15. Correlation of data on strain accumulation adjacent to the San Andreas Fault with available models

    Science.gov (United States)

    Turcotte, Donald L.

    1986-01-01

    Theoretical and numerical studies of deformation on strike-slip faults were performed and the results applied to geodetic observations made in the vicinity of the San Andreas Fault in California. The initial efforts were devoted to an extensive series of finite element calculations of the deformation associated with cyclic displacements on a strike-slip fault. Measurements of strain accumulation adjacent to the San Andreas Fault indicate that the zone of strain accumulation extends only a few tens of kilometers away from the fault. There is a concern about the tendency to make geodetic observations along the line to the source. This technique has serious problems for strike-slip faults since the vector velocity is also along the fault. The use of a series of stations lying perpendicular to the fault, whose positions are measured relative to a reference station, is suggested to correct the problem. The complexity of faulting adjacent to the San Andreas Fault indicated that the homogeneous elastic and viscoelastic approach to deformation had serious limitations. These limitations led to the proposal of an approach that assumes a fault is composed of a distribution of asperities and barriers on all scales; an earthquake on a fault is thus treated as a failure of a fractal tree. Work continued on the development of a fractal-based model for deformation in the western United States. In order to better understand the distribution of seismicity on the San Andreas Fault system, a fractal analog was developed. The fractal concept also provides a means of testing whether clustering in time or space is a scale-invariant process.

  16. Fault Detection for Shipboard Monitoring – Volterra Kernel and Hammerstein Model Approaches

    DEFF Research Database (Denmark)

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2009-01-01

    In this paper, nonlinear fault detection for in-service monitoring and decision support systems for ships is presented. The ship is described as a nonlinear system, and the stochastic wave elevation and the associated ship responses are conveniently modelled in the frequency domain. The transformation from time domain to frequency domain has been conducted by use of Volterra theory. The paper takes as an example fault detection of a containership on which a decision support system has been installed.

  17. Analysis of the fault geometry of a Cenozoic salt-related fault close to the D-1 well, Danish North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Roenoe Clausen, O.; Petersen, K.; Korstgaard, A.

    1995-12-31

    A normal detaching fault in the Norwegian-Danish Basin around the D-1 well (the D-1 fault) has been mapped using seismic sections. The fault has been analysed in detail by constructing backstripped-decompacted sections across the fault, contoured displacement diagrams along the fault, and vertical displacement maps. The results show that the listric D-1 fault follows the displacement patterns for blind normal faults. Deviations from the ideal displacement pattern are suggested to be caused by salt movements, which are the main driving mechanism for the faulting. Zechstein salt moves primarily from the hanging wall to the footwall, superposed by later minor lateral flow beneath the footwall. Back-stripping of depth-converted and decompacted sections results in an estimation of the salt surface and the shape of the fault through time. This procedure then enables a simple modelling of the hanging wall deformation using a Chevron model with hanging wall collapse along dipping surfaces. The modelling indicates that the fault follows the salt surface until the Middle Miocene, after which the offset on the fault may also be accommodated along the Top Chalk surface. (au) 16 refs.

  18. Fault Tolerant Wind Farm Control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2013-01-01

    In the recent years the wind turbine industry has focused on optimizing the cost of energy. One of the important factors in this is to increase reliability of the wind turbines. Advanced fault detection, isolation and accommodation are important tools in this process. Clearly most faults are deal...... scenarios. This benchmark model is used in an international competition dealing with Wind Farm fault detection and isolation and fault tolerant control....

  19. Model-based fault detection and isolation of a PWR nuclear power plant using neural networks

    International Nuclear Information System (INIS)

    Far, R.R.; Davilu, H.; Lucas, C.

    2008-01-01

    Proper and timely fault detection and isolation is of premier importance to guarantee the safe and reliable operation of industrial plants. This paper presents the application of a neural network-based scheme for fault detection and isolation for the pressurizer of a PWR nuclear power plant. The scheme consists of two components: residual generation and fault isolation. The first component generates residuals via the discrepancy between measurements coming from the plant and a nominal model; the neural network estimator is trained with healthy data collected from a full-scale simulator. In the second component, detection thresholds are used to encode the residuals as bipolar vectors that represent fault patterns. These patterns are stored in an associative memory based on a recurrent neural network. The proposed fault diagnosis tool is evaluated on-line via a full-scale simulator to detect and isolate the main faults appearing in the pressurizer of a PWR. (orig.)
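    The residual-encoding step can be sketched as follows. The threshold test and bipolar patterns follow the description above; the three-residual example and the Hamming-distance lookup (standing in for the recurrent associative memory) are invented for illustration.

```python
def encode_residuals(residuals, thresholds):
    """Bipolar encoding: +1 where a residual exceeds its detection
    threshold, -1 where it stays within the healthy band."""
    return [1 if abs(r) > t else -1 for r, t in zip(residuals, thresholds)]

def isolate(pattern, signatures):
    """Return the stored fault signature closest in Hamming distance
    (a simple stand-in for the recurrent associative memory)."""
    def hamming(p, q):
        return sum(1 for a, b in zip(p, q) if a != b)
    return min(signatures, key=lambda name: hamming(pattern, signatures[name]))

# Hypothetical 3-residual setup for a pressurizer (illustrative only)
signatures = {
    "nominal":      [-1, -1, -1],
    "heater fault": [ 1, -1, -1],
    "spray fault":  [-1,  1, -1],
    "leak":         [-1, -1,  1],
}
pattern = encode_residuals([0.9, 0.1, 0.05], thresholds=[0.5, 0.5, 0.5])
diagnosis = isolate(pattern, signatures)
```

The first residual exceeds its threshold, so the encoded pattern matches the stored "heater fault" signature exactly.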

  20. Fault Current Characteristics of the DFIG under Asymmetrical Fault Conditions

    Directory of Open Access Journals (Sweden)

    Fan Xiao

    2015-09-01

    Full Text Available During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive and negative sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters has a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional-integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. The dynamic performance of the stator flux linkage under asymmetrical fault conditions is then analyzed. On this basis, the fault characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful for meeting the requirements of practical short-circuit calculations and for the construction of relaying protection systems for power grids with penetration of DFIGs.
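    A sketch of the kind of notch filtering mentioned above: a biquad notch tuned to twice the grid frequency (here 100 Hz, assuming a 50 Hz grid and a 1 kHz sample rate, both invented for illustration) removes the double-frequency ripple from a dq-frame quantity while passing its DC component.

```python
import math

def notch_coeffs(f0, fs, q=0.7):
    """Biquad notch at f0 (Hz) for sample rate fs (Hz). Zeros on the
    unit circle give exact rejection at f0; DC gain is exactly 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def filt(b, a, x):
    """Direct-form II transposed IIR filtering."""
    y, s1, s2 = [], 0.0, 0.0
    for v in x:
        w = b[0] * v + s1
        s1 = b[1] * v - a[1] * w + s2
        s2 = b[2] * v - a[2] * w
        y.append(w)
    return y

fs, f0 = 1000.0, 100.0                 # 100 Hz ripple in a 50 Hz grid's dq frame
b, a = notch_coeffs(f0, fs)
x = [2.0 + math.sin(2 * math.pi * f0 * i / fs) for i in range(1000)]
y = filt(b, a, x)                      # ripple removed, DC level preserved
```

After the short transient the output settles to the 2.0 DC level, since the filter's zeros sit exactly at the 100 Hz ripple frequency.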

  1. Fault Detection of Reciprocating Compressors using a Model from Principal Component Analysis of Vibrations

    International Nuclear Information System (INIS)

    Ahmed, M; Gu, F; Ball, A D

    2012-01-01

    Traditional vibration monitoring techniques have found it difficult to determine a set of effective diagnostic features due to the high complexity of vibration signals originating from many different impact sources and wide ranges of practical operating conditions. In this paper, Principal Component Analysis (PCA) is used for selecting vibration features and detecting different faults in a reciprocating compressor. Vibration datasets were collected from the compressor under the baseline condition and five common faults: valve leakage, inter-cooler leakage, suction valve leakage, loose drive belt combined with inter-cooler leakage, and loose drive belt combined with suction valve leakage. A model using five PCs has been developed from the baseline data sets, and the presence of faults can be detected by comparing the T² and Q values of the features of fault vibration signals with corresponding thresholds developed from the baseline data. However, the Q-statistic procedure produces the better detection, as it can separate the five faults completely.
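    The T²/Q monitoring scheme described above can be sketched with synthetic data. The data, the number of retained components, and the empirical 99% control limits below are all invented for illustration (the paper retains five PCs on measured vibration features).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "healthy" baseline: 6 measured features driven by 2 latent factors
n, m, k = 500, 6, 2
latent = rng.normal(size=(n, k))
W = rng.normal(size=(k, m))
X = latent @ W + 0.05 * rng.normal(size=(n, m))

mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd

# PCA via SVD of the standardized baseline data
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:k].T                      # loadings (m x k)
lam = (S[:k] ** 2) / (n - 1)      # variances of the retained PCs

def t2_q(x):
    """Hotelling T^2 (within the PC subspace) and Q (squared
    residual off the subspace) for one sample."""
    z = (x - mu) / sd
    t = z @ P
    t2 = float(np.sum(t ** 2 / lam))
    resid = z - t @ P.T
    q = float(resid @ resid)
    return t2, q

# Empirical 99% control limits computed from the baseline itself
base = np.array([t2_q(x) for x in X])
t2_lim = np.quantile(base[:, 0], 0.99)
q_lim = np.quantile(base[:, 1], 0.99)

# Hypothetical fault: a bias on every sensor pushes the sample
# off the 2-D healthy subspace, so its Q value trips the limit
fault = X[0] + np.ones(m)
t2_f, q_f = t2_q(fault)
```

The biased sample leaves the subspace spanned by the baseline's principal components, so its Q statistic exceeds the baseline-derived threshold, which is the detection mechanism the abstract describes.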

  2. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier against the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations, because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate the architecture of an un-weathered, perfectly exposed clay fault zone and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built in the GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30 to 50 m long and 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections into different intervals within the fault, using the SIMFIP probe, to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  3. HAVmS: Highly Available Virtual Machine Computer System Fault Tolerant with Automatic Failback and Close to Zero Downtime

    Directory of Open Access Journals (Sweden)

    Memmo Federici

    2014-12-01

    Full Text Available In scientific computing, systems often manage computations that require continuous acquisition of satellite data and the management of large databases, as well as the execution of analysis software and simulation models (e.g. Monte Carlo or molecular-dynamics cell simulations) which may require several weeks of continuous run. These systems, consequently, should ensure continuity of operation even in case of serious faults. HAVmS (High Availability Virtual machine System) is a highly available, fault-tolerant system with zero downtime in case of fault. It is based on the use of virtual machines and implemented by two servers with similar characteristics. Thanks to the developed software solutions, HAVmS is unique in its kind, since it automatically fails back once faults have been fixed. The system has been designed to be used both with professional and inexpensive hardware, and it supports the simultaneous execution of multiple services such as web, mail, computing and administrative services, uninterrupted computing, and database management. Finally, the system is cost-effective, adopting exclusively open-source solutions, and is easily manageable and suited for general use.

  4. Software fault detection and recovery in critical real-time systems: An approach based on loose coupling

    International Nuclear Information System (INIS)

    Alho, Pekka; Mattila, Jouni

    2014-01-01

    Highlights: •We analyze fault tolerance in mission-critical real-time systems. •A decoupled architectural model can be used to implement fault tolerance. •Prototype implementation for a remote handling control system and service manager. •Recovery from transient faults by restarting services. -- Abstract: Remote handling (RH) systems are used to inspect, make changes to, and maintain components in the ITER machine and as such are an example of a mission-critical system. Failure in a critical system may cause damage, significant financial losses and loss of experiment runtime, making dependability one of its most important properties. However, even if the software for RH control systems has been developed using best practices, the system might still fail due to undetected faults (bugs), hardware failures, etc. Critical systems therefore need the capability to tolerate faults and resume operation after their occurrence. However, the design of effective fault detection and recovery mechanisms poses a challenge due to timeliness requirements, growth in scale, and complex interactions. In this paper we evaluate the effectiveness of a service-oriented architectural approach to fault tolerance in mission-critical real-time systems, using a prototype implementation for service management with an experimental RH control system and an industrial manipulator. The fault tolerance is based on exploiting the high level of decoupling between services to recover from transient faults by service restarts. In case the recovery process is not successful, the system can still be used if the fault was not in a critical software module.

  5. Software fault detection and recovery in critical real-time systems: An approach based on loose coupling

    Energy Technology Data Exchange (ETDEWEB)

    Alho, Pekka, E-mail: pekka.alho@tut.fi; Mattila, Jouni

    2014-10-15

    Highlights: •We analyze fault tolerance in mission-critical real-time systems. •A decoupled architectural model can be used to implement fault tolerance. •Prototype implementation for a remote handling control system and service manager. •Recovery from transient faults by restarting services. -- Abstract: Remote handling (RH) systems are used to inspect, make changes to, and maintain components in the ITER machine and as such are an example of a mission-critical system. Failure in a critical system may cause damage, significant financial losses and loss of experiment runtime, making dependability one of its most important properties. However, even if the software for RH control systems has been developed using best practices, the system might still fail due to undetected faults (bugs), hardware failures, etc. Critical systems therefore need the capability to tolerate faults and resume operation after their occurrence. However, the design of effective fault detection and recovery mechanisms poses a challenge due to timeliness requirements, growth in scale, and complex interactions. In this paper we evaluate the effectiveness of a service-oriented architectural approach to fault tolerance in mission-critical real-time systems, using a prototype implementation for service management with an experimental RH control system and an industrial manipulator. The fault tolerance is based on exploiting the high level of decoupling between services to recover from transient faults by service restarts. In case the recovery process is not successful, the system can still be used if the fault was not in a critical software module.
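    The restart-based recovery described above can be sketched as a small supervisor loop. The service class, the fault counts, and the restart budget are invented for illustration; a production supervisor would add back-off delays, health checks, and escalation when the budget is exhausted.

```python
class Service:
    """Toy service that fails transiently on its first `fail_times` starts."""
    def __init__(self, name, fail_times):
        self.name = name
        self.fail_times = fail_times
        self.starts = 0

    def start(self):
        self.starts += 1
        if self.starts <= self.fail_times:
            raise RuntimeError(f"{self.name}: transient fault")

def supervise(service, max_restarts):
    """Restart-on-failure recovery: True if the service came up within
    the restart budget, False if the fault persists."""
    for _attempt in range(max_restarts + 1):
        try:
            service.start()
            return True
        except RuntimeError:
            continue   # a real supervisor would log and back off here
    return False

transient = Service("rh-controller", fail_times=2)   # recovers on 3rd start
persistent = Service("sim-model", fail_times=99)     # never recovers in budget
ok1 = supervise(transient, max_restarts=3)
ok2 = supervise(persistent, max_restarts=3)
```

This mirrors the paper's observation: a transient fault is cleared by a restart, while a persistent fault exhausts the budget and must be handled by degrading or isolating the affected service.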

  6. Fault Tolerant Control of Wind Turbines

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Kinnaert, Michel

    2013-01-01

    This paper presents a test benchmark model for the evaluation of fault detection and accommodation schemes. This benchmark model deals with the wind turbine on a system level, and it includes sensor, actuator, and system faults, namely faults in the pitch system, the drive train, the generator, and the converter system. Since it is a system-level model, the converter and pitch system models are simplified because these are controlled by internal controllers working at higher frequencies than the system model. The model represents a three-bladed pitch-controlled variable-speed wind turbine with a nominal power...

  7. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems.

  8. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    Science.gov (United States)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  9. A dependability modeling of software under memory faults for digital system in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, J. G.; Seong, P. H.

    1997-01-01

    In this work, an analytic approach to the dependability of software in the operational phase is suggested, with special attention to the effects of hardware faults on software behavior. The hardware faults considered are memory faults, and the dependability measure in question is reliability. The model is based on simple reliability theory and on graph theory, which represents the software as a graph composed of nodes and arcs. Through proper transformation, the graph can be reduced to a simple two-node graph, from which the software reliability is derived. Using this model, we predict the reliability of application software in a digital system (ILS) in a nuclear power plant and show the sensitivity of the software reliability to the major physical parameters that affect software failure in the normal operation phase. We also found that the effects of hardware faults on software failure should be considered to predict software dependability accurately in the operational phase, especially for software that is executed frequently. This modeling method is particularly attractive for medium-size programs such as microprocessor-based nuclear safety logic programs. (author)
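    The path-based reliability idea in the abstract above can be sketched with a toy usage-profile calculation; all node names, reliabilities, and path probabilities below are invented for illustration:

```python
def path_reliability(node_rel, path):
    """Reliability of one execution path = product of node reliabilities."""
    r = 1.0
    for node in path:
        r *= node_rel[node]
    return r

def software_reliability(node_rel, paths, path_prob):
    """Usage-profile-weighted reliability over execution paths."""
    return sum(path_prob[p] * path_reliability(node_rel, paths[p])
               for p in paths)

node_rel = {"init": 0.999, "loop": 0.995, "io": 0.990}  # per-node reliability
paths = {"normal": ["init", "loop", "io"], "short": ["init", "io"]}
path_prob = {"normal": 0.8, "short": 0.2}               # usage profile
R = software_reliability(node_rel, paths, path_prob)
```

    Memory-fault effects would enter this sketch by degrading the per-node reliabilities of the modules whose code or data occupy the affected memory.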

  10. From Geodetic Imaging of Seismic and Aseismic Fault Slip to Dynamic Modeling of the Seismic Cycle

    Science.gov (United States)

    Avouac, Jean-Philippe

    2015-05-01

    Understanding the partitioning of seismic and aseismic fault slip is central to seismotectonics as it ultimately determines the seismic potential of faults. Thanks to advances in tectonic geodesy, it is now possible to develop kinematic models of the spatiotemporal evolution of slip over the seismic cycle and to determine the budget of seismic and aseismic slip. Studies of subduction zones and continental faults have shown that aseismic creep is common and sometimes prevalent within the seismogenic depth range. Interseismic coupling is generally observed to be spatially heterogeneous, defining locked patches of stress accumulation, to be released in future earthquakes or aseismic transients, surrounded by creeping areas. Clay-rich tectonites, high temperature, and elevated pore-fluid pressure seem to be key factors promoting aseismic creep. The generally logarithmic time evolution of afterslip is a distinctive feature of creeping faults that suggests a logarithmic dependency of fault friction on slip rate, as observed in laboratory friction experiments. Most faults can be considered to be paved with interlaced patches where the friction law is either rate-strengthening, inhibiting seismic rupture propagation, or rate-weakening, allowing for earthquake nucleation. The rate-weakening patches act as asperities on which stress builds up in the interseismic period; they might rupture collectively in a variety of ways. The pattern of interseismic coupling can help constrain the return period of the maximum-magnitude earthquake based on the requirement that seismic and aseismic slip sum to match long-term slip. Dynamic models of the seismic cycle based on this conceptual model can be tuned to reproduce geodetic and seismological observations. The promise and pitfalls of using such models to assess seismic hazard are discussed.
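    The slip-budget constraint mentioned above (seismic plus aseismic slip must match the long-term rate) supports a back-of-envelope return-period estimate; the sketch below uses the standard Hanks-Kanamori moment-magnitude relation, and all fault numbers are purely illustrative:

```python
def seismic_moment(mw):
    """Scalar moment in N*m from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.1)

def return_period_years(mw_max, coupling, area_m2, slip_rate_m_yr,
                        shear_modulus=30e9):
    """Years for the interseismically locked moment deficit to reach the
    moment of the maximum-magnitude event:
    deficit rate = mu * A * coupling * v."""
    deficit_rate = shear_modulus * area_m2 * coupling * slip_rate_m_yr
    return seismic_moment(mw_max) / deficit_rate

# Illustrative: Mw 8 event, 50%-coupled 100 km x 50 km fault, 40 mm/yr.
T = return_period_years(8.0, 0.5, 100e3 * 50e3, 0.04)
```
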

  11. Fault strength in the Marmara region inferred from the geometry of the principal stress axes and fault orientations: A case study for the Prince's Islands fault segment

    Science.gov (United States)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

    The general consensus based on historical earthquake data indicates that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area, considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate-sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment translates the motion between the two E-W trending branches of the North Anatolian fault zone: one extending from the Gulf of Izmit towards the Çınarcık basin, and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter: a maximum compressive stress that is fault-normal or fault-parallel may be a necessary and sufficient condition for creep. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In this presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak fault model.
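    The stress-resolution relation invoked above can be illustrated with a minimal 2-D Mohr-circle calculation (stress values are arbitrary): when the sigma-1 trend is close to the fault strike, the resolved shear stress, and hence the apparent friction, becomes small:

```python
import math

def resolved_stresses(sigma1, sigma3, theta_deg):
    """Normal and shear stress on a plane whose normal makes an angle
    theta_deg with the sigma-1 axis (2-D, compression positive)."""
    t = math.radians(theta_deg)
    mean = 0.5 * (sigma1 + sigma3)
    dev = 0.5 * (sigma1 - sigma3)
    return mean + dev * math.cos(2 * t), abs(dev * math.sin(2 * t))

def apparent_friction(sigma1, sigma3, theta_deg):
    """Ratio of resolved shear to normal stress: the friction coefficient
    the fault would need for frictional failure in this geometry."""
    sigma_n, tau = resolved_stresses(sigma1, sigma3, theta_deg)
    return tau / sigma_n

# sigma-1 trending close to the fault strike (normal ~85 deg from sigma-1)
weak = apparent_friction(100.0, 40.0, 85.0)
# near-optimal orientation for frictional sliding
strong = apparent_friction(100.0, 40.0, 60.0)
```
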

  12. An Analytical Model for Assessing Stability of Pre-Existing Faults in Caprock Caused by Fluid Injection and Extraction in a Reservoir

    Science.gov (United States)

    Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin

    2016-07-01

    Induced seismicity and fault reactivation associated with fluid injection and depletion have been reported in hydrocarbon, geothermal, and waste-fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full-space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept in linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in surrounding rocks. To characterize the fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to fluid pressure changes per unit within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model captures the mechanical behavior of the whole fault, unlike conventional methodologies. The proposed method can be applied to engineering cases involving injection and depletion within a reservoir owing to its efficient computational implementation.
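    The critical-pressure idea can be sketched with a plain Coulomb criterion (the paper's fault reactivation factor additionally accounts for fault-tip stress concentrations and fault length, which this simple estimate ignores; all values are illustrative):

```python
def critical_pressure_change(tau, sigma_n, p0, mu=0.6, cohesion=0.0):
    """Pore-pressure increase that brings a fault to Coulomb failure:
    slip when tau >= cohesion + mu * (sigma_n - p)."""
    return sigma_n - p0 - (tau - cohesion) / mu

# Illustrative stresses in Pa: 20 MPa shear, 60 MPa normal, 20 MPa ambient.
dp_crit = critical_pressure_change(tau=20e6, sigma_n=60e6, p0=20e6)
```
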

  13. Effect of Pore Pressure on Slip Failure of an Impermeable Fault: A Coupled Micro Hydro-Geomechanical Model

    Science.gov (United States)

    Yang, Z.; Juanes, R.

    2015-12-01

    The geomechanical processes associated with subsurface fluid injection/extraction are of central importance for many industrial operations related to energy and water resources. However, the mechanisms controlling the stability and slip motion of a pre-existing geologic fault remain poorly understood and are critical for the assessment of seismic risk. In this work, we develop a coupled hydro-geomechanical model to investigate the effect of injection-induced pressure perturbation on the slip behavior of a sealing fault. The model couples single-phase flow in the pores and mechanics of the solid phase. Granular packs (see example in Fig. 1a) are numerically generated in which the grains can be either bonded or not, depending on the degree of cementation. A pore network is extracted for each granular pack, with pore body volumes and pore throat conductivities calculated rigorously from the geometry of the local pore space. The pore fluid pressure is solved via an explicit scheme, taking into account the deformation of the solid matrix. The mechanics part of the model is solved using the discrete element method (DEM). We first test the validity of the model against the classical one-dimensional consolidation problem, for which an analytical solution exists. We then demonstrate the ability of the coupled model to reproduce rock deformation behavior measured in triaxial laboratory tests under the influence of pore pressure. We proceed to study fault stability in the presence of a pressure discontinuity across the impermeable fault, which is implemented as a plane whose intersected pore throats are deactivated, thus obstructing fluid flow (Fig. 1b, c). We focus on the onset of shear failure along pre-existing faults. We discuss the fault stability criterion in light of the numerical results obtained from the DEM simulations coupled with pore fluid flow. The implications for how faults should be treated in a large-scale continuum model are also presented.

  14. Faults, fluids and friction : effect of pressure solution and phyllosilicates on fault slip behaviour, with implications for crustal rheology

    NARCIS (Netherlands)

    Bos, B.

    2000-01-01

    In order to model the mechanics of motion and earthquake generation on large crustal fault zones, a quantitative description of the rheology of fault zones is prerequisite. In the past decades, crustal strength has been modeled using a brittle or frictional failure law to represent fault slip

  15. Simulation of Co-Seismic Off-Fault Stress Effects: Influence of Fault Roughness and Pore Pressure Coupling

    Science.gov (United States)

    Fälth, B.; Lund, B.; Hökmark, H.

    2017-12-01

    Aiming at improved safety assessment of geological nuclear waste repositories, we use dynamic 3D earthquake simulations to estimate the potential for co-seismic off-fault distributed fracture slip. Our model comprises a 12.5 x 8.5 km strike-slip fault embedded in a full space continuum where we apply a homogeneous initial stress field. In the reference case (Case 1) the fault is planar and oriented optimally for slip, given the assumed stress field. To examine the potential impact of fault roughness, we also study cases where the fault surface has undulations with self-similar fractal properties. In both the planar and the undulated cases the fault has homogeneous frictional properties. In a set of ten rough fault models (Case 2), the fault friction is equal to that of Case 1, meaning that these models generate lower seismic moments than Case 1. In another set of ten rough fault models (Case 3), the fault dynamic friction is adjusted such that seismic moments on par with that of Case 1 are generated. For the propagation of the earthquake rupture we adopt the linear slip-weakening law and obtain Mw 6.4 in Case 1 and Case 3, and Mw 6.3 in Case 2 (35 % lower moment than Case 1). During rupture we monitor the off-fault stress evolution along the fault plane at 250 m distance and calculate the corresponding evolution of the Coulomb Failure Stress (CFS) on optimally oriented hypothetical fracture planes. For the stress-pore pressure coupling, we assume Skempton's coefficient B = 0.5 as a base case value, but also examine the sensitivity to variations of B. We observe the following: (I) The CFS values, and thus the potential for fracture slip, tend to increase with the distance from the hypocenter. This is in accordance with results by other authors. (II) The highest CFS values are generated by quasi-static stress concentrations around fault edges and around large scale fault bends, where we obtain values of the order of 10 MPa. 
(III) Locally, fault roughness may have a
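    The Coulomb Failure Stress calculation with Skempton-type pore-pressure coupling monitored in these simulations can be sketched as follows (sign convention is compression positive; all stress values are illustrative):

```python
def delta_cfs(d_tau, d_sigma_n, d_sigma_mean, mu=0.6, skempton_b=0.5):
    """dCFS = d_tau - mu * (d_sigma_n - dP), with the undrained
    pore-pressure response dP = B * d_sigma_mean (compression positive)."""
    d_pore = skempton_b * d_sigma_mean
    return d_tau - mu * (d_sigma_n - d_pore)

# A shear-stress increase accompanied by a modest compression increase:
x = delta_cfs(d_tau=2e6, d_sigma_n=1e6, d_sigma_mean=1e6)
```

    With this convention, a larger Skempton coefficient B offsets more of the stabilizing normal-stress increase, raising the CFS change on the hypothetical fracture planes.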

  16. Entropy-Based Voltage Fault Diagnosis of Battery Systems for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2018-01-01

    The battery is a key component and the major fault source in electric vehicles (EVs). Because the power battery is one of the core technologies of EVs, ensuring its safety by making diagnosis more effective and predicting the occurrence of faults is of great significance. This paper proposes a voltage fault diagnosis mechanism using entropy theory, demonstrated on an EV with a multiple-cell battery system in an actual operation situation. The preliminary analysis, after collecting and preprocessing typical data periods from the Operation Service and Management Center for Electric Vehicle (OSMC-EV) in Beijing, shows that overvoltage faults in Li-ion battery cells can be observed from the voltage curves. To further locate abnormal cells and predict faults, an entropy weight method is established to calculate the objective weight, which reduces subjectivity and improves reliability. The result clearly identifies abnormal cell voltages. The proposed diagnostic model can be used for EV real-time diagnosis without laboratory testing methods, and it is more effective than traditional methods based on contrastive analysis.
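    A minimal sketch of the entropy weight method, with made-up voltage samples: a cell whose voltage series departs from near-uniform behavior carries more information (lower normalized entropy) and therefore receives a larger objective weight:

```python
import math

def entropy_weights(columns):
    """columns: dict of indicator name -> list of non-negative samples.
    Returns objective weights via the entropy weight method."""
    n = len(next(iter(columns.values())))
    degree = {}
    for name, xs in columns.items():
        total = sum(xs)
        ps = [x / total for x in xs if x > 0]
        e = -sum(p * math.log(p) for p in ps) / math.log(n)
        degree[name] = 1.0 - e          # divergence degree
    norm = sum(degree.values())
    return {name: d / norm for name, d in degree.items()}

w = entropy_weights({
    "cell_1": [3.70, 3.71, 3.70, 3.71],   # healthy, near-constant voltage
    "cell_2": [3.70, 3.60, 3.30, 2.90],   # abnormal voltage drop
})
```
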

  17. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    Science.gov (United States)

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window values. However, most real systems are nonlinear, which the linear PCA method is not able to handle to a great extent. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, one of the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), which can further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates, smaller FARs, and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data in a decreasing exponential fashion, giving more weight to more recent data. This provides a more accurate estimate of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind a KPCA-based EWMA-GLRT fault detection algorithm is to
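    The EWMA-GLRT idea can be sketched for a scalar residual stream (parameters and threshold are illustrative, not the paper's tuned values): exponential smoothing gives recent residuals more weight, and a squared, variance-normalized statistic is compared against a threshold:

```python
def ewma_glrt(residuals, lam=0.2, sigma=1.0, threshold=10.0):
    """Smooth residuals with exponential weights, then flag a fault when
    the squared, variance-normalized EWMA statistic crosses a threshold.
    Fault-free EWMA variance (asymptotic): sigma^2 * lam / (2 - lam)."""
    var_z = sigma ** 2 * lam / (2.0 - lam)
    z = 0.0
    stats, alarms = [], []
    for r in residuals:
        z = lam * r + (1.0 - lam) * z
        t = z * z / (2.0 * var_z)  # GLRT-style statistic for a mean shift
        stats.append(t)
        alarms.append(t > threshold)
    return stats, alarms

clean = [0.1, -0.2, 0.05, -0.1] * 10           # fault-free residual stream
stats, alarms = ewma_glrt(clean + [3.0] * 20)  # sustained sensor bias
```
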

  18. A Hamiltonian Approach to Fault Isolation in a Planar Vertical Take–Off and Landing Aircraft Model

    Directory of Open Access Journals (Sweden)

    Rodriguez-Alfaro Luis H.

    2015-03-01

    The problem of fault detection and isolation in a class of nonlinear systems having a Hamiltonian representation is considered. In particular, a model of a planar vertical take-off and landing (PVTOL) aircraft with sensor and actuator faults is studied. A Hamiltonian representation is derived from an Euler-Lagrange representation of the system model. In this form, nonlinear decoupling is applied in order to obtain subsystems with (as far as possible) specific fault sensitivity properties. The resulting decoupled subsystem is represented as a Hamiltonian system, and observer-based residual generators are designed. The results are presented through simulations to show the effectiveness of the proposed approach.
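    The underlying PVTOL dynamics (the standard normalized model: x'' = -u1 sin θ + ε u2 cos θ, y'' = u1 cos θ + ε u2 sin θ - 1, θ'' = u2) can be simulated to show how an actuator fault produces a usable residual. This is a crude model-comparison residual for fault detection only, not the paper's Hamiltonian observer design; all parameter values are illustrative:

```python
import math

def simulate(u1, u2, du1=0.0, eps=0.01, dt=0.01, steps=400):
    """Euler-integrate the normalized PVTOL model; an additive thrust
    fault du1 is switched on halfway through the run."""
    x, xd, y, yd, th, thd = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
    heights = []
    for k in range(steps):
        f1 = u1 + (du1 if k >= steps // 2 else 0.0)
        ax = -f1 * math.sin(th) + eps * u2 * math.cos(th)
        ay = f1 * math.cos(th) + eps * u2 * math.sin(th) - 1.0
        x, xd = x + dt * xd, xd + dt * ax
        y, yd = y + dt * yd, yd + dt * ay
        th, thd = th + dt * thd, thd + dt * u2
        heights.append(y)
    return heights

nominal = simulate(u1=1.0, u2=0.0)           # exact hover thrust
faulty = simulate(u1=1.0, u2=0.0, du1=-0.2)  # 20% thrust loss halfway
residual = [abs(a - b) for a, b in zip(faulty, nominal)]
```
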

  19. Which Fault Orientations Occur during Oblique Rifting? Combining Analog and Numerical 3d Models with Observations from the Gulf of Aden

    Science.gov (United States)

    Autin, J.; Brune, S.

    2013-12-01

    Oblique rift systems like the Gulf of Aden are intrinsically three-dimensional. In order to understand the evolution of these systems, one has to decode the fundamental mechanical similarities of oblique rifts. One way to accomplish this is to strip away the complexity generated by inherited fault structures. In doing so, we assume a laterally homogeneous segment of Earth's lithosphere and ask how many different fault populations are generated during oblique extension between initial deformation and final break-up. We combine results of an analog and a numerical model that feature a 3D segment of a layered lithosphere. In both cases, rift evolution is recorded quantitatively in terms of crustal fault geometries. For the numerical model, we adopt a novel post-processing method that allows small-scale crustal fault orientations to be inferred from the surface stress tensor. Both models involve an angle of 40 degrees between the rift normal and the extension direction, which allows comparison to the Gulf of Aden rift system. The resulting spatio-temporal fault pattern of our models shows three normal fault orientations: rift-parallel, extension-orthogonal, and intermediate, i.e., with a direction between the two previous orientations. The rift evolution involves three distinct phases: (i) During the initial rift phase, widespread faulting with intermediate orientation occurs. (ii) Advanced lithospheric necking enables rift-parallel normal faulting at the rift flanks, while strike-slip faulting in the central part of the rift system indicates strain partitioning. (iii) During continental break-up, displacement-orthogonal as well as intermediate faults occur. We compare our results to the structural evolution of the eastern Gulf of Aden. External parts of the rift exhibit intermediate and displacement-orthogonal faults, while rift-parallel faults are present at the rift borders. The ocean-continent transition mainly features intermediate and displacement

  20. Research on Model-Based Fault Diagnosis for a Gas Turbine Based on Transient Performance

    Directory of Open Access Journals (Sweden)

    Detang Zeng

    2018-01-01

    It is essential to monitor and diagnose faults in rotating machinery with a high thrust-weight ratio and complex structure for a variety of industrial applications, for which reliable signal measurements are required. However, the measured values consist of the true values of the parameters, the inertia of measurements, random errors, and systematic errors. Such signals cannot accurately reflect the true performance state and health state of rotating machinery. High-quality, steady-state measurements are necessary for most current diagnostic methods; unfortunately, such measurements are hard to obtain for most rotating machinery. Diagnosis based on transient performance is a useful tool that can potentially solve this problem. A model-based fault diagnosis method for gas turbines based on transient performance is proposed in this paper. The fault diagnosis consists of a dynamic simulation model, a diagnostic scheme, and an optimization algorithm. A high-accuracy, nonlinear, dynamic gas turbine model using a modular modeling method is presented that incorporates thermophysical properties, a component characteristic chart, and system inertia. The startup process is simulated using this model. The consistency between the simulation results and the field operation data shows the validity of the model and the advantages of transient accumulated deviation. In addition, a diagnostic scheme is designed to fulfill this process. Finally, cuckoo search is selected to solve the optimization problem in fault diagnosis. Comparative diagnostic results for a gas turbine before and after washing indicate the improved effectiveness and accuracy of the proposed method using data from transient processes, compared with traditional methods using steady-state data.

  1. Influence of fault asymmetric dislocation on the gravity changes

    Directory of Open Access Journals (Sweden)

    Duan Hurong

    2014-08-01

    A fault is a planar fracture or discontinuity in a volume of rock across which there has been significant displacement as a result of earth movement. Large faults within the Earth's crust result from the action of plate tectonic forces, with the largest forming the boundaries between the plates; energy release associated with rapid movement on active faults is the cause of most earthquakes. The relationship between uneven dislocation and gravity changes was studied using the theoretical idea of a differential fault. Simulated observation values were adopted to deduce the gravity changes with the asymmetric fault model and the Okada model, respectively. In the asymmetric model, the fault dislocation increases continuously from zero at the two end points toward the middle of the fault; in the Okada model, by contrast, the dislocation is constant over the fault length. Numerical simulation experiments for the activities of a strike-slip fault, a dip-slip fault, and an extension fault were carried out, respectively, finding that both the gravity contours and the gravity variation values are consistent whichever of the two models is adopted. The apparent difference lies in the values at the end points: 17.97% for the strike-slip fault, 25.58% for the dip-slip fault, and 24.73% for the extension fault.

  2. Fault diagnosis for engine air path with neural models and classifier ...

    African Journals Online (AJOL)

    A new FDI scheme is developed for automotive engines in this paper. The method uses an independent radial basis function (RBF) neural ... Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults considered are 10-20% changes ...

  3. Using an Earthquake Simulator to Model Tremor Along a Strike Slip Fault

    Science.gov (United States)

    Cochran, E. S.; Richards-Dinger, K. B.; Kroll, K.; Harrington, R. M.; Dieterich, J. H.

    2013-12-01

    We employ the earthquake simulator, RSQSim, to investigate the conditions under which tremor occurs in the transition zone of the San Andreas fault. RSQSim is a computationally efficient method that uses rate- and state-dependent friction to simulate a wide range of event sizes for long time histories of slip [Dieterich and Richards-Dinger, 2010; Richards-Dinger and Dieterich, 2012]. RSQSim has been previously used to investigate slow slip events in Cascadia [Colella et al., 2011; 2012]. Earthquakes, tremor, slow slip, and creep occurrence are primarily controlled by the rate and state constants a and b and slip speed. We will report the preliminary results of using RSQSim to vary fault frictional properties in order to better understand rupture dynamics in the transition zone using observed characteristics of tremor along the San Andreas fault. Recent studies of tremor along the San Andreas fault provide information on tremor characteristics including precise locations, peak amplitudes, duration of tremor episodes, and tremor migration. We use these observations to constrain numerical simulations that examine the slip conditions in the transition zone of the San Andreas Fault. Here, we use the earthquake simulator, RSQSim, to conduct multi-event simulations of tremor for a strike-slip fault modeled on the Cholame section of the San Andreas fault. Tremor was first observed on the San Andreas fault near Cholame, California, near the southern edge of the 2004 Parkfield rupture [Nadeau and Dolenc, 2005]. Since then, tremor has been observed across a 150 km section of the San Andreas with depths between 16-28 km and peak amplitudes that vary by a factor of 7 [Shelly and Hardebeck, 2010]. 
Tremor episodes, comprised of multiple low frequency earthquakes (LFEs), tend to be relatively short, lasting tens of seconds to as long as 1-2 hours [Horstmann et al., in review, 2013]; tremor occurs regularly with some tremor observed almost daily [Shelly and Hardebeck, 2010; Horstmann
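    The rate- and state-dependent friction framework that RSQSim builds on can be sketched with the aging law for state evolution; the parameter values below are generic laboratory-scale numbers, not those used in the simulations:

```python
import math

def steady_state_mu(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady state: mu = mu0 + (a - b) * ln(v / v0); b > a means velocity
    weakening (earthquake-capable), a > b velocity strengthening."""
    return mu0 + (a - b) * math.log(v / v0)

def evolve_state(v, theta0, dc=1e-5, dt=1e-3, steps=10000):
    """Euler-integrate the aging law d(theta)/dt = 1 - v * theta / dc."""
    theta = theta0
    for _ in range(steps):
        theta += dt * (1.0 - v * theta / dc)
    return theta

# A velocity step from v0 to 10*v0 lowers steady-state friction (b > a),
# and the state variable relaxes toward its steady value dc / v:
drop = steady_state_mu(1e-6) - steady_state_mu(1e-5)
theta_ss = evolve_state(v=1e-5, theta0=10.0)
```
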

  4. Transient Analysis of Grid-Connected Wind Turbines with DFIG After an External Short-Circuit Fault

    DEFF Research Database (Denmark)

    Sun, Tao; Chen, Zhe; Blaabjerg, Frede

    2004-01-01

    The fast development of wind power generation brings new requirements for wind turbine integration into the network. After the clearance of an external short-circuit fault, the grid-connected wind turbine should restore its normal operation with minimized power losses. This paper concentrates on transient analysis of variable speed wind turbines with a doubly fed induction generator (DFIG) after an external short-circuit fault. A simulation model of a MW-level variable speed wind turbine with DFIG developed in PSCAD/EMTDC is presented, and the control and protection schemes are described in detail. After the clearance of an external short-circuit fault, the control schemes manage to restore the wind turbine's normal operation, and their performance is demonstrated by simulation results both during the fault and after its clearance.

  5. Faults, fluids and friction : Effect of pressure solution and phyllosilicates on fault slip behaviour, with implications for crustal rheology

    NARCIS (Netherlands)

    Bos, B.

    2000-01-01

    In order to model the mechanics of motion and earthquake generation on large crustal fault zones, a quantitative description of the rheology of fault zones is prerequisite. In the past decades, crustal strength has been modeled using a brittle or frictional failure law to represent fault slip at

  6. Fault tree graphics

    International Nuclear Information System (INIS)

    Bass, L.; Wynholds, H.W.; Porterfield, W.R.

    1975-01-01

    Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling.
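    The quantitative side of fault tree analysis mentioned above can be sketched as a recursive evaluation of AND/OR gates over independent basic events (event names and probabilities are invented for illustration):

```python
def evaluate(node, basic_probs):
    """node: a basic-event name, or ("AND"|"OR", [children])."""
    if isinstance(node, str):
        return basic_probs[node]
    gate, children = node
    ps = [evaluate(c, basic_probs) for c in children]
    prod = 1.0
    if gate == "AND":
        for p in ps:
            prod *= p
        return prod
    for p in ps:                 # OR of independent events
        prod *= (1.0 - p)
    return 1.0 - prod

# Top event: pump fails AND (power lost OR controller fault)
tree = ("AND", ["pump", ("OR", ["power", "controller"])])
top = evaluate(tree, {"pump": 1e-2, "power": 1e-3, "controller": 2e-3})
```
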

  7. Thermal-hydraulic modeling of deaerator and fault detection and diagnosis of measurement sensor

    International Nuclear Information System (INIS)

    Lee, Jung Woon; Park, Jae Chang; Kim, Jung Taek; Kim, Kyung Youn; Lee, In Soo; Kim, Bong Seok; Kang, Sook In

    2003-05-01

    It is important to note that an effective means of assuring the reliability and security of a nuclear power plant is to detect and diagnose faults (failures) as soon and as accurately as possible. The objective of the project is to develop a model-based fault detection and diagnosis (FDD) algorithm for the deaerator and to evaluate its performance. The scope of the work can be classified into two categories: one is a state-space model-based FDD algorithm using an adaptive estimator (AE); the other is an input-output model-based FDD algorithm using an ART neural network. Extensive computer simulations on real data obtained from the Younggwang 3 and 4 FSAR are carried out to evaluate the performance in terms of speed and accuracy.
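    The model-based FDD mechanism can be sketched with a scalar first-order plant and a Luenberger-style observer (a simplification of the adaptive estimator; all plant numbers are invented): a sensor bias shows up as a sudden, persistent residual:

```python
def observer_residuals(measured, a=0.9, b=0.1, gain=0.5, u=1.0):
    """Plant model x[k+1] = a*x[k] + b*u; the observer predicts the next
    measurement and is corrected by the residual r = y - xhat."""
    xhat, residuals = 0.0, []
    for y in measured:
        r = y - xhat
        residuals.append(r)
        xhat = a * xhat + b * u + gain * r
    return residuals

# Simulate the true plant, then add a +0.3 sensor bias halfway through.
x, ys = 0.0, []
for k in range(100):
    x = 0.9 * x + 0.1 * 1.0
    ys.append(x + (0.3 if k >= 50 else 0.0))
residuals = observer_residuals(ys)
```

    In the fault-free half the residual decays geometrically to zero; the bias produces an immediate jump at its onset, which a simple threshold can flag.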

  8. An architecture for fault tolerant controllers

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2005-01-01

    A general architecture for fault tolerant control is proposed. The architecture is based on the (primary) YJBK parameterization of all stabilizing compensators and uses the dual YJBK parameterization to quantify the performance of the fault tolerant system. The approach suggested can be applied ... degradation in the sense of guaranteed degraded performance. A number of fault diagnosis problems, fault tolerant control problems, and feedback control with fault rejection problems are formulated/considered, mainly from a fault modeling point of view. The method is illustrated on a servo example including...

  9. Case-Based Fault Diagnostic System

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2014-01-01

    Nowadays, case-based fault diagnostic (CBFD) systems have become important and widely applied problem solving technologies. They are based on the assumption that “similar faults have similar diagnoses”. On the other hand, CBFD systems still suffer from some limitations. Common ones are: (1) failure of CBFD to produce a diagnosis for new faults that have no similar cases in the case library; (2) limited memory as the number of stored cases in the library increases. The proposed research incorporates a neural network into the case-based system to enable the system to diagnose all the faults. Neural networks have proven successful in classification and diagnosis problems. The suggested system uses the neural network to diagnose the new faults (cases) that cannot be diagnosed by the traditional CBR diagnostic system. Besides, the proposed system can use another neural network to control adding and deleting cases in the library, managing the size of the case library. The suggested system improved the performance of the case-based fault diagnostic system when applied to motor rolling bearings as a case study
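    The hybrid described above can be sketched as case retrieval with a classifier fallback. In this sketch the "neural network" is replaced by a trivial nearest-centroid classifier so the example stays self-contained; the feature vectors, labels and similarity threshold are all illustrative assumptions.

```python
import math

# Case library maps feature vectors to known diagnoses.
case_library = {
    (0.1, 0.9): "outer-race defect",
    (0.8, 0.2): "inner-race defect",
}
# Fallback "network": class centroids standing in for a trained classifier.
centroids = {
    "outer-race defect": (0.1, 0.9),
    "inner-race defect": (0.8, 0.2),
    "cage defect": (0.5, 0.5),
}

def diagnose(features, threshold=0.15):
    # 1) Case retrieval: reuse the diagnosis of the most similar stored case.
    best = min(case_library, key=lambda c: math.dist(c, features))
    if math.dist(best, features) <= threshold:
        return case_library[best]
    # 2) Fallback: no sufficiently similar case exists, so classify instead.
    label = min(centroids, key=lambda c: math.dist(centroids[c], features))
    case_library[features] = label    # retain the new case (library growth)
    return label

print(diagnose((0.12, 0.88)))   # close to a stored case -> retrieved
print(diagnose((0.52, 0.48)))   # novel pattern -> fallback classifier
```

    A second policy (the paper's size-management network) would prune redundant entries from `case_library`; here retention is unconditional for brevity.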

  10. Resistivity structure of Sumatran Fault (Aceh segment) derived from 1-D magnetotelluric modeling

    Science.gov (United States)

    Nurhasan, Sutarno, D.; Bachtiar, H.; Sugiyanto, D.; Ogawa, Y.; Kimata, F.; Fitriani, D.

    2012-06-01

    The Sumatran Fault Zone is the most active fault in Indonesia, resulting from the strike-slip component of the Indo-Australian oblique convergence. With a length of 1900 km, the Sumatran fault is divided into 20 segments, with a slip rate that is small at the southernmost end of Sumatra Island and increases toward its northern end. Several geophysical methods can be used to analyze fault structure, depending on the physical parameter involved, such as seismology, geodesy and electromagnetics. The magnetotelluric method has been widely used in mapping and sounding resistivity distributions because it not only detects resistivity contrasts but also has a penetration range of up to hundreds of kilometers. A magnetotelluric survey was carried out in the Aceh region, with 12 sites in total crossing the Sumatran Fault on the Aceh and Seulimeum segments. Two components of the electric and magnetic fields were recorded for 10 hours on average over the frequency range 320 Hz to 0.01 Hz. Analysis of the pseudosections of phase and apparent resistivity exhibits a vertical low-phase zone flanked on the west and east by high phase, describing the existence of a resistivity contrast in this region. Having rotated the data to the N45°E direction, interpretation was performed using three different 1D MT modeling methods, i.e. Bostick inversion, 1D MT inversion of TM data, and 1D MT inversion of the impedance determinant. By comparison, we conclude that using the TM data only or the impedance determinant in 1D inversion yields a more reliable resistivity structure of the fault than the other method. Based on this result, it is clearly shown that the Sumatran fault is characterized by a vertical resistivity contrast indicating the existence of the Aceh and Seulimeum faults, in good agreement with the geological data.

  11. Data-driven fault mechanics: Inferring fault hydro-mechanical properties from in situ observations of injection-induced aseismic slip

    Science.gov (United States)

    Bhattacharya, P.; Viesca, R. C.

    2017-12-01

    In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior, e.g., Guglielmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; (2) the pore-pressure increase activates slip on a pre-stressed planar fault due to the reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early-time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity).
Therefore, besides producing meaningful and internally consistent

  12. Faults architecture and growth in clay-limestone alternation. Examples in the S-E Basin alternations (France) and numerical modeling

    International Nuclear Information System (INIS)

    Roche, Vincent

    2011-01-01

    The following work has been carried out in the framework of the studies conducted by IRSN in support of its safety evaluation of the geological disposal programme for high- and intermediate-level, long-lived radioactive waste. Such a disposal facility is planned to be hosted by the Callovian-Oxfordian indurated clay formation, between two limestone formations, in the eastern Paris basin, France. Hypothetical faults may cross-cut this layered section, decreasing the clay's containment ability by creating preferential pathways for radioactive solutes towards the limestones. This study aims at characterising fault architecture and normal fault growth in clay/limestone layered sections. Structural analyses and displacement profiles have been carried out on normal faults crossing several-decimetre- to metre-thick sedimentary alternations in the South-Eastern Basin (France), and petrophysical properties have been determined for each layer. The studied faults are simple fault planes or complex fault zones that are significantly controlled by the layering. The analysis of the fault characteristics and the results obtained from numerical models highlight several processes such as fault nucleation, fault restriction, and fault growth through a layered section. Some studied faults nucleated in the limestone layers, without using pre-existing fractures such as joints; according to our numerical analysis, a strong stiffness contrast, a low strength contrast between the limestone and the clay layer, and/or a greater thickness of the clay layer are conditions which favour nucleation of faults in limestone. The range of mechanical properties leading to fault nucleation in one layer type or another was investigated using a 3D modelling approach. After its nucleation, the fault propagates within a homogeneous medium with a constant displacement gradient until its vertical propagation is stopped by a restrictor. The evidenced restrictors are limestone-clay interfaces or faults in clays, sub

  13. Implementation of fuzzy modeling system for faults detection and diagnosis in three phase induction motor drive system

    Directory of Open Access Journals (Sweden)

    Shorouk Ossama Ibrahim

    2015-05-01

    Full Text Available Induction motors have been intensively utilized in industrial applications, mainly due to their efficiency and reliability. It is necessary that these machines work continuously with high performance and reliability, so it is necessary to monitor, detect and diagnose the different faults that these motors face. In this paper an intelligent fault detection and diagnosis scheme for different faults of an induction motor drive system is introduced. The stator currents and the time are introduced as inputs to the proposed fuzzy detection and diagnosis system. The direct torque control (DTC) technique is adopted as a suitable control technique in the drive system, especially in traction applications, such as electric vehicles and subway metros, that use such machines. An intelligent modeling technique is adopted as an identifier for different faults; the proposed model introduces time as an important variable that plays a role both in fault detection and in the decision making for suitable corrective action according to the type of the fault. Experimental results have been obtained to verify the efficiency of the proposed intelligent detector and identifier; a match between the simulated and experimental results has been observed.
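    The fuzzy detection idea, with stator current and elapsed time as inputs, can be sketched with triangular membership functions and a single min-AND rule. The breakpoints and the rule base below are illustrative assumptions, not the tuned values of the proposed system.

```python
# Minimal sketch of a fuzzy fault detector driven by stator current and time.
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fault_degree(current_pu, seconds_since_onset):
    # Fuzzify the inputs (per-unit current; elapsed time in seconds).
    high_current = tri(current_pu, 1.1, 1.6, 2.1)
    persistent   = tri(seconds_since_onset, 0.5, 2.0, 3.5)
    # Rule: IF current is HIGH AND condition is PERSISTENT THEN fault (min-AND).
    return min(high_current, persistent)

print(fault_degree(1.6, 2.0))   # -> 1.0 (fully high and persistent)
print(fault_degree(1.0, 2.0))   # -> 0.0 (normal current)
```

    A real rule base would add rules per fault type (e.g., per-phase current asymmetry) and defuzzify the aggregated output into a corrective action.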

  14. Fault Locating, Prediction and Protection (FLPPS)

    Energy Technology Data Exchange (ETDEWEB)

    Yinger, Robert, J.; Venkata, S., S.; Centeno, Virgilio

    2010-09-30

    One of the main objectives of this DOE-sponsored project was to reduce customer outage time. Fault location, prediction, and protection are the most important aspects of fault management for the reduction of outage time. In the past most of the research and development on power system faults in these areas has focused on transmission systems, and it is not until recently with deregulation and competition that research on power system faults has begun to focus on the unique aspects of distribution systems. This project was planned with three Phases, approximately one year per phase. The first phase of the project involved an assessment of the state-of-the-art in fault location, prediction, and detection as well as the design, lab testing, and field installation of the advanced protection system on the SCE Circuit of the Future located north of San Bernardino, CA. The new feeder automation scheme, with vacuum fault interrupters, will limit the number of customers affected by the fault. Depending on the fault location, the substation breaker might not even trip. Through the use of fast communications (fiber) the fault locations can be determined and the proper fault interrupting switches opened automatically. With knowledge of circuit loadings at the time of the fault, ties to other circuits can be closed automatically to restore all customers except the faulted section. This new automation scheme limits outage time and increases reliability for customers. The second phase of the project involved the selection, modeling, testing and installation of a fault current limiter on the Circuit of the Future. While this project did not pay for the installation and testing of the fault current limiter, it did perform the evaluation of the fault current limiter and its impacts on the protection system of the Circuit of the Future. After investigation of several fault current limiters, the Zenergy superconducting, saturable core fault current limiter was selected for

  15. Statistical fault detection in photovoltaic systems

    KAUST Repository

    Garoudja, Elyes

    2017-05-08

    Faults in photovoltaic (PV) systems, which can result in energy loss, system shutdown or even serious safety breaches, are often difficult to avoid. Fault detection in such systems is imperative to improve their reliability, productivity, safety and efficiency. Here, an innovative model-based fault-detection approach for early detection of shading of PV modules and faults on the direct current (DC) side of PV systems is proposed. This approach combines the flexibility, and simplicity of a one-diode model with the extended capacity of an exponentially weighted moving average (EWMA) control chart to detect incipient changes in a PV system. The one-diode model, which is easily calibrated due to its limited calibration parameters, is used to predict the healthy PV array's maximum power coordinates of current, voltage and power using measured temperatures and irradiances. Residuals, which capture the difference between the measurements and the predictions of the one-diode model, are generated and used as fault indicators. Then, the EWMA monitoring chart is applied on the uncorrelated residuals obtained from the one-diode model to detect and identify the type of fault. Actual data from the grid-connected PV system installed at the Renewable Energy Development Center, Algeria, are used to assess the performance of the proposed approach. Results show that the proposed approach successfully monitors the DC side of PV systems and detects temporary shading.
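    The residual/EWMA scheme can be sketched as follows: residuals between measured and model-predicted output are smoothed by the recursion z_k = λx_k + (1 − λ)z_{k−1} and compared against control limits estimated from fault-free data. The smoothing factor, limit width and simulated fault below are common textbook choices, not the paper's calibrated values.

```python
import numpy as np

def ewma_alarms(residuals, lam=0.2, L=3.0):
    """Flag samples whose EWMA statistic leaves the control limits."""
    r = np.asarray(residuals, dtype=float)
    mu, sigma = r[:20].mean(), r[:20].std(ddof=1)   # fault-free training window
    # Steady-state control limit: mu +/- L * sigma * sqrt(lam / (2 - lam))
    limit = L * sigma * np.sqrt(lam / (2 - lam))
    z, alarms = mu, []
    for k, x in enumerate(r):
        z = lam * x + (1 - lam) * z                  # EWMA recursion
        if abs(z - mu) > limit:
            alarms.append(k)
    return alarms

rng = np.random.default_rng(0)
res = rng.normal(0.0, 0.05, 80)     # healthy residuals (model ~ measurement)
res[50:] += 0.15                    # incipient fault: shifted residual mean
print(ewma_alarms(res)[0])          # first out-of-control sample
```

    The small λ makes the chart sensitive to the persistent, small shifts typical of shading or degradation, at the cost of slower reaction than a raw Shewhart chart.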

  16. Late quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada

    International Nuclear Information System (INIS)

    Brogan, G.E.; Kellogg, K.S.; Terhune, C.L.; Slemmons, D.B.

    1991-01-01

    The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to the amount of horizontal offset of stream channels, and the relationships of both scarps and channels to the ages of different geomorphic surfaces, demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin.

  17. Fault Diagnosis and Fault Tolerant Control with Application on a Wind Turbine Low Speed Shaft Encoder

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Sardi, Hector Eloy Sanchez; Escobet, Teressa

    2015-01-01

    ...tolerant control of wind turbines using a benchmark model. In this paper, the fault diagnosis scheme is improved and integrated with a fault accommodation scheme which enables and disables the individual pitch algorithm based on the fault detection. In this way, the blade and tower loads are not increased...

  18. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    Science.gov (United States)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations on the related hazard, similarly to other countries (e.g. the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e. faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria used to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, a detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g. fault plane exposure) or on indirect investigations (geophysical data) are not sufficient, or even unreliable, for defining the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area where active and capable faults can first be mapped with such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

  19. Seismic Hazard Analysis on a Complex, Interconnected Fault Network

    Science.gov (United States)

    Page, M. T.; Field, E. H.; Milner, K. R.

    2017-12-01

    In California, seismic hazard models have evolved from simple, segmented prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, for example, the Kaikōura earthquake - as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al. 2017) - have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity even in modern models such as UCERF3 may be underestimated, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well-mapped and where it is not - an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.

  20. Active Fault-Tolerant Control for Wind Turbine with Simultaneous Actuator and Sensor Faults

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    Full Text Available The purpose of this paper is to present a novel fault-tolerant tracking control (FTC) strategy with robust fault estimation and compensation for simultaneous actuator and sensor faults. Developing an FTC design method that allows wind turbines to tolerate simultaneous pitch actuator and pitch sensor faults with bounded first time derivatives is a challenge. The paper's key contribution is a descriptor sliding mode method: an auxiliary descriptor state vector composed of the system state vector, the actuator fault vector, and the sensor fault vector is introduced to establish a novel augmented descriptor system, with which the system state can be estimated and the faults reconstructed by designing a descriptor sliding mode observer. Through LMI optimization, stability conditions on the estimation error dynamics are set up to support the determination of the design parameters. With this estimate, a fault-tolerant controller is designed so that the system's stability can be maintained. The effectiveness of the design strategy is verified by implementing the controller on the National Renewable Energy Laboratory's 5-MW nonlinear, high-fidelity wind turbine model (FAST) and simulating it in MATLAB/Simulink.

  1. An integrated model for the assessment of unmitigated fault events in ITER's superconducting magnets

    Energy Technology Data Exchange (ETDEWEB)

    McIntosh, S., E-mail: simon.mcintosh@ccfe.ac.uk [Culham Centre for Fusion Energy, Culham Science Center, Abingdon OX14 3DB, Oxfordshire (United Kingdom); Holmes, A. [Marcham Scientific Ltd., Sarum House, 10 Salisbury Rd., Hungerford RG17 0LH, Berkshire (United Kingdom); Cave-Ayland, K.; Ash, A.; Domptail, F.; Zheng, S.; Surrey, E.; Taylor, N. [Culham Centre for Fusion Energy, Culham Science Center, Abingdon OX14 3DB, Oxfordshire (United Kingdom); Hamada, K.; Mitchell, N. [ITER Organization, Magnet Division, St Paul Lez Durance Cedex (France)

    2016-11-01

    A large amount of energy is stored in the ITER superconducting magnet system. Faults which initiate a discharge are typically mitigated by quickly transferring the stored magnetic energy away for dissipation through a bank of resistors. In an extremely unlikely occurrence, an unmitigated fault event represents a potentially severe discharge of energy into the coils and the surrounding structure. A new simulation tool has been developed for the detailed study of these unmitigated fault events. The tool integrates: the propagation of multiple quench fronts initiated by an initial fault or by subsequent coil heating; the 3D convection and conduction of heat through the magnet structure; and the 3D conduction of current and Ohmic heating, both along the conductor and via alternate pathways generated by arcing or material melt. Arcs linking broken sections of conductor or separate turns are simulated with a new unconstrained arc model to balance electrical current paths and heat generation within the arc column in the multi-physics model. The influence of the high Lorentz forces present is taken into account. Simulation results for an unmitigated fault in a poloidal field coil are presented.

  2. Thermo-Hydro-Micro-Mechanical 3D Modeling of a Fault Gouge During Co-seismic Slip

    Science.gov (United States)

    Papachristos, E.; Stefanou, I.; Sulem, J.; Donze, F. V.

    2017-12-01

    A coupled Thermo-Hydro-Micro-Mechanical (THMM) model based on the Discrete Element Method (DEM) is presented for studying evolving fault gouge properties during pre- and co-seismic slip. Modeling the behavior of the fault gouge at the microscale is expected to improve our understanding of the various mechanisms that lead to slip weakening and finally control the transition from aseismic to seismic slip. The gouge is considered as a granular material of spherical particles [1]. Upon loading, the interactions between particles follow a frictional behavior and explicit dynamics. Using regular triangulation, a pore network is defined by the physical pore space between the particles. The network is saturated by a compressible fluid, and flow takes place following the Stokes equations. Particle movement leads to pore deformation and thus to local pore pressure increase. Forces exerted by the fluid onto the particles are calculated using mid-step velocities. The fluid forces are then added to the contact forces resulting from the mechanical interactions before the next step. The same semi-implicit, two-way iterative coupling is used for heat exchange through conduction. Simple tests have been performed to verify the model against analytical solutions and experimental results. Furthermore, the model was used to study the effect of temperature on the evolution of effective stress in the system and to highlight the role of thermal pressurization during seismic slip [2, 3]. The analyses are expected to give grounds for enhancing the current state-of-the-art constitutive models of fault friction and to shed light on the evolution of fault zone properties during seismic slip. [1] Omid Dorostkar, Robert A. Guyer, Paul A. Johnson, Chris Marone, and Jan Carmeliet. On the role of fluids in stick-slip dynamics of saturated granular fault gouge using a coupled computational fluid dynamics-discrete element approach. Journal of Geophysical Research: Solid Earth, 122

  3. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    Science.gov (United States)

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small-magnitude earthquakes, quantifying mechanism variability through the angular difference between focal mechanisms. Closely spaced earthquakes (small interhypocentral distance) tend to have similar mechanisms: while faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50-km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  4. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.; Jonsson, Sigurjon; Sudhaus, H.; Baumann, C.

    2012-01-01

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due

  5. Three-Dimensional Growth of Flexural Slip Fault-Bend and Fault-Propagation Folds and Their Geomorphic Expression

    Directory of Open Access Journals (Sweden)

    Asdrúbal Bernal

    2018-03-01

    Full Text Available The three-dimensional growth of fault-related folds is known to be an important process during the development of compressive mountain belts. However, comparatively little is known concerning the manner in which fold growth is expressed in topographic relief and local drainage networks. Here we report results from a coupled kinematic and surface process model of fault-related folding. We consider flexural slip fault-bend and fault-propagation folds that grow in both the transport and strike directions, linked to a surface process model that includes bedrock channel development and hillslope diffusion. We investigate various modes of fold growth under identical surface process conditions and critically analyse their geomorphic expression. Fold growth results in the development of steep forelimbs and gentler, wider backlimbs resulting in asymmetric drainage basin development (smaller basins on forelimbs, larger basins on backlimbs. However, topographies developed above fault-propagation folds are more symmetric than those developed above fault-bend folds as a result of their different forelimb kinematics. In addition, the surface expression of fault-bend and fault-propagation folds depends both on the slip distribution along the fault and on the style of fold growth. When along-strike plunge is a result of slip events with gently decreasing slip towards the fault tips (with or without lateral propagation, large plunge-panel drainage networks are developed at the expense of backpanel (transport-opposing and forepanel (transport-facing drainage basins. In contrast, if the fold grows as a result of slip events with similar displacements along strike, plunge-panel drainage networks are poorly developed (or are transient features of early fold growth and restricted to lateral fold terminations, particularly when the number of propagation events is small. The absence of large-scale plunge-panel drainage networks in natural examples suggests that the

  6. Control model design to limit DC-link voltage during grid fault in a dfig variable speed wind turbine

    Science.gov (United States)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

    This paper presents a control model design capable of inhibiting the phenomenal rise in the DC-link voltage during a grid-fault condition in a variable speed wind turbine. As against power circuit protection strategies with inherent limitations in fault ride-through capability, a control circuit algorithm is proposed that is capable of limiting the DC-link voltage rise, which in turn bears dynamics that directly influence the characteristics of the rotor voltage, especially during grid faults. The model results so obtained compare favorably with the simulation results obtained in a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with near accuracy, the nature of DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current-control bandwidth and the filter inductance.
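    The DC-link voltage rise itself follows from a simple energy balance on the link capacitor: when a grid fault blocks power export, the rotor-side surplus integrates as C·V·dV/dt = P_in − P_out. The sketch below illustrates only this mechanism; the capacitance, power levels and time step are illustrative assumptions, not the paper's model parameters.

```python
# Energy-balance sketch of DC-link charging during a grid fault.
def dc_link_voltage(v0, p_in, p_out, C, dt, steps):
    """Integrate C*V*dV/dt = P_in - P_out as an update on V^2 (link energy)."""
    v = v0
    for _ in range(steps):
        v = (v * v + 2.0 * dt * (p_in - p_out) / C) ** 0.5
    return v

# 1100 V link, 20 mF capacitor; during the fault the grid side exports nothing,
# so 50 kW of rotor-side power charges the capacitor for 100 ms.
v_fault = dc_link_voltage(v0=1100.0, p_in=50e3, p_out=0.0, C=0.02, dt=1e-3, steps=100)
print(round(v_fault, 1))   # -> 1307.7
```

    This uncontrolled ~19% overvoltage in 100 ms is what the paper's control algorithm (or, in hardware-based schemes, a chopper/crowbar) is meant to suppress.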

  7. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip.

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.
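    The slip-onset criterion described here combines the effective stress principle with the Coulomb failure criterion; the sketch below, with illustrative stress, pressure and friction values (not those of the study), shows why the higher-pressure side of a pressure discontinuity reaches failure first.

```python
# Coulomb failure check under the effective stress principle.
def coulomb_slip(shear_stress, normal_stress, pore_pressure, mu=0.6, cohesion=0.0):
    """True if shear stress exceeds frictional strength on the effective normal stress."""
    effective_normal = normal_stress - pore_pressure
    return shear_stress > cohesion + mu * effective_normal

tau, sigma_n = 30.0, 60.0        # MPa, far-field loading resolved on the fault
p_left, p_right = 22.0, 8.0      # MPa, discontinuous pressure across the gouge layer

print(coulomb_slip(tau, sigma_n, p_left))    # -> True: higher-pressure side slips first
print(coulomb_slip(tau, sigma_n, p_right))   # -> False: lower-pressure side still locked
```

    The grain-scale result in the abstract refines this picture: with a discontinuity, onset is governed by the maximum pressure on either side of the gouge layer rather than by a single averaged value.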

  9. The pulsed migration of hydrocarbons across inactive faults

    Directory of Open Access Journals (Sweden)

    S. D. Harris

    1999-01-01

    Full Text Available Geological fault zones are usually assumed to influence hydrocarbon migration either as high permeability zones which allow enhanced along- or across-fault flow or as barriers to the flow. An additional important migration process inducing along- or across-fault migration can be associated with dynamic pressure gradients. Such pressure gradients can be created by earthquake activity and are suggested here to allow migration along or across inactive faults which 'feel' the quake-related pressure changes; i.e. the migration barriers can be removed on inactive faults when activity takes place on an adjacent fault. In other words, a seal is viewed as a temporary retardation barrier which leaks when a fault related fluid pressure event enhances the buoyancy force and allows the entry pressure to be exceeded. This is in contrast to the usual model where a seal leaks because an increase in hydrocarbon column height raises the buoyancy force above the entry pressure of the fault rock. Under the new model hydrocarbons may migrate across the inactive fault zone for some time period during the earthquake cycle. Numerical models of this process are presented to demonstrate the impact of this mechanism and its role in filling traps bounded by sealed faults.

  10. Fault morphology of the Iyo Fault, the Median Tectonic Line Active Fault System

    OpenAIRE

    後藤, 秀昭

    1996-01-01

    In this paper, we investigated the various fault features of the Iyo fault and depicted fault lines on detailed topographic maps. The results of this paper are summarized as follows: 1) Distinct evidence of right-lateral movement is continuously discernible along the Iyo fault. 2) Active fault traces are remarkably linear, suggesting that the angle of the fault plane is high. 3) The Iyo fault can be divided into four segments by jogs between left-stepping traces. 4) The mean slip rate is 1.3 ~ ...

  11. Statistical fault detection in photovoltaic systems

    KAUST Repository

    Garoudja, Elyes; Harrou, Fouzi; Sun, Ying; Kara, Kamel; Chouder, Aissa; Silvestre, Santiago

    2017-01-01

    and efficiency. Here, an innovative model-based fault-detection approach for early detection of shading of PV modules and faults on the direct current (DC) side of PV systems is proposed. This approach combines the flexibility, and simplicity of a one-diode model

  12. Rupture Complexity Promoted by Damaged Fault Zones in Earthquake Cycle Models

    Science.gov (United States)

    Idini, B.; Ampuero, J. P.

    2017-12-01

    Pulse-like ruptures tend to be more sensitive to stress heterogeneity than crack-like ones. For instance, a stress barrier can more easily stop the propagation of a pulse than that of a crack. While crack-like ruptures tend to homogenize the stress field within their rupture area, pulse-like ruptures develop heterogeneous stress fields. This feature of pulse-like ruptures can potentially lead to complex seismicity with a wide range of magnitudes akin to the Gutenberg-Richter law. Previous models required a friction law with severe velocity-weakening to develop pulses and complex seismicity. Recent dynamic rupture simulations show that the presence of a damaged zone around a fault can induce pulse-like rupture, even under a simple slip-weakening friction law, although the mechanism depends strongly on initial stress conditions. Here we aim to test whether fault zone damage is a sufficient ingredient to generate complex seismicity. In particular, we investigate the effects of damaged fault zones on the emergence and sustainability of pulse-like ruptures throughout multiple earthquake cycles, regardless of initial conditions. We consider a fault bisecting a homogeneous low-rigidity layer (the damaged zone) embedded in an intact medium. We conduct a series of earthquake cycle simulations to investigate the effects of two fault zone properties: damage level D and thickness H. The simulations are based on classical rate-and-state friction, the quasi-dynamic approximation and the software QDYN (https://github.com/ydluo/qdyn). Selected fully dynamic simulations are also performed with a spectral element method. Our numerical results show the development of complex rupture patterns in some damaged fault configurations, including events of different sizes, as well as pulse-like, multi-pulse and hybrid pulse-crack ruptures. We further apply elasto-static theory to assess how D and H affect ruptures with constant stress drop, in particular the flatness of their slip profile

  13. Rich Interfaces for Dependability: Compositional Methods for Dynamic Fault Trees and Arcade models

    NARCIS (Netherlands)

    Boudali, H.; Crouzen, Pepijn; Haverkort, Boudewijn R.H.M.; Kuntz, G.W.M.; Stoelinga, Mariëlle Ida Antoinette

    This paper discusses two behavioural interfaces for reliability analysis: dynamic fault trees, which model the system reliability in terms of the reliability of its components and Arcade, which models the system reliability at an architectural level. For both formalisms, the reliability is analyzed

  14. Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xiaolong Cui

    2014-08-01

    Full Text Available As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting the negotiated SLA, in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is compounded by the increasing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emission on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing, which seamlessly addresses failure at scale while minimizing energy consumption and reducing its impact on the environment. The basic tenet of the model is to associate a suite of shadow processes that execute concurrently with the main process, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault-tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction and is better suited to compute-intensive execution models, where profit increases of up to 30% can be achieved due to reduced energy consumption.
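
    The energy tradeoff at the heart of Shadow Replication can be illustrated with a toy model; the cube-law power assumption (a common DVFS approximation) and all numbers below are illustrative, not the paper's evaluation framework.

```python
# Toy energy comparison: traditional replication vs. Shadow Replication.
# Assumes dynamic power scales with the cube of execution speed and that
# a job of size `work` completes in time work / speed.

def energy(speed, duration):
    return speed ** 3 * duration  # power ~ speed^3 (illustrative assumption)

def traditional_replication(work, speed=1.0):
    # Main process and replica both run at full speed for the whole job.
    t = work / speed
    return 2 * energy(speed, t)

def shadow_replication(work, speed=1.0, shadow_speed=0.5):
    # The shadow runs slowly alongside the main process; when the main
    # process succeeds (the common case), the slow shadow wastes far less
    # energy than a full-speed replica would.
    t = work / speed
    return energy(speed, t) + energy(shadow_speed, t)

w = 100.0
print(traditional_replication(w))  # 200.0
print(shadow_replication(w))       # 112.5
```

    Even this crude model shows why slowing the shadow pays off in the failure-free case; the paper's schemes additionally speed the shadow up after a failure to still meet the SLA.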

  15. Considering the potential effect of faulting on regional-scale groundwater flow: an illustrative example from Australia's Great Artesian Basin

    Science.gov (United States)

    Smerdon, Brian D.; Turnadge, Chris

    2015-08-01

    Hydraulic head measurements in the Great Artesian Basin (GAB), Australia, began in the early 20th century, and despite subsequent decades of data collection, a well-accepted smoothed potentiometric surface has continually assumed a contiguous aquifer system. Numerical modeling was used to produce alternative potentiometric surfaces for the Cadna-owie-Hooray aquifers with and without the effect of major faults. Where a fault created a vertical offset between the aquifers and was juxtaposed with an aquitard, it was assumed to act as a lateral barrier to flow. Results demonstrate notable differences in the central portion of the study area between potentiometric surfaces including faults and those without faults. Explicitly considering faults results in a 25-50 m difference where faults are perpendicular to the regional flow path, compared to disregarding faults. These potential barriers create semi-isolated compartments where lateral groundwater flow may be diminished or absent. Groundwater management in the GAB relies on maintaining certain hydraulic head conditions and, hence, a potentiometric surface. The presence of faulting has two implications for management: (1) a change in the inferred hydraulic heads (and associated fluxes) at the boundaries of regulatory jurisdictions; and (2) assessment of large-scale extractions occurring at different locations within the GAB.

  16. RECENT GEODYNAMICS OF FAULT ZONES: FAULTING IN REAL TIME SCALE

    Directory of Open Access Journals (Sweden)

    Yu. O. Kuzmin

    2014-01-01

    -block’ dilemma is stated for the recent geodynamics of faults in view of interpretations of monitoring results. The matter is that either a block is an active element generating anomalous recent deformation and a fault is a ‘passive’ element, or a fault zone itself is a source of anomalous displacements and blocks are passive elements, i.e. host medium. ‘Paradoxes’ of high and low strain velocities are explainable under the concept that the anomalous recent geodynamics is caused by parametric excitation of deformation processes in fault zones in conditions of a quasi-static regime of loading. Based on empirical data, it is revealed that recent deformation processes migrate in fault zones both in space and time. Two types of waves, ‘inter-fault’ and ‘intra-fault’, are described. A phenomenological model of auto-wave deformation processes is proposed; the model is consistent with monitoring data. A definition of ‘pseudo-wave’ is introduced. Arrangements to establish a system for monitoring deformation auto-waves are described. When applied to geological deformation monitoring, new measurement technologies are associated with result identification problems, including ‘ratios of uncertainty’ such as ‘anomaly’s dimensions – density of monitoring stations’ and ‘anomaly’s duration – details of measurements in time’. It is shown that the SAR interferometry method does not provide an unambiguous determination of ground surface displacement vectors.

  17. Testing Pixel Translation Digital Elevation Models to Reconstruct Slip Histories: An Example from the Agua Blanca Fault, Baja California, Mexico

    Science.gov (United States)

    Wilson, J.; Wetmore, P. H.; Malservisi, R.; Ferwerda, B. P.; Teran, O.

    2012-12-01

    We use recently collected slip vector and total offset data from the Agua Blanca fault (ABF) to constrain a pixel translation digital elevation model (DEM) to reconstruct the slip history of this fault. This model was constructed using a Perl script that reads a DEM file (Easting, Northing, Elevation) and a configuration file with coordinates that define the boundary of each fault segment. A pixel translation vector is defined as a magnitude of lateral offset in an azimuthal direction. The program translates pixels north of the fault and prints their pre-faulting position to a new DEM file that can be gridded and displayed. This analysis, where multiple DEMs are created with different translation vectors, allows us to identify areas of transtension or transpression while seeing the topographic expression in these areas. The benefit of this technique, in contrast to a simple block model, is that the DEM gives us a valuable graphic which can be used to pose new research questions. We have found that many topographic features correlate across the fault, such as valleys and ridges, which likely have implications for the age of the ABF and long-term landscape evolution rates, and potentially provide confirmation of total slip assessments. The ABF of northern Baja California, Mexico, is an active, dextral strike-slip fault that transfers Pacific-North American plate boundary strain out of the Gulf of California and around the "Big Bend" of the San Andreas Fault. Total displacement on the ABF in the central and eastern parts of the fault is 10 +/- 2 km based on offset Early Cretaceous features such as terrane boundaries and intrusive bodies (plutons and dike swarms). Where the fault bifurcates to the west, the northern strand (northern Agua Blanca fault or NABF) is constrained to 7 +/- 1 km.
We have not yet identified piercing points on the southern strand, the Santo Tomas fault (STF), but displacement is inferred to be ~4 km assuming that the sum of slip on the NABF and STF is

  18. Data Driven Fault Tolerant Control : A Subspace Approach

    NARCIS (Netherlands)

    Dong, J.

    2009-01-01

    Mainstream research on fault detection and fault-tolerant control has focused on model-based methods. As far as a model is concerned, changes therein due to faults have to be extracted from measured data. Generally speaking, existing approaches process measured inputs and outputs either by

  19. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    Science.gov (United States)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City, but were restricted to low frequencies (≤ 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch fault zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures > Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation-by-parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, better agreeing with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, which is potentially important for seismic hazard estimation.
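
    The slip-weakening friction law imposed in such dynamic simulations has a standard linear form, sketched below with illustrative parameter values (not those of the study).

```python
# Linear slip-weakening friction, a standard form in dynamic rupture
# modeling: strength drops linearly from the static to the dynamic level
# over a critical slip distance Dc, then stays at the dynamic level.

def slip_weakening_strength(slip, tau_s=10.0, tau_d=6.0, d_c=0.4):
    """Shear strength (MPa) as a function of accumulated slip (m)."""
    if slip >= d_c:
        return tau_d  # fully weakened
    return tau_s - (tau_s - tau_d) * slip / d_c

print(slip_weakening_strength(0.0))  # 10.0 (static strength)
print(slip_weakening_strength(0.2))  # 8.0 (halfway through weakening)
print(slip_weakening_strength(1.0))  # 6.0 (dynamic strength)
```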

  20. Fault detection in multiply-redundant measurement systems via sequential testing

    International Nuclear Information System (INIS)

    Ray, A.

    1988-01-01

    The theory and application of a sequential test procedure for fault detection and isolation are presented. The test procedure is suited to the development of intelligent instrumentation in strategic processes like aircraft and nuclear plants, where redundant measurements are usually available for individual critical variables. The test procedure consists of: (1) a generic redundancy management procedure which is essentially independent of the fault detection strategy and measurement noise statistics, and (2) a modified version of the sequential probability ratio test algorithm for fault detection and isolation, which functions within the framework of this redundancy management procedure. The sequential test procedure is suitable for real-time applications using commercially available microcomputers, and its efficacy has been verified by online fault detection in an operating nuclear reactor. 15 references
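
    The core of such a detection scheme, Wald's sequential probability ratio test, can be sketched for the simple case of a Gaussian residual with a hypothesized mean shift. The thresholds, noise model and sample values below are illustrative assumptions, not the paper's modified algorithm.

```python
# Sketch of Wald's sequential probability ratio test (SPRT) for detecting
# a mean shift in a redundant sensor's residual. H0: mean mu0 (healthy),
# H1: mean mu1 (faulty), Gaussian noise with known sigma.
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Return ('fault' | 'healthy' | 'undecided', n_samples_used)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 (fault)
    lower = math.log(beta / (1 - alpha))   # accept H0 (healthy)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for a Gaussian observation.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "healthy", n
    return "undecided", len(samples)

# A biased residual stream (mean ~1) should be flagged as faulty.
decision, n = sprt([1.1, 0.9, 1.2, 1.0, 1.3, 0.8, 1.1, 1.0, 0.9, 1.2])
print(decision)  # fault
```

    The appeal for real-time instrumentation is that the test stops as soon as the accumulated evidence crosses either threshold, so the average sample count is small for clearly healthy or clearly faulty channels.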

  1. A comparison between rate-and-state friction and microphysical models, based on numerical simulations of fault slip

    Science.gov (United States)

    van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.

    2018-05-01

    Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.
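
    The rate-and-state friction framework discussed above can be illustrated with a minimal sketch: steady-state friction and the state evolution after a velocity step under the Dieterich (aging) law. This is not the authors' earthquake cycle simulator; the parameter values are typical laboratory magnitudes chosen for illustration.

```python
# Minimal rate-and-state friction (RSF) sketch: steady-state friction and
# the response to a velocity step, with the aging state-evolution law
# integrated by forward Euler. Parameters are illustrative lab values.
import math

MU0, V0 = 0.6, 1e-6            # reference friction and slip rate (m/s)
A, B, DC = 0.010, 0.015, 1e-5  # RSF parameters a, b and d_c (m)

def mu_ss(v):
    """Steady-state friction; a - b < 0 means velocity weakening."""
    return MU0 + (A - B) * math.log(v / V0)

def velocity_step(v_new=1e-5, dt=1e-4, steps=100000):
    """Evolve state theta after a step from V0 to v_new; return friction."""
    theta = DC / V0  # steady state at the old velocity
    for _ in range(steps):
        theta += dt * (1 - v_new * theta / DC)  # Dieterich aging law
    return MU0 + A * math.log(v_new / V0) + B * math.log(V0 * theta / DC)

# A tenfold velocity increase weakens a velocity-weakening (a - b < 0) fault.
print(round(mu_ss(1e-5), 5))      # 0.58849, below MU0
print(round(velocity_step(), 5))  # 0.58849, relaxes to the new steady state
```

    The direct effect (the `A` term) and the evolving state (the `B` term) are exactly the quantities that microphysical models seek to ground in deformation mechanisms rather than fit empirically.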

  2. A 3D resistivity model derived from the transient electromagnetic data observed on the Araba fault, Jordan

    Science.gov (United States)

    Rödder, A.; Tezkan, B.

    2013-01-01

    72 in-loop transient electromagnetic soundings were carried out on two 2 km long profiles perpendicular, and on two 1 km and two 500 m long profiles parallel, to the strike direction of the Araba fault in Jordan, which is the southern part of the Dead Sea transform fault marking the boundary between the African and Arabian continental plates. The distance between the stations was on average 50 m. The late-time apparent resistivities derived from the induced voltages show clear differences between the stations located on the eastern and on the western part of the Araba fault. The fault appears as a boundary between the resistive western (ca. 100 Ωm) and the conductive eastern part (ca. 10 Ωm) of the survey area. On profiles parallel to the strike, late-time apparent resistivities were almost constant, both in their time dependence and in their lateral variation between stations, indicating a 2D resistivity structure of the investigated area. After processing, the data were interpreted by conventional 1D Occam and Marquardt inversion. A study using 2D synthetic model data showed, however, that 1D inversion of stations close to the fault produced fictitious layers in the subsurface, thus causing large interpretation errors. Therefore, the data were interpreted by 2D forward resistivity modeling, which was then extended to a 3D resistivity model. This 3D model satisfactorily explains the time dependence of the observed transients at nearly all stations.

  3. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    Science.gov (United States)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    This paper describes the quantitative application of the theory of System Health Management and its operational subset, Fault Management, to the selection of Abort Triggers for a human-rated launch vehicle, the United States' National Aeronautics and Space Administration's (NASA) Space Launch System (SLS). The results demonstrate the efficacy of the theory to assess the effectiveness of candidate failure detection and response mechanisms to protect humans from time-critical and severe hazards. The quantitative method was successfully used on the SLS to aid selection of its suite of Abort Triggers.

  4. An Ensemble Deep Convolutional Neural Network Model with Improved D-S Evidence Fusion for Bearing Fault Diagnosis.

    Science.gov (United States)

    Li, Shaobo; Liu, Guokai; Tang, Xianghong; Lu, Jianguang; Hu, Jianjun

    2017-07-28

    Intelligent machine health monitoring and fault diagnosis are becoming increasingly important for modern manufacturing industries. Current fault diagnosis approaches mostly depend on expert-designed features for building prediction models. In this paper, we propose IDSCNN, a novel bearing fault diagnosis algorithm based on ensemble deep convolutional neural networks and an improved Dempster-Shafer theory based evidence fusion. The convolutional neural networks take as inputs the root mean square (RMS) maps from the fast Fourier transform (FFT) features of the vibration signals from two sensors. The improved D-S evidence theory is implemented via a distance matrix from the evidences and a modified Gini index. Extensive evaluations of IDSCNN on the Case Western Reserve Dataset showed that our algorithm can achieve better fault diagnosis performance than existing machine learning methods by fusing complementary or conflicting evidence from different models and sensors and adapting to different load conditions.
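
    The classical Dempster-Shafer combination rule that the paper's improved fusion extends can be sketched as follows. This simplified version handles only singleton hypotheses (no composite focal elements), and the class labels and mass values are hypothetical.

```python
# Classical Dempster's rule of combination over singleton hypotheses, the
# baseline that improved D-S evidence fusion schemes build on. Each input
# is a basic probability assignment (masses summing to 1).

def dempster_combine(m1, m2):
    """Fuse two basic probability assignments over singleton hypotheses."""
    hypotheses = set(m1) | set(m2)
    combined = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hypotheses}
    conflict = 1.0 - sum(combined.values())  # mass on contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two sensors mildly disagree; fusion sharpens the shared belief.
sensor_a = {"inner_race": 0.6, "outer_race": 0.3, "normal": 0.1}
sensor_b = {"inner_race": 0.5, "outer_race": 0.4, "normal": 0.1}
fused = dempster_combine(sensor_a, sensor_b)
print(max(fused, key=fused.get))  # inner_race
```

    The normalization by `1 - conflict` is the step that behaves badly under highly conflicting evidence, which is precisely what distance-based modifications of D-S theory aim to mitigate.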

  5. A Fault Diagnosis Model Based on LCD-SVD-ANN-MIV and VPMCD for Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Songrong Luo

    2016-01-01

    Full Text Available The fault diagnosis process is essentially a class discrimination problem. However, traditional class discrimination methods such as SVM and ANN fail to exploit the interactions among the feature variables. Variable predictive model-based class discrimination (VPMCD) can adequately use these interactions, but feature extraction and selection greatly affect the accuracy and stability of the VPMCD classifier. Aiming at the nonstationary characteristics of vibration signals from rotating machinery with local faults, a singular value decomposition (SVD) technique based on local characteristic-scale decomposition (LCD) was developed to extract the feature variables. Subsequently, combining an artificial neural network (ANN) with mean impact value (MIV), ANN-MIV was proposed as a feature selection approach to select more suitable feature variables as the input vector of the VPMCD classifier. Finally, a novel fault diagnosis model based on LCD-SVD-ANN-MIV and VPMCD is proposed and validated through an experimental application to roller bearing fault diagnosis. The results show that the proposed method is effective and noise tolerant, and the comparative results demonstrate that it is superior to the other methods in diagnosis speed, diagnosis success rate, and diagnosis stability.

  6. An Integrated Approach of Model checking and Temporal Fault Tree for System Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2009-10-15

    Digitalization of instruments and control systems in nuclear power plants offers the potential to improve plant safety and reliability through features such as increased hardware reliability and stability, and improved failure detection capability. However, it also makes the systems and their safety analysis more complex. Originally, safety analysis was applied to hardware system components, and formal methods mainly to software. For software-controlled or digitalized systems, it is necessary to integrate both. Fault tree analysis (FTA), which has been one of the most widely used safety analysis techniques in the nuclear industry, suffers from several well-documented drawbacks. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computation tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and use model checking to automate the reasoning process of FTA.

  7. Density of oxidation-induced stacking faults in damaged silicon

    NARCIS (Netherlands)

    Kuper, F.G.; Hosson, J.Th.M. De; Verwey, J.F.

    1986-01-01

    A model for the relation between density and length of oxidation-induced stacking faults on damaged silicon surfaces is proposed, based on interactions of stacking faults with dislocations and neighboring stacking faults. The model agrees with experiments.

  8. Data-driven technology for engineering systems health management design approach, feature construction, fault diagnosis, prognosis, fusion and decisions

    CERN Document Server

    Niu, Gang

    2017-01-01

    This book introduces condition-based maintenance (CBM)/data-driven prognostics and health management (PHM) in detail, first explaining the PHM design approach from a systems engineering perspective, then summarizing and elaborating on the data-driven methodology for feature construction, as well as feature-based fault diagnosis and prognosis. The book includes a wealth of illustrations and tables to help explain the algorithms, as well as practical examples showing how to use this tool to solve situations for which analytic solutions are poorly suited. It equips readers to apply the concepts discussed in order to analyze and solve a variety of problems in PHM system design, feature construction, fault diagnosis and prognosis.

  9. Simulation of Electric Faults in Doubly-Fed Induction Generators Employing Advanced Mathematical Modelling

    DEFF Research Database (Denmark)

    Martens, Sebastian; Mijatovic, Nenad; Holbøll, Joachim

    2015-01-01

    in many areas of electrical machine analysis. However, for fault investigations, the phase-coordinate representation has been found more suitable. This paper presents a mathematical model in phase coordinates of the DFIG with two parallel windings per rotor phase. The model has been implemented in Matlab...

  10. Radial basis function neural network in fault detection of automotive ...

    African Journals Online (AJOL)

    Radial basis function neural network in fault detection of automotive engines. ... Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults ... Keywords: Automotive engine, independent RBFNN model, RBF neural network, fault detection

  11. Insights into the 3D architecture of an active caldera ring-fault at Tendürek volcano through modeling of geodetic data

    KAUST Repository

    Vasyura-Bathke, Hannes

    2015-04-28

    The three-dimensional assessment of ring-fault geometries and kinematics at active caldera volcanoes is typically limited by sparse field, geodetic or seismological data, or by only partial ring-fault rupture or slip. Here we use a novel combination of spatially dense InSAR time-series data, numerical models and sand-box experiments to determine the three-dimensional geometry and kinematics of a sub-surface ring-fault at Tendürek volcano in Turkey. The InSAR data reveal that the area within the ring-fault not only subsides, but also shows substantial westward-directed lateral movement. The models and experiments explain this as a consequence of a ‘sliding-trapdoor’ ring-fault architecture that is mostly composed of outward-inclined reverse segments, most markedly so on the volcano's western flanks, but includes inward-inclined normal segments on its eastern flanks. Furthermore, the model ring-fault exhibits dextral and sinistral strike-slip components that are roughly bilaterally distributed onto its northern and southern segments, respectively. Our more complex numerical model describes the deformation at Tendürek better than an analytical solution for a single rectangular dislocation in a half-space. Comparison to ring-faults defined at Glen Coe, Fernandina and Bárðarbunga calderas suggests that ‘sliding-trapdoor’ ring-fault geometries may be common in nature and should therefore be considered in geological and geophysical interpretations of ring-faults at different scales worldwide.

  12. Research on evaluation of degree of complexity of mining fault network based on GIS

    Energy Technology Data Exchange (ETDEWEB)

    Hua Zhang; Yun-jia Wang; Chuan-zhi Liu [China University of Mining and Technology, Jiangsu (China). School of Environment Science and Spatial Informatics

    2007-03-15

    A large number of spatial and attribute data are involved in coal resource evaluation. Databases are a relatively mature data management technology, but their weak support for graphics and spatial data makes it difficult to manage evaluation data with spatial characteristics and to produce evaluation result maps. To address these deficiencies, an evaluation of the degree of complexity of a mining fault network based on a geographic information system (GIS), which integrates the management of spatial and attribute data, is proposed. The fractal dimension is an index that reflects comprehensive information about the number, density, size and composition of faults and their dynamic mechanism, and it is used here as the quantitative evaluation index. Evaluation software has been developed based on the GIS component MapX, with which the degree of complexity of the fault network is evaluated quantitatively, using the Liuqiao No.2 coal mine as an example. Results show that applying GIS technology is effective in acquiring model parameters and enhancing the clarity of data and evaluation results. The fault network is a system with fractal structure, and its complexity can be described reasonably and accurately by the fractal dimension, which provides an effective method for coal resource evaluation. 9 refs., 6 figs., 2 tabs.
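
    The fractal dimension used as the evaluation index can be estimated by box counting. The sketch below is a generic box-counting estimator on a synthetic point set, not the software described in the abstract.

```python
# Box-counting estimate of the fractal dimension of a fault-trace pattern.
# N(s) boxes of side s are needed to cover the set; the dimension is the
# slope of log N(s) versus log(1/s).
import math

def box_count(points, box_size):
    """Count occupied boxes of a given size covering the point set."""
    return len({(int(x // box_size), int(y // box_size)) for x, y in points})

def fractal_dimension(points, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) vs log(1/s) over box sizes s."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(sizes)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# A straight fault trace sampled at unit spacing: dimension close to 1.
line = [(i, i) for i in range(256)]
print(round(fractal_dimension(line), 2))  # 1.0
```

    A single straight fault gives a dimension near 1, while a dense branching network fills the plane more fully and pushes the estimate toward 2, which is why the index captures network complexity.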

  13. Loading of the San Andreas fault by flood-induced rupture of faults beneath the Salton Sea

    Science.gov (United States)

    Brothers, Daniel; Kilb, Debi; Luttrell, Karen; Driscoll, Neal W.; Kent, Graham

    2011-01-01

    The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.
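
    The Coulomb stress calculation underlying such models can be sketched in one line. The sign convention here (normal stress change positive in extension, so unclamping and pore-pressure increases both promote failure) follows common practice, and the stress values are illustrative, not the paper's results.

```python
# Coulomb failure stress change with a pore-pressure contribution:
# delta_CFF = d_tau + mu * (d_sigma_n + d_p), all in MPa.
# Convention: d_sigma_n > 0 means unclamping (extension positive), so a
# positive delta_CFF moves the receiver fault toward failure.

def coulomb_stress_change(d_shear, d_normal, d_pore, mu=0.6):
    """Change in Coulomb failure stress on a receiver fault (MPa)."""
    return d_shear + mu * (d_normal + d_pore)

# Combined loading: a shear-stress increase from stepover-fault slip, slight
# clamping from the lake load, and a pore-pressure rise from flooding.
d_cff = coulomb_stress_change(d_shear=0.05, d_normal=-0.02, d_pore=0.1)
print(round(d_cff, 3))  # 0.098 -> positive: promotes failure
```

    Stress changes of this order (hundredths of MPa) are routinely cited as sufficient to advance or trigger rupture on critically stressed faults, which is the logic the abstract applies to the southern San Andreas.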

  14. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events, and a fully dynamic solver (SPECFEM3D; Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as the target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity: fault maturity is related to the variability of Dc on a microscopic scale, with large variations of Dc representing immature faults and small variations representing mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. Fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, leading to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation on the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the
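    The quasi-dynamic cycle modeling described here boils down to integrating rate-and-state friction on each fault cell against elastic loading. A single-cell spring-slider sketch under the aging law; all parameter values are illustrative (velocity-strengthening, so the slider relaxes to steady sliding rather than stick-slip), not those of the Landers model, and radiation damping is omitted:

```python
import math

# Rate-and-state spring-slider, aging law, explicit Euler time stepping.
a, b = 0.015, 0.010          # direct and evolution effect coefficients (a > b: stable)
f0, v0 = 0.6, 1e-6           # reference friction and reference slip rate (m/s)
Dc = 1e-4                    # state evolution (characteristic slip) distance (m)
sigma = 50e6                 # effective normal stress (Pa)
k = 2.5e9                    # spring (loading) stiffness (Pa/m)
v_pl = 1e-6                  # load-point (plate) velocity (m/s)

u = sigma * f0 / k           # slip deficit chosen so initial stress balances friction
theta = 0.5 * Dc / v_pl      # start the state variable off its steady-state value
dt = 0.5
for _ in range(4000):        # 2000 s of simulated time
    tau = k * u
    # invert the friction law for slip rate:
    # tau/sigma = f0 + a*ln(v/v0) + b*ln(v0*theta/Dc)
    v = v0 * math.exp((tau / sigma - f0 - b * math.log(v0 * theta / Dc)) / a)
    theta += dt * (1.0 - v * theta / Dc)   # aging law for the state variable
    u += dt * (v_pl - v)                   # slip deficit evolves with the load point
print(v, theta)
```

    With a < b (velocity weakening) and a stiffness below the critical value, the same loop produces stick-slip cycles instead of convergence, which is the regime the paper's simulations exploit to nucleate events.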

  15. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2018-01-01

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a

  16. Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2

    Science.gov (United States)

    2016-06-01

    specification of fault propagation in EMV2 corresponds to the Fault Propagation and Transformation Calculus (FPTC) [Paige 2009]. The following concepts...definition of security includes accidental malicious indication of anomalous behavior either from outside a system or by unauthorized crossing of a

  17. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted in both aspects of modeling and fault estimation; the validity and availability of the method are verified by a series of comparisons of numerical simulation results.
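    A Takagi–Sugeno model approximates a nonlinear system as a membership-weighted blend of local linear models; the interval type-II machinery in the article additionally carries uncertainty bounds on those memberships. A type-I sketch of just the blending step, with two invented local models and triangular memberships (none of this is from the article's vehicle model):

```python
def ts_step(x, rules):
    """One step of a scalar Takagi-Sugeno model.

    rules: list of (membership_function, a_i) pairs; the next state is the
    normalized membership-weighted blend of the local linear dynamics a_i * x.
    """
    weights = [h(x) for h, _ in rules]
    s = sum(weights)
    return sum(w * a * x for (_, a), w in zip(rules, weights)) / s

# Two illustrative local models: "slow" dynamics active near x = 0,
# "fast" dynamics active near x = 1 (triangular memberships).
rules = [
    (lambda x: max(0.0, 1.0 - abs(x)), 0.9),        # rule active near x = 0
    (lambda x: max(0.0, 1.0 - abs(x - 1.0)), 0.5),  # rule active near x = 1
]
x = 0.8
for _ in range(20):
    x = ts_step(x, rules)
print(x)
```

    A fault estimator built on such a model blends local observers with the same weights, which is what lets linear (LMI-based) design machinery cover a nonlinear operating range.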

  18. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz, discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG) has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, diagonal crack along with the non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for HOG feature extraction technique towards non-destructive quality inspection with appreciably low false alarm as compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for a financially and commercially competent industrial growth.

  19. Fault tolerance of the NIF power conditioning system

    International Nuclear Information System (INIS)

    Larson, D.W.; Anderson, R.; Boyes, J.

    1995-01-01

    The tolerance of the circuit topology proposed for the National Ignition Facility (NIF) power conditioning system to specific fault conditions is investigated. A new pulsed power circuit is proposed for the NIF which is simpler and less expensive than previous ICF systems. The inherent fault modes of the new circuit are different from the conventional approach, and must be understood to ensure adequate NIF system reliability. A test-bed which simulates the NIF capacitor module design was constructed to study the circuit design. Measurements from test-bed experiments with induced faults are compared with results from a detailed circuit model. The model is validated by the measurements and used to predict the behavior of the actual NIF module during faults. The model can be used to optimize fault tolerance of the NIF module through an appropriate distribution of circuit inductance and resistance. The experimental and modeling results are presented, and fault performance is compared with the ratings of pulsed power components. Areas are identified which require additional investigation.

  20. Toward a Model-Based Approach for Flight System Fault Protection

    Science.gov (United States)

    Day, John; Meakin, Peter; Murray, Alex

    2012-01-01

    Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain our FP concepts. Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).
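    This record and the head record share one mechanism: traverse a queried system model and emit FMECA-style rows. A toy sketch of that traversal, with a plain dictionary standing in for the SysML model; the component names, failure modes, and propagation edges are invented for illustration:

```python
# Stand-in for a queried system model: each component lists its failure modes
# and the downstream components its outputs feed (fault propagation edges).
model = {
    "Sensor":     {"modes": ["stuck output", "noise burst"], "feeds": ["Controller"]},
    "Controller": {"modes": ["software hang"],               "feeds": ["Actuator"]},
    "Actuator":   {"modes": ["jammed"],                      "feeds": []},
}

def downstream(comp):
    """All components reachable from comp along propagation edges."""
    seen, stack = [], list(model[comp]["feeds"])
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.append(c)
            stack.extend(model[c]["feeds"])
    return seen

def fmea_rows(model):
    """One FMEA row per (component, failure mode): the mode plus its propagated effect."""
    rows = []
    for comp, info in model.items():
        for mode in info["modes"]:
            effect = ", ".join(downstream(comp)) or "local only"
            rows.append((comp, mode, f"may degrade: {effect}"))
    return rows

for row in fmea_rows(model):
    print(" | ".join(row))
```

    The point of generating the artifact rather than writing it by hand is exactly what the head record argues: completeness no longer depends on an engineer remembering every propagation path.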

  1. Evidence for chaotic fault interactions in the seismicity of the San Andreas fault and Nankai trough

    Science.gov (United States)

    Huang, Jie; Turcotte, D. L.

    1990-01-01

    The dynamical behavior introduced by fault interactions is examined here using a simple spring-loaded, slider-block model with velocity-weakening friction. The model consists of two slider blocks coupled to each other and to a constant-velocity driver by elastic springs. For an asymmetric system in which the frictional forces on the two blocks are not equal, the solutions exhibit chaotic behavior. The system's behavior over a range of parameter values seems to be generally analogous to that of weakly coupled segments of an active fault. Similarities are found between the model simulations and observed patterns of seismicity on the south-central San Andreas fault in California and in the Nankai trough along the coast of southwestern Japan.
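    The two-block model here is a Burridge–Knopoff-type system: blocks stick until spring force exceeds static friction, then slip so the force drops to a lower dynamic level. A minimal quasi-static sketch with asymmetric friction on the two blocks; all parameter values are illustrative, and the full velocity-dependent friction law of the paper is reduced to a static/dynamic pair:

```python
def simulate(steps=2000, dt=0.01, V=1.0, kd=1.0, kc=0.5,
             fs=(1.0, 1.3), fd=(0.5, 0.65)):
    """Quasi-static two-block slider: block i sticks until its net spring force
    exceeds static friction fs[i], then jumps so the force falls to the dynamic
    level fd[i] (a velocity-weakening idealization). Returns slip-event counts."""
    x = [0.0, 0.0]          # block positions
    t, events = 0.0, [0, 0]
    for _ in range(steps):
        t += dt
        moved = True
        while moved:        # allow cascades: one block's slip may trigger the other
            moved = False
            for i in (0, 1):
                j = 1 - i
                F = kd * (V * t - x[i]) + kc * (x[j] - x[i])
                if F > fs[i]:
                    x[i] += (F - fd[i]) / (kd + kc)  # slip until force drops to fd[i]
                    events[i] += 1
                    moved = True
    return events

print(simulate())
```

    The asymmetric friction pair (fs, fd) is what breaks the symmetry; in the paper's continuous-time version this asymmetry is what opens the chaotic regime.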

  2. Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2015-01-01

    This paper investigates Fault-Tolerant Control for closed-loop systems where only coarse models are available and there is lack of actuator and sensor redundancies. The problem is approached in the form of a typical servomotor in closed-loop. A linear model is extracted from input/output data to ...

  3. Mechanical evolution of transpression zones affected by fault interactions: Insights from 3D elasto-plastic finite element models

    Science.gov (United States)

    Nabavi, Seyed Tohid; Alavi, Seyed Ahmad; Mohammadi, Soheil; Ghassemi, Mohammad Reza

    2018-01-01

    The mechanical evolution of transpression zones affected by fault interactions is investigated by a 3D elasto-plastic mechanical model solved with the finite-element method. Ductile transpression between non-rigid walls implies an upward and lateral extrusion. The model results demonstrate that a transpression zone evolves in a 3D strain field along non-coaxial strain paths. Distributed plastic strain, slip transfer, and maximum plastic strain occur within the transpression zone. Outside the transpression zone, fault slip is reduced because deformation is accommodated by distributed plastic shear. With progressive deformation, the σ3 axis (the minimum compressive stress) rotates within the transpression zone to form an oblique angle to the regional transport direction (∼9°-10°). The magnitude of displacement increases faster within the transpression zone than outside it. Rotation of the displacement vectors of oblique convergence with time suggests that the transpression zone evolves toward an overall non-plane strain deformation. The slip decreases along fault segments and with increasing depth. This can be attributed to the accommodation of bulk shortening over adjacent fault segments. The model result shows an almost symmetrical domal uplift due to off-fault deformation, generating a doubly plunging fold and a 'positive flower' structure. Outside the overlap zone, expanding asymmetric basins subside to 'negative flower' structures on both sides of the transpression zone and are called 'transpressional basins'. Deflection at fault segments causes the fault dip to fall to less than 90° (∼86-89°) near the surface (∼1.5 km). This results in a pure-shear-dominated, triclinic, and discontinuous heterogeneous flow of the transpression zone.

  4. Experimental testing and modelling of a resistive type superconducting fault current limiter using MgB2 wire

    International Nuclear Information System (INIS)

    Smith, A C; Pei, X; Oliver, A; Husband, M; Rindfleisch, M

    2012-01-01

    A prototype resistive superconducting fault current limiter (SFCL) was developed using single-strand round magnesium diboride (MgB2) wire. The MgB2 wire was wound with an interleaved arrangement to minimize coil inductance and provide adequate inter-turn voltage withstand capability. The temperature profile from 30 to 40 K and frequency profile from 10 to 100 Hz at 25 K were tested and reported. The quench properties of the prototype coil were tested using a high current test circuit. The fault current was limited by the prototype coil within the first quarter-cycle. The prototype coil demonstrated reliable and repeatable current limiting properties and was able to withstand a potential peak current of 372 A for one second without any degradation of performance. A three-strand SFCL coil was investigated and demonstrated scaled-up current capacity. An analytical model to predict the behaviour of the prototype single-strand SFCL coil was developed using an adiabatic boundary condition on the outer surface of the wire. The predicted fault current using the analytical model showed very good correlation with the experimental test results. The analytical model and a finite element thermal model were used to predict the temperature rise of the wire during a fault. (paper)

  5. Off-fault seismicity suggests creep below 10 km on the northern San Jacinto Fault

    Science.gov (United States)

    Cooke, M. L.; Beyer, J. L.

    2017-12-01

    Within the San Bernardino basin, CA, south of the juncture of the San Jacinto (SJF) and San Andreas faults (SAF), focal mechanisms show normal slip events that are inconsistent with the interseismic strike-slip loading of the region. High-quality (nodal plane uncertainty faults [Anderson et al., 2004]. However, the loading of these normal slip events remains enigmatic because the region is expected to have dextral loading between large earthquake events. These enigmatic normal slip events may be loaded by deep (>10 km depth), spatially non-uniform creep along the northern SJF. Steady-state models show that over many earthquake cycles, the dextral slip rate on the northern SJF increases southward, placing the San Bernardino basin in extension. In the absence of recent large seismic events that could produce off-fault normal focal mechanisms in the San Bernardino basin, non-uniform deep aseismic slip on the SJF could account for this seismicity. We develop interseismic models that incorporate spatially non-uniform creep below 10 km on the SJF based on the steady-state slip distribution. These model results match the pattern of deep normal slip events within the San Bernardino basin. Such deep creep on the SJF may not be detectable from the geodetic signal due to the close proximity of the SAF, whose lack of seismicity suggests that it is locked to 20 km. Interseismic models with a 15 km locking depth on both faults are indistinguishable from models with a 10 km locking depth on the SJF and a 20 km locking depth on the SAF. This analysis suggests that the microseismicity in our multi-decadal catalog may record both the interseismic dextral loading of the region and off-fault deformation associated with deep aseismic creep on the northern SJF. If the enigmatic normal slip events of the San Bernardino basin are included in stress inversions from the seismic catalog used to assess seismic hazard, the results may provide inaccurate information about fault loading in this region.

  6. Fault tolerant control based on active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2005-01-01

    An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...
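    Active fault diagnosis as described here injects an auxiliary probe signal and watches its signature in a residual: a parametric fault changes the transfer from probe to residual even when the nominal output looks acceptable. A heavily simplified discrete-time sketch (first-order plant, a pole-drift fault, sinusoidal probe; all values invented, and the YJBK parameterization itself is omitted):

```python
import math

def run_plant(a, steps=400):
    """Drive a first-order plant x+ = a*x + u with a sinusoidal auxiliary input
    and return the mean squared residual against a nominal model run in parallel."""
    a_nom = 0.5                          # nominal parameter value
    x = x_nom = 0.0
    residual_energy = 0.0
    for k in range(steps):
        u = math.sin(0.3 * k)            # auxiliary probe signal
        x = a * x + u                    # true system (parameter possibly faulty)
        x_nom = a_nom * x_nom + u        # nominal model
        residual_energy += (x - x_nom) ** 2
    return residual_energy / steps

healthy = run_plant(a=0.5)
faulty = run_plant(a=0.8)                # parametric fault: pole drift
threshold = 0.01
print("healthy flagged:", healthy > threshold, "| faulty flagged:", faulty > threshold)
```

    In the paper's closed-loop setting the same idea appears as a fault signature matrix: the transfer function from auxiliary input to residual, which is zero in the fault-free case.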

  7. Bond Graph Modelling for Fault Detection and Isolation of an Ultrasonic Linear Motor

    Directory of Open Access Journals (Sweden)

    Mabrouk KHEMLICHE

    2010-12-01

    Full Text Available In this paper, Bond Graph modeling, simulation and monitoring of ultrasonic linear motors are presented. Only the vibration of the piezoelectric ceramics and stator is taken into account; contact problems between stator and rotor are not treated here. Standing and travelling waves are first briefly presented, since the majority of these motors use one or the other wave type to generate the stator vibration and thus obtain the elliptic trajectory of the points on the surface of the stator. Then, an electric equivalent circuit is presented with the aim of giving a general idea of another way of graphically modelling the vibrator that is introduced and developed. Simulations of an ultrasonic linear motor are then performed, and experimental results on a prototype built at the laboratory are presented. Finally, validation of the Bond Graph method for modelling is carried out by comparing simulation and experimental results. This paper describes the application of the FDI approach to an electrical system. We demonstrate the FDI effectiveness with real data collected from our automotive test. We introduce the analysis of the problem of fault localization in this process, propose a fault detection method applied to diagnosis and to determining the severity of a detected fault, and show the possibilities of applying the new approaches to complex system control.

  8. Multi-Fault Rupture Scenarios in the Brawley Seismic Zone

    Science.gov (United States)

    Kyriakopoulos, C.; Oglesby, D. D.; Rockwell, T. K.; Meltzner, A. J.; Barall, M.

    2017-12-01

    Dynamic rupture complexity is strongly affected by both the geometric configuration of a network of faults and pre-stress conditions. Between those two, the geometric configuration is more likely to be anticipated prior to an event. An important factor in the unpredictability of the final rupture pattern of a group of faults is the time-dependent interaction between them. Dynamic rupture models provide a means to investigate this otherwise inscrutable process. The Brawley Seismic Zone in Southern California is an area in which this approach might be important for inferring potential earthquake sizes and rupture patterns. Dynamic modeling can illuminate how the main faults in this area, the Southern San Andreas (SSAF) and Imperial faults, might interact with the intersecting cross faults, and how the cross faults may modulate rupture on the main faults. We perform 3D finite element modeling of potential earthquakes in this zone assuming an extended array of faults (Figure). Our results include a wide range of ruptures and fault behaviors depending on assumptions about nucleation location, geometric setup, pre-stress conditions, and locking depth. For example, in the majority of our models the cross faults do not strongly participate in the rupture process, giving the impression that they are not typically an aid or an obstacle to the rupture propagation. However, in some cases, particularly when rupture proceeds slowly on the main faults, the cross faults indeed can participate with significant slip, and can even cause rupture termination on one of the main faults. Furthermore, in a complex network of faults we should not preclude the possibility of a large event nucleating on a smaller fault (e.g. a cross fault) and eventually promoting rupture on the main structure.
Recent examples include the 2010 Mw 7.1 Darfield (New Zealand) and Mw 7.2 El Mayor-Cucapah (Mexico) earthquakes, where rupture started on a smaller adjacent segment and later cascaded into a larger

  9. Modeling earthquake sequences along the Manila subduction zone: Effects of three-dimensional fault geometry

    Science.gov (United States)

    Yu, Hongyu; Liu, Yajing; Yang, Hongfeng; Ning, Jieyuan

    2018-05-01

    To assess the potential of catastrophic megathrust earthquakes (MW > 8) along the Manila Trench, the eastern boundary of the South China Sea, we incorporate a 3D non-planar fault geometry in the framework of rate-state friction to simulate earthquake rupture sequences along the fault segment between 15°N-19°N of northern Luzon. Our simulation results demonstrate that the first-order fault geometry heterogeneity, the transitional-segment (possibly related to the subducting Scarborough seamount chain) connecting the steeper south segment and the flatter north segment, controls earthquake rupture behaviors. The strong along-strike curvature at the transitional-segment typically leads to partial ruptures of MW 8.3 and MW 7.8 along the southern and northern segments respectively. The entire fault occasionally ruptures in MW 8.8 events when the cumulative stress in the transitional-segment is sufficiently high to overcome the geometrical inhibition. Fault shear stress evolution, represented by the S-ratio, is clearly modulated by the width of seismogenic zone (W). At a constant plate convergence rate, a larger W indicates on average lower interseismic stress loading rate and longer rupture recurrence period, and could slow down or sometimes stop ruptures that initiated from a narrower portion. Moreover, the modeled interseismic slip rate before whole-fault rupture events is comparable with the coupling state that was inferred from the interplate seismicity distribution, suggesting the Manila trench could potentially rupture in a M8+ earthquake.

  10. Analysis and implementation of power management and control strategy for six-phase multilevel ac drive system in fault condition

    DEFF Research Database (Denmark)

    Sanjeevikumar, P.; Grandi, Gabriele; Blaabjerg, Frede

    2016-01-01

    This research article exploits the power management algorithm in post-fault conditions for a six-phase (quad) multilevel inverter. The drive circuit consists of four 2-level, three-phase voltage source inverters (VSIs) supplying a six-phase open-end-winding motor or impedance load, with the circumstantial failure of one VSI investigated. A simplified level-shifted pulse-width modulation (PWM) algorithm is developed to modulate each couple of three-phase VSIs as 3-level output voltage generators in normal operation. The total power of the whole ac drive is shared equally among the four isolated DC sources. The developed post-fault algorithm is applied when there is a fault in one VSI and the load is fed from the remaining three healthy VSIs. In faulty conditions the multilevel outputs are reduced from 3-level to 2-level, but the system still operates with degraded power. Numerical simulation...

  11. Sensor Fault Diagnosis Observer for an Electric Vehicle Modeled as a Takagi-Sugeno System

    Directory of Open Access Journals (Sweden)

    S. Gómez-Peñate

    2018-01-01

    Full Text Available A sensor fault diagnosis of an electric vehicle (EV) modeled as a Takagi-Sugeno (TS) system is proposed. The proposed TS model considers the nonlinearity of the longitudinal velocity of the vehicle and the parametric variation induced by the slope of the road; these considerations allow a mathematical model to be obtained that represents the vehicle for a wide range of speeds and different terrain conditions. First, a virtual sensor represented by a TS state observer is developed. Sufficient conditions are given by a set of linear matrix inequalities (LMIs) that guarantee asymptotic convergence of the TS observer. Second, the work is extended to perform fault detection and isolation based on a generalized observer scheme (GOS). Numerical simulations are presented to show the performance and applicability of the proposed method.
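    The generalized observer scheme (GOS) mentioned here runs one observer per sensor, each driven by every output except the one it is responsible for; after a sensor fault, the only observer with a small residual is the one that excludes the faulty sensor. A linear sketch on a scalar plant with three redundant sensors (the plant, gains, and bias fault are invented; the TS/LMI machinery of the article is omitted):

```python
def gos_isolate(fault_sensor, bias=0.5, steps=300, onset=100):
    """GOS on a scalar plant x+ = a*x + c_u with three sensors y_i = x.
    Observer i is driven by every sensor except i; after the fault, the
    observer with the smallest accumulated residual names the faulty sensor."""
    a, c_u, L = 0.9, 0.1, 0.5
    x = 0.0
    xh = [0.0, 0.0, 0.0]     # one observer state per excluded sensor
    res = [0.0, 0.0, 0.0]
    for k in range(steps):
        y = [x + (bias if (i == fault_sensor and k >= onset) else 0.0)
             for i in range(3)]
        for i in range(3):
            others = [j for j in range(3) if j != i]
            innov = sum(y[j] - xh[i] for j in others) / 2.0
            xh[i] = a * xh[i] + c_u + L * innov
            if k >= onset:
                res[i] += abs(innov)
        x = a * x + c_u
    return res.index(min(res))

print(gos_isolate(fault_sensor=0), gos_isolate(fault_sensor=2))
```

    With only two sensors the scheme can detect but not always isolate, which is why GOS banks are usually built over three or more measurements.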

  12. Landslide susceptibility mapping for a part of North Anatolian Fault Zone (Northeast Turkey) using logistic regression model

    Science.gov (United States)

    Demir, Gökhan; Aytekin, Mustafa; Banu Ikizler, Sabriye; Angın, Zekai

    2013-04-01

    The North Anatolian Fault is known as one of the most active and destructive fault zones, having produced many earthquakes of high magnitude. Along this fault zone, the morphology and the lithological features are prone to landsliding. Many earthquake-induced landslides have been recorded by several studies along this fault zone, and these landslides caused both injuries and loss of life. Therefore, a detailed landslide susceptibility assessment for this area is indispensable. In this context, a landslide susceptibility assessment for a 1445 km2 area in the Kelkit River valley, part of the North Anatolian Fault zone (Eastern Black Sea region of Turkey), was undertaken, and its results are summarized here. For this purpose, a geographical information system (GIS) and a bivariate statistical model were used. Initially, landslide inventory maps were prepared using landslide data determined by field surveys together with landslide data taken from the General Directorate of Mineral Research and Exploration. The landslide conditioning factors considered are lithology, slope gradient, slope aspect, topographical elevation, distance to streams, distance to roads, distance to faults, drainage density and fault density. The ArcGIS package was used to manipulate and analyze all the collected data. The logistic regression method was applied to create a landslide susceptibility map. The landslide susceptibility map was divided into five susceptibility regions: very low, low, moderate, high and very high. The result of the analysis was verified using the inventoried landslide locations and compared with the produced probability model. For this purpose, the Area Under Curve (AUC) approach was applied and an AUC value was obtained. Based on this AUC value, the obtained landslide susceptibility map was concluded to be satisfactory. Keywords: North Anatolian Fault Zone, Landslide susceptibility map, Geographical Information Systems, Logistic Regression Analysis.
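    Logistic regression susceptibility mapping fits P(landslide) = 1/(1+e^-(b0+b·x)) to inventory cells and then bins the predicted probabilities into classes. A self-contained sketch on a synthetic two-factor inventory; the "slope" and "distance to fault" data, the true coefficients, and the five-way binning thresholds are all invented for illustration:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(data, lr=0.1, epochs=2000):
    """Batch gradient descent on cross-entropy; data = [(x1, x2, label)]."""
    b0 = b1 = b2 = 0.0
    n = len(data)
    for _ in range(epochs):
        g0 = g1 = g2 = 0.0
        for x1, x2, y in data:
            e = sigmoid(b0 + b1 * x1 + b2 * x2) - y
            g0 += e; g1 += e * x1; g2 += e * x2
        b0 -= lr * g0 / n; b1 -= lr * g1 / n; b2 -= lr * g2 / n
    return b0, b1, b2

random.seed(1)
# Synthetic cells: steep slopes close to the fault (small dist) slide more often.
data = []
for _ in range(400):
    slope = random.uniform(0, 1)          # normalized slope gradient
    dist = random.uniform(0, 1)           # normalized distance to fault
    p_true = sigmoid(4 * slope - 3 * dist - 1)
    data.append((slope, dist, 1 if random.random() < p_true else 0))

b0, b1, b2 = fit_logistic(data)
classes = ["very low", "low", "moderate", "high", "very high"]
def susceptibility(slope, dist):
    p = sigmoid(b0 + b1 * slope + b2 * dist)
    return classes[min(4, int(p * 5))]    # equal-width probability bins

print(susceptibility(0.9, 0.05), susceptibility(0.05, 0.9))
```

    The AUC validation step in the abstract then checks whether cells with known landslides receive systematically higher fitted probabilities than cells without.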

  13. FAULT TOLERANCE IN MOBILE GRID COMPUTING

    OpenAIRE

    Aghila Rajagopal; M.A. Maluk Mohamed

    2014-01-01

    This paper proposes a novel model for a Surrogate Object based paradigm in a mobile grid environment for achieving fault tolerance. Basically, the Mobile Grid Computing Model focuses on service composition and the resource sharing process. In order to increase the performance of the system, fault recovery plays a vital role. In our proposed system, a Surrogate Object based checkpoint recovery model is introduced for the recovery point. This checkpoint recovery model depends on the Surrogate Object and the Fau...

  14. Computer aided construction of fault tree

    International Nuclear Information System (INIS)

    Kovacs, Z.

    1982-01-01

    Computer code CAT for the automatic construction of fault trees is briefly described. Code CAT makes possible simple modelling of components using decision tables; it accelerates the fault tree construction process, constructs fault trees of different complexity, and is capable of harmonized co-operation with the programs PREP and KITT 1,2 for fault tree analysis. The efficiency of program CAT, and thus the accuracy and completeness of the fault trees constructed, depends significantly on the compilation and sophistication of the decision tables. Currently, program CAT is used in co-operation with the programs PREP and KITT 1,2 in reliability analyses of nuclear power plant systems. (B.S.)
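    The decision-table idea behind CAT can be sketched in a few lines: each component's table maps an output deviation to the local faults or upstream deviations that can cause it, and construction recursively expands these into an OR/AND tree whose leaves are basic events. A toy reconstruction; the component names and tables are invented, not taken from the CAT code:

```python
# Decision tables: for each deviation event, the alternative cause sets
# (each inner list is one cause set; multi-element sets become AND gates).
tables = {
    "no_flow_out(pump)":  [["pump_failed"], ["no_power"], ["no_flow_out(valve)"]],
    "no_flow_out(valve)": [["valve_stuck_closed"], ["no_flow_out(tank)"]],
    "no_flow_out(tank)":  [["tank_empty"]],
}

def build_tree(event):
    """Recursively expand an event via the decision tables into an OR-of-ANDs
    fault tree; events without a table are basic events (leaves)."""
    if event not in tables:
        return event
    return {"OR": [{"AND": [build_tree(c) for c in cause]} if len(cause) > 1
                   else build_tree(cause[0])
                   for cause in tables[event]]}

def basic_events(node):
    """Collect the leaves, i.e. the basic failure causes reachable in the tree."""
    if isinstance(node, str):
        return {node}
    (gate, children), = node.items()
    out = set()
    for ch in children:
        out |= basic_events(ch)
    return out

tree = build_tree("no_flow_out(pump)")
print(sorted(basic_events(tree)))
```

    As the abstract notes, everything hinges on the quality of the tables: an omitted cause set silently truncates the tree.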

  15. Seismic variability of subduction thrust faults: Insights from laboratory models

    Science.gov (United States)

    Corbi, F.; Funiciello, F.; Faccenna, C.; Ranalli, G.; Heuret, A.

    2011-06-01

    Laboratory models are realized to investigate the role of interface roughness, driving rate, and pressure on friction dynamics. The setup consists of a gelatin block driven at constant velocity over sand paper. The interface roughness is quantified in terms of amplitude and wavelength of protrusions, jointly expressed by a reference roughness parameter obtained by their product. Frictional behavior shows a systematic dependence on system parameters. Both stick slip and stable sliding occur, depending on driving rate and interface roughness. Stress drop and frequency of slip episodes vary directly and inversely, respectively, with the reference roughness parameter, reflecting the fundamental role for the amplitude of protrusions. An increase in pressure tends to favor stick slip. Static friction is a steeply decreasing function of the reference roughness parameter. The velocity strengthening/weakening parameter in the state- and rate-dependent dynamic friction law becomes negative for specific values of the reference roughness parameter which are intermediate with respect to the explored range. Despite the simplifications of the adopted setup, which does not address the problem of off-fault fracturing, a comparison of the experimental results with the depth distribution of seismic energy release along subduction thrust faults leads to the hypothesis that their behavior is primarily controlled by the depth- and time-dependent distribution of protrusions. A rough subduction fault at shallow depths, unable to produce significant seismicity because of low lithostatic pressure, evolves into a moderately rough, velocity-weakening fault at intermediate depths. The magnitude of events in this range is calibrated by the interplay between surface roughness and subduction rate. At larger depths, the roughness further decreases and stable sliding becomes gradually more predominant. 
Thus, although interplate seismicity is ultimately controlled by tectonic parameters (velocity of

  16. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), a simulation model for fault injection is developed in this work to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells, and the fault distribution over locations is chosen randomly from a uniform probability distribution. Using this model, we have predicted the reliability and masking effect of application software in a digital system, the Interposing Logic System (ILS), in a nuclear power plant. Four software operational profiles were considered. The results show that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values according to the operational profile.
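    The masking effect measured in this study can be illustrated at a much smaller scale: inject a single bit flip into a register value, run the rest of the computation, and check whether the output changes. A toy sketch in which the "software" is one masking-prone expression, not the ILS logic; the width and trial count are arbitrary:

```python
import random

def software(x):
    """Stand-in application step: the AND masks the high bits, so faults
    injected into those bits never reach the output (software masking)."""
    return (x & 0x0F) + 1

def masking_rate(trials=1000, width=8, seed=0):
    """Fraction of injected single-bit flips whose effect is masked."""
    rng = random.Random(seed)
    masked = 0
    for _ in range(trials):
        x = rng.getrandbits(width)
        bit = rng.randrange(width)
        faulty = x ^ (1 << bit)            # single bit-flip fault injection
        if software(faulty) == software(x):
            masked += 1
    return masked / trials

print(masking_rate())
```

    Here flips in the four high bits are always masked, so the rate sits near 0.5; in the paper the analogous rate varies with the operational profile, which is exactly its point.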

  17. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

    The problem of fault detection and isolation of parametric faults is considered in this paper. Parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  18. A dynamic integrated fault diagnosis method for power transformers.

    Science.gov (United States)

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
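    The multistep evidence-selection idea can be sketched with a tiny discrete Bayes model: maintain a posterior over failure modes and, at each step, choose the diagnostic test whose expected outcome most reduces posterior entropy. A hand-rolled sketch; the failure modes, tests, and all probabilities are invented, not transformer-specific values from the paper:

```python
import math

modes = {"winding_fault": 0.3, "tap_changer": 0.3, "oil_degradation": 0.4}  # prior
# P(symptom present | failure mode) for each candidate diagnostic test.
likelihood = {
    "gas_analysis":  {"winding_fault": 0.9, "tap_changer": 0.3, "oil_degradation": 0.7},
    "acoustic_test": {"winding_fault": 0.2, "tap_changer": 0.9, "oil_degradation": 0.1},
}

def posterior(prior, test, present):
    """Bayes update of the failure-mode distribution for one test outcome."""
    post = {m: p * (likelihood[test][m] if present else 1 - likelihood[test][m])
            for m, p in prior.items()}
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def best_test(prior):
    """Greedy evidence selection: the test minimizing expected posterior entropy."""
    def expected_h(test):
        p_yes = sum(prior[m] * likelihood[test][m] for m in prior)
        return (p_yes * entropy(posterior(prior, test, True)) +
                (1 - p_yes) * entropy(posterior(prior, test, False)))
    return min(likelihood, key=expected_h)

t = best_test(modes)
print("next test:", t)
print("posterior if positive:", posterior(modes, t, True))
```

    Repeating the loop (update posterior, pick the next most informative test) is the multistep mechanism the abstract contrasts with one-shot diagnosis.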

  19. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    Science.gov (United States)

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841

  20. Process plant alarm diagnosis using synthesised fault tree knowledge

    International Nuclear Information System (INIS)

    Trenchard, A.J.

    1990-01-01

    The development of computer-based tools to assist process plant operators in their task of fault/alarm diagnosis has received much attention over the last twenty-five years. More recently, with the emergence of Artificial Intelligence (AI) technology, research activity in this subject area has heightened. As a result, there is a great variety of fault diagnosis methodologies, using many different approaches to represent the fault propagation behaviour of process plant. These range in complexity from steady state quantitative models to more abstract definitions of the relationships between process alarms. Unfortunately, very few of the techniques have been tried and tested on process plant, and even fewer have been judged to be commercial successes. One of the outstanding problems remains the time and effort required to understand and model the fault propagation behaviour of each considered process. This thesis describes the development of an experimental knowledge based system (KBS) to diagnose process plant faults, as indicated by process variable alarms. In an attempt to minimise the modelling effort, the KBS has been designed to infer diagnoses using a fault tree representation of the process behaviour, generated using an existing fault tree synthesis package (FAULTFINDER). The process is described to FAULTFINDER as a configuration of unit models, derived from a standard model library or by tailoring existing models. The resultant alarm diagnosis methodology appears to work well for hard (non-rectifying) faults, but is likely to be less robust when attempting to diagnose intermittent faults and transient behaviour. The synthesised fault trees were found to contain the bulk of the information required for the diagnostic task; however, this needed to be augmented with extra information in certain circumstances. (author)

  1. Coulomb Stress Accumulation along the San Andreas Fault System

    Science.gov (United States)

    Smith, Bridget; Sandwell, David

    2003-01-01

    Stress accumulation rates along the primary segments of the San Andreas Fault system are computed using a three-dimensional (3-D) elastic half-space model with realistic fault geometry. The model is developed in the Fourier domain by solving for the response of an elastic half-space due to a point vector body force and analytically integrating the force from a locking depth to infinite depth. This approach is then applied to the San Andreas Fault system using published slip rates along 18 major fault strands of the fault zone. GPS-derived horizontal velocity measurements spanning the entire 1700 x 200 km region are then used to solve for apparent locking depth along each primary fault segment. This simple model fits the GPS data remarkably well (2.43 mm/yr RMS misfit), although some discrepancies occur in the Eastern California Shear Zone. The model also predicts vertical uplift and subsidence rates that are in agreement with independent geologic and geodetic estimates. In addition, shear and normal stresses along the major fault strands are used to compute Coulomb stress accumulation rate. As a result, we find earthquake recurrence intervals along the San Andreas Fault system to be inversely proportional to Coulomb stress accumulation rate, in agreement with typical coseismic stress drops of 1 - 10 MPa. This 3-D deformation model can ultimately be extended to include both time-dependent forcing and viscoelastic response.
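
The final step — converting a Coulomb stress accumulation rate into an implied recurrence interval — reduces to simple arithmetic (illustrative numbers, not values from the paper):

```python
def coulomb_stress_rate(shear_rate, normal_rate, mu_eff=0.6):
    """Coulomb stress accumulation rate: dCFS/dt = d(tau)/dt + mu' * d(sigma_n)/dt,
    with mu' an assumed effective friction coefficient (unclamping positive)."""
    return shear_rate + mu_eff * normal_rate

def recurrence_interval(stress_drop_mpa, cfs_rate_mpa_per_yr):
    """Recurrence interval implied by a typical coseismic stress drop divided by
    the Coulomb stress accumulation rate (the inverse proportionality above)."""
    return stress_drop_mpa / cfs_rate_mpa_per_yr

# Illustrative segment: ~0.02 MPa/yr accumulation, 5 MPa coseismic stress drop.
rate = coulomb_stress_rate(shear_rate=0.018, normal_rate=0.003)  # MPa/yr
T = recurrence_interval(5.0, rate)                               # ~250 yr
```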

  2. EKF-based fault detection for guided missiles flight control system

    Science.gov (United States)

    Feng, Gang; Yang, Zhiyong; Liu, Yongjin

    2017-03-01

    The guided missile flight control system is essential for guidance accuracy and kill probability. It is complicated and fragile. Since actuator faults and sensor faults can seriously affect the security and reliability of the system, fault detection for the missile flight control system is of great significance. This paper deals with the problem of fault detection for the closed-loop nonlinear model of the guided missile flight control system in the presence of disturbance. First, the fault model of the flight control system is set up; then a residual generator based on the extended Kalman filter (EKF) is designed for the Eulerian-discrete fault model. After that, the chi-square test is selected for residual evaluation, accomplishing the fault detection task for the guided missile closed-loop system. Finally, simulation results illustrate the effectiveness of the proposed approach in the case of an elevator fault.
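
A minimal sketch of residual generation plus a chi-square test, using a scalar random-walk Kalman filter as a stand-in for the paper's full EKF (all parameters hypothetical):

```python
def detect_faults(measurements, q=1e-4, r=0.04, threshold=6.63):
    """Flag samples whose normalized innovation squared exceeds the
    chi-square threshold (6.63 ~ 99th percentile, 1 degree of freedom)."""
    x, p = measurements[0], 1.0
    flags = []
    for z in measurements[1:]:
        p = p + q                      # predict (random-walk state model)
        s = p + r                      # innovation covariance
        nu = z - x                     # innovation (residual)
        flags.append(nu * nu / s > threshold)
        k = p / s                      # Kalman update
        x = x + k * nu
        p = (1.0 - k) * p
    return flags
```

Feeding nominal data followed by a sudden bias (e.g. a stuck elevator actuator) trips the flag at the fault onset.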

  3. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    International Nuclear Information System (INIS)

    Cumbest, R.J.

    2000-01-01

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential for the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of ''not capable'' reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coastal Fault Province and the related association of the ''not capable'' conclusion.

  4. Estimation of the statistical distribution of faulting in selected areas and the design of an exploration model to detect these faults. Final research report

    International Nuclear Information System (INIS)

    Brooke, J.P.

    1977-11-01

    Selected sites in the United States have been analyzed geomathematically as part of the technical support program to develop site suitability criteria for High Level Nuclear Waste (HLW) repositories. Using published geological maps and other information, statistical evaluations of the fault patterns and other significant geological features have been completed for 16 selected localities. The observed frequency patterns were compared to theoretical patterns in order to obtain a predictive model for faults at each location. In general, the patterns approximate an exponential distribution function, with the exception of Edinburgh, Scotland--the control area. The fault pattern of rocks at Edinburgh closely approximates a negative binomial frequency distribution. The range of fault occurrences encountered during the investigation varied from a low of 0.15 to a high of 10 faults per square mile. Faulting is only one factor in the overall geological evaluation of HLW sites. A general exploration program plan to aid in investigating HLW repository sites has been completed using standard mineral exploration techniques. For the preliminary examination of the suitability of potential sites, present economic conditions indicate the scanning and reconnaissance exploration stages will cost approximately $1,000,000. These would proceed in a logical sequence so that the site selected optimizes the geological factors. The reconnaissance stage of mineral exploration normally utilizes ''saturation geophysics'' to obtain complete geological information. This approach is recommended in the preliminary HLW site investigation process as the most economical and rewarding. Exploration games have been designed for potential sites in the eastern and the western U.S. The game matrix approach is recommended as a suitable technique for the allocation of resources in a search problem during this preliminary phase.

  5. Fault-Tree Modeling of Safety-Critical Network Communication in a Digitalized Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hun; Kang, Hyun Gook [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve technical self-reliance for nuclear I and C systems in Korea, the Advanced Power Reactor 1400 (APR-1400) man-machine interface system (MMIS) architecture was developed by the Korea Atomic Energy Research Institute (KAERI). As one of the systems in the developed MMIS architecture, the Engineered Safety Feature-Component Control System (ESF-CCS) employs a network communication system for the transmission of safety-critical information from group controllers (GCs) to loop controllers (LCs) to effectively accommodate the vast number of field controllers. The developed fault-tree model was then applied to several case studies. As an example of the development of a fault-tree model for ESF-CCS signal failure, the fault-tree model of ESF-CCS signal failure for CS pump PP01A in the CSAS condition was designed by considering the identified hazardous states of network failure that would result in a failure to provide input signals to the corresponding LC. The quantitative results for four case studies demonstrated that the probability of overall network communication failure, which was calculated as the sum of the failure probability associated with each failure cause, contributes up to 1.88% of the probability of ESF-CCS signal failure for the CS pump considered in the case studies.

  6. The Hanford Site's Gable Mountain structure: A comparison of the recurrence of design earthquakes based on fault slip rates and a probabilistic exposure model

    International Nuclear Information System (INIS)

    Rohay, A.C.

    1991-01-01

    Gable Mountain is a segment of the Umtanum Ridge-Gable Mountain structural trend, an east-west trending series of anticlines and one of the major geologic structures on the Hanford Site. A probabilistic seismic exposure model indicates that Gable Mountain and two adjacent segments contribute significantly to the seismic hazard at the Hanford Site. Geologic measurements of the uplift of initially horizontal (11-12 Ma) basalt flows indicate that a broad, continuous, primary anticline grew at an average rate of 0.009-0.011 mm/a, and narrow, segmented, secondary anticlines grew at rates of 0.009 mm/a at Gable Butte and 0.018 mm/a at Gable Mountain. The buried Southeast Anticline appears to have a different geometry, consisting of a single, intermediate-width anticline with an estimated growth rate of 0.007 mm/a. The recurrence rate and maximum magnitude of earthquakes were used to estimate the fault slip rate for each fault model and to determine the implied structural growth rate of the segments. The current model for Gable Mountain-Gable Butte predicts 0.004 mm/a of vertical uplift due to primary faulting and 0.008 mm/a due to secondary faulting. These rates are roughly half the structurally estimated rates for Gable Mountain, but the model does not account for the smaller secondary fold at Gable Butte. The model predicted an uplift rate for the Southeast Anticline of 0.006 mm/a, caused by the low ''fault capability'' weighting rather than a different fault geometry. The effects of previous modifications to the fault models are examined and potential future modifications are suggested. For example, the earthquake recurrence relationship used in the current exposure model has a b-value of 1.15, compared to a previous value of 0.85. This increases the implied deformation rates due to secondary fault models, and therefore supports the application of this regionally determined b-value to this fault/fold system.

  7. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices to trip. A small-scale experimental PV benchmark system has been developed in Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance condition. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
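
The current-limiting behavior described above can be seen from an explicit single-diode model (series resistance neglected; all parameters illustrative):

```python
from math import exp

def pv_module_current(v, i_ph=8.0, i_0=1e-9, n=1.3, n_cells=60, vt=0.02585):
    """Explicit single-diode PV module model:
    I = Iph - I0 * (exp(V / (n * Ns * Vt)) - 1)."""
    return i_ph - i_0 * (exp(v / (n * n_cells * vt)) - 1.0)

# Current-limiting nature: the short-circuit (fault) current is only slightly
# above the current near the maximum power point, so a fuse sized with a
# conventional margin above the operating current may never melt.
i_sc = pv_module_current(0.0)     # fault current ~ Iph
i_op = pv_module_current(37.0)    # current near the maximum power point
```

With these parameters the fault current exceeds the operating current by only a few percent, illustrating why series fuses can fail to clear PV array faults.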

  8. Observer-Based and Regression Model-Based Detection of Emerging Faults in Coal Mills

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Lin, Bao; Jørgensen, Sten Bay

    2006-01-01

    In order to improve the reliability of power plants it is important to detect faults as quickly as possible. In doing so, it is interesting to find the most efficient method. Since modeling of large-scale systems is time-consuming, it is interesting to compare a model-based method with data-driven ones...

  9. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary-based faulty memory RAM by Finocchi and Italiano... However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower... bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  10. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    Science.gov (United States)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. 
In some portions of the simulated earthquake history, events would ...
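
The Long-Term Fault Memory idea sketches naturally in code: event probability grows with stored strain, and an earthquake releases only part of that strain, so the probability drops but not to zero (all parameters illustrative, not calibrated to Cascadia or the San Andreas):

```python
import random

def simulate_ltfm(n_steps, loading=1.0, release_fraction=0.7,
                  hazard_scale=5e-4, seed=42):
    """LTFM sketch: hazard increases with accumulated strain; each event
    releases only `release_fraction` of the strain, preserving memory of
    earlier cycles. Returns the list of event times (steps)."""
    rng = random.Random(seed)
    strain, events = 0.0, []
    for t in range(n_steps):
        strain += loading
        p_quake = min(1.0, hazard_scale * strain)   # hazard grows with strain
        if rng.random() < p_quake:
            events.append(t)
            strain *= (1.0 - release_fraction)      # partial strain release
    return events

events = simulate_ltfm(20000)
intervals = [b - a for a, b in zip(events, events[1:])]
```

Because strain is never fully reset, the simulated history shows runs of short intervals (clusters) separated by longer gaps, qualitatively like the paleoseismic records described above.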

  11. Fault Severity Evaluation and Improvement Design for Mechanical Systems Using the Fault Injection Technique and Gini Concordance Measure

    Directory of Open Access Journals (Sweden)

    Jianing Wu

    2014-01-01

    Full Text Available A new fault injection and Gini concordance based method has been developed for fault severity analysis of multibody mechanical systems with respect to their dynamic properties. Fault tree analysis (FTA) is employed to roughly identify the faults that need to be considered. According to the constitution of the mechanical system, the dynamic properties can be obtained by solving equations that include the many types of faults injected using the fault injection technique. Then, the Gini concordance is used to measure the correspondence between the performance with faults and that under normal operation, thereby providing useful hints for severity ranking among subsystems for reliability design. One numerical example and a series of experiments are provided to illustrate the application of the new method. The results indicate that the proposed method can accurately model the faults and obtain correct fault severity information. Some strategies are also proposed for reliability improvement of the spacecraft solar array.
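
A sketch of the concordance measure (one common form of the Gini correlation, cov(x, rank(y)) / cov(x, rank(x)); the paper's exact statistic may differ):

```python
def _ranks(xs):
    """Rank positions of each value (ties broken by original order)."""
    order = sorted(range(len(xs)), key=xs.__getitem__)
    r = [0.0] * len(xs)
    for rank, idx in enumerate(order):
        r[idx] = float(rank)
    return r

def gini_correlation(x, y):
    """Gamma(x, y) = cov(x, rank(y)) / cov(x, rank(x)): 1.0 for a perfectly
    concordant faulty response, decreasing as it diverges from nominal."""
    rx, ry = _ranks(x), _ranks(y)
    mx = sum(x) / len(x)
    cov_xy = sum((xi - mx) * ryi for xi, ryi in zip(x, ry))
    cov_xx = sum((xi - mx) * rxi for xi, rxi in zip(x, rx))
    return cov_xy / cov_xx

def fault_severity(nominal, faulty):
    """Severity-ranking signal: low concordance with nominal => severe fault."""
    return 1.0 - gini_correlation(nominal, faulty)
```

Comparing `fault_severity` across injected faults then yields a ranking of which subsystem faults perturb the dynamic response most.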

  12. Fault slip and earthquake recurrence along strike-slip faults — Contributions of high-resolution geomorphic data

    KAUST Repository

    Zielke, Olaf

    2015-01-01

    Understanding earthquake (EQ) recurrence relies on information about the timing and size of past EQ ruptures along a given fault. Knowledge of a fault's rupture history provides valuable information on its potential future behavior, enabling seismic hazard estimates and loss mitigation. Stratigraphic and geomorphic evidence of faulting is used to constrain the recurrence of surface rupturing EQs. Analysis of the latter data sets culminated during the mid-1980s in the formulation of now-classical EQ recurrence models, routinely used today to assess seismic hazard. Within the last decade, Light Detection and Ranging (lidar) surveying technology and other high-resolution data sets became increasingly available to tectono-geomorphic studies, promising to contribute to better-informed models of EQ recurrence and slip-accumulation patterns. After reviewing motivation and background, we outline the requirements for successfully reconstructing a fault's offset accumulation pattern from geomorphic evidence. We address sources of uncertainty affecting offset measurement and advocate approaches to minimize them. A number of recent studies focus on single-EQ slip distributions and along-fault slip accumulation patterns. We put them in context with paleoseismic studies along the respective faults by comparing coefficients of variation (CV) for EQ inter-event time and slip-per-event, and find that a) single-event offsets vary over a wide range of length-scales and the sources of offset variability differ with length-scale, b) at fault-segment length-scales, single-event offsets are essentially constant, c) along-fault offset accumulation as resolved in the geomorphic record is dominated by essentially same-size, large offset increments, and d) there is generally no one-to-one correlation between the offset accumulation pattern constrained in the geomorphic record and EQ occurrence as identified in the stratigraphic record, revealing the higher resolution and preservation potential of ...
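
The coefficient of variation compared here for inter-event times and slip-per-event is straightforward to compute (illustrative event dates, not data from the study):

```python
from statistics import mean, pstdev

def coefficient_of_variation(values):
    """CV = standard deviation / mean; CV ~ 0 indicates quasi-periodic EQ
    recurrence, CV ~ 1 Poissonian behavior, CV > 1 clustered behavior."""
    return pstdev(values) / mean(values)

def inter_event_times(event_dates):
    """Intervals between consecutive (sorted) event dates."""
    dates = sorted(event_dates)
    return [b - a for a, b in zip(dates, dates[1:])]

# Hypothetical paleoseismic event ages (years before present):
quasi_periodic = [0, 250, 510, 760, 1005]   # CV near 0
clustered = [0, 30, 60, 900, 930]           # CV above 1
```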

  13. A rate-state model for aftershocks triggered by dislocation on a rectangular fault: a review and new insights

    Directory of Open Access Journals (Sweden)

    F. Catalli

    2006-06-01

    Full Text Available We compute the static displacement, stress, strain and the Coulomb failure stress produced in an elastic medium by a finite-size rectangular fault after its dislocation, with uniform stress drop but a non-uniform dislocation on the source. The time-dependent rate of triggered earthquakes is estimated by a rate-state model applied to a uniformly distributed population of faults whose equilibrium is perturbed by a stress change caused only by the first dislocation. The rate of triggered events in our simulations is exponentially proportional to the shear stress change, but the time at which the maximum rate begins to decrease varies from fractions of an hour for positive stress changes of the order of some MPa, up to more than a year for smaller stress changes. As a consequence, the final number of triggered events is proportional to the shear stress change. The model predicts that the total number of events triggered on a plane containing the fault is proportional to the 2/3 power of the seismic moment; indeed, the total number of aftershocks produced on the fault plane scales with magnitude, M, as 10^M. Including the negative contribution of the stress drop inside the source, we observe that the number of events inhibited on the fault is, in the long term, nearly identical to the number of those induced outside it, representing a sort of conservative natural rule. Considering its behavior in time, our model does not completely match the popular Omori law; in fact, the seismicity induced close to the fault edges is intense but of short duration, while that expected at large distances (up to some tens of times the fault dimensions) exhibits a much slower decay.
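
The rate-state response to a sudden stress step has a well-known closed form (Dieterich's 1994 model) consistent with the behavior described above; a sketch with illustrative parameters:

```python
from math import exp

def dieterich_rate(t, stress_change, background_rate=1.0, a_sigma=0.5, t_a=100.0):
    """Seismicity rate after a Coulomb stress step S (Dieterich 1994 form):
    R(t) = r / (1 + (exp(-S / (A*sigma)) - 1) * exp(-t / t_a)),
    with r the background rate, A*sigma the rate-state parameter (MPa) and
    t_a the aftershock decay time. Parameter values are illustrative only."""
    gamma = (exp(-stress_change / a_sigma) - 1.0) * exp(-t / t_a)
    return background_rate / (1.0 + gamma)

# Immediately after a +1 MPa step the rate is amplified by exp(S / A*sigma),
# i.e. exponentially proportional to the stress change, as in the abstract.
r0 = dieterich_rate(0.0, 1.0)
```

A negative stress change (inside the stress drop) symmetrically suppresses the rate, which is the inhibition effect the abstract describes on the fault plane itself.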

  14. Identification of active fault using analysis of derivatives with vertical second based on gravity anomaly data (Case study: Seulimeum fault in Sumatera fault system)

    Science.gov (United States)

    Hududillah, Teuku Hafid; Simanjuntak, Andrean V. H.; Husni, Muhammad

    2017-07-01

    Gravity is a non-destructive geophysical technique that has numerous applications in engineering and environmental fields, such as locating fault zones. The purpose of this study is to locate the Seulimeum fault system in Iejue, Aceh Besar (Indonesia) using the gravity technique, to correlate the result with the geologic map, and to characterize the trend pattern of the fault system. An estimate of the subsurface geological structure of the Seulimeum fault has been obtained from gravity field anomaly data. The gravity anomaly data used in this study are from TOPEX, processed up to the free-air correction. The next processing steps apply the Bouguer and terrain corrections to obtain the complete Bouguer anomaly, which is topography-dependent. Subsurface modeling was done using the Grav2DC for Windows software. The results show lower residual gravity values in the northern half of the study area compared to the southern part, indicating a fault zone pattern. The gravity residual correlates well with the geologic map, which shows the existence of the Seulimeum fault in the study area. Earthquake records can be used to differentiate active from inactive fault elements; these give an indication that the delineated fault elements are active.
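
The reduction chain mentioned above (free-air, Bouguer, terrain corrections) uses standard formulas; a sketch with the usual constants (the study's exact processing parameters are not given):

```python
def free_air_correction(h_m):
    """Free-air correction in mGal for elevation h in metres (0.3086 mGal/m)."""
    return 0.3086 * h_m

def bouguer_slab_correction(h_m, density=2.67):
    """Infinite-slab Bouguer correction in mGal: 2*pi*G*rho*h = 0.04193*rho*h,
    with rho in g/cm^3 and h in metres (2.67 g/cm^3 is the standard crustal value)."""
    return 0.04193 * density * h_m

def complete_bouguer_anomaly(g_obs, g_normal, h_m, terrain_corr=0.0, density=2.67):
    """CBA = g_obs - g_normal + FAC - BC + TC (simple standard reduction chain)."""
    return (g_obs - g_normal + free_air_correction(h_m)
            - bouguer_slab_correction(h_m, density) + terrain_corr)
```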

  15. Experimental investigation into the fault response of superconducting hybrid electric propulsion electrical power system to a DC rail to rail fault

    Science.gov (United States)

    Nolan, S.; Jones, C. E.; Munro, R.; Norman, P.; Galloway, S.; Venturumilli, S.; Sheng, J.; Yuan, W.

    2017-12-01

    Hybrid electric propulsion aircraft are proposed to improve overall aircraft efficiency, enabling future rising demands for air travel to be met. The development of appropriate electrical power systems to provide thrust for the aircraft is a significant challenge due to the much higher required power generation capacity levels and complexity of the aero-electrical power systems (AEPS). The efficiency and weight of the AEPS is critical to ensure that the benefits of hybrid propulsion are not mitigated by the electrical power train. Hence it is proposed that for larger aircraft (~200 passengers) superconducting power systems are used to meet target power densities. Central to the design of the hybrid propulsion AEPS is a robust and reliable electrical protection and fault management system. It is known from previous studies that the choice of protection system may have a significant impact on the overall efficiency of the AEPS. Hence an informed design process which considers the key trades between choice of cable and protection requirements is needed. To date the fault response to a voltage-source-converter-interfaced DC-link rail-to-rail fault in a superconducting power system has only been investigated using simulation models validated by theoretical values from the literature. This paper will present the experimentally obtained fault response for a variety of different types of superconducting tape under a rail-to-rail DC fault. The paper will then use these as a platform to identify key trades between protection requirements and cable design, providing guidelines to enable future informed decisions to optimise hybrid propulsion electrical power system and protection design.

  16. Fault Tree Analysis with Temporal Gates and Model Checking Technique for Qualitative System Safety Analysis

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2010-01-01

    Fault tree analysis (FTA) has been one of the most widely used safety analysis techniques in the nuclear industry, but it suffers from several drawbacks: it uses only static gates and hence cannot precisely capture dynamic behaviors of complex systems; it lacks rigorous semantics; and its reasoning process, which checks whether basic events really cause top events, is done manually and hence is very labor-intensive and time-consuming for complex systems. Although several attempts have been made to overcome these problems, they still cannot model absolute (actual) time because they adopt a relative time concept and capture only sequential behaviors of the system. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computation tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and we use model checking to automate the reasoning process of FTA.
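
The idea of giving fault-tree gates timing semantics can be illustrated by evaluating gates over event occurrence times (a far simpler scheme than the paper's TCTL-based gates; `None` means the event never occurred):

```python
def g_and(*times):
    """Static AND: the gate occurs when the last of its inputs has occurred."""
    return None if any(t is None for t in times) else max(times)

def g_or(*times):
    """Static OR: the gate occurs when the first of its inputs occurs."""
    ts = [t for t in times if t is not None]
    return min(ts) if ts else None

def g_pand(*times):
    """Temporal (priority-AND-like) gate: all inputs occur, in left-to-right
    order; otherwise the gate does not occur."""
    if any(t is None for t in times):
        return None
    return times[-1] if all(a <= b for a, b in zip(times, times[1:])) else None

# Hypothetical top event: pump fails AND (power is lost BEFORE backup starts).
top = g_and(5.0, g_pand(2.0, 3.0))
```

Sequences that violate the required ordering make the temporal gate (and hence the top event) unreachable, which is exactly the kind of distinction static gates cannot express.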

  17. Fault tolerant control schemes using integral sliding modes

    CERN Document Server

    Hamayun, Mirza Tariq; Alwi, Halim

    2016-01-01

    The key attribute of a Fault Tolerant Control (FTC) system is its ability to maintain overall system stability and acceptable performance in the face of faults and failures within the feedback system. In this book Integral Sliding Mode (ISM) Control Allocation (CA) schemes for FTC are described, which have the potential to maintain close to nominal fault-free performance (for the entire system response), in the face of actuator faults and even complete failures of certain actuators. Broadly, the first approach builds an ISM controller around a model of the plant, with the aim of creating a nonlinear fault tolerant feedback controller whose closed-loop performance is established during the design process. The second approach involves retro-fitting an ISM scheme to an existing feedback controller to introduce fault tolerance. This may be advantageous from an industrial perspective, because fault tolerance can be introduced without changing the existing control loops. A high fidelity benchmark model of a large transport aircraft is u...

  18. Stability of faults with heterogeneous friction properties and effective normal stress

    Science.gov (United States)

    Luo, Yingdi; Ampuero, Jean-Paul

    2018-05-01

    Abundant geological, seismological and experimental evidence of the heterogeneous structure of natural faults motivates the theoretical and computational study of the mechanical behavior of heterogeneous frictional fault interfaces. Fault zones are composed of a mixture of materials with contrasting strength, which may affect the spatial variability of seismic coupling, the location of high-frequency radiation and the diversity of slip behavior observed in natural faults. To develop a quantitative understanding of the effect of strength heterogeneity on the mechanical behavior of faults, here we investigate a fault model with spatially variable frictional properties and pore pressure. Conceptually, this model may correspond to two rough surfaces in contact along discrete asperities, the space in between being filled by compressed gouge. The asperities have different permeability than the gouge matrix and may be hydraulically sealed, resulting in different pore pressure. We consider faults governed by rate-and-state friction, with mixtures of velocity-weakening and velocity-strengthening materials and contrasts of effective normal stress. We systematically study the diversity of slip behaviors generated by this model through multi-cycle simulations and linear stability analysis. The fault can be either stable without spontaneous slip transients, or unstable with spontaneous rupture. When the fault is unstable, slip can rupture either part or the entire fault. In some cases the fault alternates between these behaviors throughout multiple cycles. We determine how the fault behavior is controlled by the proportion of velocity-weakening and velocity-strengthening materials, their relative strength and other frictional properties. We also develop, through heuristic approximations, closed-form equations to predict the stability of slip on heterogeneous faults. 
Our study shows that a fault model with heterogeneous materials and pore pressure contrasts is a viable framework.
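The linear stability analysis referred to in this abstract rests, in its simplest single-patch form, on the classic spring-slider result for rate-and-state friction: a velocity-weakening patch becomes unstable when the loading stiffness drops below a critical value k_c = (b - a)·σ̄/D_c. A minimal sketch of that criterion; the function names and parameter values are illustrative and are not taken from the paper, whose closed-form equations for heterogeneous faults are more elaborate:

```python
def critical_stiffness(a, b, sigma_eff, d_c):
    """Critical loading stiffness k_c = (b - a) * sigma_eff / d_c (Pa/m)
    for a single spring-slider obeying rate-and-state friction."""
    return (b - a) * sigma_eff / d_c

def is_unstable(k, a, b, sigma_eff, d_c):
    """A patch can nucleate spontaneous slip only if it is
    velocity-weakening (b > a) and loaded more compliantly than k_c."""
    if b <= a:  # velocity-strengthening: linearly stable
        return False
    return k < critical_stiffness(a, b, sigma_eff, d_c)

# illustrative asperity: a = 0.010, b = 0.015, 50 MPa effective normal
# stress, characteristic slip distance D_c = 100 microns
k_c = critical_stiffness(0.010, 0.015, 50e6, 100e-6)
```

Note how effective normal stress enters linearly: a hydraulically sealed asperity with elevated pore pressure has a lower σ̄ and hence a lower k_c, which is one route by which pore-pressure contrasts shift the stability budget.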

  19. Synthetic seismicity for the San Andreas fault

    Directory of Open Access Journals (Sweden)

    S. N. Ward

    1994-06-01

Full Text Available Because historical catalogs generally span only a few repetition intervals of major earthquakes, they do not provide much constraint on how regularly earthquakes recur. In order to obtain better recurrence statistics and long-term probability estimates for events M ≥ 6 on the San Andreas fault, we apply a seismicity model to this fault. The model is based on the concept of fault segmentation and the physics of static dislocations which allow for stress transfer between segments. Constraints are provided by geological and seismological observations of segment lengths, characteristic magnitudes and long-term slip rates. Segment parameters slightly modified from the Working Group on California Earthquake Probabilities allow us to reproduce observed seismicity over four orders of magnitude. The model yields quite irregular earthquake recurrence patterns. Only the largest events (M ≥ 7.5) are quasi-periodic; small events cluster. Both the average recurrence time and the aperiodicity are also a function of position along the fault. The model results are consistent with paleoseismic data for the San Andreas fault as well as a global set of historical and paleoseismic recurrence data. Thus irregular earthquake recurrence resulting from segment interaction is consistent with a large range of observations.

  20. Thermodynamic modeling of the stacking fault energy of austenitic steels

    International Nuclear Information System (INIS)

    Curtze, S.; Kuokkala, V.-T.; Oikari, A.; Talonen, J.; Haenninen, H.

    2011-01-01

The stacking fault energies (SFE) of 10 austenitic steels were determined in the temperature range 50 ≤ T ≤ 600 K by thermodynamic modeling of the Fe-Cr-Ni-Mn-Al-Si-Cu-C-N system using a modified Olson and Cohen modeling approach (Olson GB, Cohen M. Metall Trans 1976;7A:1897). The applied model accounts for each element's contribution to the Gibbs energy, the first-order excess free energies, magnetic contributions and the effect of interstitial nitrogen. Experimental SFE values from X-ray diffraction measurements were used for comparison. The effect of SFE on deformation mechanisms was also studied by electron backscatter diffraction.
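The Olson-Cohen approach cited in this record reduces, in its simplest form, to SFE = 2ρ·ΔG^(γ→ε) + 2σ, where ρ is the molar surface density of atoms in an fcc {111} plane and σ the γ/ε interfacial energy. A sketch with illustrative numbers (the paper's full model additionally builds ΔG from per-element Gibbs energies, excess and magnetic terms, and a nitrogen contribution):

```python
import math

AVOGADRO = 6.02214076e23

def planar_density(a_lattice_m):
    """Molar surface density of atoms in an fcc {111} plane,
    rho = 4 / (sqrt(3) * a^2 * N_A), in mol/m^2."""
    return 4.0 / (math.sqrt(3.0) * a_lattice_m ** 2 * AVOGADRO)

def stacking_fault_energy(delta_g_j_mol, sigma_j_m2, a_lattice_m=3.59e-10):
    """Olson-Cohen relation SFE = 2*rho*dG(gamma->eps) + 2*sigma,
    returned in mJ/m^2. Inputs here are illustrative, not the
    paper's fitted values; 3.59 Angstrom is a typical austenite
    lattice parameter."""
    rho = planar_density(a_lattice_m)
    return (2.0 * rho * delta_g_j_mol + 2.0 * sigma_j_m2) * 1e3

# e.g. dG = 200 J/mol and an interfacial energy of 8 mJ/m^2
sfe = stacking_fault_energy(200.0, 0.008)
```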

  1. Design of a fault diagnosis system for next generation nuclear power plants

    International Nuclear Information System (INIS)

    Zhao, K.; Upadhyaya, B.R.; Wood, R.T.

    2004-01-01

A new design approach for fault diagnosis is developed for next generation nuclear power plants. In the nuclear reactor design phase, data reconciliation is used as an efficient tool to determine the measurement requirements needed to achieve the specified fault diagnosis goals. In the reactor operation phase, plant measurements are collected to estimate uncertain model parameters so that a high-fidelity model can be obtained for fault diagnosis. The proposed fault detection and isolation algorithm combines the strengths of first-principle model based and historical data based fault diagnosis. Principal component analysis on the reconciled data is used to develop a statistical model for fault detection. Updating the principal component model with the most recent reconciled data yields a locally linearized model around the current plant measurements, making the approach applicable to generic nonlinear systems. Sensor fault diagnosis and process fault diagnosis are decoupled by treating process fault diagnosis as a parameter estimation problem. The developed approach has been applied to the IRIS helical coil steam generator system to monitor the operational performance of individual steam generators. The approach is general enough to design fault diagnosis systems for the next generation of nuclear power plants. (authors)
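The PCA-based detection step described in this abstract can be sketched generically: fit a principal-component model on (reconciled) normal-operation data, then flag new samples whose residual outside the retained subspace, the squared prediction error (SPE or Q statistic), is abnormally large. This is a textbook illustration, not the paper's implementation:

```python
import numpy as np

def fit_pca_monitor(X, n_comp):
    """Fit a PCA monitoring model on training data X (rows = samples).
    Returns the mean, std, and loading matrix P of the retained PCs."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return mu, sd, Vt[:n_comp].T

def spe(x, mu, sd, P):
    """Squared prediction error of one sample: the part of the
    standardized sample that the PC subspace cannot explain.
    Large values flag a fault (threshold setting omitted here)."""
    z = (x - mu) / sd
    r = z - P @ (P.T @ z)  # residual outside the PC subspace
    return float(r @ r)

# train on normal operation where sensor 2 tracks 2x sensor 1
rng = np.random.default_rng(0)
t = rng.normal(size=500)
X_train = np.column_stack([t, 2.0 * t + 0.01 * rng.normal(size=500)])
mu, sd, P = fit_pca_monitor(X_train, n_comp=1)
spe_ok = spe(np.array([1.0, 2.0]), mu, sd, P)      # consistent sample
spe_fault = spe(np.array([1.0, -2.0]), mu, sd, P)  # broken correlation
```

A sample that respects the learned correlation structure yields a near-zero SPE, while a sensor or process fault that breaks the correlation inflates it.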

  2. Micromechanics and statistics of slipping events in a granular seismic fault model

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, L de [Department of Information Engineering and CNISM, Second University of Naples, Aversa (Italy); Ciamarra, M Pica [CNR-SPIN, Dipartimento di Scienze Fisiche, Universita di Napoli Federico II (Italy); Lippiello, E; Godano, C, E-mail: dearcangelis@na.infn.it [Department of Environmental Sciences and CNISM, Second University of Naples, Caserta (Italy)

    2011-09-15

Stick-slip is investigated in a seismic fault model made of a confined granular system under shear stress via three-dimensional Molecular Dynamics simulations. We study the statistics of slipping events and, in particular, the dependence of the distribution on model parameters. The distribution consistently exhibits two regimes: an initial power law and a bump at large slips. The initial power-law decay is in agreement with the Gutenberg-Richter law characterizing real seismic occurrence. The exponent of the initial regime is largely independent of model parameters and its value is in agreement with experimental results. Conversely, the position of the bump is solely controlled by the ratio of the drive elastic constant to the system size. Large slips also become less probable in the absence of fault gouge and tend to disappear for stiff drives. A two-time force-force correlation function, and a susceptibility related to the system response to pressure changes, characterize the micromechanics of slipping events. The correlation function unveils the micromechanical changes occurring both during microslips and slips. The mechanical susceptibility encodes the magnitude of the incoming microslip. Numerical results for the cellular-automaton version of the spring-block model confirm the parameter dependence observed for the size distribution in the granular model.
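The power-law regime that this abstract compares with the Gutenberg-Richter law is usually quantified by a b-value; the standard maximum-likelihood estimator (Aki, 1965) is a one-liner. A generic sketch on a synthetic catalog, not taken from the paper:

```python
import math
import random

def aki_b_value(mags, m_c, bin_width=0.0):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki's estimator)
    for events at or above the completeness magnitude m_c. For
    catalogs binned in magnitude, pass the bin width so the estimate
    is corrected by bin_width/2."""
    sample = [m for m in mags if m >= m_c]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - (m_c - bin_width / 2.0))

# synthetic catalog drawn with a true b-value of 1.0 above m_c = 2.0
random.seed(0)
beta = 1.0 * math.log(10.0)  # exponential rate corresponding to b = 1
catalog = [2.0 + random.expovariate(beta) for _ in range(20000)]
b_hat = aki_b_value(catalog, 2.0)
```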

  3. Distributed bearing fault diagnosis based on vibration analysis

    Science.gov (United States)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
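Envelope spectra of the kind compared in this paper can be computed with an FFT-based Hilbert transform: take the magnitude of the analytic signal, remove its mean, and inspect the spectrum of the result, where bearing characteristic frequencies appear as peaks. A generic sketch; the signal parameters are made up for illustration and do not come from the paper:

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Return (freqs, amplitude spectrum) of the signal envelope,
    using an FFT-based Hilbert transform to build the analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)
    env -= env.mean()  # drop the DC component of the envelope
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, spec

# a 400 Hz "resonance" amplitude-modulated at 37 Hz, mimicking a
# periodic fault impact train
fs = 2048
t = np.arange(fs) / fs
x = (1.0 + 0.8 * np.cos(2 * np.pi * 37 * t)) * np.sin(2 * np.pi * 400 * t)
freqs, spec = envelope_spectrum(x, fs)
f_mod = freqs[1 + np.argmax(spec[1:])]  # recovered modulation frequency
```

The recovered peak sits at the modulation frequency (37 Hz here); for a real bearing this would be compared against the computed defect frequencies, and, per the paper, localized versus distributed faults produce visibly different envelope spectra.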

  4. An efficient diagnostic technique for distribution systems based on under fault voltages and currents

    Energy Technology Data Exchange (ETDEWEB)

    Campoccia, A.; Di Silvestre, M.L.; Incontrera, I.; Riva Sanseverino, E. [Dipartimento di Ingegneria Elettrica elettronica e delle Telecomunicazioni, Universita degli Studi di Palermo, viale delle Scienze, 90128 Palermo (Italy); Spoto, G. [Centro per la Ricerca Elettronica in Sicilia, Monreale, Via Regione Siciliana 49, 90046 Palermo (Italy)

    2010-10-15

Service continuity is one of the major aspects in the definition of the quality of electrical energy; for this reason, research in the field of fault diagnostics for distribution systems is expanding. Moreover, the increasing interest in the automation of modern distribution systems for management purposes gives fault diagnostics more tools to detect outages precisely and quickly. In this paper, the applicability of an efficient fault location and characterization methodology within a centralized monitoring system is discussed. The methodology, appropriate for any kind of fault, is based on the analytical model of the network lines and uses the rms values of the fundamental components taken from transient measurements of line currents and voltages at the MV/LV substations. The fault location and identification algorithm, proposed by the authors and suitably restated, has been implemented on a microprocessor-based device that can be installed at each MV/LV substation. The speed and precision of the algorithm have been tested against the errors deriving from the fundamental extraction within the prescribed fault clearing times, and against the inherent precision of the electronic device used for computation. The tests have been carried out using Matlab Simulink for simulating the faulted system. (author)
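As a much-simplified stand-in for the analytical method in this abstract, the classic reactance method estimates the fault distance from the fundamental-frequency voltage and current phasors measured at one substation; unlike the full method it neglects fault resistance and load current. All numbers below are illustrative:

```python
import math

def fault_distance_km(v_meas, i_meas, x_ohm_per_km):
    """Simple reactance method: distance from the measuring substation
    to the fault, from fundamental-frequency voltage and current
    phasors (Python complex numbers) and the line reactance per km.
    Assumes a bolted fault; fault resistance and load current are
    ignored, which the paper's full method does not do."""
    z_apparent = v_meas / i_meas
    return z_apparent.imag / x_ohm_per_km

# bolted fault 7.5 km from the substation on a 0.20 + j0.35 ohm/km line
z_km = complex(0.20, 0.35)
i_f = 800.0 * complex(math.cos(-0.5), math.sin(-0.5))  # fault current
v_f = i_f * 7.5 * z_km                                 # bus voltage
d_est = fault_distance_km(v_f, i_f, z_km.imag)
```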

5. "HOT" Faults, Fault Organization, and the Occurrence of the Largest Earthquakes

    Science.gov (United States)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    2D fault model, where we investigate different feedback mechanisms and their effect on seismicity evolution. We introduce an approach to estimate the state of a fault and thus its capability of generating a large (system-wide) event assuming likely heterogeneous distributions of hypocenters and stresses, respectively.

  6. The San Andreas Fault and a Strike-slip Fault on Europa

    Science.gov (United States)

    1998-01-01

be filled in mostly by sedimentary and erosional material deposited from above. Comparisons between faults on Europa and Earth may generate ideas useful in the study of terrestrial faulting. One theory is that fault motion on Europa is induced by the pull of variable daily tides generated by Jupiter's gravitational tug on Europa. The tidal tension opens the fault; subsequent tidal stress causes it to move lengthwise in one direction. Then the tidal forces close the fault up again. This prevents the area from moving back to its original position. If it moves forward with the next daily tidal cycle, the result is a steady accumulation of these lengthwise offset motions. Unlike Europa, here on Earth, large strike-slip faults such as the San Andreas are set in motion not by tidal pull, but by plate tectonic forces from the planet's mantle. North is to the top of the picture. The Earth picture (left) shows a LandSat Thematic Mapper image acquired in the infrared (1.55 to 1.75 micrometers) by LandSat5 on Friday, October 20th 1989 at 10:21 am. The original resolution was 28.5 meters per picture element. The Europa picture (right) is centered at 66 degrees south latitude and 195 degrees west longitude. The highest resolution frames, obtained at 40 meters per picture element with a spacecraft range of less than 4200 kilometers (2600 miles), are set in the context of lower resolution regional frames obtained at 200 meters per picture element and a range of 22,000 kilometers (13,600 miles). The images were taken on September 26, 1998 by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. 
Background information and educational context for the images can be found at URL HTTP://www.jpl.nasa.gov/galileo/sepo

  7. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M. [VTT Energy, Espoo (Finland); Hakola, T.; Antila, E. [ABB Power Oy, Helsinki (Finland); Seppaenen, M. [North-Carelian Power Company (Finland)

    1996-12-31

In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation, is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  8. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M [VTT Energy, Espoo (Finland); Hakola, T; Antila, E [ABB Power Oy (Finland); Seppaenen, M [North-Carelian Power Company (Finland)

    1998-08-01

In this chapter, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation, is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerized relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  9. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M [VTT Energy, Espoo (Finland); Hakola, T; Antila, E [ABB Power Oy, Helsinki (Finland); Seppaenen, M [North-Carelian Power Company (Finland)

    1997-12-31

In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation, is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  10. Achieving Agreement in Three Rounds with Bounded-Byzantine Faults

    Science.gov (United States)

Malekpour, Mahyar R.

    2017-01-01

A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease; it is scalable with respect to the number of nodes in the system and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
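The Oral Message algorithm that this abstract builds on can be sketched recursively. This is the classic Lamport-Shostak-Pease OM(m) formulation, not the paper's three-round variant, and the faulty-node behavior modeled here (flipping the relayed bit) is just one possible adversarial choice:

```python
from collections import Counter

def om(commander, lieutenants, value, m, faulty):
    """Classic Oral Message algorithm OM(m): tolerates m traitors
    among n >= 3m + 1 nodes. Returns each lieutenant's decided value.
    Nodes in `faulty` corrupt every value they send (one simple
    adversary; the real adversary may behave arbitrarily)."""
    def send(sender, v):
        return 1 - v if sender in faulty else v

    received = {lt: send(commander, value) for lt in lieutenants}
    if m == 0:
        return received
    # each lieutenant relays its received value to the others via OM(m-1)
    relayed = {lt: om(lt, [x for x in lieutenants if x != lt],
                      received[lt], m - 1, faulty)
               for lt in lieutenants}
    decided = {}
    for lt in lieutenants:
        votes = [received[lt]] + [relayed[src][lt]
                                  for src in lieutenants if src != lt]
        decided[lt] = Counter(votes).most_common(1)[0][0]
    return decided

# 4 nodes, 1 traitor lieutenant: OM(1) lets the good lieutenants agree
result = om(0, [1, 2, 3], 1, 1, faulty={3})
```

With a loyal commander the good lieutenants decide the commander's value; with a traitorous commander they still agree with each other, which is the interactive-consistency guarantee.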

  11. A systematic fault tree analysis based on multi-level flow modeling

    International Nuclear Information System (INIS)

    Gofuku, Akio; Ohara, Ai

    2010-01-01

The fault tree analysis (FTA) is widely applied for the safety evaluation of large-scale, mission-critical systems. However, because the power of the FTA depends strongly on the skill of the analyst, problems arise in (1) education and training, (2) unreliable quality, (3) the need for expert knowledge, and (4) updating FTA results after the reconstruction of a target system. To address these problems, many techniques that systematize FTA activities by applying computer technologies have been proposed. However, these techniques use only structural information of a target system and do not use functional information, which is one of the important properties of an artifact. The principle of FTA is to comprehensively trace cause-effect relations from a top undesirable effect to anomalous causes. This tracing is similar to the causality estimation technique that the authors proposed to find plausible counteractions to prevent or mitigate undesirable plant behavior, based on a functional modeling technique, Multilevel Flow Modeling (MFM). The authors have extended this systematic technique to construct a fault tree (FT). This paper presents an algorithm for the systematic construction of FTs based on MFM models and demonstrates the applicability of the extended technique with the FT construction results for a cooling plant of nitric acid. (author)
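Once a fault tree has been constructed, whether by hand or by MFM-based generation as in this paper, its quantitative evaluation for independent basic events uses the standard gate formulas. A minimal sketch of that evaluation step (the tree and probabilities are invented for illustration; this is not the paper's algorithm):

```python
def and_gate(probs):
    """AND gate with independent basic events: product of p_i."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """OR gate with independent basic events: 1 - prod(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

# tiny tree: TOP = pump failure OR (valve A failing AND valve B failing)
p_top = or_gate([0.01, and_gate([0.05, 0.05])])
```

Nesting these two functions evaluates any coherent tree bottom-up; real tools additionally extract minimal cut sets and handle dependent events.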

  12. Fault locator of an allyl chloride plant

    Directory of Open Access Journals (Sweden)

    Savković-Stevanović Jelenka B.

    2004-01-01

Full Text Available Process safety analysis, which includes qualitative fault event identification, the relative frequency and event probability functions, as well as consequence analysis, was performed on an allyl chloride plant. An event tree for fault diagnosis and cognitive reliability analysis, as well as a troubleshooting system, were developed. Fuzzy inductive reasoning illustrated the advantages compared to crisp inductive reasoning. A qualitative model forecast the future behavior of the system in the case of accident detection and then compared it with the actual measured data. A cognitive model combining qualitative and quantitative information by fuzzy logic of the incident scenario was derived as a fault locator for an allyl chloride plant. The obtained results showed the successful application of cognitive dispersion modeling to process safety analysis. A fuzzy inductive reasoner illustrated good performance in discriminating between different types of malfunctions. This fault locator allowed risk analysis and the construction of a fault-tolerant system. This study is the first report in the literature showing the cognitive reliability analysis method.

  13. Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System

    Science.gov (United States)

    Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.

    2006-01-01

The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system sharing some common characteristics with a NASA testbed at Stennis Space Center was used to verify the proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI has been derived for the tank system. Third, a new and general FDI procedure has been designed to distinguish process faults from sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.

  14. The constant failure rate model for fault tree evaluation as a tool for unit protection reliability assessment

    International Nuclear Information System (INIS)

    Vichev, S.; Bogdanov, D.

    2000-01-01

The purpose of this paper is to introduce the fault tree analysis method as a tool for unit protection reliability estimation. The constant failure rate model is applied for reliability assessment, and especially availability assessment. For that purpose, an example of a unit primary equipment structure and a fault tree for a simplified unit protection system are presented. (author)
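The constant failure rate model referenced above gives closed-form reliability and steady-state availability. A sketch with illustrative numbers (not data from the paper's protection system):

```python
import math

def reliability(failure_rate, t):
    """Constant failure rate model: R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t)

def steady_state_availability(mttf, mttr):
    """A = MTTF / (MTTF + MTTR); with constant failure and repair
    rates this equals mu / (lambda + mu)."""
    return mttf / (mttf + mttr)

# illustrative protection unit: lambda = 2e-5 failures/h
# (MTTF = 50 000 h) and a mean time to repair of 8 h
avail = steady_state_availability(1.0 / 2e-5, 8.0)
```

These per-component figures feed the fault-tree gates as basic-event probabilities when assessing the protection scheme as a whole.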

  15. Active fault tolerance control of a wind turbine system using an unknown input observer with an actuator fault

    Directory of Open Access Journals (Sweden)

    Li Shanzhi

    2018-03-01

    Full Text Available This paper proposes a fault tolerant control scheme based on an unknown input observer for a wind turbine system subject to an actuator fault and disturbance. Firstly, an unknown input observer for state estimation and fault detection using a linear parameter varying model is developed. By solving linear matrix inequalities (LMIs and linear matrix equalities (LMEs, the gains of the unknown input observer are obtained. The convergence of the unknown input observer is also analysed with Lyapunov theory. Secondly, using fault estimation, an active fault tolerant controller is applied to a wind turbine system. Finally, a simulation of a wind turbine benchmark with an actuator fault is tested for the proposed method. The simulation results indicate that the proposed FTC scheme is efficient.

  16. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  17. Evaluation of Earthquake-Induced Effects on Neighbouring Faults and Volcanoes: Application to the 2016 Pedernales Earthquake

    Science.gov (United States)

    Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.

    2017-12-01

It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, the transfer of these results to the authorities managing the post-disaster situation is not straightforward, and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and to create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. Then we use this model to evaluate the moment deficit on the subduction interface and changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussions with local authorities.

  18. Adaptive and technology-independent architecture for fault-tolerant distributed AAL solutions.

    Science.gov (United States)

    Schmidt, Michael; Obermaisser, Roman

    2018-04-01

Today's architectures for Ambient Assisted Living (AAL) must cope with a variety of challenges like flawless sensor integration and time synchronization (e.g. for sensor data fusion) while abstracting from the underlying technologies at the same time. Furthermore, an architecture for AAL must be capable of managing distributed application scenarios in order to support elderly people in all situations of their everyday life. This encompasses not just life at home but also, in particular, the mobility of elderly people (e.g. when going for a walk or doing sports). Within this paper we introduce a novel architecture for distributed AAL solutions whose design follows a modern Microservices approach by providing small core services instead of a monolithic application framework. The architecture comprises core services for sensor integration and service discovery while supporting several communication models (periodic, sporadic, streaming). We extend the state of the art by introducing a fault-tolerance model for our architecture on the basis of a fault hypothesis describing the fault-containment regions (FCRs) with their respective failure modes and failure rates in order to support safety-critical AAL applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Methods for Probabilistic Fault Diagnosis: An Electrical Power System Case Study

    Science.gov (United States)

    Ricks, Brian W.; Mengshoel, Ole J.

    2009-01-01

    Health management systems that more accurately and quickly diagnose faults that may occur in different technical systems on-board a vehicle will play a key role in the success of future NASA missions. We discuss in this paper the diagnosis of abrupt continuous (or parametric) faults within the context of probabilistic graphical models, more specifically Bayesian networks that are compiled to arithmetic circuits. This paper extends our previous research, within the same probabilistic setting, on diagnosis of abrupt discrete faults. Our approach and diagnostic algorithm ProDiagnose are domain-independent; however we use an electrical power system testbed called ADAPT as a case study. In one set of ADAPT experiments, performed as part of the 2009 Diagnostic Challenge, our system turned out to have the best performance among all competitors. In a second set of experiments, we show how we have recently further significantly improved the performance of the probabilistic model of ADAPT. While these experiments are obtained for an electrical power system testbed, we believe they can easily be transitioned to real-world systems, thus promising to increase the success of future NASA missions.
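The probabilistic inference that this paper performs with Bayesian networks compiled to arithmetic circuits reduces, for a single fault hypothesis and a single observation, to Bayes' rule. A toy stand-in (the numbers are invented; ProDiagnose itself operates on full ADAPT network models):

```python
def fault_posterior(prior, p_obs_given_fault, p_obs_given_nominal):
    """Posterior probability of a fault after one observation, by
    Bayes' rule: P(F|o) = P(o|F)P(F) / [P(o|F)P(F) + P(o|~F)P(~F)].
    A one-node illustration of Bayesian-network diagnosis."""
    num = prior * p_obs_given_fault
    return num / (num + (1.0 - prior) * p_obs_given_nominal)

# rare fault (1% prior); the observed sensor reading is 18x more
# likely under the fault hypothesis than under nominal behavior
p_fault = fault_posterior(0.01, 0.90, 0.05)
```

Even a strongly diagnostic observation leaves a rare fault below 20% posterior here, which is why sequential evidence accumulation matters in diagnosis.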

  20. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  1. Perspective View, San Andreas Fault

    Science.gov (United States)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. 
The Landsat image was provided by the United States Geological Survey's Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota. SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour.

  2. Solving fault diagnosis problems linear synthesis techniques

    CERN Document Server

    Varga, Andreas

    2017-01-01

    This book addresses fault detection and isolation topics from a computational perspective. Unlike most existing literature, it bridges the gap between the existing well-developed theoretical results and the realm of reliable computational synthesis procedures. The model-based approach to fault detection and diagnosis has been the subject of ongoing research for the past few decades. While the theoretical aspects of fault diagnosis on the basis of linear models are well understood, most of the computational methods proposed for the synthesis of fault detection and isolation filters are not satisfactory from a numerical standpoint. Several features make this book unique in the fault detection literature: Solution of standard synthesis problems in the most general setting, for both continuous- and discrete-time systems, regardless of whether they are proper or not; consequently, the proposed synthesis procedures can solve a specific problem whenever a solution exists Emphasis on the best numerical algorithms to ...

  3. Fault diagnostics of dynamic system operation using a fault tree based method

    International Nuclear Information System (INIS)

    Hurdle, E.E.; Bartlett, L.M.; Andrews, J.D.

    2009-01-01

    For conventional systems, their availability can be considerably improved by reducing the time taken to restore the system to the working state when faults occur. Fault identification can be a significant proportion of the time taken in the repair process. Having diagnosed the problem the restoration of the system back to its fully functioning condition can then take place. This paper expands the capability of previous approaches to fault detection and identification using fault trees for application to dynamically changing systems. The technique has two phases. The first phase is modelling and preparation carried out offline. This gathers information on the effects that sub-system failure will have on the system performance. Causes of the sub-system failures are developed in the form of fault trees. The second phase is application. Sensors are installed on the system to provide information about current system performance from which the potential causes can be deduced. A simple system example is used to demonstrate the features of the method. To illustrate the potential for the method to deal with additional system complexity and redundancy, a section from an aircraft fuel system is used. A discussion of the results is provided.

  4. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    Science.gov (United States)

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin and distribution of clays in fault rocks is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that the formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  5. Investigating the ancient landscape and Cenozoic drainage development of southern Yukon (Canada), through restoration modeling of the Cordilleran-scale Tintina Fault.

    Science.gov (United States)

    Hayward, N.; Jackson, L. E.; Ryan, J. J.

    2017-12-01

    This study of southern Yukon (Canada) challenges the notion that the landscape in the long-lived, tectonically active, northern Canadian Cordillera is implicitly young. The impact of Cenozoic displacement along the continental-scale Tintina Fault on the development of the Yukon River and drainage basins of central Yukon is investigated through geophysical and hydrological modeling of digital terrain model data. Regional geological evidence suggests that the age of the planation of the Yukon plateaus is at least Late Cretaceous, rather than Neogene as previously concluded, and that there has been little penetrative deformation or net incision in the region since the late Mesozoic. The Tintina Fault has been interpreted as having experienced 430 km of dextral displacement, primarily during the Eocene. However, the alignment of river channels across the fault at specific displacements, coupled with recent seismic events and related fault activity, indicates that the fault may have moved in stages over a longer time span. Topographic restoration and hydrological models show that the drainage of the Yukon River northwestward into Alaska via the ancestral Kwikhpak River was only possible at restored displacements of up to 50-55 km on the Tintina Fault. We interpret the published drainage reversals convincingly attributed to the effects of Pliocene glaciation as an overprint on earlier Yukon River reversals or diversions attributed to tectonic displacements along the Tintina Fault. At restored fault displacements of between 230 and 430 km, our models illustrate that paleo Yukon River drainage may conceivably have flowed eastward into the Atlantic Ocean via an ancestral Liard River, which was a tributary of the paleo Bell River system. The revised drainage evolution, if correct, requires wide-reaching reconsideration of surficial geology deposits, the flow direction and channel geometries of the region's ancient rivers, and, importantly, exploration strategies for placer gold.

  6. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    Science.gov (United States)

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds in remote areas. Faults occur frequently in these harsh environments, and the on-site staff generally lack professional knowledge and pay little attention to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent fault diagnosis method based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents the logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth investigating. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
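A minimal sketch of the max-min fuzzy inference step underlying the symptom-to-fault mapping described above. The rules and membership degrees are invented; in the paper the fuzzy neural network learns these mappings, whereas here they are hard-coded:

```python
# Rules extracted from (hypothetical) fault trees: required symptoms -> fault.
RULES = [
    ({"water_temp_high", "sensor_silent"}, "power_supply_fault"),
    ({"sensor_silent"},                    "cable_break"),
    ({"water_temp_high"},                  "heater_stuck_on"),
]

def infer(memberships):
    """memberships: symptom -> fuzzy degree in [0, 1].

    AND across a rule's symptoms is min; OR across rules for the
    same fault is max (classic max-min composition).
    """
    scores = {}
    for symptoms, fault in RULES:
        strength = min(memberships.get(s, 0.0) for s in symptoms)
        scores[fault] = max(scores.get(fault, 0.0), strength)
    return scores

scores = infer({"water_temp_high": 0.9, "sensor_silent": 0.4})
```

Here the strongly fuzzified high-temperature symptom makes "heater_stuck_on" the top-ranked diagnosis; a trained network would replace the hand-set memberships and rule weights.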

  7. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture

    Directory of Open Access Journals (Sweden)

    Yingyi Chen

    2017-01-01

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds in remote areas. Faults occur frequently in these harsh environments, and the on-site staff generally lack professional knowledge and pay little attention to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent fault diagnosis method based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents the logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth investigating. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  8. Asperity-Type Potential Foreshock Sources Driven by Nucleation-Induced Creep within a Rate-and-State Fault Model

    Science.gov (United States)

    Higgins, N.; Lapusta, N.

    2016-12-01

    What physical mechanism drives the occurrence of foreshocks? Many studies have suggested that slow slip from mainshock nucleation is a necessary ingredient for explaining foreshock observations. We explore this view by investigating asperity-type foreshock sources driven by nucleation-induced creep within rate-and-state fault models, and numerically simulate their behavior over many rupture cycles. Inspired by the unique laboratory experiments of earthquake nucleation and rupture conducted on a meter-scale slab of granite by McLaskey and colleagues, we model potential foreshock sources as "bumps" on the fault interface by assigning a significantly higher normal compression and, in some cases, increased smoothness (lower characteristic slip) over small patches within a seismogenic fault. To study the mechanics of isolated patch-induced seismic events preceding the mainshock, we separate these patches sufficiently in space. The simulation results show that our rate-and-state fault model with patches of locally different properties, driven by the slow nucleation of the mainshock, is indeed able to produce isolated microseismicity before the mainshock. Remarkably, the stress drops of these precursory events are compatible with observations and approximately independent of the patch compression, despite the wide range of elevated patch compression used in different simulations. We find that this unexpected property of the stress drops is due to two factors. First, failure of stronger patches ruptures further into the surrounding fault, keeping the average stress drop down. Second, patches close to their local nucleation size relieve a significant amount of stress via aseismic pre-slip, which also helps to keep the stress drop down. Our current work is directed towards investigating the seismic signature of such events and the potential differences with other types of microseismicity.
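The rate-and-state framework referenced above can be illustrated with the standard aging-law friction equations in a velocity-step test; the parameter values below are generic laboratory-scale numbers, not those of the paper:

```python
import math

# Generic rate-and-state parameters (illustrative): direct effect a,
# evolution effect b, reference friction mu0, reference velocity v0,
# characteristic slip distance Dc. a < b gives velocity weakening.
a, b, mu0, v0, Dc = 0.010, 0.015, 0.6, 1e-6, 1e-5

def friction(v, theta):
    """Rate-and-state friction coefficient at slip rate v, state theta."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / Dc)

def evolve_theta(theta, v, dt):
    """Aging law: d(theta)/dt = 1 - v * theta / Dc (forward Euler step)."""
    return theta + dt * (1.0 - v * theta / Dc)

# Velocity step: start at steady state for v0, jump to 10*v0, and let the
# state variable relax; friction approaches mu0 + (a - b) * ln(v / v0).
v, theta = 10 * v0, Dc / v0
for _ in range(10000):
    theta = evolve_theta(theta, v, 1e-3)
mu_ss = mu0 + (a - b) * math.log(v / v0)
```

Because a < b here, the steady-state friction at the faster rate is lower than mu0 (velocity weakening), which is the ingredient that allows patches in such models to fail seismically.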

  9. Fault detection and fault tolerant control of a smart base isolation system with magneto-rheological damper

    International Nuclear Information System (INIS)

    Wang, Han; Song, Gangbing

    2011-01-01

    Fault detection and isolation (FDI) in real-time systems can provide early warnings for faulty sensor and actuator signals to prevent events that lead to catastrophic failures. The main objective of this paper is to develop FDI and fault tolerant control techniques for base isolation systems with magneto-rheological (MR) dampers. Thus, this paper presents a fixed-order FDI filter design procedure based on linear matrix inequalities (LMI). The necessary and sufficient conditions for the existence of a solution for detecting and isolating faults using the H∞ formulation are provided in the proposed filter design. Furthermore, an FDI-filter-based fuzzy fault tolerant controller (FFTC) for a base isolation structure model was designed to preserve the pre-specified performance of the system in the presence of various unknown faults. Simulation and experimental results demonstrated that the designed filter can successfully detect and isolate faults from displacement sensors and accelerometers while maintaining excellent performance of the base isolation technology under faulty conditions.

  10. Model-Based Sensor Placement for Component Condition Monitoring and Fault Diagnosis in Fossil Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mobed, Parham [Texas Tech Univ., Lubbock, TX (United States); Pednekar, Pratik [West Virginia Univ., Morgantown, WV (United States); Bhattacharyya, Debangsu [West Virginia Univ., Morgantown, WV (United States); Turton, Richard [West Virginia Univ., Morgantown, WV (United States); Rengaswamy, Raghunathan [Texas Tech Univ., Lubbock, TX (United States)

    2016-01-29

    Design and operation of energy producing, near “zero-emission” coal plants has become a national imperative. This report on model-based sensor placement describes a transformative two-tier approach to identify the optimum placement, number, and type of sensors for condition monitoring and fault diagnosis in fossil energy system operations. The algorithms are tested on a high fidelity model of the integrated gasification combined cycle (IGCC) plant. For a condition monitoring network, whether equipment should be considered at a unit level or a systems level depends upon the criticality of the process equipment, its likelihood of failure, and the level of resolution desired for any specific failure. Because of the presence of a high fidelity model at the unit level, a sensor network can be designed to monitor the spatial profile of the states and estimate fault severity levels. In an IGCC plant, besides the gasifier, the sour water gas shift (WGS) reactor plays an important role. In view of this, condition monitoring of the sour WGS reactor is considered at the unit level, while a detailed plant-wide model of the gasification island, including the sour WGS reactor and the Selexol process, is considered for fault diagnosis at the system level. Finally, the developed algorithms unify the two levels and identify an optimal sensor network that maximizes the effectiveness of the overall system-level fault diagnosis and component-level condition monitoring. This work could have a major impact on the design and operation of future fossil energy plants, particularly at the grassroots level where the sensor network is yet to be identified. In addition, the same algorithms can be further enhanced for use in retrofits, where the objectives could be upgrading (adding more sensors) or relocating existing sensors.
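As a toy illustration of the placement idea (not the report's actual algorithm), a greedy selection over an invented fault-signature matrix shows how sensors can be chosen to maximize diagnostic coverage within a budget; all sensor and fault names are hypothetical:

```python
# Hypothetical fault signatures: sensor -> set of faults it can observe.
SIGNATURES = {
    "T_gasifier_exit": {"refractory_wear", "slag_buildup"},
    "P_wgs_inlet":     {"catalyst_decay"},
    "dP_selexol":      {"solvent_loss", "slag_buildup"},
    "T_wgs_bed":       {"catalyst_decay", "thermocouple_drift"},
}

def greedy_placement(budget):
    """Pick up to `budget` sensors, each time taking the one that covers
    the most not-yet-covered faults; stop early if nothing new is gained."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(SIGNATURES, key=lambda s: len(SIGNATURES[s] - covered))
        if not SIGNATURES[best] - covered:
            break
        chosen.append(best)
        covered |= SIGNATURES[best]
    return chosen, covered

chosen, covered = greedy_placement(3)
```

Greedy set cover is a common baseline for this kind of problem; the report's two-tier formulation additionally weighs unit-level severity estimation against system-level diagnosis.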

  11. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exist, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
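The Strauss prior can be illustrated with a toy birth-death Metropolis-Hastings sampler for a plain (unmarked) Strauss process on the unit square; the marked version used in the paper additionally attaches geometry attributes to each point. The values of beta (intensity), gamma (pairwise repulsion), and the interaction radius are purely illustrative:

```python
import math
import random

beta, gamma, r = 50.0, 0.2, 0.1  # Strauss density f(x) ∝ beta^n * gamma^s(x)

def close_pairs(pts):
    """s(x): number of point pairs closer than the interaction radius r."""
    return sum(math.dist(p, q) < r for i, p in enumerate(pts) for q in pts[i + 1:])

def step(pts):
    """One birth-death Metropolis-Hastings step on the unit square."""
    if random.random() < 0.5:  # propose a birth
        new = (random.random(), random.random())
        ratio = beta / (len(pts) + 1) * gamma ** (close_pairs(pts + [new]) - close_pairs(pts))
        return pts + [new] if random.random() < ratio else pts
    if pts:                    # propose a death
        i = random.randrange(len(pts))
        kept = pts[:i] + pts[i + 1:]
        ratio = len(pts) / beta * gamma ** (close_pairs(kept) - close_pairs(pts))
        return kept if random.random() < ratio else pts
    return pts

random.seed(0)
pts = []
for _ in range(3000):
    pts = step(pts)
```

Because gamma < 1 penalizes close pairs, the sampled configurations are more regular than a Poisson process, which is one way to encode "faults repel each other at short range" as prior knowledge.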

  12. Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems

    Science.gov (United States)

    Sandwell, David; Smith-Konter, Bridget

    2018-05-01

    We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System, where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
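The computational trick the abstract alludes to, evaluating a spatial convolution as a Fourier-domain product, can be sketched in 1D as follows. The Gaussian stand-in for the point-source response and all sizes are invented; the real code uses the elastic/viscoelastic Green's functions:

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2
greens = np.exp(-x**2 / 50.0)   # placeholder point-source response, peak at x = 0
slip = np.zeros(n)
slip[100:130] = 1.0             # a slip patch (distribution of force couples)

# Convolution theorem: one multiply in the Fourier domain, O(n log n),
# instead of an O(n^2) spatial convolution. ifftshift moves the kernel
# center from index n//2 to index 0 so the output is not shifted.
response = np.fft.ifft(np.fft.fft(np.fft.ifftshift(greens)) * np.fft.fft(slip)).real
```

For a 3D grid the same idea applies dimension by dimension, which is what makes an arbitrarily complex force-couple distribution cheap to evaluate.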

  13. Mesoscale models for stacking faults, deformation twins and martensitic transformations: Linking atomistics to continuum

    Science.gov (United States)

    Kibey, Sandeep A.

    We present a hierarchical approach that spans multiple length scales to describe defect formation---in particular, formation of stacking faults (SFs) and deformation twins---in fcc crystals. We link the energy pathways (calculated here via ab initio density functional theory, DFT) associated with formation of stacking faults and twins to corresponding heterogeneous defect nucleation models (described through mesoscale dislocation mechanics). Through the generalized Peierls-Nabarro model, we first correlate the width of intrinsic SFs in fcc alloy systems to their nucleation pathways, called generalized stacking fault energies (GSFE). We then establish a qualitative dependence of twinning tendency in fcc metals and alloys---specifically, in pure Cu and dilute Cu-xAl (x = 5.0 and 8.3 at.%)---on their twin-energy pathways, called the generalized planar fault energies (GPFE). We also link the twinning behavior of Cu-Al alloys to their electronic structure by determining the effect of solute Al on the valence charge density redistribution at the SF through ab initio DFT. Further, while several efforts have been undertaken to incorporate twinning for predicting the stress-strain response of fcc materials, a fundamental law for critical twinning stress has not yet emerged. We resolve this long-standing issue by linking quantitatively the twin-energy pathways (GPFE) obtained via ab initio DFT to heterogeneous, dislocation-based twin nucleation models. We establish an analytical expression that quantitatively predicts the critical twinning stress in fcc metals in agreement with experiments without requiring any empiricism at any length scale. Our theory connects twinning stress to twin-energy pathways and predicts a monotonic relation between stress and unstable twin stacking fault energy, revealing the physics of twinning. We further demonstrate that the theory holds for fcc alloys as well. Our theory inherently accounts for the directional nature of twinning which available

  14. Fault Detection and Load Distribution for the Wind Farm Challenge

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Larsen, Jesper Abildgaard; Stoustrup, Jakob

    2014-01-01

    In this paper a fault detection system and a fault tolerant controller for a wind farm model are designed and tested. The wind farm model is taken from the wind farm challenge, a publicly available challenge in which a wind farm consisting of nine turbines is proposed. The goal of the challenge...... normal and faulty conditions. Thus a fault detection system and a fault tolerant controller have been designed and combined. The fault tolerant control system has then been tested and compared to the reference system, and shows improvement on all measures....

  15. Model-based fault detection for generator cooling system in wind turbines using SCADA data

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Kinnaert, Michel

    2016-01-01

    In this work, an early fault detection system for the generator cooling of wind turbines is presented and tested. It relies on a hybrid model of the cooling system. The parameters of the generator model are estimated by an extended Kalman filter. The estimated parameters are then processed by an ...

  16. Estimation of Faults in DC Electrical Power System

    Science.gov (United States)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
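The ℓ1-regularized estimation idea can be sketched with a plain ISTA (iterative soft-thresholding) loop on a made-up linear model y = Ax + noise with a sparse fault vector x. The paper solves a richer convex program in real time; this only illustrates why the ℓ1 penalty yields sparse, accurate fault estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))       # made-up linear circuit/measurement model
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]           # two simultaneous faults (sparse vector)
y = A @ x_true + 0.01 * rng.standard_normal(40)

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.2
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
x = np.zeros(20)
for _ in range(500):
    g = x - (A.T @ (A @ x - y)) / L                         # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

support = np.flatnonzero(np.abs(x) > 0.1)  # detected fault locations
```

Soft thresholding drives all non-fault entries exactly to zero, so the nonzero support of the estimate directly names the faulted components, which is the diagnostic output the abstract describes.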

  17. Detection and Identification of Loss of Efficiency Faults of Flight Actuators

    Directory of Open Access Journals (Sweden)

    Ossmann Daniel

    2015-03-01

    We propose linear parameter-varying (LPV) model-based approaches to the synthesis of robust fault detection and diagnosis (FDD) systems for loss of efficiency (LOE) faults of flight actuators. The proposed methods are applicable to several types of parametric (or multiplicative) LOE faults, such as actuator disconnection, surface damage, actuator power loss or stall loads. For the detection of these parametric faults, advanced LPV-model detection techniques are proposed, which implicitly provide fault identification information. Fast detection of intermittent stall loads (seen as nuisances rather than faults) is important in enhancing the performance of various fault detection schemes dealing with large input signals. For this case, a dedicated fast identification algorithm is devised. The developed FDD systems are tested on a nonlinear actuator model which is implemented in a full nonlinear aircraft simulation model. This enables the validation of the FDD system’s detection and identification characteristics under realistic conditions.

  18. How do horizontal, frictional discontinuities affect reverse fault-propagation folding?

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-09-01

    The development of new reverse faults and related folds is strongly controlled by the mechanical characteristics of the host rocks. In this study we analyze the impact of a specific kind of anisotropy, i.e. thin mechanical and frictional discontinuities, on the development of reverse faults and of the associated folds using scaled physical models. We perform analog modeling by introducing one or two initially horizontal, thin discontinuities above an initially blind fault dipping at 30° in one case and 45° in another, and then compare the results with those obtained from a fully isotropic model. The experimental results show that the occurrence of thin discontinuities affects both the development and propagation of new faults and the shape of the associated folds. New faults 1) accelerate or decelerate their propagation depending on the location of their tips with respect to the discontinuities, 2) cross the discontinuities at a characteristic angle (∼90°), and 3) produce folds with different shapes, resulting not only from the dip of the new faults but also from their non-linear propagation history. Our results may have a direct impact on future kinematic models, especially those aimed at reconstructing the tectonic history of faults that developed in layered rocks or in regions affected by pre-existing faults.

  19. Fault-tolerant and QoS based Network Layer for Security Management

    Directory of Open Access Journals (Sweden)

    Mohamed Naceur Abdelkrim

    2013-07-01

    Wireless sensor networks have profound effects on many application fields, such as security management, which need immediate, fast, and energy-efficient routes. In this paper, we define a fault-tolerant and QoS-based network layer for security management of a chemical products warehouse, which can be classified as a real-time and mission-critical application. This application generates routine data packets and alert packets caused by unusual events, which carry high-reliability, short end-to-end delay, and low packet loss rate constraints. After each node computes its hop count and builds its neighbor table in the initialization phase, packets can be routed to the sink. We use the FELGossiping protocol for routine data packets and a node-disjoint multipath routing protocol for alert packets. Furthermore, we utilize the information gathering phase of FELGossiping to update the neighbor tables and detect failed nodes, and we adapt to network topology changes by rerunning the initialization phase when chemical units are added to or removed from the warehouse. Analysis shows that the network layer is energy efficient and can meet the QoS constraints of unusual-event packets.
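The initialization phase described above (hop counts plus neighbor tables) can be sketched as a BFS flood from the sink over an invented topology; alert packets are then forwarded to any neighbor with a smaller hop count, which is what makes node-disjoint multipath routing possible:

```python
from collections import deque

# Hypothetical warehouse topology: node -> list of radio neighbors.
LINKS = {
    "sink": ["a", "b"],
    "a": ["sink", "c"],
    "b": ["sink", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def build_hop_counts():
    """Initialization phase: flood from the sink (modeled here as BFS)."""
    hops, todo = {"sink": 0}, deque(["sink"])
    while todo:
        u = todo.popleft()
        for v in LINKS[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                todo.append(v)
    return hops

def next_hops(node, hops):
    """Forwarding candidates for alert packets: all neighbors closer to the sink."""
    return [v for v in LINKS[node] if hops[v] < hops[node]]

hops = build_hop_counts()
```

Node "c" ends up with two disjoint downstream paths (via "a" and via "b"), so an alert can be duplicated across both when reliability matters, at the cost of extra energy.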

  20. Performance of grid connected DFIG during recurring symmetrical faults using Internal Model Controller based Enhanced Field Oriented Control

    Directory of Open Access Journals (Sweden)

    D.V.N.Ananth

    2016-06-01

    Modern grid rules force the DFIG to withstand and operate during single as well as multiple low-voltage grid faults. The system must not lose synchronism during any type of fault for a given time period. This withstanding capacity is called low-voltage ride-through (LVRT). To improve performance during LVRT, the enhanced field oriented control (EFOC) method is adopted in the rotor side converter. This method helps in improving power transfer capability during steady state and gives better dynamic and transient stability during abnormal conditions. In this technique, the rotor flux reference is changed from synchronous speed to some smaller speed or zero during the fault, so as to inject current at the rotor slip frequency. In this process, the DC-offset component of the flux is prevented from decaying to a lower value during faults and is maintained. This offset decay of the flux is oscillatory in conventional FOC, whereas in EFOC with an internal model controller the flux can damp quickly, not only for a single fault but also during multiple faults. This strategy can keep the stator and rotor current waveforms sinusoidal, without distortion, during and after the fault. It gives better damped torque oscillations and control of rotor speed and generator flux during and after the fault. The fluctuations in the DC bus voltage across the capacitor are also controlled using the proposed EFOC technique. The system performance with under-voltage grid faults of 30% and 60% of the rated voltage occurring at the point of common coupling between 1 and 1.25 seconds, and with another fault between 1.6 and 1.85 seconds, is analyzed using simulation studies.

  1. Fault tolerant control for uncertain systems with parametric faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2006-01-01

    A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults...... is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation....

  2. Failure mode effect analysis and fault tree analysis as a combined methodology in risk management

    Science.gov (United States)

    Wessiani, N. A.; Yoshio, F.

    2018-04-01

    There have been many studies reporting the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most studies choose only one of these two methods in their risk management methodology. On the other hand, combining the two methods reduces the drawbacks each has when implemented separately. This paper aims to combine the methodologies of FMEA and FTA in assessing risk. A case study in a metal company illustrates how this methodology can be implemented. In the case study, the combined methodology assesses the internal risks that occur in the production process. Further, those internal risks should be mitigated based on their level of risk.
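One way to picture the combined methodology: FMEA ranks failure modes by risk priority number (RPN = severity × occurrence × detection), and the highest-ranked modes become basic events of a fault tree whose top-event probability is evaluated. All failure-mode names and numbers below are invented:

```python
# FMEA worksheet: (failure mode, severity, occurrence, detection, probability).
fmea = [
    ("die_misalignment", 8, 5, 6, 0.02),
    ("furnace_overheat", 9, 3, 4, 0.01),
    ("coolant_leak",     5, 2, 7, 0.05),
]

# FMEA step: rank failure modes by RPN = S * O * D.
ranked = sorted(fmea, key=lambda m: m[1] * m[2] * m[3], reverse=True)

def or_gate(probs):
    """FTA step: top event fires if any independent basic event fires."""
    p_none = 1.0
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

# Build the fault tree over the two worst-ranked modes only.
top_event_p = or_gate([m[4] for m in ranked[:2]])
```

Using FMEA to prune which events enter the tree keeps the FTA tractable, while the tree supplies the event-combination logic that a flat FMEA worksheet lacks.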

  3. LAMPF first-fault identifier for fast transient faults

    International Nuclear Information System (INIS)

    Swanson, A.R.; Hill, R.E.

    1979-01-01

    The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault, or cause of beam shutdown, while disregarding as many as 50 secondary fault indications that occur as a result of the shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required.

  4. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    Science.gov (United States)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  5. Predictive fault-tolerant control of an all-thruster satellite in 6-DOF motion via neural network model updating

    Science.gov (United States)

    Tavakoli, M. M.; Assadian, N.

    2018-03-01

    The problem of controlling an all-thruster spacecraft in the coupled translational-rotational motion in presence of actuators fault and/or failure is investigated in this paper. The nonlinear model predictive control approach is used because of its ability to predict the future behavior of the system. The fault/failure of the thrusters changes the mapping between the commanded forces to the thrusters and actual force/torque generated by the thruster system. Thus, the basic six degree-of-freedom kinetic equations are separated from this mapping and a set of neural networks are trained off-line to learn the kinetic equations. Then, two neural networks are attached to these trained networks in order to learn the thruster commands to force/torque mappings on-line. Different off-nominal conditions are modeled so that neural networks can detect any failure and fault, including scale factor and misalignment of thrusters. A simple model of the spacecraft relative motion is used in MPC to decrease the computational burden. However, a precise model by the means of orbit propagation including different types of perturbation is utilized to evaluate the usefulness of the proposed approach in actual conditions. The numerical simulation shows that this method can successfully control the all-thruster spacecraft with ON-OFF thrusters in different combinations of thruster fault and/or failure.

  6. Integrating cyber attacks within fault trees

    International Nuclear Information System (INIS)

    Nai Fovino, Igor; Masera, Marcelo; De Cian, Alessio

    2009-01-01

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.
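    Assuming independent basic events, the combined fault-tree/attack-tree evaluation can be sketched as a small gate evaluator in which leaves carry either random-failure probabilities or estimated attack-success probabilities (all numeric values below are illustrative, not from the paper):

```python
def event_prob(node):
    """Evaluate an AND/OR tree of independent basic events.

    A node is either a float (leaf probability: random failure or
    estimated attack success) or a tuple (gate, children).
    """
    if isinstance(node, (int, float)):
        return float(node)
    gate, children = node
    probs = [event_prob(c) for c in children]
    if gate == "AND":                       # all children must occur
        p = 1.0
        for q in probs:
            p *= q
        return p
    if gate == "OR":                        # at least one child occurs
        p = 1.0
        for q in probs:
            p *= 1.0 - q
        return 1.0 - p
    raise ValueError(gate)

# Top event: pump fails randomly, OR the sensor fails while an attacker
# has bypassed its integrity check (illustrative numbers)
top = ("OR", [("AND", [0.01,    # sensor random failure
                       0.2]),   # attack on the sensor channel succeeds
              0.05])            # pump random failure
p_top = event_prob(top)         # 1 - (1 - 0.01*0.2) * (1 - 0.05)
```

Real fault/attack-tree tools must additionally handle dependent events and repeated basic events, which this independence-based sketch ignores.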

  7. Integrating cyber attacks within fault trees

    Energy Technology Data Exchange (ETDEWEB)

    Nai Fovino, Igor [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy)], E-mail: igor.nai@jrc.it; Masera, Marcelo [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy); De Cian, Alessio [Department of Electrical Engineering, University di Genova, Genoa (Italy)

    2009-09-15

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.

  8. Unknown input observer based detection of sensor faults in a wind turbine

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2010-01-01

    In this paper an unknown input observer is designed to detect three different sensor fault scenarios in a specified benchmark model for fault detection and accommodation of wind turbines. A subset of faults is dealt with: faults in the rotor and generator speed sensors, as well as a converter sensor fault. The proposed scheme detects the speed sensor faults in question within the requirements specified in the benchmark model, while the converter fault is detected, but not within the required detection time.

  9. Fault Detection and Isolation and Fault Tolerant Control of Wind Turbines Using Set-Valued Observers

    DEFF Research Database (Denmark)

    Casau, Pedro; Rosa, Paulo Andre Nobre; Tabatabaeipour, Seyed Mojtaba

    2012-01-01

    Research on wind turbine Operations & Maintenance (O&M) procedures is critical to the expansion of Wind Energy Conversion systems (WEC). In order to reduce O&M costs and increase the lifespan of the turbine, we study the application of Set-Valued Observers (SVO) to the problem of Fault Detection and Isolation (FDI) and Fault Tolerant Control (FTC) of wind turbines, by taking advantage of the recent advances in SVO theory for model invalidation. A simple wind turbine model is presented along with possible faulty scenarios. The FDI algorithm is built on top of the described model, taking into account...

  10. Design and Verification of Fault-Tolerant Components

    DEFF Research Database (Denmark)

    Zhang, Miaomiao; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    We present a systematic approach to design and verification of fault-tolerant components with real-time properties as found in embedded systems. A state machine model of the correct component is augmented with internal transitions that represent hypothesized faults. Also, constraints ... relatively detailed such that they can serve directly as blueprints for engineering, and yet be amenable to exhaustive verification. The approach is illustrated with a design of a triple modular fault-tolerant system that is a real case we received from our collaborators in the aerospace field. We use UPPAAL to model and check this design. Model checking uses concrete parameters, so we extend the result with parametric analysis using abstractions of the automata in a rigorous verification.

  11. Modelling the Small Throw Fault Effect on the Stability of a Mining Roadway and Its Verification by In Situ Investigation

    Directory of Open Access Journals (Sweden)

    Małkowski Piotr

    2017-12-01

    Full Text Available The small throw fault zones cause serious problems for mining engineers. Knowledge of the extent of the fractured zone around the roadway and of the deformation of its contour greatly helps in designing the right support or its reinforcement. The paper presents the results of numerical analysis of the effect of a small throw fault zone on the convergence of the mining roadway and the extent of the fracturing induced around the roadway. The computations were performed on a dozen physical models featuring various rock mass and support parameters, in order to select the settings that most suitably reflect the behavior of tectonically disturbed and undisturbed rocks around the roadway. Finally, the results of the calculations were verified by comparing them with in situ convergence measurements carried out in the maingate D-2 in the “Borynia-Zofiówka-Jastrzębie” coal mine. Based on the results of measurements it may be concluded that the rock mass displacements around a roadway section within a fault zone during a year were on average four times greater than in the section tectonically unaffected. The results of numerical calculations show that the extent of the yielding zone reaches two times the fault throw in the roof, three times the throw in the floor, and horizontally approximately 1.5 to 1.8 times the width of the modelled fault zone. Only a few elasto-plastic models, or models with joints between the rock beds, can be recommended for predicting the performance of a roadway which is within a fault zone. Using these models, it is possible to design roadway support of sufficient load-bearing capacity for the tectonically disturbed section.
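    The reported proportions can be turned into a small rule-of-thumb calculator. The proportionality factors are those quoted in the abstract; any particular roadway would of course need site-specific modelling.

```python
def yield_zone_extents(fault_throw_m, fault_zone_width_m):
    """Approximate yielding-zone extents around a roadway crossing a
    small-throw fault, using the proportions reported in the study."""
    return {
        "roof_m": 2.0 * fault_throw_m,                  # ~2x the fault throw
        "floor_m": 3.0 * fault_throw_m,                 # ~3x the fault throw
        "horizontal_m": (1.5 * fault_zone_width_m,      # ~1.5-1.8x the width
                         1.8 * fault_zone_width_m),     # of the fault zone
    }

# e.g. a 1.5 m throw and a 10 m wide fault zone
extents = yield_zone_extents(fault_throw_m=1.5, fault_zone_width_m=10.0)
```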

  12. Fault geometry, rupture dynamics and ground motion from potential earthquakes on the North Anatolian Fault under the Sea of Marmara

    KAUST Repository

    Oglesby, David D.

    2012-03-01

    Using the 3-D finite-element method, we develop dynamic spontaneous rupture models of earthquakes on the North Anatolian Fault system in the Sea of Marmara, Turkey, considering the geometrical complexity of the fault system in this region. We find that the earthquake size, rupture propagation pattern and ground motion all strongly depend on the interplay between the initial (static) regional pre-stress field and the dynamic stress field radiated by the propagating rupture. By testing several nucleation locations, we observe that those far from an oblique normal fault stepover segment (near Istanbul) lead to large through-going rupture on the entire fault system, whereas nucleation locations closer to the stepover segment tend to produce ruptures that die out in the stepover. However, this pattern can change drastically with only a 10° rotation of the regional stress field. Our simulations also reveal that while dynamic unclamping near fault bends can produce a new mode of supershear rupture propagation, this unclamping has a much smaller effect on the speed of the peak in slip velocity along the fault. Finally, we find that the complex fault geometry leads to a very complex and asymmetric pattern of near-fault ground motion, including greatly amplified ground motion on the insides of fault bends. The ground-motion pattern can change significantly with different hypocentres, even beyond the typical effects of directivity. The results of this study may have implications for seismic hazard in this region, for the dynamics and ground motion of geometrically complex faults, and for the interpretation of kinematic inverse rupture models.

  13. Fault geometry, rupture dynamics and ground motion from potential earthquakes on the North Anatolian Fault under the Sea of Marmara

    KAUST Repository

    Oglesby, David D.; Mai, Paul Martin

    2012-01-01

    Using the 3-D finite-element method, we develop dynamic spontaneous rupture models of earthquakes on the North Anatolian Fault system in the Sea of Marmara, Turkey, considering the geometrical complexity of the fault system in this region. We find that the earthquake size, rupture propagation pattern and ground motion all strongly depend on the interplay between the initial (static) regional pre-stress field and the dynamic stress field radiated by the propagating rupture. By testing several nucleation locations, we observe that those far from an oblique normal fault stepover segment (near Istanbul) lead to large through-going rupture on the entire fault system, whereas nucleation locations closer to the stepover segment tend to produce ruptures that die out in the stepover. However, this pattern can change drastically with only a 10° rotation of the regional stress field. Our simulations also reveal that while dynamic unclamping near fault bends can produce a new mode of supershear rupture propagation, this unclamping has a much smaller effect on the speed of the peak in slip velocity along the fault. Finally, we find that the complex fault geometry leads to a very complex and asymmetric pattern of near-fault ground motion, including greatly amplified ground motion on the insides of fault bends. The ground-motion pattern can change significantly with different hypocentres, even beyond the typical effects of directivity. The results of this study may have implications for seismic hazard in this region, for the dynamics and ground motion of geometrically complex faults, and for the interpretation of kinematic inverse rupture models.

  14. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews

    1994-06-01

    Full Text Available Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: (1) regeneration of stress irregularities in an earthquake; (2) the concentration of stress drop in an earthquake into asperities; (3) starting and stopping of earthquake slip at fault junctions; and (4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction, and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the
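    The P/µ estimate is easy to reproduce with typical crustal values; the density and shear modulus below are assumed textbook values, not taken from the paper.

```python
RHO = 2700.0      # crustal density, kg/m^3 (assumed typical value)
G = 9.81          # gravitational acceleration, m/s^2
DEPTH = 10e3      # depth, m
MU = 3.0e10       # shear modulus, Pa (assumed typical crustal value)

P = RHO * G * DEPTH   # lithostatic confining pressure, ~2.6e8 Pa at 10 km
ratio = P / MU        # dimensionless, of order 0.01 as quoted in the abstract
```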

  15. A dependability modeling of software under hardware faults digitized system in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, Jong Gyun

    1996-02-01

    An analytic approach to the dependability evaluation of software in the operational phase is suggested in this work, with special attention to the effects of physical faults on software dependability: the physical faults considered are memory faults, and the dependability measure in question is reliability. The model is based on simple reliability theory and graph theory with the path decomposition micro model. The model represents an application software as a graph consisting of nodes and arcs that probabilistically determine the flow from node to node. Through proper transformation of nodes and arcs, the graph can be reduced to a simple two-node graph, and the software failure probability is derived from this graph. This model can be extended without modification to a software system which consists of several complete modules. The derived model is validated by computer simulation, where the software is transformed to a probabilistic control flow graph. Simulation also shows a different viewpoint of software failure behavior. Using this model, we predict the reliability of an application software and a software system in a digitized system (ILS system) in the nuclear power plant and show the sensitivity of the software reliability to the major physical parameters which affect software failure in the normal operation phase. 
This modeling method is particularly attractive for medium-sized programs such as software used in digitized systems of
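    The graph reduction described above amounts to solving a small linear system over the probabilistic control-flow graph: each node's reliability is its per-visit success probability times the expected reliability of wherever control flows next. A minimal sketch (node structure and all probabilities hypothetical):

```python
import numpy as np

# Probabilistic control-flow graph: nodes 0 and 1, plus an implicit exit.
# trans[i][j] = probability control moves from node i to node j;
# the remainder of each row flows to the exit.
trans = np.array([[0.0, 0.7],       # node 0 -> node 1 with prob 0.7, exit 0.3
                  [1.0, 0.0]])      # node 1 -> node 0 with prob 1.0 (a loop)
success = np.array([0.999, 0.995])  # per-visit success (no memory-fault failure)
to_exit = 1.0 - trans.sum(axis=1)

# R_i = s_i * (sum_j p_ij * R_j + p_i,exit)  =>  (I - S T) R = S p_exit
S = np.diag(success)
R = np.linalg.solve(np.eye(2) - S @ trans, S @ to_exit)

reliability = R[0]                  # prob. of reaching exit from node 0 fault-free
failure_prob = 1.0 - reliability
```

Summing the geometric series over loop traversals by hand gives the same closed form, R0 = 0.3·s0 / (1 − 0.7·s0·s1), which is the two-node reduction the abstract refers to.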

  16. Effects of deglaciation on the crustal stress field and implications for endglacial faulting: A parametric study of simple Earth and ice models

    International Nuclear Information System (INIS)

    Lund, Bjoern

    2005-03-01

    The large faults of northern Scandinavia, hundreds of kilometres long and with offsets of more than 10 m, are inferred to be the result of major earthquakes triggered by the retreating ice sheet some 9,000 years ago. In this report we have studied a number of parameters involved in quantitative modelling of glacial isostatic adjustment (GIA) in order to illustrate how they affect stress, displacement and fault stability during deglaciation. Using a variety of reference models, we have verified that our modelling approach, a finite element analysis scheme with proper adjustments for the requirements of GIA modelling, performs satisfactorily. The size of the model and the density of the grid have been investigated in order to be able to perform high resolution modelling in reasonable time. This report includes studies of both the ice and earth models. We have seen that the steeper the ice edge is, the more concentrated is the deformation around the edge, and consequently shear stress localizes with high magnitudes around the ice edge. The temporal evolution of height and basal extent of the ice is very important for the response of the earth model, and we have shown that the last stages of ice retreat can cause fault instability over a large lateral region. The effect of variations in Earth model parameters such as stiffness, viscosity, density, compressibility and layer thickness on shear stress and vertical displacement was investigated. More complicated geometries, such as multiple layers and lateral layer thickness variations, were also studied. We generally find that these variations have more effect on the shear stress distributions than on the vertical displacement distributions. We also note that shear stress magnitude is affected more than the spatial shape of the shear stress distribution. Fault stability during glaciation/deglaciation was investigated by two different variations on the Mohr-Coulomb failure criterion. 
The stability of a fault in a stress field
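    The Mohr-Coulomb style stability check used in such studies can be sketched by resolving the principal stresses onto a fault plane and evaluating the Coulomb failure stress; positive values indicate instability. All numbers below are illustrative, not the report's.

```python
import math

def coulomb_failure_stress(sigma1, sigma3, theta_deg, pore_pressure, friction=0.6):
    """Coulomb failure stress on a plane whose normal makes angle theta
    with the sigma1 axis: CFS = tau - friction * (sigma_n - pore_pressure).
    Stresses in MPa, compression positive; CFS > 0 means unstable."""
    two_theta = math.radians(2.0 * theta_deg)
    tau = 0.5 * (sigma1 - sigma3) * math.sin(two_theta)
    sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(two_theta)
    return tau - friction * (sigma_n - pore_pressure)

# Same fault plane, two pore-pressure states: raising pore pressure
# (e.g. glacial meltwater) can push a stable fault past failure.
cfs_low_p = coulomb_failure_stress(100.0, 60.0, 60.0, pore_pressure=30.0)   # stable
cfs_high_p = coulomb_failure_stress(100.0, 60.0, 60.0, pore_pressure=45.0)  # unstable
```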

  17. Effects of deglaciation on the crustal stress field and implications for endglacial faulting: A parametric study of simple Earth and ice models

    Energy Technology Data Exchange (ETDEWEB)

    Lund, Bjoern [Uppsala Univ. (Sweden). Dept. of Earth Sciences

    2005-03-01

    The large faults of northern Scandinavia, hundreds of kilometres long and with offsets of more than 10 m, are inferred to be the result of major earthquakes triggered by the retreating ice sheet some 9,000 years ago. In this report we have studied a number of parameters involved in quantitative modelling of glacial isostatic adjustment (GIA) in order to illustrate how they affect stress, displacement and fault stability during deglaciation. Using a variety of reference models, we have verified that our modelling approach, a finite element analysis scheme with proper adjustments for the requirements of GIA modelling, performs satisfactorily. The size of the model and the density of the grid have been investigated in order to be able to perform high resolution modelling in reasonable time. This report includes studies of both the ice and earth models. We have seen that the steeper the ice edge is, the more concentrated is the deformation around the edge, and consequently shear stress localizes with high magnitudes around the ice edge. The temporal evolution of height and basal extent of the ice is very important for the response of the earth model, and we have shown that the last stages of ice retreat can cause fault instability over a large lateral region. The effect of variations in Earth model parameters such as stiffness, viscosity, density, compressibility and layer thickness on shear stress and vertical displacement was investigated. More complicated geometries, such as multiple layers and lateral layer thickness variations, were also studied. We generally find that these variations have more effect on the shear stress distributions than on the vertical displacement distributions. We also note that shear stress magnitude is affected more than the spatial shape of the shear stress distribution. Fault stability during glaciation/deglaciation was investigated by two different variations on the Mohr-Coulomb failure criterion. 
The stability of a fault in a stress field

  18. Diesel Engine Actuator Fault Isolation using Multiple Models Hypothesis Tests

    DEFF Research Database (Denmark)

    Bøgh, S.A.

    1994-01-01

    Detection of current faults in a D.C. motor with unknown load torques is not feasible with linear methods and threshold logic...

  19. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    Directory of Open Access Journals (Sweden)

    Junjie Ren

    2013-01-01

    Full Text Available Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
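    An order-of-magnitude check of the quoted recurrence interval: converting Mw 7.9 to scalar moment via the standard Hanks-Kanamori relation (an assumption; the paper derives its own characteristic moment from the seismogenic model) and dividing by the quoted accumulation rate gives a few thousand years, consistent with the 3900 ± 400 yr estimate.

```python
# Hanks-Kanamori moment-magnitude relation: log10(M0) = 1.5*Mw + 9.1 (M0 in N m)
MW = 7.9
M0 = 10.0 ** (1.5 * MW + 9.1)      # ~8.9e20 N m for a Mw 7.9 event

MOMENT_RATE = 2.7e17               # N m/yr, the quoted accumulation rate

recurrence_yr = M0 / MOMENT_RATE   # a few thousand years, same order as 3900 yr
```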

  20. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    Science.gov (United States)

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.